All of the interesting technological, artistic or just plain fun subjects I'd investigate if I had an infinite number of lifetimes. In other words, a dumping ground...

Thursday, 27 September 2007

ccna module 6 & 7

Overview



Ethernet is now the dominant LAN technology in the world. Ethernet is a family of LAN technologies that may be best understood with the OSI reference model. All LANs must deal with the basic issue of how individual stations, or nodes, are named. Ethernet specifications support different media, bandwidths, and other Layer 1 and 2 variations. However, the basic frame format and addressing scheme are the same for all varieties of Ethernet.

Various MAC strategies have been invented to allow multiple stations to access physical media and network devices. It is important to understand how network devices gain access to the network media before students can comprehend and troubleshoot the entire network.

This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.

Students who complete this module should be able to perform the following tasks:

* Describe the basics of Ethernet technology
* Explain naming rules of Ethernet technology
* Explain how Ethernet relates to the OSI model
* Describe the Ethernet framing process and frame structure
* List Ethernet frame field names and purposes
* Identify the characteristics of CSMA/CD
* Describe Ethernet timing, interframe spacing, and backoff time after a collision
* Define Ethernet errors and collisions
* Explain the concept of auto-negotiation in relation to speed and duplex

6.1
Ethernet Fundamentals
6.1.1
Introduction to Ethernet



This page provides an introduction to Ethernet. Most of the traffic on the Internet originates and ends with Ethernet connections. Since it began in the 1970s, Ethernet has evolved to meet the increased demand for high-speed LANs. When optical fiber media was introduced, Ethernet adapted to take advantage of the superior bandwidth and low error rate that fiber offers. Now the same protocol that transported data at 3 Mbps in 1973 can carry data at 10 Gbps.

The success of Ethernet is due to the following factors:

* Simplicity and ease of maintenance
* Ability to incorporate new technologies
* Reliability
* Low cost of installation and upgrade

The introduction of Gigabit Ethernet has extended the original LAN technology to distances that make Ethernet a MAN and WAN standard.

The original idea for Ethernet was to allow two or more hosts to use the same medium with no interference between the signals. This problem of multiple user access to a shared medium was studied in the early 1970s at the University of Hawaii. A system called Alohanet was developed to allow various stations on the Hawaiian Islands structured access to the shared radio frequency band in the atmosphere. This work later formed the basis for the Ethernet access method known as CSMA/CD.

The first LAN in the world was the original version of Ethernet. Robert Metcalfe and his coworkers at Xerox designed it more than thirty years ago. The first Ethernet standard was published in 1980 by a consortium of Digital Equipment Corporation, Intel, and Xerox (DIX). Metcalfe wanted Ethernet to be a shared standard from which everyone could benefit, so it was released as an open standard. The first products that were developed from the Ethernet standard were sold in the early 1980s. Ethernet transmitted at up to 10 Mbps over thick coaxial cable up to a distance of 2 kilometers (km). This type of coaxial cable was referred to as thicknet and was about the width of a small finger.

In 1985, the IEEE standards committee for Local and Metropolitan Networks published standards for LANs. These standards start with the number 802. The standard for Ethernet is 802.3. The IEEE wanted to make sure that its standards were compatible with the International Organization for Standardization (ISO) and the OSI model. To do this, the IEEE 802.3 standard had to address the needs of Layer 1 and the lower portion of Layer 2 of the OSI model. As a result, some small modifications to the original Ethernet standard were made in 802.3.

The differences between the two standards are so minor that any Ethernet NIC can transmit and receive both Ethernet and 802.3 frames. Essentially, Ethernet and IEEE 802.3 are the same standard.

The 10-Mbps bandwidth of Ethernet was more than enough for the slow PCs of the 1980s. By the early 1990s PCs became much faster, file sizes increased, and data flow bottlenecks occurred. Most were caused by the low availability of bandwidth. In 1995, IEEE announced a standard for a 100-Mbps Ethernet. This was followed by standards for Gigabit Ethernet in 1998 and 1999.

All the standards are essentially compatible with the original Ethernet standard. An Ethernet frame could leave an older coax 10-Mbps NIC in a PC, be placed onto a 10-Gbps Ethernet fiber link, and end up at a 100-Mbps NIC. As long as the frame stays on Ethernet networks it is not changed. For this reason Ethernet is considered very scalable. The bandwidth of the network could be increased many times while the Ethernet technology remains the same.

The original Ethernet standard has been amended many times to manage new media and higher transmission rates. These amendments provide standards for new technologies and maintain compatibility between Ethernet variations.

The next page explains the naming rules for the Ethernet family of networks.
6.1
Ethernet Fundamentals
6.1.2
IEEE Ethernet naming rules



This page focuses on the Ethernet naming rules developed by IEEE.

Ethernet is not one networking technology but a family of networking technologies that includes legacy (10-Mbps) Ethernet, Fast Ethernet, and Gigabit Ethernet. Ethernet speeds can be 10, 100, 1000, or 10,000 Mbps. The basic frame format and the IEEE sublayers of OSI Layers 1 and 2 remain consistent across all forms of Ethernet.

When Ethernet needs to be expanded to add a new medium or capability, the IEEE issues a new supplement to the 802.3 standard. The new supplements are given a one or two letter designation such as 802.3u. An abbreviated description, called an identifier, is also assigned to the supplement.

The abbreviated description consists of the following elements:

* A number that indicates the number of Mbps transmitted
* The word base to indicate that baseband signaling is used
* One or more letters of the alphabet indicating the type of medium used. For example, F = fiber optical cable and T = copper unshielded twisted pair
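The three elements above can be pulled apart mechanically. As an illustration only (the function name and regular expression are my own, not part of any standard), a small Python sketch:

```python
import re

def parse_identifier(identifier):
    """Split an IEEE 802.3 identifier into its three elements.

    Returns (speed_mbps, signaling, medium), e.g. "100BASE-TX" gives
    (100, 'baseband', 'TX').  Hypothetical helper for illustration.
    """
    match = re.fullmatch(r"(\d+)(BASE|BROAD)-?(\w+)", identifier.upper())
    if match is None:
        raise ValueError(f"not an IEEE 802.3 identifier: {identifier!r}")
    speed, signaling, medium = match.groups()
    kind = "baseband" if signaling == "BASE" else "broadband"
    return int(speed), kind, medium

print(parse_identifier("10BASE-T"))    # (10, 'baseband', 'T')
print(parse_identifier("100BASE-FX"))  # (100, 'baseband', 'FX')
```

The same pattern also handles the broadband case mentioned below, e.g. `parse_identifier("10BROAD36")`.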

Ethernet relies on baseband signaling, which uses the entire bandwidth of the transmission medium. The data signal is transmitted directly over the transmission medium.

In broadband signaling, the data signal is not placed directly on the transmission medium. Instead, an analog carrier signal is modulated by the data signal and then transmitted. Radio broadcasts and cable TV use broadband signaling. Ethernet used broadband signaling in the 10BROAD36 standard, the IEEE standard for an 802.3 Ethernet network that used broadband transmission over thick coaxial cable at 10 Mbps. 10BROAD36 is now considered obsolete.

IEEE cannot force manufacturers to fully comply with any standard. IEEE has two main objectives:

* Supply the information necessary to build devices that comply with Ethernet standards
* Promote innovation among manufacturers

Students will identify the IEEE 802 standards in the Interactive Media Activity.

The next page explains Ethernet and the OSI model.
6.1
Ethernet Fundamentals
6.1.3
Ethernet and the OSI model



This page will explain how Ethernet relates to the OSI model.

Ethernet operates in two areas of the OSI model. These are the lower half of the data link layer, which is known as the MAC sublayer, and the physical layer.

Data that moves from one Ethernet station to another often passes through a repeater. All stations in the same collision domain see traffic that passes through a repeater. A collision domain is a shared resource. Problems that originate in one part of a collision domain will usually impact the entire collision domain.

A repeater forwards traffic to all other ports. A repeater never sends traffic out the same port from which it was received. Any signal detected by a repeater will be forwarded. If the signal is degraded through attenuation or noise, the repeater will attempt to reconstruct and regenerate the signal.

To guarantee minimum bandwidth and operability, standards specify the maximum number of stations per segment, maximum segment length, and maximum number of repeaters between stations. Stations separated by bridges or routers are in different collision domains.

Figure maps a variety of Ethernet technologies to the lower half of OSI Layer 2 and all of Layer 1. Ethernet at Layer 1 involves signals, bit streams that travel on the media, components that put signals on media, and various topologies. Ethernet Layer 1 performs a key role in the communication that takes place between devices, but each of its functions has limitations. Layer 2 addresses these limitations.

Data link sublayers contribute significantly to technological compatibility and computer communications. The MAC sublayer is concerned with the physical components that will be used to communicate the information. The Logical Link Control (LLC) sublayer remains relatively independent of the physical equipment that will be used for the communication process.

While there are other varieties of Ethernet, the ones shown are the most widely used.

The Interactive Media Activity reviews the layers of the OSI model.

The next page explains the address system used by Ethernet networks.
6.1
Ethernet Fundamentals
6.1.4
Naming



This page will discuss the MAC addresses used by Ethernet networks.

An address system is required to uniquely identify computers and interfaces and allow for local delivery of frames on the Ethernet. Ethernet uses MAC addresses that are 48 bits in length and expressed as 12 hexadecimal digits. The first six hexadecimal digits, which are administered by the IEEE, identify the manufacturer or vendor. This portion of the MAC address is known as the Organizationally Unique Identifier (OUI). The remaining six hexadecimal digits represent the interface serial number or another value administered by the manufacturer. MAC addresses are sometimes referred to as burned-in addresses (BIAs) because they are burned into ROM and are copied into RAM when the NIC initializes.
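To make the OUI/vendor split concrete, here is a minimal Python sketch; `split_mac` is a hypothetical helper of my own, not a standard API:

```python
def split_mac(mac):
    """Split a 48-bit MAC address string into its OUI and
    vendor-assigned halves (illustrative helper, assumed name)."""
    # Accept colon, hyphen, or Cisco dotted notation.
    digits = mac.replace(":", "").replace("-", "").replace(".", "").upper()
    if len(digits) != 12 or any(c not in "0123456789ABCDEF" for c in digits):
        raise ValueError(f"not a 48-bit MAC address: {mac!r}")
    return digits[:6], digits[6:]

oui, serial = split_mac("00:1A:2B:3C:4D:5E")
print(oui)     # 001A2B -> administered by the IEEE, identifies the vendor
print(serial)  # 3C4D5E -> assigned by the manufacturer
```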

At the data link layer MAC headers and trailers are added to upper layer data. The header and trailer contain control information intended for the data link layer in the destination system. The data from upper layers is encapsulated within the data link frame, between the header and trailer, and then sent out on the network.

The NIC uses the MAC address to determine if a message should be passed on to the upper layers of the OSI model. The NIC does not use CPU processing time to make this assessment. This enables better communication times on an Ethernet network.

When a device sends data on an Ethernet network, it can use the destination MAC address to open a communication pathway to the other device. The source device attaches a header with the MAC address of the intended destination and sends data through the network. As this data travels along the network media the NIC in each device checks to see if the MAC address matches the physical destination address carried by the data frame. If there is no match, the NIC discards the data frame. When the data reaches the destination node, the NIC makes a copy and passes the frame up the OSI layers. On an Ethernet network, all nodes must examine the MAC header.

All devices that are connected to the Ethernet LAN have MAC addressed interfaces. This includes workstations, printers, routers, and switches.

The next page will focus on Layer 2 frames.
6.1
Ethernet Fundamentals
6.1.5
Layer 2 framing



This page will explain how frames are created at Layer 2 of the OSI model.

Encoded bit streams, or data, on physical media represent a tremendous technological accomplishment, but they alone are not enough to make communication happen. Framing provides essential information that could not be obtained from coded bit streams alone. This information includes the following:

* Which computers are in communication with each other
* When communication between individual computers begins and when it ends
* Which errors occurred while the computers communicated
* Which computer will communicate next

Framing is the Layer 2 encapsulation process. A frame is the Layer 2 protocol data unit.

A voltage versus time graph could be used to visualize bits. However, it may be too difficult to graph address and control information for larger units of data. Another type of diagram that could be used is the frame format diagram, which is based on voltage versus time graphs. Frame format diagrams are read from left to right, just like an oscilloscope graph. The frame format diagram shows different groupings of bits, or fields, that perform other functions.

There are many different types of frames described by various standards. A single generic frame has sections called fields, and each field is composed of bytes. The names of the fields are as follows:

* Start Frame field
* Address field
* Length/Type field
* Data field
* Frame Check Sequence (FCS) field

When computers are connected to a physical medium, there must be a way to inform other computers when they are about to transmit a frame. Various technologies do this in different ways. Regardless of the technology, all frames begin with a sequence of bytes to signal the data transmission.

All frames contain naming information, such as the name of the source node, or source MAC address, and the name of the destination node, or destination MAC address.

Most frames have some specialized fields. In some technologies, a Length field specifies the exact length of a frame in bytes. Some frames have a Type field, which specifies the Layer 3 protocol used by the device that wants to send data.

Frames are used to send upper-layer data and ultimately the user application data from a source to a destination. The data package includes the message to be sent, or user application data. Extra bytes may be added so frames have a minimum length for timing purposes. LLC bytes are also included with the Data field in the IEEE standard frames. The LLC sublayer takes the network protocol data, which is an IP packet, and adds control information to help deliver the packet to the destination node. Layer 2 communicates with the upper layers through LLC.

All frames, and the bits, bytes, and fields contained within them, are susceptible to errors from a variety of sources. The FCS field contains a number that is calculated by the source node based on the data in the frame. This number is added to the end of the frame that is sent. When the destination node receives the frame, the FCS number is recalculated and compared with the FCS number included in the frame. If the two numbers are different, an error is assumed and the frame is discarded.

Because the source cannot detect that the frame has been discarded, retransmission must be initiated by higher-layer connection-oriented protocols that provide data flow control. Because these protocols, such as TCP, expect an acknowledgment (ACK) to be sent by the peer station within a certain time, retransmission usually occurs.

There are three primary ways to calculate the FCS number:

* Cyclic redundancy check (CRC) – performs calculations on the data.
* Two-dimensional parity – places individual bytes in a two-dimensional array and performs redundancy checks vertically and horizontally on the array, creating an extra byte resulting in an even or odd number of binary 1s.
* Internet checksum – adds the values of all of the data bits to arrive at a sum.
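Two of the three calculations above are easy to sketch in Python. The CRC here uses the standard library's CRC-32 (the same polynomial family as Ethernet's FCS), and the checksum follows the usual 16-bit one's-complement folding; the function name is my own:

```python
import zlib

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                        # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

frame_data = b"example payload"
print(hex(zlib.crc32(frame_data)))         # 32-bit CRC of the data
print(hex(internet_checksum(frame_data)))  # 16-bit internet checksum
```

Ethernet itself uses the CRC method; the checksum and parity variants appear in other protocols.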

The node that transmits data must get the attention of other devices to start and end a frame. The Length field indicates where the frame ends. The frame ends after the FCS. Sometimes there is a formal byte sequence referred to as an end-frame delimiter.

The next page will discuss the frame structure of an Ethernet network.
6.1
Ethernet Fundamentals
6.1.6
Ethernet frame structure



This page will describe the frame structure of Ethernet networks.

At the data link layer the frame structure is nearly identical for all speeds of Ethernet from 10 Mbps to 10,000 Mbps. However, at the physical layer almost all versions of Ethernet are very different. Each speed has a distinct set of architecture design rules.

In the version of Ethernet that was developed by DIX prior to the adoption of the IEEE 802.3 version of Ethernet, the Preamble and Start-of-Frame (SOF) Delimiter were combined into a single field. The binary pattern was identical. The field labeled Length/Type was only listed as Length in the early IEEE versions and only as Type in the DIX version. These two uses of the field were officially combined in a later IEEE version since both uses were common.

The Ethernet II Type field is incorporated into the current 802.3 frame definition. When a node receives a frame it must examine the Length/Type field to determine which higher-layer protocol is present. If the two-octet value is equal to or greater than 0x0600 hexadecimal, 1536 decimal, then the contents of the Data Field are decoded according to the protocol indicated. Ethernet II is the Ethernet frame format that is used in TCP/IP networks.

The next page will discuss the information included in a frame.
6.1
Ethernet Fundamentals
6.1.7
Ethernet frame fields



This page defines the fields that are used in a frame.

Some of the fields permitted or required in an 802.3 Ethernet frame are as follows:

* Preamble
* SOF Delimiter
* Destination Address
* Source Address
* Length/Type
* Header and Data
* FCS
* Extension

The preamble is an alternating pattern of ones and zeros used for timing synchronization in 10-Mbps and slower implementations of Ethernet. Faster versions of Ethernet are synchronous, so this timing information is unnecessary but is retained for compatibility.

An SOF delimiter consists of a one-octet field that marks the end of the timing information and contains the bit sequence 10101011.

The destination address can be unicast, multicast, or broadcast.

The Source Address field contains the MAC source address. The source address is generally the unicast address of the Ethernet node that transmitted the frame. However, many virtual protocols use and sometimes share a specific source MAC address to identify the virtual entity.

The Length/Type field supports two different uses. If the value is less than 1536 decimal (0x600 hexadecimal), it indicates length: the number of bytes of data that follow the field. The length interpretation is used when the LLC sublayer provides the protocol identification. If the value is 1536 or greater, it indicates type: which upper-layer protocol will receive the data after the Ethernet process is complete.
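The two interpretations amount to a simple branch on the 1536 (0x600) threshold. A sketch, with an invented helper name:

```python
def decode_length_type(value: int) -> str:
    """Interpret the two-octet Length/Type field per IEEE 802.3."""
    if not 0 <= value <= 0xFFFF:
        raise ValueError("Length/Type is a two-octet field")
    if value >= 0x0600:          # 1536 decimal and above: a Type code
        return f"type: upper-layer protocol 0x{value:04X}"
    return f"length: {value} bytes of data follow"

print(decode_length_type(0x0800))  # 0x0800 is the IPv4 EtherType
print(decode_length_type(46))      # a legal minimum-sized data length
```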

The Data field, and padding if necessary, may be of any length that does not cause the frame to exceed the maximum frame size. The maximum transmission unit (MTU) for Ethernet is 1500 octets, so the data should not exceed that size. The content of this field is unspecified. When there is not enough user data for the frame to meet the minimum frame length, extra data called a pad is inserted immediately after the user data. Ethernet requires each frame to be between 64 and 1518 octets.
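The pad arithmetic follows directly from the 64-octet minimum and the 18 octets consumed by the addresses, Length/Type, and FCS. A hypothetical calculation in Python:

```python
MIN_FRAME = 64                   # octets, Destination Address through FCS
HEADER_AND_FCS = 6 + 6 + 2 + 4   # two addresses + Length/Type + FCS

def pad_length(data_len: int) -> int:
    """Octets of pad needed so the frame meets the 64-octet minimum."""
    if data_len > 1500:          # Ethernet MTU
        raise ValueError("data exceeds the 1500-octet MTU")
    return max(0, MIN_FRAME - HEADER_AND_FCS - data_len)

print(pad_length(10))    # 36 -> 18 header/FCS + 10 data + 36 pad = 64 octets
print(pad_length(1500))  # 0  -> already a full-sized 1518-octet frame
```

Note the minimum data length implied by this arithmetic is 46 octets, which pads out to exactly 64.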

An FCS contains a 4-byte CRC value that is created by the device that sends data and is recalculated by the destination device to check for damaged frames. The corruption of a single bit anywhere from the start of the Destination Address through the end of the FCS field will cause the checksum to differ, so the coverage of the FCS includes the FCS itself. It is not possible to distinguish between corruption of the FCS and corruption of any other field used in the calculation.

This page concludes this lesson. The next lesson will discuss the functions of an Ethernet network. The first page will introduce the concept of MAC.
6.2
Ethernet Operation
6.2.1
MAC



This page will define MAC and provide examples of deterministic and non-deterministic MAC protocols.

MAC refers to protocols that determine which computer in a shared-media environment, or collision domain, is allowed to transmit data. MAC and LLC comprise the IEEE version of the OSI Layer 2. MAC and LLC are sublayers of Layer 2. The two broad categories of MAC are deterministic and non-deterministic.

Examples of deterministic protocols include Token Ring and FDDI. In a Token Ring network, hosts are arranged in a ring and a special data token travels around the ring to each host in sequence. When a host wants to transmit, it seizes the token, transmits the data for a limited time, and then forwards the token to the next host in the ring. Token Ring is a collisionless environment since only one host can transmit at a time.

Non-deterministic MAC protocols use a first-come, first-served approach. Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a simple system: the NIC listens for the absence of a signal on the media and then begins to transmit. If two nodes transmit at the same time, a collision occurs and neither transmission succeeds.

Three common Layer 2 technologies are Token Ring, FDDI, and Ethernet. All three specify Layer 2 issues, LLC, naming, framing, and MAC, as well as Layer 1 signaling components and media issues. The specific technologies for each are as follows:

* Ethernet – uses a logical bus topology to control information flow on a linear bus and a physical star or extended star topology for the cables
* Token Ring – uses a logical ring topology to control information flow and a physical star topology
* FDDI – uses a logical ring topology to control information flow and a physical dual-ring topology

The next page explains how collisions are avoided in an Ethernet network.
6.2
Ethernet Operation
6.2.2
MAC rules and collision detection/backoff



This page describes collision detection and avoidance in a CSMA/CD network.

Ethernet is a shared-media broadcast technology. The access method CSMA/CD used in Ethernet performs three functions:

* Transmitting and receiving data frames
* Decoding data frames and checking them for valid addresses before passing them to the upper layers of the OSI model
* Detecting errors within data frames or on the network

In the CSMA/CD access method, networking devices with data to transmit work in a listen-before-transmit mode. This means when a node wants to send data, it must first check to see whether the networking media is busy. If the node determines the network is busy, the node will wait a random amount of time before retrying. If the node determines the networking media is not busy, the node will begin transmitting and listening. The node listens to ensure no other stations are transmitting at the same time. After completing data transmission the device will return to listening mode.

Networking devices detect a collision has occurred when the amplitude of the signal on the networking media increases. When a collision occurs, each node that is transmitting will continue to transmit for a short time to ensure that all nodes detect the collision. When all nodes have detected the collision, the backoff algorithm is invoked and transmission stops. The nodes stop transmitting for a random period of time, determined by the backoff algorithm. When the delay periods expire, each node can attempt to access the networking media. The devices that were involved in the collision do not have transmission priority.

The Interactive Media Activity shows the procedure for collision detection in an Ethernet network.

The next page will discuss Ethernet timing.
6.2
Ethernet Operation
6.2.3
Ethernet timing



This page explains the importance of slot times in an Ethernet network.

The basic rules and specifications for proper operation of Ethernet are not particularly complicated, though some of the faster physical layer implementations are becoming so. Despite the basic simplicity, when a problem occurs in Ethernet it is often quite difficult to isolate the source. Because of the common bus architecture of Ethernet, also described as a distributed single point of failure, the scope of the problem usually encompasses all devices within the collision domain. In situations where repeaters are used, this can include devices up to four segments away.

Any station on an Ethernet network wishing to transmit a message first "listens" to ensure that no other station is currently transmitting. If the cable is quiet, the station will begin transmitting immediately. The electrical signal takes time to travel down the cable (delay), and each subsequent repeater introduces a small amount of latency in forwarding the frame from one port to the next. Because of the delay and latency, it is possible for more than one station to begin transmitting at or near the same time. This results in a collision.

If the attached station is operating in full duplex then the station may send and receive simultaneously and collisions should not occur. Full-duplex operation also changes the timing considerations and eliminates the concept of slot time. Full-duplex operation allows for larger network architecture designs since the timing restriction for collision detection is removed.

In half duplex, assuming that a collision does not occur, the sending station will transmit 64 bits of timing synchronization information that is known as the preamble. The sending station will then transmit the following information:

* Destination and source MAC addressing information
* Certain other header information
* The actual data payload
* Checksum (FCS) used to ensure that the message was not corrupted along the way

Stations receiving the frame recalculate the FCS to determine if the incoming message is valid and then pass valid messages to the next higher layer in the protocol stack.

10 Mbps and slower versions of Ethernet are asynchronous. Asynchronous means that each receiving station will use the eight octets of timing information to synchronize the receive circuit to the incoming data, and then discard it. 100 Mbps and higher speed implementations of Ethernet are synchronous. Synchronous means the timing information is not required, however for compatibility reasons the Preamble and Start Frame Delimiter (SFD) are present.

For all speeds of Ethernet transmission at or below 1000 Mbps, the standard requires that a transmission be no smaller than the slot time. Slot time for 10- and 100-Mbps Ethernet is 512 bit-times, or 64 octets. Slot time for 1000-Mbps Ethernet is 4096 bit-times, or 512 octets. Slot time is calculated assuming maximum cable lengths on the largest legal network architecture. All hardware propagation delay times are at the legal maximum, and the 32-bit jam signal is used when collisions are detected.

The actual calculated slot time is just longer than the theoretical amount of time required to travel between the furthest points of the collision domain, collide with another transmission at the last possible instant, and then have the collision fragments return to the sending station and be detected. For the system to work the first station must learn about the collision before it finishes sending the smallest legal frame size. To allow 1000-Mbps Ethernet to operate in half duplex the extension field was added when sending small frames purely to keep the transmitter busy long enough for a collision fragment to return. This field is present only on 1000-Mbps, half-duplex links and allows minimum-sized frames to be long enough to meet slot time requirements. Extension bits are discarded by the receiving station.

On 10-Mbps Ethernet one bit at the MAC layer requires 100 nanoseconds (ns) to transmit. At 100 Mbps that same bit requires 10 ns to transmit, and at 1000 Mbps only 1 ns. As a rough estimate, 20.3 cm (8 in) per nanosecond is often used for calculating propagation delay down a UTP cable. For 100 meters of UTP, this means that it takes just under 5 bit-times for a 10BASE-T signal to travel the length of the cable.
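The bit-time and propagation figures above can be reproduced with a little arithmetic. A sketch using the text's rough 20.3 cm/ns estimate (function names are my own):

```python
def bit_time_ns(mbps: float) -> float:
    """Time to transmit one bit, in nanoseconds."""
    return 1000.0 / mbps          # 10 Mbps -> 100 ns, 1000 Mbps -> 1 ns

CM_PER_NS = 20.3                  # rough UTP propagation speed from the text

def propagation_bit_times(cable_m: float, mbps: float) -> float:
    """Cable propagation delay expressed in bit-times."""
    delay_ns = cable_m * 100 / CM_PER_NS      # metres -> cm -> ns
    return delay_ns / bit_time_ns(mbps)

print(bit_time_ns(10))                            # 100.0
print(round(propagation_bit_times(100, 10), 2))   # 4.93
```

The same calculation at 1000 Mbps gives roughly 493 bit-times for 100 m of cable, which is why Gigabit half duplex needed the extension field described above.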

For CSMA/CD Ethernet to operate, the sending station must become aware of a collision before it has completed transmission of a minimum-sized frame. At 100 Mbps the system timing is barely able to accommodate 100 meter cables. At 1000 Mbps special adjustments are required as nearly an entire minimum-sized frame would be transmitted before the first bit reached the end of the first 100 meters of UTP cable. For this reason half duplex is not permitted in 10-Gigabit Ethernet.

The Interactive Media Activity will help students identify the bit time of different Ethernet speeds.

The next page defines interframe spacing and backoff.
6.2
Ethernet Operation
6.2.4
Interframe spacing and backoff



This page explains how spacing is used in an Ethernet network for data transmission.

The minimum spacing between two non-colliding frames is also called the interframe spacing. This is measured from the last bit of the FCS field of the first frame to the first bit of the preamble of the second frame.

After a frame has been sent, all stations on a 10-Mbps Ethernet are required to wait a minimum of 96 bit-times (9.6 microseconds) before any station may legally transmit the next frame. On faster versions of Ethernet the spacing remains the same, 96 bit-times, but the time required for that interval grows correspondingly shorter. This interval is referred to as the spacing gap. The gap is intended to allow slow stations time to process the previous frame and prepare for the next frame.
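Because the gap is fixed at 96 bit-times, its duration scales inversely with speed. A quick Python check:

```python
def interframe_gap_us(mbps: float, gap_bits: int = 96) -> float:
    """Duration of the 96-bit interframe spacing, in microseconds."""
    return gap_bits / mbps        # bits divided by bits-per-microsecond

for speed in (10, 100, 1000):
    print(speed, "Mbps ->", interframe_gap_us(speed), "us")
# 10 Mbps -> 9.6 us, 100 Mbps -> 0.96 us, 1000 Mbps -> 0.096 us
```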

A repeater is expected to regenerate the full 64 bits of timing information, which is the preamble and SFD, at the start of any frame. This is despite the potential loss of some of the beginning preamble bits because of slow synchronization. Because of this forced reintroduction of timing bits, some minor reduction of the interframe gap is not only possible but expected. Some Ethernet chipsets are sensitive to a shortening of the interframe spacing, and will begin failing to see frames as the gap is reduced. With the increase in processing power at the desktop, it would be very easy for a personal computer to saturate an Ethernet segment with traffic and to begin transmitting again before the interframe spacing delay time is satisfied.

After a collision occurs and all stations allow the cable to become idle (each waits the full interframe spacing), then the stations that collided must wait an additional and potentially progressively longer period of time before attempting to retransmit the collided frame. The waiting period is intentionally designed to be random so that two stations do not delay for the same amount of time before retransmitting, which would result in more collisions. This is accomplished in part by expanding the interval from which the random retransmission time is selected on each retransmission attempt. The waiting period is measured in increments of the parameter slot time.

If the MAC layer is unable to send the frame after sixteen attempts, it gives up and generates an error to the network layer. Such an occurrence is fairly rare and would happen only under extremely heavy network loads, or when a physical problem exists on the network.
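The behaviour described in the last two paragraphs is the truncated binary exponential backoff algorithm: the random interval doubles on each attempt (the growth is capped after ten attempts), and the MAC gives up after sixteen. A sketch, not an exact implementation of any particular NIC:

```python
import random

def backoff_slots(attempt: int) -> int:
    """Pick a random wait, in slot times, for the nth retransmission
    attempt (truncated binary exponential backoff)."""
    if attempt > 16:
        raise RuntimeError("16 attempts exceeded: error to network layer")
    k = min(attempt, 10)                  # interval growth caps at 2**10
    return random.randrange(0, 2 ** k)   # 0 .. (2**k - 1) slot times

# Each successive attempt draws from a wider interval, so two stations
# that collided are increasingly unlikely to pick the same delay.
print([backoff_slots(n) for n in (1, 2, 3)])
```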

The next page will discuss collisions.
6.2
Ethernet Operation
6.2.5
Error handling



This page will describe collisions and how they are handled on a network.

The most common error condition on Ethernet networks is the collision. Collisions are the mechanism for resolving contention for network access. A few collisions provide a smooth, simple, low-overhead way for network nodes to arbitrate contention for the network resource. When network contention becomes too great, however, collisions can become a significant impediment to useful network operation.

Collisions result in a loss of network bandwidth equal to the initial transmission plus the collision jam signal. This consumption delay affects all network nodes and can cause a significant reduction in network throughput.

The vast majority of collisions occur very early in the frame, often before the SFD. Collisions that occur before the SFD are usually not reported to the higher layers, as if the collision had not occurred. As soon as a collision is detected, the sending stations transmit a 32-bit "jam" signal that will enforce the collision. This is done so that any data being transmitted is thoroughly corrupted and all stations have a chance to detect the collision.

In the figure, two stations listen to ensure that the cable is idle, then transmit. Station 1 was able to transmit a significant percentage of the frame before the signal even reached the last cable segment. Station 2 had not received the first bit of the transmission prior to beginning its own transmission and was only able to send several bits before the NIC sensed the collision. Station 2 immediately truncated the current transmission, substituted the 32-bit jam signal and ceased all transmissions. During the collision and jam event that Station 2 was experiencing, the collision fragments were working their way back through the repeated collision domain toward Station 1. Station 2 completed transmission of the 32-bit jam signal and became silent before the collision propagated back to Station 1, which was still unaware of the collision and continued to transmit. When the collision fragments finally reached Station 1, it also truncated the current transmission and substituted a 32-bit jam signal in place of the remainder of the frame it was transmitting. Upon sending the 32-bit jam signal, Station 1 ceased all transmissions.

A jam signal may be composed of any binary data so long as it does not form a proper checksum for the portion of the frame already transmitted. The most commonly observed data pattern for a jam signal is simply a repeating one, zero, one, zero pattern, the same as Preamble. When viewed by a protocol analyzer this pattern appears as either a repeating hexadecimal 5 or A sequence. The corrupted, partially transmitted messages are often referred to as collision fragments or runts. Normal collisions are less than 64 octets in length and therefore fail both the minimum length test and the FCS checksum test.
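
A quick sketch shows why analyzers display the jam pattern as repeating hexadecimal 5s or As. Assuming a byte-aligned 32-bit jam built from the alternating one-zero pattern:

```python
# A byte-aligned 32-bit jam signal built from the repeating 1,0,1,0 pattern
jam = bytes([0b10101010] * 4)       # four octets = 32 bits
# Grouped into nibbles, 1010 is hexadecimal A:
assert jam.hex() == "aaaaaaaa"
# The same alternating pattern sampled one bit later reads as hexadecimal 5:
assert bytes([0b01010101] * 4).hex() == "55555555"
```

Whether an analyzer shows 5s or As depends only on where the octet boundary falls in the alternating bit stream.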

The next page will define different types of collisions.
6.2
Ethernet Operation
6.2.6
Types of collisions



This page covers the different types of collisions and their characteristics.

Collisions typically take place when two or more Ethernet stations transmit simultaneously within a collision domain. A single collision is a collision that was detected while trying to transmit a frame, but on the next attempt the frame was transmitted successfully. Multiple collisions indicate that the same frame collided repeatedly before being successfully transmitted. The results of collisions, collision fragments, are partial or corrupted frames that are less than 64 octets and have an invalid FCS. Three types of collisions are:

* Local
* Remote
* Late

To create a local collision on coax cable (10BASE2 and 10BASE5), the signal travels down the cable until it encounters a signal from the other station. The waveforms then overlap, canceling some parts of the signal out and reinforcing or doubling other parts. The doubling of the signal pushes the voltage level of the signal beyond the allowed maximum. This over-voltage condition is then sensed by all of the stations on the local cable segment as a collision.

At the beginning of the sample, the waveform in the figure represents normal Manchester encoded data. A few cycles into the sample the amplitude of the wave doubles. That is the beginning of the collision, where the two waveforms are overlapping. Just prior to the end of the sample the amplitude returns to normal. This happens when the first station to detect the collision quits transmitting, while the jam signal from the second colliding station is still observed.

On UTP cable, such as 10BASE-T, 100BASE-TX and 1000BASE-T, a collision is detected on the local segment only when a station detects a signal on the RX pair at the same time it is sending on the TX pair. Since the two signals are on different pairs there is no characteristic change in the signal. Collisions are only recognized on UTP when the station is operating in half duplex. The only functional difference between half and full duplex operation in this regard is whether or not the transmit and receive pairs are permitted to be used simultaneously. If the station is not engaged in transmitting it cannot detect a local collision. Conversely, a cable fault such as excessive crosstalk can cause a station to perceive its own transmission as a local collision.

The characteristics of a remote collision are a frame that is less than the minimum length, has an invalid FCS checksum, but does not exhibit the local collision symptom of over-voltage or simultaneous RX/TX activity. This sort of collision usually results from collisions occurring on the far side of a repeated connection. A repeater will not forward an over-voltage state, and cannot cause a station to have both the TX and RX pairs active at the same time. The station would have to be transmitting to have both pairs active, and that would constitute a local collision. On UTP networks this is the most common sort of collision observed.

There is no possibility remaining for a normal or legal collision after the first 64 octets of data has been transmitted by the sending stations. Collisions occurring after the first 64 octets are called "late collisions". The most significant difference between late collisions and collisions occurring before the first 64 octets is that the Ethernet NIC will retransmit a normally collided frame automatically, but will not automatically retransmit a frame that was collided late. As far as the NIC is concerned everything went out fine, and the upper layers of the protocol stack must determine that the frame was lost. Other than retransmission, a station detecting a late collision handles it in exactly the same way as a normal collision.
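
The distinctions above can be summarized in a rough classifier. The parameter names and return strings are illustrative only; real hardware makes these judgments from voltage levels and pair activity, not function arguments.

```python
def classify_collision(fragment_octets, local_symptom):
    """Rough collision classification, following the text above.

    fragment_octets: octets of the frame sent before the collision ended.
    local_symptom: True if over-voltage (coax) or simultaneous RX/TX
                   activity (UTP) was observed locally.
    """
    if fragment_octets >= 64:
        return "late collision"     # NIC does not retransmit automatically
    if local_symptom:
        return "local collision"    # detected on the local segment
    return "remote collision"       # collided on the far side of a repeater
```

For example, a 32-octet fragment seen with no local over-voltage or RX/TX overlap would be counted as a remote collision, the most common kind on UTP networks.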

The Interactive Media Activity will require students to identify the different types of collisions.

The next page will discuss the sources of Ethernet errors.
6.2
Ethernet Operation
6.2.7
Ethernet errors



This page will define common Ethernet errors.

Knowledge of typical errors is invaluable for understanding both the operation and troubleshooting of Ethernet networks.

The following are the sources of Ethernet error:

* Collision or runt – Simultaneous transmission occurring before slot time has elapsed
* Late collision – Simultaneous transmission occurring after slot time has elapsed
* Jabber, long frame and range errors – Excessively or illegally long transmission
* Short frame, collision fragment or runt – Illegally short transmission
* FCS error – Corrupted transmission
* Alignment error – Insufficient or excessive number of bits transmitted
* Range error – Actual and reported number of octets in frame do not match
* Ghost or jabber – Unusually long Preamble or Jam event

While local and remote collisions are considered to be a normal part of Ethernet operation, late collisions are considered to be an error. The presence of errors on a network always suggests that further investigation is warranted. The severity of the problem indicates the troubleshooting urgency related to the detected errors. A handful of errors detected over many minutes or over hours would be a low priority. Thousands detected over a few minutes suggest that urgent attention is warranted.

Jabber is defined in several places in the 802.3 standard as being a transmission of at least 20,000 to 50,000 bit times in duration. However, most diagnostic tools report jabber whenever a detected transmission exceeds the maximum legal frame size, which is considerably smaller than 20,000 to 50,000 bit times. Most references to jabber are more properly called long frames.

A long frame is one that is longer than the maximum legal size, and takes into consideration whether or not the frame was tagged. It does not consider whether or not the frame had a valid FCS checksum. This error usually means that jabber was detected on the network.

A short frame is a frame smaller than the minimum legal size of 64 octets, with a good frame check sequence. Some protocol analyzers and network monitors call these frames "runts". In general the presence of short frames is not a guarantee that the network is failing.

The term runt is generally an imprecise slang term that means something less than a legal frame size. It may refer to short frames with a valid FCS checksum although it usually refers to collision fragments.
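
The size-related error names above can be sketched as a simple classifier. The constants are the standard Ethernet frame limits; the function itself is only an illustration of the definitions, not a monitoring tool's actual logic.

```python
MIN_FRAME = 64        # octets, excluding preamble and SFD
MAX_UNTAGGED = 1518   # maximum legal untagged frame
MAX_TAGGED = 1522     # an 802.1Q tag adds four octets

def classify_frame_size(octets, tagged=False, fcs_valid=True):
    """Classify a frame by length, per the definitions above."""
    limit = MAX_TAGGED if tagged else MAX_UNTAGGED
    if octets < MIN_FRAME:
        # Good FCS: a short frame; bad FCS: a collision fragment (runt)
        return "short frame" if fcs_valid else "collision fragment"
    if octets > limit:
        return "long frame"   # often loosely reported as "jabber"
    return "legal size"
```

Note that a 1519-octet frame is a long frame if untagged but legal if tagged, which is why the long-frame test must take tagging into account.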

The Interactive Media Activity will help students become familiar with Ethernet errors.

The next page will continue the discussion of Ethernet frame errors.
6.2
Ethernet Operation
6.2.8
FCS and beyond



This page will focus on additional errors that occur on an Ethernet network.

A received frame that has a bad Frame Check Sequence, also referred to as a checksum or CRC error, differs from the original transmission by at least one bit. In an FCS error frame the header information is probably correct, but the checksum calculated by the receiving station does not match the checksum appended to the end of the frame by the sending station. The frame is then discarded.
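
The FCS comparison can be sketched with Python's `zlib.crc32`, which implements the same CRC-32 polynomial that 802.3 uses for the FCS. The framing details (byte order on the wire, which fields are covered) are simplified here; only the recompute-and-compare idea is shown.

```python
import zlib

def fcs_ok(frame_without_fcs, received_fcs):
    """Recompute the CRC-32 and compare it with the sender's FCS."""
    return zlib.crc32(frame_without_fcs) & 0xFFFFFFFF == received_fcs

payload = b"destination+source+type+data..."   # stand-in for frame contents
sent_fcs = zlib.crc32(payload) & 0xFFFFFFFF    # appended by the sender
assert fcs_ok(payload, sent_fcs)               # an intact frame passes
assert not fcs_ok(payload[:-1] + b"?", sent_fcs)  # a corrupted frame fails
```

Any single changed bit anywhere in the covered fields changes the computed CRC-32, which is what lets the receiver discard corrupted frames.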

High numbers of FCS errors from a single station usually indicate a faulty NIC, faulty or corrupted software drivers, or a bad cable connecting that station to the network. If FCS errors are associated with many stations, they are generally traceable to bad cabling, a faulty version of the NIC driver, a faulty hub port, or induced noise in the cable system.

A message that does not end on an octet boundary is known as an alignment error. Instead of the correct number of binary bits forming complete octet groupings, there are additional bits left over (less than eight). Such a frame is truncated to the nearest octet boundary, and if the FCS checksum fails, then an alignment error is reported. This is often caused by bad software drivers, or a collision, and is frequently accompanied by a failure of the FCS checksum.

A frame whose Length field contains a valid value that does not match the actual number of octets counted in the data field of the received frame produces a range error. This error also appears when the Length field value is less than the minimum legal unpadded size of the data field. A similar error, Out of Range, is reported when the value in the Length field indicates a data size that is too large to be legal.

Fluke Networks has coined the term ghost to mean energy (noise) detected on the cable that appears to be a frame, but is lacking a valid SFD. To qualify as a ghost, the frame must be at least 72 octets long, including the preamble. Otherwise, it is classified as a remote collision. Because of the peculiar nature of ghosts, it is important to note that test results are largely dependent upon where on the segment the measurement is made.

Ground loops and other wiring problems are usually the cause of ghosting. Most network monitoring tools do not recognize the existence of ghosts for the same reason that they do not recognize preamble collisions. The tools rely entirely on what the chipset tells them. Software-only protocol analyzers, many hardware-based protocol analyzers, hand held diagnostic tools, as well as most remote monitoring (RMON) probes do not report these events.

The Interactive Media Activity will help students become familiar with the terms and definitions of Ethernet errors.

The next page will describe Auto-Negotiation.
6.2
Ethernet Operation
6.2.9
Ethernet auto-negotiation



This page explains auto-negotiation and how it is accomplished.

As Ethernet grew from 10 to 100 and 1000 Mbps, one requirement was to make each technology interoperable, even to the point that 10, 100, and 1000 interfaces could be directly connected. A process called Auto-Negotiation of speeds at half or full duplex was developed. Specifically, at the time that Fast Ethernet was introduced, the standard included a method of automatically configuring a given interface to match the speed and capabilities of the link partner. This process defines how two link partners may automatically negotiate a configuration offering the best common performance level. It has the additional advantage of only involving the lowest part of the physical layer.

10BASE-T required each station to transmit a link pulse about every 16 milliseconds, whenever the station was not engaged in transmitting a message. Auto-Negotiation adopted this signal and renamed it a Normal Link Pulse (NLP). When a series of NLPs are sent in a group for the purpose of Auto-Negotiation, the group is called a Fast Link Pulse (FLP) burst. Each FLP burst is sent at the same timing interval as an NLP, and is intended to allow older 10BASE-T devices to operate normally in the event they should receive an FLP burst.

Auto-Negotiation is accomplished by transmitting a burst of 10BASE-T Link Pulses from each of the two link partners. The burst communicates the capabilities of the transmitting station to its link partner. After both stations have interpreted what the other partner is offering, both switch to the highest performance common configuration and establish a link at that speed. If anything interrupts communications and the link is lost, the two link partners first attempt to link again at the last negotiated speed. If that fails, or if it has been too long since the link was lost, the Auto-Negotiation process starts over. The link may be lost due to external influences, such as a cable fault, or due to one of the partners issuing a reset.

The next page will discuss half and full duplex modes.
6.2
Ethernet Operation
6.2.10
Link establishment and full and half duplex



This page will explain how links are established through Auto-Negotiation and introduce the two duplex modes.

Link partners are allowed to skip offering configurations of which they are capable. This allows the network administrator to force ports to a selected speed and duplex setting, without disabling Auto-Negotiation.

Auto-Negotiation is optional for most Ethernet implementations. Gigabit Ethernet requires its implementation, though the user may disable it. Auto-Negotiation was originally defined for UTP implementations of Ethernet and has been extended to work with other fiber optic implementations.

When an Auto-Negotiating station first attempts to link, it is supposed to enable 100BASE-TX to attempt to immediately establish a link. If 100BASE-TX signaling is present, and the station supports 100BASE-TX, it will attempt to establish a link without negotiating. If either signaling produces a link or FLP bursts are received, the station will proceed with that technology. If a link partner does not offer an FLP burst, but instead offers NLPs, then that device is automatically assumed to be a 10BASE-T station. This detection of link signaling from a non-negotiating partner is called parallel detection. During this initial interval of testing for other technologies, the transmit path is sending FLP bursts. The standard does not permit parallel detection of any other technologies.

If a link is established through parallel detection, it is required to be half duplex. There are only two methods of achieving a full-duplex link. One method is through a completed cycle of Auto-Negotiation, and the other is to administratively force both link partners to full duplex. If one link partner is forced to full duplex, but the other partner attempts to Auto-Negotiate, then there is certain to be a duplex mismatch. This will result in collisions and errors on that link. Additionally if one end is forced to full duplex the other must also be forced. The exception to this is 10-Gigabit Ethernet, which does not support half duplex.

Many vendors implement hardware in such a way that it cycles through the various possible states. It transmits FLP bursts to Auto-Negotiate for a while, then it configures for Fast Ethernet, attempts to link for a while, and then just listens. Some vendors do not offer any transmitted attempt to link until the interface first hears an FLP burst or some other signaling scheme.

There are two duplex modes, half and full. For shared media, the half-duplex mode is mandatory. All coaxial implementations are half duplex in nature and cannot operate in full duplex. UTP and fiber implementations may be operated in half duplex. 10-Gbps implementations are specified for full duplex only.

In half duplex only one station may transmit at a time. For the coaxial implementations a second station transmitting will cause the signals to overlap and become corrupted. Since UTP and fiber use separate transmit and receive paths, the signals have no opportunity to overlap and become corrupted. Ethernet has established arbitration rules for resolving conflicts arising from instances when more than one station attempts to transmit at the same time. Both stations in a point-to-point full-duplex link are permitted to transmit at any time, regardless of whether the other station is transmitting.

Auto-Negotiation avoids most situations where one station in a point-to-point link is transmitting under half-duplex rules and the other under full-duplex rules.

In the event that link partners are capable of sharing more than one common technology, refer to the list in Figure . This list is used to determine which technology should be chosen from the offered configurations.

Fiber-optic Ethernet implementations are not included in this priority resolution list because the interface electronics and optics do not permit easy reconfiguration between implementations. It is assumed that the interface configuration is fixed. If the two interfaces are able to Auto-Negotiate then they are already using the same Ethernet implementation. However, there remain a number of configuration choices such as the duplex setting, or which station will act as the Master for clocking purposes, that must be determined.
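
The priority resolution can be sketched as a lookup over the capabilities both partners advertise. The list below is an illustrative subset in plausible priority order (faster before slower, full duplex before half at the same speed), not the full table from the 802.3 standard.

```python
# Higher position = higher priority; illustrative subset only
PRIORITY = [
    "1000BASE-T full", "1000BASE-T half",
    "100BASE-TX full", "100BASE-TX half",
    "10BASE-T full",   "10BASE-T half",
]

def best_common(local, partner):
    """Return the highest-priority technology both link partners advertise."""
    common = set(local) & set(partner)
    for tech in PRIORITY:
        if tech in common:
            return tech
    return None   # no common technology: no link is established
```

For example, a gigabit-capable NIC connected to a Fast Ethernet switch port would settle on 100BASE-TX full duplex, the best configuration both sides offer.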

The Interactive Media Activity will help students understand the link establishment process.

This page concludes this lesson. The next page will summarize the main points from the module.
Summary



This page summarizes the topics discussed in this module.

Ethernet is not one networking technology, but a family of LAN technologies that includes Legacy, Fast Ethernet, and Gigabit Ethernet. When Ethernet needs to be expanded to add a new medium or capability, the IEEE issues a new supplement to the 802.3 standard. The new supplements are given a one or two letter designation such as 802.3u. Ethernet relies on baseband signaling, which uses the entire bandwidth of the transmission medium. Ethernet operates at two layers of the OSI model: the lower half of the data link layer, known as the MAC sublayer, and the physical layer. Ethernet at Layer 1 involves interfacing with media, signals, bit streams that travel on the media, components that put signals on media, and various physical topologies. Layer 1 bits need structure, so OSI Layer 2 frames are used. The MAC sublayer of Layer 2 determines the type of frame appropriate for the physical media.

The one thing common to all forms of Ethernet is the frame structure. This is what allows the interoperability of the different types of Ethernet.

Some of the fields permitted or required in an 802.3 Ethernet Frame are:

* Preamble
* Start Frame Delimiter
* Destination Address
* Source Address
* Length/Type
* Data and Pad
* Frame Check Sequence

In 10 Mbps and slower versions of Ethernet, the Preamble provides timing information the receiving node needs in order to interpret the electrical signals it is receiving. The Start Frame Delimiter marks the end of the timing information. 10 Mbps and slower versions of Ethernet are asynchronous. That is, they use the preamble timing information to synchronize the receive circuit to the incoming data. 100 Mbps and higher speed implementations of Ethernet are synchronous. Synchronous means the timing information is not required; however, for compatibility reasons the Preamble and SFD are still present.

The address fields of the Ethernet frame contain Layer 2, or MAC, addresses.

All frames are susceptible to errors from a variety of sources. The Frame Check Sequence (FCS) field of an Ethernet frame contains a number that is calculated by the source node based on the data in the frame. At the destination it is recalculated and compared to determine that the data received is complete and error free.

Once the data is framed the Media Access Control (MAC) sublayer is also responsible to determine which computer on a shared-medium environment, or collision domain, is allowed to transmit the data. There are two broad categories of Media Access Control, deterministic (taking turns) and non-deterministic (first come, first served).

Examples of deterministic protocols include Token Ring and FDDI. The carrier sense multiple access with collision detection (CSMA/CD) access method is a simple non-deterministic system. The NIC listens for an absence of a signal on the media and starts transmitting. If two or more nodes transmit at the same time, a collision occurs. If a collision is detected, the nodes wait a random amount of time and retransmit.

The minimum spacing between two non-colliding frames is also called the interframe spacing. Interframe spacing is required to ensure that all stations have time to process the previous frame and prepare for the next frame.

Collisions can occur at various points during transmission. A collision where a signal is detected on the receive and transmit circuits at the same time is referred to as a local collision. A collision whose fragments arrive through a repeater, without these local symptoms, is called a remote collision. A collision that occurs after the first sixty-four octets of data have been sent is considered a late collision. The NIC will not automatically retransmit for this type of collision.

While local and remote collisions are considered to be a normal part of Ethernet operation, late collisions are considered to be an error. Ethernet errors result from detection of frames sizes that are longer or shorter than standards allow or excessively long or illegal transmissions called jabber. Runt is a slang term that refers to something less than the legal frame size.

Auto-Negotiation detects the speed and duplex mode, half-duplex or full-duplex, of the device on the other end of the wire and adjusts to match those settings.

Overview



Ethernet has been the most successful LAN technology mainly because of how easy it is to implement. Ethernet has also been successful because it is a flexible technology that has evolved as needs and media capabilities have changed. This module will provide details about the most important types of Ethernet. The goal is to help students understand what is common to all forms of Ethernet.

Changes in Ethernet have resulted in major improvements over the 10-Mbps Ethernet of the early 1980s. The 10-Mbps Ethernet standard remained virtually unchanged until 1995 when IEEE announced a standard for a 100-Mbps Fast Ethernet. In recent years, an even more rapid growth in media speed has moved the transition from Fast Ethernet to Gigabit Ethernet. The standards for Gigabit Ethernet emerged in only three years. A faster Ethernet version called 10-Gigabit Ethernet is now widely available and faster versions will be developed.

MAC addresses, CSMA/CD, and the frame format have not been changed from earlier versions of Ethernet. However, other aspects of the MAC sublayer, physical layer, and medium have changed. Copper-based NICs capable of 10, 100, or 1000 Mbps are now common. Gigabit switch and router ports are becoming the standard for wiring closets. Optical fiber to support Gigabit Ethernet is considered a standard for backbone cables in most new installations.

This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.

Students who complete this module should be able to perform the following tasks:

* Describe the differences and similarities among 10BASE5, 10BASE2, and 10BASE-T Ethernet
* Define Manchester encoding
* List the factors that affect Ethernet timing limits
* List 10BASE-T wiring parameters
* Describe the key characteristics and varieties of 100-Mbps Ethernet
* Describe the evolution of Ethernet
* Explain the MAC methods, frame formats, and transmission process of Gigabit Ethernet
* Describe the uses of specific media and encoding with Gigabit Ethernet
* Identify the pinouts and wiring typical to the various implementations of Gigabit Ethernet
* Describe the similarities and differences between Gigabit and 10-Gigabit Ethernet
* Describe the basic architectural considerations of Gigabit and 10-Gigabit Ethernet

7.1
10-Mbps and 100-Mbps Ethernet
7.1.1
10-Mbps Ethernet



This page will discuss 10-Mbps Ethernet technologies.

10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four common features of Legacy Ethernet are timing parameters, the frame format, transmission processes, and a basic design rule.

The figure displays the parameters for 10-Mbps Ethernet operation. 10-Mbps Ethernet and slower versions are asynchronous. Each receiving station uses eight octets of timing information to synchronize its receive circuit to the incoming data. 10BASE5, 10BASE2, and 10BASE-T all share the same timing parameters. For example, 1 bit time at 10 Mbps = 100 nanoseconds (ns) = 0.1 microseconds = one ten-millionth of a second. This means that on a 10-Mbps Ethernet network, 1 bit at the MAC sublayer requires 100 ns to transmit.

For all speeds of Ethernet transmission at 1000 Mbps or slower, a transmission may be no shorter than the slot time. Slot time is just longer than the time it theoretically can take to go from one extreme end of the largest legal Ethernet collision domain to the other extreme end, collide with another transmission at the last possible instant, and then have the collision fragments return to the sending station to be detected.
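
The timing numbers follow from simple arithmetic. Assuming the 512-bit-time slot used at 10 and 100 Mbps, the 100 ns bit time at 10 Mbps gives a slot time of 51.2 microseconds:

```python
rate_bps = 10_000_000                    # 10 Mbps
bit_time_ns = 1e9 / rate_bps             # 100 ns per bit at 10 Mbps
slot_time_us = 512 * bit_time_ns / 1000  # 512 bit times, in microseconds

assert bit_time_ns == 100.0
assert slot_time_us == 51.2
```

The 512-bit slot time is also why the minimum frame is 64 octets: a station must still be transmitting when the last possible collision fragment returns, or it cannot detect the collision.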

10BASE5, 10BASE2, and 10BASE-T also have a common frame format.

The Legacy Ethernet transmission process is identical until the lower part of the OSI physical layer. As the frame passes from the MAC sublayer to the physical layer, other processes occur before the bits move from the physical layer onto the medium. One important process is the signal quality error (SQE) signal. The SQE is a transmission sent by a transceiver back to the controller to let the controller know whether the collision circuitry is functional. The SQE is also called a heartbeat. The SQE signal is designed to fix the problem in earlier versions of Ethernet where a host does not know if a transceiver is connected. SQE is always used in half-duplex. SQE can be used in full-duplex operation but is not required. SQE is active in the following instances:

* Within 4 to 8 microseconds after a normal transmission to indicate that the outbound frame was successfully transmitted
* Whenever there is a collision on the medium
* Whenever there is an improper signal on the medium, such as jabber, or reflections that result from a cable short
* Whenever a transmission has been interrupted

All 10-Mbps forms of Ethernet take octets received from the MAC sublayer and perform a process called line encoding. Line encoding describes how the bits are actually signaled on the wire. The simplest encodings have undesirable timing and electrical characteristics. Therefore, line codes have been designed with desirable transmission properties. This form of encoding used in 10-Mbps systems is called Manchester encoding.

Manchester encoding uses the transition in the middle of the timing window to determine the binary value for that bit period. In Figure , the top waveform moves to a lower position so it is interpreted as a binary zero. The second waveform moves to a higher position and is interpreted as a binary one. The third waveform has an alternating binary sequence. When binary data alternates, there is no need to return to the previous voltage level before the next bit period. The wave forms in the graphic show that the binary bit values are determined based on the direction of change in a bit period. The voltage levels at the start or end of any bit period are not used to determine binary values.
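
The mid-bit transition rule can be sketched as follows. Each bit period is modeled as a pair of half-period signal levels; a rising transition decodes as a one and a falling transition as a zero, matching the waveforms described above. This is a toy model of the encoding rule, not a signal-level simulation.

```python
def manchester_encode(bits):
    # A 1 is a low-to-high transition mid-period; a 0 is high-to-low
    return [(0, 1) if b else (1, 0) for b in bits]

def manchester_decode(pairs):
    # Only the direction of the mid-period transition determines the bit;
    # the level at the start or end of the period is never consulted
    return [1 if first < second else 0 for first, second in pairs]

bits = [1, 0, 1, 1, 0, 0]
assert manchester_decode(manchester_encode(bits)) == bits
```

Because every bit period contains a transition, the receiver can recover the sender's clock from the signal itself, which is the timing property that makes Manchester encoding attractive for 10-Mbps Ethernet.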

Legacy Ethernet has common architectural features. Networks usually contain multiple types of media. The standard ensures that interoperability is maintained. The overall architectural design is most important in mixed-media networks. It becomes easier to violate maximum delay limits as the network grows. The timing limits are based on the following types of parameters:

* Cable length and propagation delay
* Delay of repeaters
* Delay of transceivers
* Interframe gap shrinkage
* Delays within the station

10-Mbps Ethernet operates within the timing limits for a series of up to five segments separated by up to four repeaters. This is known as the 5-4-3 rule. No more than four repeaters can be used in series between any two stations. There can also be no more than three populated segments between any two stations.
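
A check of the 5-4-3 rule for the path between two stations might look like this; the function name and arguments are illustrative:

```python
def obeys_5_4_3(segments, repeaters, populated_segments):
    """5-4-3 rule for 10-Mbps Ethernet, applied to the path between
    any two stations: at most 5 segments in series, 4 repeaters,
    and 3 populated (station-bearing) segments."""
    return (segments <= 5
            and repeaters <= 4
            and populated_segments <= 3)
```

A maximum legal design (five segments, four repeaters, three populated segments) passes, while populating a fourth segment on the same path does not.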

The next page will describe 10BASE5.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.2
10BASE5



This page will discuss the original 1980 Ethernet product, which is 10BASE5. 10BASE5 transmitted 10 Mbps over a single thick coaxial cable bus.

10BASE5 is important because it was the first medium used for Ethernet. 10BASE5 was part of the original 802.3 standard. The primary benefit of 10BASE5 was length. 10BASE5 may be found in legacy installations. It is not recommended for new installations. 10BASE5 systems are inexpensive and require no configuration. Two disadvantages are that basic components like NICs are very difficult to find and it is sensitive to signal reflections on the cable. 10BASE5 systems also represent a single point of failure.

10BASE5 uses Manchester encoding. It has a solid central conductor. Each segment of thick coax may be up to 500 m (1640.4 ft) in length. The cable is large, heavy, and difficult to install. However, the distance limitations were favorable and this prolonged its use in certain applications.

When the medium is a single coaxial cable, only one station can transmit at a time or a collision will occur. Therefore, 10BASE5 only runs in half-duplex with a maximum transmission rate of 10 Mbps.

Figure illustrates a configuration for an end-to-end collision domain with the maximum number of segments and repeaters. Remember that only three segments can have stations connected to them. The other two repeated segments are used to extend the network.

The Lab Activity will help students decode a waveform.

The Interactive Media Activity will help students learn the features of 10BASE5 technology.

The next page will discuss 10BASE2.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.3
10BASE2



This page covers 10BASE2, which was introduced in 1985.

Installation was easier because of its smaller size, lighter weight, and greater flexibility. 10BASE2 still exists in legacy networks. Like 10BASE5, it is no longer recommended for network installations. It has a low cost and does not require hubs.

10BASE2 also uses Manchester encoding. Computers on a 10BASE2 LAN are linked together by an unbroken series of coaxial cable lengths. These lengths are attached to a T-shaped connector on the NIC with BNC connectors.

10BASE2 has a stranded central conductor. Each of the maximum five segments of thin coaxial cable may be up to 185 m (607 ft) long and each station is connected directly to the BNC T-shaped connector on the coaxial cable.

Only one station can transmit at a time or a collision will occur. 10BASE2 also uses half-duplex. The maximum transmission rate of 10BASE2 is 10 Mbps.

There may be up to 30 stations on a 10BASE2 segment. Only three out of five consecutive segments between any two stations can be populated.

The Interactive Media Activity will help students learn the features of 10BASE2 technology.

The next page will discuss 10BASE-T.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.4
10BASE-T



This page covers 10BASE-T, which was introduced in 1990.

10BASE-T used cheaper, easier-to-install Category 3 UTP copper cable instead of coax cable. The cable plugged into a central connection device that contained the shared bus: a hub. The hub sat at the center of a set of cables that radiated out to the PCs like the spokes of a wheel, an arrangement referred to as a star topology. As additional stars were added and the cable distances grew, the network formed an extended star topology. Originally 10BASE-T was a half-duplex protocol, but full-duplex features were added later. Ethernet came to dominate LAN technology during the explosion in its popularity in the mid-to-late 1990s.

10BASE-T also uses Manchester encoding. A 10BASE-T UTP cable has a solid conductor for each wire. The maximum cable length is 90 m (295 ft). UTP cable uses eight-pin RJ-45 connectors. Though Category 3 cable is adequate for 10BASE-T networks, new cable installations should be made with Category 5e or better. All four pairs of wires should be used either with the T568-A or T568-B cable pinout arrangement. This type of cable installation supports the use of multiple protocols without the need to rewire. Figure shows the pinout arrangement for a 10BASE-T connection. The pair that transmits data on one device is connected to the pair that receives data on the other device.

Half duplex or full duplex is a configuration choice. 10BASE-T carries 10 Mbps of traffic in half-duplex mode and 20 Mbps in full-duplex mode.

The Interactive Media Activity will help students learn the features of 10BASE-T technology.

The next page describes the wiring and architecture of 10BASE-T.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.5
10BASE-T wiring and architecture



This page explains the wiring and architecture of 10BASE-T.

A 10BASE-T link generally connects a station to a hub or switch. Hubs are multi-port repeaters and count toward the limit on repeaters between distant stations. Hubs do not divide network segments into separate collision domains. Bridges and switches divide segments into separate collision domains. The maximum distance between bridges and switches is based on media limitations.

Although hubs may be linked, it is best to avoid this arrangement. A network with linked hubs may exceed the limit for maximum delay between stations. Multiple hubs should be arranged in hierarchical order like a tree structure. Performance is better if fewer repeaters are used between stations.

An architectural example is shown in Figure . The distance from one end of the network to the other places the architecture at its limit. The most important aspect to consider is how to keep the delay between distant stations to a minimum, regardless of the architecture and media types involved. A shorter maximum delay will provide better overall performance.

10BASE-T links can have unrepeated distances of up to 100 m (328 ft). While this may seem like a long distance, it is typically maximized when wiring an actual building. Hubs can solve the distance issue but will allow collisions to propagate. The widespread introduction of switches has made the distance limitation less important. If workstations are located within 100 m (328 ft) of a switch, the 100-m distance starts over at the switch.

The next page will describe Fast Ethernet.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.6
100-Mbps Ethernet



This page will discuss 100-Mbps Ethernet, which is also known as Fast Ethernet. The two technologies that have become important are 100BASE-TX, which is a copper UTP medium, and 100BASE-FX, which is a multimode optical fiber medium.

Three characteristics common to 100BASE-TX and 100BASE-FX are the timing parameters, the frame format, and parts of the transmission process. Note that one bit time at 100 Mbps = 10 ns = 0.01 microseconds = one 100-millionth of a second.
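The bit-time relationship holds at every Ethernet speed: one bit time is simply the reciprocal of the bit rate. A quick arithmetic sketch (not part of the curriculum) makes the scaling across the Ethernet family explicit:

```python
# Bit duration at a given Ethernet speed: bit time = 1 / (bits per second).
def bit_time_ns(mbps):
    """Return the bit duration in nanoseconds for a speed given in Mbps."""
    return 1e9 / (mbps * 1e6)

for speed in (10, 100, 1000, 10000):
    print(f"{speed} Mbps -> {bit_time_ns(speed)} ns per bit")
# 10 Mbps -> 100 ns, 100 Mbps -> 10 ns, 1000 Mbps -> 1 ns, 10000 Mbps -> 0.1 ns
```

These are the same figures quoted throughout this module: 100 ns for legacy Ethernet, 10 ns for Fast Ethernet, 1 ns for Gigabit, and 0.1 ns for 10-Gigabit Ethernet.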

The 100-Mbps frame format is the same as the 10-Mbps frame.

Fast Ethernet is ten times faster than 10BASE-T. The bits that are sent are shorter in duration and occur more frequently. These higher frequency signals are more susceptible to noise. In response to these issues, two separate encoding steps are used by 100-Mbps Ethernet. The first part of the encoding uses a technique called 4B/5B; the second part is the actual line encoding, which is specific to copper or fiber.

The next page will discuss the 100BASE-TX standard.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.7
100BASE-TX



This page will describe 100BASE-TX.

In 1995, 100BASE-TX, which uses Category 5 UTP cable, became the standard and went on to be commercially successful.

The original coaxial Ethernet used half-duplex transmission so only one device could transmit at a time. In 1997, Ethernet was expanded to include a full-duplex capability that allowed more than one PC on a network to transmit at the same time. Switches replaced hubs in many networks. These switches had full-duplex capabilities and could handle Ethernet frames quickly.

100BASE-TX uses 4B/5B encoding, which is then scrambled and converted to Multi-Level Transmit (MLT-3) encoding. Figure shows four waveform examples. The top waveform has no transition in the center of the timing window. No transition indicates a binary zero. The second waveform shows a transition in the center of the timing window. A transition represents a binary one. The third waveform shows an alternating binary sequence. The fourth waveform shows that signal changes indicate ones and horizontal lines indicate zeros.
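The transition rule described above can be sketched directly. In MLT-3 the signal cycles through three voltage levels in the repeating order 0, +1, 0, -1: a binary 1 advances to the next level in the cycle, and a binary 0 holds the current level. This is an illustration of the coding rule only, not of the scrambler that precedes it:

```python
# A sketch of MLT-3 line coding (100BASE-TX): three levels cycled as
# 0, +1, 0, -1. A 1 bit causes a transition to the next level in the
# cycle; a 0 bit leaves the level unchanged.
def mlt3_encode(bits):
    cycle = [0, 1, 0, -1]   # repeating level sequence
    idx, out = 0, []
    for bit in bits:
        if bit == 1:
            idx = (idx + 1) % 4   # transition on a binary 1
        out.append(cycle[idx])    # hold the level on a binary 0
    return out

print(mlt3_encode([1, 1, 1, 1, 0, 1]))  # [1, 0, -1, 0, 0, 1]
```

Because a full cycle of the waveform takes four 1 bits, the fundamental frequency on the wire is a quarter of the bit rate, which is what lets 125-Mbaud signaling fit within Category 5 cable's bandwidth.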

Figure shows the pinout for a 100BASE-TX connection. Notice that two separate transmit and receive paths exist. This is identical to the 10BASE-T configuration.

100BASE-TX carries 100 Mbps of traffic in half-duplex mode. In full-duplex mode, 100BASE-TX can exchange 200 Mbps of traffic. The concept of full duplex will become more important as Ethernet speeds increase.

The next page will discuss the fiber optic version of Fast Ethernet.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.8
100BASE-FX



This page covers 100BASE-FX.

When copper-based Fast Ethernet was introduced, a fiber version was also desired. A fiber version could be used for backbone applications, connections between floors, buildings where copper is less desirable, and also in high-noise environments. 100BASE-FX was introduced to satisfy this desire. However, 100BASE-FX was never adopted successfully. This was due to the introduction of Gigabit Ethernet copper and fiber standards. Gigabit Ethernet standards are now the dominant technology for backbone installations, high-speed cross-connects, and general infrastructure needs.

The timing, frame format, and transmission are the same in both copper and fiber versions of 100-Mbps Fast Ethernet. 100BASE-FX, however, uses NRZI encoding, which is shown in Figure . The top waveform has no transition, which indicates a binary 0. In the second waveform, the transition in the center of the timing window indicates a binary 1. In the third waveform, there is an alternating binary sequence. In the third and fourth waveforms it is more obvious that no transition indicates a binary zero and the presence of a transition is a binary one.
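The NRZI rule in the waveforms above — transition for a 1, no transition for a 0 — can be sketched as follows. This is an illustration of the coding rule, assuming the line starts at the low level:

```python
# A sketch of NRZI (non-return-to-zero inverted) as used on 100BASE-FX:
# a binary 1 inverts the current signal level (a transition); a binary 0
# holds the level (no transition).
def nrzi_encode(bits, level=0):
    out = []
    for bit in bits:
        if bit == 1:
            level ^= 1      # transition: flip between 0 and 1
        out.append(level)   # binary 0: level is simply held
    return out

print(nrzi_encode([1, 0, 1, 1, 0]))  # [1, 1, 0, 1, 1]
```

Note the contrast with the NRZ coding used by Gigabit fiber later in this module: NRZI is edge driven (meaning is carried by whether the level changes), while NRZ is level driven.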

Figure summarizes a 100BASE-FX link and pinouts. A fiber pair with either ST or SC connectors is most commonly used.

The separate Transmit (Tx) and Receive (Rx) paths in 100BASE-FX optical fiber allow for 200-Mbps transmission.

The next page will explain the Fast Ethernet architecture.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.9
Fast Ethernet architecture



This page describes the architecture of Fast Ethernet.

Fast Ethernet links generally consist of a connection between a station and a hub or switch. Hubs are considered multi-port repeaters and switches are considered multi-port bridges. These are subject to the 100-m (328 ft) UTP media distance limitation.

A Class I repeater may introduce up to 140 bit-times latency. Any repeater that changes between one Ethernet implementation and another is a Class I repeater. A Class II repeater is restricted to smaller timing delays, 92 bit times, because it immediately repeats the incoming signal to all other ports without a translation process. To achieve a smaller timing delay, Class II repeaters can only connect to segment types that use the same signaling technique.

As with 10-Mbps versions, it is possible to modify some of the architecture rules for 100-Mbps versions. Modification of the architecture rules is strongly discouraged for 100BASE-TX. 100BASE-TX cable between Class II repeaters may not exceed 5 m (16 ft). Links that operate in half duplex are not uncommon in Fast Ethernet. However, half duplex is undesirable because the signaling scheme is inherently full duplex.

Figure shows architecture configuration cable distances. 100BASE-TX links can have unrepeated distances up to 100 m. Switches have made this distance limitation less important. Most Fast Ethernet implementations are switched.

This page concludes this lesson. The next lesson will discuss Gigabit and 10-Gigabit Ethernet. The first page describes 1000-Mbps Ethernet standards.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.1
1000-Mbps Ethernet



This page covers the 1000-Mbps Ethernet or Gigabit Ethernet standards. These standards specify both fiber and copper media for data transmissions. The 1000BASE-T standard, IEEE 802.3ab, uses Category 5, or higher, balanced copper cabling. The 1000BASE-X standard, IEEE 802.3z, specifies 1 Gbps full duplex over optical fiber.

1000BASE-TX, 1000BASE-SX, and 1000BASE-LX use the same timing parameters, as shown in Figure . They use a bit time of 1 ns (0.000000001 seconds, or one billionth of a second). The Gigabit Ethernet frame has the same format as is used for 10 and 100-Mbps Ethernet. Some implementations of Gigabit Ethernet may use different processes to convert frames to bits on the cable. Figure shows the Ethernet frame fields.

The differences between standard Ethernet, Fast Ethernet and Gigabit Ethernet occur at the physical layer. Due to the increased speeds of these newer standards, the shorter duration bit times require special considerations. Since the bits are introduced on the medium for a shorter duration and more often, timing is critical. This high-speed transmission requires higher frequencies. This causes the bits to be more susceptible to noise on copper media.

These issues require Gigabit Ethernet to use two separate encoding steps. Data transmission is more efficient when codes are used to represent the binary bit stream. The encoded data provides synchronization, efficient usage of bandwidth, and improved signal-to-noise ratio characteristics.

At the physical layer, the bit patterns from the MAC layer are converted into symbols. The symbols may also be control information such as start frame, end frame, and idle conditions on a link. The frame is coded into control symbols and data symbols to increase network throughput.

Fiber-based Gigabit Ethernet, or 1000BASE-X, uses 8B/10B encoding, which is similar to the 4B/5B concept. This is followed by the simple nonreturn to zero (NRZ) line encoding of light on optical fiber. This encoding process is possible because the fiber medium can carry higher bandwidth signals.

The next page will discuss the 1000BASE-T standard.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.2
1000BASE-T



This page will describe 1000BASE-T.

As Fast Ethernet was installed to increase bandwidth to workstations, this began to create bottlenecks upstream in the network. The 1000BASE-T standard, which is IEEE 802.3ab, was developed to provide additional bandwidth to help alleviate these bottlenecks. It provided more throughput for devices such as intra-building backbones, inter-switch links, server farms, and other wiring closet applications as well as connections for high-end workstations. 1000BASE-T was designed to function over Category 5 copper cable that passes the Category 5e test. Most installed Category 5 cable can pass the Category 5e certification if properly terminated. It is important for the 1000BASE-T standard to be interoperable with 10BASE-T and 100BASE-TX.

Since Category 5e cable can reliably carry up to 125 Mbps of traffic, 1000 Mbps or 1 Gigabit of bandwidth was a design challenge. The first step to accomplish 1000BASE-T is to use all four pairs of wires instead of the traditional two pairs of wires used by 10BASE-T and 100BASE-TX. This requires complex circuitry that allows full-duplex transmissions on the same wire pair. This provides 250 Mbps per pair. With all four-wire pairs, this provides the desired 1000 Mbps. Since the information travels simultaneously across the four paths, the circuitry has to divide frames at the transmitter and reassemble them at the receiver.
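The bandwidth accounting in the paragraph above reduces to simple arithmetic, sketched here as a check (an illustration, not part of the curriculum):

```python
# The 1000BASE-T bandwidth budget as described in the text: four wire
# pairs, each carrying 250 Mbps, with full-duplex transmission on
# every pair so the same rate flows in both directions at once.
pairs = 4
mbps_per_pair = 250                   # per pair, in each direction
total = pairs * mbps_per_pair         # aggregate in one direction

print(total)                          # 1000 (Mbps each way, simultaneously)
```

The same numbers can be read the other way, as in this module's summary: 125 Mbps of line rate per pair, times four pairs, is 500 Mbps, which full-duplex operation on every pair doubles to 1000 Mbps.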

1000BASE-T uses 4D-PAM5 line encoding on Category 5e, or better, UTP. That means the transmission and reception of data happens in both directions on the same wire at the same time. As might be expected, this results in a permanent collision on the wire pairs. These collisions result in complex voltage patterns. With the complex integrated circuits using techniques such as echo cancellation, Layer 1 Forward Error Correction (FEC), and prudent selection of voltage levels, the system achieves the 1-Gigabit throughput.

In idle periods there are nine voltage levels found on the cable, and during data transmission periods there are 17 voltage levels found on the cable. With this large number of states and the effects of noise, the signal on the wire looks more analog than digital. Like analog, the system is more susceptible to noise due to cable and termination problems.

The data from the sending station is carefully divided into four parallel streams, encoded, transmitted and detected in parallel, and then reassembled into one received bit stream. Figure represents the simultaneous full duplex on four-wire pairs. 1000BASE-T supports both half-duplex as well as full-duplex operation. The use of full-duplex 1000BASE-T is widespread.

The next page will introduce 1000BASE-SX and LX.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.3
1000BASE-SX and LX



This page will discuss single-mode and multimode optical fiber.

The IEEE 802.3 standard recommends that Gigabit Ethernet over fiber be the preferred backbone technology.

The timing, frame format, and transmission are common to all versions of 1000 Mbps. Two signal-encoding schemes are defined at the physical layer. The 8B/10B scheme is used for optical fiber and shielded copper media, and the pulse amplitude modulation 5 (PAM5) is used for UTP.

1000BASE-X uses 8B/10B encoding converted to non-return to zero (NRZ) line encoding. NRZ encoding relies on the signal level found in the timing window to determine the binary value for that bit period. Unlike most of the other encoding schemes described, this encoding system is level driven instead of edge driven. That is the determination of whether a bit is a zero or a one is made by the level of the signal rather than when the signal changes levels.
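The level-driven nature of NRZ described above can be sketched as a decoder (a minimal illustration): the receiver samples the level in each timing window and reads it directly as the bit value, with no dependence on transitions.

```python
# A sketch of level-driven NRZ decoding as used after 8B/10B on
# 1000BASE-X fiber: the bit value IS the sampled signal level
# (high power = 1, low power = 0), unlike edge-driven schemes.
def nrz_decode(levels):
    """Map sampled signal levels in each bit period directly to bits."""
    return [1 if lvl == "high" else 0 for lvl in levels]

print(nrz_decode(["high", "high", "low", "high"]))  # [1, 1, 0, 1]
```

Plain NRZ has no self-clocking transitions of its own; it is workable here only because the preceding 8B/10B coding guarantees enough transitions in the line stream for the receiver to stay synchronized.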

The NRZ signals are then pulsed into the fiber using either short-wavelength or long-wavelength light sources. The short-wavelength uses an 850 nm laser or LED source in multimode optical fiber (1000BASE-SX). It is the lower-cost of the options but has shorter distances. The long-wavelength 1310 nm laser source uses either single-mode or multimode optical fiber (1000BASE-LX). Laser sources used with single-mode fiber can achieve distances of up to 5000 meters. Because of the length of time to completely turn the LED or laser on and off each time, the light is pulsed using low and high power. A logic zero is represented by low power, and a logic one by high power.

The Media Access Control method treats the link as point-to-point. Since separate fibers are used for transmitting (Tx) and receiving (Rx) the connection is inherently full duplex. Gigabit Ethernet permits only a single repeater between two stations. Figure is a 1000BASE Ethernet media comparison chart.

The next page describes the architecture of Gigabit Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.4
Gigabit Ethernet architecture



This page will discuss the architecture of Gigabit Ethernet.

Full-duplex links are limited only by the medium, not by the round-trip delay. Since most Gigabit Ethernet is switched, the values in Figures and are the practical limits between devices. Daisy-chaining, star, and extended star topologies are all allowed. The issue then becomes one of logical topology and data flow, not timing or distance limitations.

A 1000BASE-T UTP cable is the same as 10BASE-T and 100BASE-TX cable, except that link performance must meet the higher quality Category 5e or ISO Class D (2000) requirements.

Modification of the architecture rules is strongly discouraged for 1000BASE-T. At 100 meters, 1000BASE-T is operating close to the edge of the ability of the hardware to recover the transmitted signal. Any cabling problems or environmental noise could render an otherwise compliant cable inoperable even at distances that are within the specification.

It is recommended that all links between a station and a hub or switch be configured for Auto-Negotiation to permit the highest common performance. This will avoid accidental misconfiguration of the other required parameters for proper Gigabit Ethernet operation.

The next page will discuss 10-Gigabit Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.5
10-Gigabit Ethernet



This page will describe 10-Gigabit Ethernet and compare it to other versions of Ethernet.

The IEEE 802.3 standard was amended, as 802.3ae, to include 10-Gbps full-duplex transmission over fiber optic cable. The basic similarities between 802.3ae and 802.3, the original Ethernet standard, are remarkable. This 10-Gigabit Ethernet (10GbE) is evolving for not only LANs, but also MANs and WANs.

With the frame format and other Ethernet Layer 2 specifications compatible with previous standards, 10GbE can provide increased bandwidth needs that are interoperable with existing network infrastructure.

A major conceptual change for Ethernet is emerging with 10GbE. Ethernet is traditionally thought of as a LAN technology, but 10GbE physical layer standards allow both an extension in distance to 40 km over single-mode fiber and compatibility with synchronous optical network (SONET) and synchronous digital hierarchy (SDH) networks. Operation at 40 km distance makes 10GbE a viable MAN technology. Compatibility with SONET/SDH networks operating up to OC-192 speeds (9.584640 Gbps) make 10GbE a viable WAN technology. 10GbE may also compete with ATM for certain applications.

To summarize, how does 10GbE compare to other varieties of Ethernet?

* Frame format is the same, which allows interoperability among all varieties of Ethernet — legacy, Fast, Gigabit, and 10-Gigabit — with no reframing or protocol conversions.
* Bit time is now 0.1 nanoseconds. All other time variables scale accordingly.
* Since only full-duplex fiber connections are used, CSMA/CD is not necessary.
* The IEEE 802.3 sublayers within OSI Layers 1 and 2 are mostly preserved, with a few additions to accommodate 40 km fiber links and interoperability with SONET/SDH technologies.
* Flexible, efficient, reliable, relatively low cost end-to-end Ethernet networks become possible.
* TCP/IP can run over LANs, MANs, and WANs with one Layer 2 transport method.

The basic standard governing CSMA/CD is IEEE 802.3. An IEEE 802.3 supplement, entitled 802.3ae, governs the 10GbE family. As is typical for new technologies, a variety of implementations are being considered, including:

* 10GBASE-SR – Intended for short distances over already-installed multimode fiber, supports a range between 26 m to 82 m
* 10GBASE-LX4 – Uses wavelength division multiplexing (WDM), supports 240 m to 300 m over already-installed multimode fiber and 10 km over single-mode fiber
* 10GBASE-LR and 10GBASE-ER – Support 10 km and 40 km over single-mode fiber
* 10GBASE-SW, 10GBASE-LW, and 10GBASE-EW – Known collectively as 10GBASE-W, intended to work with OC-192 synchronous transport module SONET/SDH WAN equipment

The IEEE 802.3ae Task Force and the 10-Gigabit Ethernet Alliance (10GEA) are working to standardize these emerging technologies.

10-Gbps Ethernet (IEEE 802.3ae) was standardized in June 2002. It is a full-duplex protocol that uses only optical fiber as a transmission medium. The maximum transmission distances depend on the type of fiber being used. When using single-mode fiber as the transmission medium, the maximum transmission distance is 40 kilometers (25 miles). Some discussions between IEEE members have begun that suggest the possibility of standards for 40, 80, and even 100-Gbps Ethernet.

The next page will discuss the architecture of 10-Gigabit Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.6
10-Gigabit Ethernet architectures



This page describes the 10-Gigabit Ethernet architectures.

As with the development of Gigabit Ethernet, the increase in speed comes with extra requirements. The shorter bit time duration because of increased speed requires special considerations. For 10GbE transmissions, each data bit duration is 0.1 nanosecond. This means 1,000 10GbE data bits fit in the same bit time as one data bit in a 10-Mbps Ethernet data stream. Because of the short duration of the 10GbE data bit, it is often difficult to separate a data bit from noise. 10GbE data transmissions rely on exact bit timing to separate the data from the effects of noise on the physical layer. This is the purpose of synchronization.

In response to these issues of synchronization, bandwidth, and Signal-to-Noise Ratio, 10-Gigabit Ethernet uses two separate encoding steps. By using codes to represent the user data, transmission is made more efficient. The encoded data provides synchronization, efficient usage of bandwidth, and improved Signal-to-Noise Ratio characteristics.

Complex serial bit streams are used for all versions of 10GbE except for 10GBASE-LX4, which uses Wide Wavelength Division Multiplex (WWDM) to multiplex four simultaneous bit streams as four wavelengths of light launched into the fiber at one time.

Figure represents the particular case of using four laser sources of slightly different wavelengths. Upon receipt from the medium, the optical signal stream is demultiplexed into four separate optical signal streams. The four optical signal streams are then converted back into four electronic bit streams as they travel in approximately the reverse process back up through the sublayers to the MAC layer.

Currently, most 10GbE products are in the form of modules, or line cards, for addition to high-end switches and routers. As the 10GbE technologies evolve, an increasing diversity of signaling components can be expected. As optical technologies evolve, improved transmitters and receivers will be incorporated into these products, taking further advantage of modularity. All 10GbE varieties use optical fiber media. Fiber types include 10 µm single-mode fiber, and 50 µm and 62.5 µm multimode fibers. A range of fiber attenuation and dispersion characteristics is supported, but they limit operating distances.

Even though support is limited to fiber optic media, some of the maximum cable lengths are surprisingly short. No repeater is defined for 10-Gigabit Ethernet since half duplex is explicitly not supported.

As with 10 Mbps, 100 Mbps and 1000 Mbps versions, it is possible to modify some of the architecture rules slightly. Possible architecture adjustments are related to signal loss and distortion along the medium. Due to dispersion of the signal and other issues the light pulse becomes undecipherable beyond certain distances.

The next page will discuss the future of Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.7
Future of Ethernet



This page will teach students about the future of Ethernet.

Ethernet has gone through an evolution from Legacy —> Fast —> Gigabit —> MultiGigabit technologies. While other LAN technologies are still in place (legacy installations), Ethernet dominates new LAN installations. So much so that some have referred to Ethernet as the LAN "dial tone". Ethernet is now the standard for horizontal, vertical, and inter-building connections. Recently developing versions of Ethernet are blurring the distinction between LANs, MANs, and WANs.

While 1-Gigabit Ethernet is now widely available and 10-Gigabit products are becoming more available, the IEEE and the 10-Gigabit Ethernet Alliance are working on 40, 100, or even 160 Gbps standards. The technologies that are adopted will depend on a number of factors, including the rate of maturation of the technologies and standards, the rate of adoption in the market, and cost.

Proposals for Ethernet arbitration schemes other than CSMA/CD have been made, but collisions are no longer a common problem now that the physical bus topologies of 10BASE5 and 10BASE2, and the shared hubs of 10BASE-T and 100BASE-TX, have fallen out of use. The use of UTP and optical fiber with separate Tx and Rx paths, along with the decreasing cost of switches, has made single shared-media, half-duplex connections much less important.

The future of networking media is three-fold:

1. Copper (up to 1000 Mbps, perhaps more)
2. Wireless (approaching 100 Mbps, perhaps more)
3. Optical fiber (currently at 10,000 Mbps and soon to be more)

Copper and wireless media have certain physical and practical limitations on the highest frequency signals that can be transmitted. This is not a limiting factor for optical fiber in the foreseeable future. The bandwidth limitations on optical fiber are extremely large and are not yet being threatened. In fiber systems, it is the electronics technology (such as emitters and detectors) and fiber manufacturing processes that most limit the speed. Upcoming developments in Ethernet are likely to be heavily weighted towards Laser light sources and single-mode optical fiber.

When Ethernet was slower, half duplex, subject to collisions, and governed by a "democratic" process for prioritization, it was not considered to have the Quality of Service (QoS) capabilities required to handle certain types of traffic. This included such things as IP telephony and video multicast.

The full-duplex high-speed Ethernet technologies that now dominate the market are proving to be sufficient at supporting even QoS-intensive applications. This makes the potential applications of Ethernet even wider. Ironically end-to-end QoS capability helped drive a push for ATM to the desktop and to the WAN in the mid-1990s, but now it is Ethernet, not ATM that is approaching this goal.

This page concludes this lesson. The next page will summarize the main points from the module.
Summary



This page summarizes the topics discussed in this module.

Ethernet is a technology that has increased in speed one thousand times, from 10 Mbps to 10,000 Mbps, in less than a decade. All forms of Ethernet share a similar frame structure and this leads to excellent interoperability. Most Ethernet copper connections are now switched full duplex, and the fastest copper-based Ethernet is 1000BASE-T, or Gigabit Ethernet. 10 Gigabit Ethernet and faster are exclusively optical fiber-based technologies.

10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four common features of Legacy Ethernet are timing parameters, frame format, transmission process, and a basic design rule.

Legacy Ethernet encodes data on an electrical signal. The form of encoding used in 10 Mbps systems is called Manchester encoding. Manchester encoding uses a change in voltage to represent the binary numbers zero and one. An increase or decrease in voltage during a timed period, called the bit period, determines the binary value of the bit.

In addition to a standard bit period, Ethernet standards set limits for slot time and interframe spacing. Different types of media can affect transmission timing and timing standards ensure interoperability. 10 Mbps Ethernet operates within the timing limits offered by a series of no more than five segments separated by no more than four repeaters.

A single thick coaxial cable was the first medium used for Ethernet. 10BASE2, using a thinner coax cable, was introduced in 1985. 10BASE-T, using twisted-pair copper wire, was introduced in 1990. Because it used multiple wires, 10BASE-T offered the option of full-duplex signaling. 10BASE-T carries 10 Mbps of traffic in half-duplex mode and 20 Mbps in full-duplex mode.

10BASE-T links can have unrepeated distances up to 100 m. Beyond that, network devices such as repeaters, hubs, bridges, and switches are used to extend the scope of the LAN. With the advent of switches, the 4-repeater rule is not so relevant. You can extend the LAN indefinitely by daisy-chaining switches. Each switch-to-switch connection, with a maximum length of 100 m, is essentially a point-to-point connection without the media contention or timing issues of using repeaters and hubs.

100-Mbps Ethernet, also known as Fast Ethernet, can be implemented using twisted-pair copper wire, as in 100BASE-TX, or fiber media, as in 100BASE-FX. 100 Mbps forms of Ethernet can transmit 200 Mbps in full duplex.

Because the higher frequency signals used in Fast Ethernet are more susceptible to noise, two separate encoding steps are used by 100-Mbps Ethernet to enhance signal integrity.

Gigabit Ethernet over copper wire is accomplished by the following:

* Category 5e UTP cable and careful improvements in electronics are used to boost 100 Mbps per wire pair to 125 Mbps per wire pair.
* All four wire pairs are used instead of just two. This allows 125 Mbps per wire pair, or 500 Mbps for the four wire pairs.
* Sophisticated electronics allow permanent collisions on each wire pair and run signals in full duplex, doubling the 500 Mbps to 1000 Mbps.

On Gigabit Ethernet networks bit signals occur in one tenth of the time of 100 Mbps networks and 1/100 of the time of 10 Mbps networks. With signals occurring in less time the bits become more susceptible to noise. The issue becomes how fast the network adapter or interface can change voltage levels to signal bits and still be detected reliably one hundred meters away at the receiving NIC or interface. At this speed encoding and decoding data becomes even more complex.

The fiber versions of Gigabit Ethernet, 1000BASE-SX and 1000BASE-LX offer the following advantages: noise immunity, small size, and increased unrepeated distances and bandwidth. The IEEE 802.3 standard recommends that Gigabit Ethernet over fiber be the preferred backbone technology.

CCNA module text
