Thursday, 27 September 2007
ccna module 5
Overview
Even though each LAN is unique, there are many design aspects that are common to all LANs. For example, most LANs follow the same standards and use the same components. This module presents information on elements of Ethernet LANs and common LAN devices.
There are several types of WAN connections. They range from dial-up to broadband access and differ in bandwidth, cost, and required equipment. This module presents information on the various types of WAN connections.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.
Students who complete this module should be able to perform the following tasks:
* Identify characteristics of Ethernet networks
* Identify straight-through, crossover, and rollover cables
* Describe the function, advantages, and disadvantages of repeaters, hubs, bridges, switches, and wireless network components
* Describe the function of peer-to-peer networks
* Describe the function, advantages, and disadvantages of client-server networks
* Describe and differentiate between serial, ISDN, DSL, and cable modem WAN connections
* Identify router serial ports, cables, and connectors
* Identify and describe the placement of equipment used in various WAN configurations
5.1
Cabling LANs
5.1.1
LAN physical layer
This page describes the LAN physical layer.
Various symbols are used to represent media types. Token Ring is represented by a circle, FDDI by two concentric circles, and Ethernet by a straight line. Serial connections are represented by a lightning bolt.
Each computer network can be built with many different media types. The function of media is to carry a flow of information through a LAN. Wireless LANs use the atmosphere, or space, as the medium. Other networking media confine network signals to a wire, cable, or fiber. Networking media are considered Layer 1, or physical layer, components of LANs.
Each type of media has advantages and disadvantages. These are based on the following factors:
* Cable length
* Cost
* Ease of installation
* Susceptibility to interference
Coaxial cable, optical fiber, and space can carry network signals. This module will focus on Category 5 UTP, which includes the Category 5e family of cables.
LANs can be built with many different topologies and physical media. Figure shows a subset of physical layer implementations that can be deployed to support Ethernet.
The next page explains how Ethernet is implemented in a campus environment.
5.1.2
Ethernet in the campus
This page will discuss Ethernet.
Ethernet is the most widely used LAN technology. It was first implemented by the Digital, Intel, and Xerox group (DIX), which created and implemented the first Ethernet LAN specification in 1980. That specification was used as the basis for the Institute of Electrical and Electronics Engineers (IEEE) 802.3 specification, released in 1983. IEEE later extended 802.3 through three new committees: 802.3u for Fast Ethernet, 802.3z for Gigabit Ethernet over fiber, and 802.3ab for Gigabit Ethernet over UTP.
A network may require an upgrade to one of the faster Ethernet topologies. Most Ethernet networks support speeds of 10 Mbps and 100 Mbps.
The new generation of multimedia, imaging, and database products can easily overwhelm a network that operates at traditional Ethernet speeds of 10 and 100 Mbps. Network administrators may choose to provide Gigabit Ethernet from the backbone to the end user. Installation costs for new cables and adapters can make this prohibitive.
There are several ways that Ethernet technologies can be used in a campus network:
* An Ethernet speed of 10 Mbps can be used at the user level to provide good performance. Clients or servers that require more bandwidth can use 100-Mbps Ethernet.
* Fast Ethernet is used as the link between user and network devices. It can support the combination of all traffic from each Ethernet segment.
* Fast Ethernet can be used to connect enterprise servers. This will enhance client-server performance across the campus network and help prevent bottlenecks.
* Fast Ethernet or Gigabit Ethernet should be implemented between backbone devices, based on affordability.
The media and connector requirements for an Ethernet implementation are discussed on the next page.
5.1.3
Ethernet media and connector requirements
This page provides important considerations for an Ethernet implementation. These include the media and connector requirements and the level of network performance.
The cables and connector specifications used to support Ethernet implementations are derived from the EIA/TIA standards. The categories of cabling defined for Ethernet are derived from the EIA/TIA-568 SP-2840 Commercial Building Telecommunications Wiring Standards.
Figure compares the cable and connector specifications for the most popular Ethernet implementations. It is important to note the difference in the media used for 10-Mbps Ethernet versus 100-Mbps Ethernet. Networks with a combination of 10- and 100-Mbps traffic use Category 5 UTP to support Fast Ethernet.
The next page will discuss the different connection types.
5.1.4
Connection media
This page describes the different connection types used by each physical layer implementation, as shown in Figure . The RJ-45 connector and jack are the most common. RJ-45 connectors are discussed in more detail in the next section.
The connector on a NIC may not match the media to which it needs to connect. As shown in Figure , an interface may exist for the 15-pin attachment unit interface (AUI) connector. The AUI connector allows different media to connect when used with the appropriate transceiver. A transceiver is an adapter that converts one type of connection to another. A transceiver will usually convert an AUI to an RJ-45, a coax, or a fiber optic connector. On 10BASE5 Ethernet, or Thicknet, a short cable is used to connect the AUI with a transceiver on the main cable.
The next page will discuss UTP cables.
5.1.5
UTP implementation
This page provides detailed information for a UTP implementation.
EIA/TIA specifies an RJ-45 connector for UTP cable. The letters RJ stand for registered jack and the number 45 refers to a specific wiring sequence. The RJ-45 transparent end connector shows eight colored wires. Four of the wires, T1 through T4, carry the voltage and are called tip. The other four wires, R1 through R4, are grounded and are called ring. Tip and ring are terms that originated in the early days of the telephone. Today, these terms refer to the positive and the negative wire in a pair. The wires in the first pair in a cable or a connector are designated as T1 and R1. The second pair is T2 and R2, the third is T3 and R3, and the fourth is T4 and R4.
The RJ-45 connector is the male component, which is crimped on the end of the cable. When a male connector is viewed from the front, the pin locations are numbered from 8 on the left to 1 on the right as seen in Figure .
The jack, as seen in Figure , is the female component in a network device, wall outlet, or patch panel. Figure shows the punch-down connections at the back of the jack where the Ethernet UTP cable connects.
For electrical signals to pass between the connector and the jack, the order of the wires must follow the T568A or T568B color code found in the EIA/TIA-568-B.1 standard, as shown in Figure . To determine the EIA/TIA category of cable that should be used to connect a device, refer to the documentation for that device or look for a label on the device near the jack. If no labels or documentation are available, use Category 5e or greater, since higher categories can be used in place of lower ones. Then determine whether to use a straight-through cable or a crossover cable.
If the two RJ-45 connectors of a cable are held side by side in the same orientation, the colored wires will be seen in each. If the order of the colored wires is the same at each end, then the cable is a straight-through, as seen in Figure .
In a crossover cable, the RJ-45 connectors on both ends show that some of the wires are connected to different pins on each side of the cable. Figure shows that pins 1 and 2 on one connector connect to pins 3 and 6 on the other.
Figure shows the guidelines that are used to determine the type of cable that is required to connect Cisco devices.
Use straight-through cables for the following connections:
* Switch to router
* Switch to PC or server
* Hub to PC or server
Use crossover cables for the following connections:
* Switch to switch
* Switch to hub
* Hub to hub
* Router to router
* PC to PC
* Router to PC
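The guidelines above follow a simple pattern: switches and hubs transmit on one wire pair, while routers, PCs, and servers transmit on the other, so like devices need a crossover and unlike devices need a straight-through. A small Python sketch of the rule (the device names are illustrative):

```python
# Devices that transmit on the same wire pair need a crossover cable;
# devices on opposite sides of the divide need a straight-through.
HUB_LIKE = {"switch", "hub"}          # transmit on one pair
HOST_LIKE = {"router", "pc", "server"}  # transmit on the other pair

def cable_for(device_a, device_b):
    a_hub = device_a in HUB_LIKE
    b_hub = device_b in HUB_LIKE
    # Same side of the divide -> crossover; opposite sides -> straight-through
    return "crossover" if a_hub == b_hub else "straight-through"

print(cable_for("switch", "router"))    # straight-through
print(cable_for("switch", "switch"))    # crossover
print(cable_for("router", "pc"))        # crossover
```

Checking every pairing in the two lists above against this function reproduces the guidelines exactly.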
Figure illustrates how a variety of cable types may be required in a given network. The category of UTP cable required is based on the type of Ethernet that is chosen.
The Lab Activity shows the termination process for an RJ-45 jack.
The Interactive Media Activities provide detailed views of a straight-through and crossover cable.
The next page explains how repeaters work.
5.1.6
Repeaters
This page will discuss how a repeater is used on a network.
The term repeater comes from the early days of long distance communication. A repeater was a person on one hill who would repeat the signal that was just received from the person on the previous hill. The process would repeat until the message arrived at its destination. Telegraph, telephone, microwave, and optical communications use repeaters to strengthen signals sent over long distances.
A repeater receives a signal, regenerates it, and passes it on. It can regenerate and retime network signals at the bit level to allow them to travel a longer distance on the media. Ethernet and IEEE 802.3 implement a rule, known as the 5-4-3 rule, for the number of repeaters and segments on shared access Ethernet backbones in a tree topology. The 5-4-3 rule divides the network into two types of physical segments: populated (user) segments, and unpopulated (link) segments. User segments have users' systems connected to them. Link segments are used to connect the network repeaters together. The rule mandates that between any two nodes on the network, there can only be a maximum of five segments, connected through four repeaters, or concentrators, and only three of the five segments may contain user connections.
The Ethernet protocol requires that a signal sent out over the LAN reach every part of the network within a specified length of time. The 5-4-3 rule ensures this. Each repeater that a signal goes through adds a small amount of time to the process, so the rule is designed to minimize transmission times of the signals. Too much latency on the LAN increases the number of late collisions and makes the LAN less efficient.
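The 5-4-3 rule can be expressed as a check on the path between any two nodes. In this Python sketch, the list-of-segments representation is an assumption made for illustration:

```python
def check_5_4_3(segments):
    """Check a path between two nodes against the 5-4-3 rule.

    `segments` lists each segment on the path in order, as either
    "populated" (user) or "link" (unpopulated); a repeater sits
    between each pair of adjacent segments.
    """
    repeaters = len(segments) - 1
    populated = sum(1 for s in segments if s == "populated")
    # At most 5 segments, 4 repeaters, and 3 populated segments
    return len(segments) <= 5 and repeaters <= 4 and populated <= 3

# Maximum legal path: 5 segments, 4 repeaters, 3 populated segments
path = ["populated", "link", "populated", "link", "populated"]
print(check_5_4_3(path))                  # True
print(check_5_4_3(path + ["populated"]))  # False: 6 segments
```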
The next page will discuss hubs.
5.1.7
Hubs
This page will describe the three types of hubs.
Hubs are actually multiport repeaters. The difference between hubs and repeaters is usually the number of ports that each device provides. A typical repeater usually has two ports. A hub generally has from 4 to 24 ports. Hubs are most commonly used in Ethernet 10BASE-T or 100BASE-T networks.
The use of a hub changes the network from a linear bus with each device plugged directly into the wire to a star topology. Data that arrives over the cables to a hub port is electrically repeated on all the other ports connected to the network segment.
Hubs come in three basic types:
* Passive – A passive hub serves as a physical connection point only. It does not manipulate or view the traffic that crosses it. It does not boost or clean the signal. A passive hub is used only to share the physical media. A passive hub does not need electrical power.
* Active – An active hub must be plugged into an electrical outlet because it needs power to amplify a signal before it is sent to the other ports.
* Intelligent – Intelligent hubs are sometimes called smart hubs. They function like active hubs with microprocessor chips and diagnostic capabilities. Intelligent hubs are more expensive than active hubs. They are also more useful in troubleshooting situations.
Devices attached to a hub receive all traffic that travels through the hub. If many devices are attached to the hub, collisions are more likely to occur. A collision occurs when two or more workstations send data over the network wire at the same time. All data is corrupted when this occurs. All devices that are connected to the same network segment are members of the same collision domain.
Sometimes hubs are called concentrators since they are central connection points for Ethernet LANs.
The Lab Activity will teach students about the price of different network components.
The next page discusses wireless networks.
5.1.8
Wireless
This page will explain how a wireless network can be created with much less cabling than other networks.
Wireless signals are electromagnetic waves that travel through the air. Wireless networks use radio frequency (RF), laser, infrared (IR), satellite, or microwaves to carry signals between computers without a permanent cable connection. The only permanent cabling can be to the access points for the network. Workstations within the range of the wireless network can be moved easily without the need to connect and reconnect network cables.
A common application of wireless data communication is for mobile use. Some examples of mobile use include commuters, airplanes, satellites, remote space probes, space shuttles, and space stations.
At the core of wireless communication are devices called transmitters and receivers. The transmitter converts source data to electromagnetic waves that are sent to the receiver. The receiver then converts these electromagnetic waves back into data for the destination. For two-way communication, each device requires a transmitter and a receiver. Many networking device manufacturers build the transmitter and receiver into a single unit called a transceiver or wireless network card. All devices in a WLAN must have the correct wireless network card installed.
The two most common wireless technologies used for networking are IR and RF. IR technology has its weaknesses. Workstations and digital devices must be in the line of sight of the transmitter to work correctly. An infrared-based network is practical when all the digital devices that require network connectivity are in one room. IR networking technology can be installed quickly, but the data signals can be weakened or obstructed by people who walk across the room or by moisture in the air. Newer IR technologies can operate without a direct line of sight.
RF technology allows devices to be in different rooms or buildings. The limited range of radio signals restricts the use of this kind of network. RF technology can be on single or multiple frequencies. A single radio frequency is subject to outside interference and geographic obstructions. It is also easily monitored by others, which makes the transmissions of data insecure. Spread spectrum uses multiple frequencies to increase the immunity to noise and to make it difficult for outsiders to intercept data transmissions.
Two approaches that are used to implement spread spectrum for WLAN transmissions are Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS). The technical details of how these technologies work are beyond the scope of this course.
A large LAN can be broken into smaller segments. The next page will explain how bridges are used to accomplish this.
5.1.9
Bridges
This page will explain the function of bridges in a LAN.
There are times when it is necessary to break up a large LAN into smaller and more easily managed segments. This decreases the amount of traffic on a single LAN and can extend the geographical area past what a single LAN can support. The devices that are used to connect network segments together include bridges, switches, routers, and gateways. Switches and bridges operate at the data link layer of the OSI model. The function of the bridge is to make intelligent decisions about whether or not to pass signals on to the next segment of a network.
When a bridge receives a frame on the network, the destination MAC address is looked up in the bridge table to determine whether to filter, flood, or copy the frame onto another segment. This decision process occurs as follows:
* If the destination device is on the same segment as the frame, the bridge will not send the frame onto other segments. This process is known as filtering.
* If the destination device is on a different segment, the bridge forwards the frame to the appropriate segment.
* If the destination address is unknown to the bridge, the bridge forwards the frame to all segments except the one on which it was received. This process is known as flooding.
If placed strategically, a bridge can greatly improve network performance.
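The filter/forward/flood decision above can be sketched in a few lines of Python. The table format and MAC addresses here are illustrative, not an actual bridge implementation:

```python
def bridge_decision(bridge_table, frame_dst, arrival_segment, all_segments):
    """Return the list of segments a transparent bridge sends a frame to.

    `bridge_table` maps known MAC addresses to segments, as learned
    from the source addresses of earlier frames.
    """
    dst_segment = bridge_table.get(frame_dst)
    if dst_segment is None:
        # Unknown destination: flood to every segment except the arrival one
        return [s for s in all_segments if s != arrival_segment]
    if dst_segment == arrival_segment:
        return []                  # filter: destination is on the local segment
    return [dst_segment]           # forward to the destination's segment

table = {"00:0a:95:9d:68:16": "A", "00:0a:95:9d:68:17": "B"}
print(bridge_decision(table, "00:0a:95:9d:68:17", "A", ["A", "B", "C"]))  # ['B']
print(bridge_decision(table, "00:0a:95:9d:68:16", "A", ["A", "B", "C"]))  # []
print(bridge_decision(table, "ff:ff:ff:ff:ff:ff", "A", ["A", "B", "C"]))  # ['B', 'C']
```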
The next page will describe switches.
5.1.10
Switches
This page will explain the function of switches.
A switch is sometimes described as a multiport bridge. A typical bridge may have only two ports that link two network segments. A switch can have multiple ports based on the number of network segments that need to be linked. Like bridges, switches learn information about the data frames that are received from computers on the network. Switches use this information to build tables to determine the destination of data that is sent between computers on the network.
Although there are some similarities between the two, a switch is a more sophisticated device than a bridge. A bridge determines whether the frame should be forwarded to the other network segment based on the destination MAC address. A switch has many ports with many network segments connected to them. A switch chooses the port to which the destination device or workstation is connected. Ethernet switches are popular connectivity solutions because they improve network speed, bandwidth, and performance.
Switching is a technology that alleviates congestion in Ethernet LANs. Switches reduce traffic and increase bandwidth. Switches can easily replace hubs because switches work with the cable infrastructures that are already in place. This improves performance with minimal changes to a network.
All switching equipment performs two basic operations. The first is switching data frames, the process by which a frame is received on an input medium and then transmitted to an output medium. The second is the maintenance of switching operations, in which switches build and maintain switching tables and check for loops.
Switches operate at much higher speeds than bridges and can support new functionality, such as virtual LANs.
An Ethernet switch has many benefits. One benefit is that it allows many users to communicate at the same time through the use of virtual circuits and dedicated network segments in a virtually collision-free environment. This maximizes the bandwidth available on the shared medium. Another benefit is that a switched LAN environment is very cost effective since the hardware and cables in place can be reused.
The Lab activity will help students understand the price of a LAN switch.
The next page will discuss NICs.
5.1.11
Host connectivity
This page will explain how NICs provide network connectivity.
The function of a NIC is to connect a host device to the network medium. A NIC is a printed circuit board that fits into the expansion slot on the motherboard or peripheral device of a computer. The NIC is also referred to as a network adapter. On laptop or notebook computers a NIC is the size of a credit card.
NICs are considered Layer 2 devices because each NIC carries a unique code called a MAC address. This address is used to control data communication for the host on the network. More will be learned about the MAC address later. NICs control host access to the medium.
In some cases the type of connector on the NIC does not match the type of media that needs to be connected to it. A good example is a Cisco 2500 router. This router has an AUI connector. That AUI connector needs to connect to a UTP Category 5 Ethernet cable. A transceiver is used to do this. A transceiver converts one type of signal or connector to another. For example, a transceiver can connect a 15-pin AUI interface to an RJ-45 jack. It is considered a Layer 1 device because it only works with bits and not with any address information or higher-level protocols.
NICs have no standardized symbol. It is implied that, when networking devices are attached to network media, there is a NIC or NIC-like device present. A dot on a topology map represents either a NIC interface or port, which acts like a NIC.
The next page discusses peer-to-peer networks.
5.1.12
Peer-to-peer
This page covers peer-to-peer networks.
When LAN and WAN technologies are used, many computers are interconnected to provide services to their users. To accomplish this, networked computers take on different roles or functions in relation to each other. Some types of applications require computers to function as equal partners. Other types of applications distribute their work so that one computer functions to serve a number of others in an unequal relationship.
Two computers generally use request and response protocols to communicate with each other. One computer issues a request for a service, and a second computer receives and responds to that request. The requestor acts like a client and the responder acts like a server.
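The request/response pattern can be demonstrated with a minimal TCP client and server in Python. The messages exchanged here are illustrative:

```python
import socket
import threading

def server(sock):
    # The responder (server) side: accept a connection, read the
    # request, and send back a response.
    conn, _ = sock.accept()
    request = conn.recv(1024)
    conn.sendall(b"response to " + request)
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
listener.listen(1)
threading.Thread(target=server, args=(listener,)).start()

# The requestor (client) side: issue the request and wait for the reply.
client = socket.socket()
client.connect(listener.getsockname())
client.sendall(b"GET file")
reply = client.recv(1024).decode()
print(reply)                        # response to GET file
client.close()
```

Either machine in a peer-to-peer network can run both halves of this exchange, taking the client role in one transfer and the server role in another.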
In a peer-to-peer network, networked computers act as equal partners, or peers. As peers, each computer can take on the client function or the server function. Computer A may request a file from Computer B, which then sends the file to Computer A. Computer A acts like the client and Computer B acts like the server. At a later time, Computers A and B can reverse roles.
In a peer-to-peer network, individual users control their own resources. The users may decide to share certain files with other users. The users may also require passwords before they allow others to access their resources. Since individual users make these decisions, there is no central point of control or administration in the network. In addition, individual users must back up their own systems to be able to recover from data loss in case of failures. When a computer acts as a server, the user of that machine may experience reduced performance as the machine serves the requests made by other systems.
Peer-to-peer networks are relatively easy to install and operate. No additional equipment is necessary beyond a suitable operating system installed on each computer. Since users control their own resources, no dedicated administrators are needed.
As networks grow, peer-to-peer relationships become increasingly difficult to coordinate. A peer-to-peer network works well with ten or fewer computers. Since peer-to-peer networks do not scale well, their efficiency decreases rapidly as the number of computers on the network increases. Also, individual users control access to the resources on their computers, which means security may be difficult to maintain. The client/server model of networking can be used to overcome the limitations of the peer-to-peer network.
Students will create a simple peer-to-peer network in the Lab Activity.
The next page discusses a client/server network.
5.1.13
Client/server
This page will describe a client/server environment.
In a client/server arrangement, network services are located on a dedicated computer called a server. The server responds to the requests of clients. The server is a central computer that is continuously available to respond to requests from clients for file, print, application, and other services. Most network operating systems adopt the form of a client/server relationship. Typically, desktop computers function as clients and one or more computers with additional processing power, memory, and specialized software function as servers.
Servers are designed to handle requests from many clients simultaneously. Before a client can access the server resources, the client must be identified and be authorized to use the resource. Each client is assigned an account name and password that is verified by an authentication service. The authentication service guards access to the network. With the centralization of user accounts, security, and access control, server-based networks simplify the administration of large networks.
The concentration of network resources such as files, printers, and applications on servers also makes the data easier to back up and maintain. Resources can be located on specialized, dedicated servers for easier access. Most client/server systems also include ways to enhance the network with new services that extend its usefulness.
The centralized functions of a client/server network have substantial advantages and some disadvantages. Although a centralized server enhances security, ease of access, and control, it introduces a single point of failure into the network. Without an operational server, the network cannot function at all. Servers also require a trained, expert staff member to administer and maintain them, and server systems require additional hardware and specialized software that add to the cost.
The figures summarize the advantages and disadvantages of peer-to-peer and client/server networks.
In the Lab Activities, students will build a hub-based network and a switch-based network.
This page concludes this lesson. The next lesson will discuss cabling WANs. The first page focuses on the WAN physical layer.
5.2
Cabling WANs
5.2.1
WAN physical layer
This page describes the WAN physical layer.
The physical layer implementations vary based on the distance of the equipment from each service, the speed, and the type of service. Serial connections are used to support WAN services such as dedicated leased lines that run PPP or Frame Relay. The speed of these connections ranges from 2400 bps to T1 service at 1.544 Mbps and E1 service at 2.048 Mbps.
ISDN offers dial-on-demand connections or dial backup services. An ISDN Basic Rate Interface (BRI) is composed of two 64 kbps bearer channels (B channels) for data, and one delta channel (D channel) at 16 kbps used for signaling and other link-management tasks. PPP is typically used to carry data over the B channels.
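The channel arithmetic behind these line rates, and the T1/E1 rates mentioned above, can be checked directly:

```python
# ISDN BRI: two 64 kbps B channels plus one 16 kbps D channel
bri_kbps = 2 * 64 + 16
print(bri_kbps)    # 144

# T1: 24 channels of 64 kbps plus 8 kbps of framing overhead
t1_kbps = 24 * 64 + 8
print(t1_kbps)     # 1544, i.e. 1.544 Mbps

# E1: 32 channels of 64 kbps (30 carry data; 2 carry framing and signaling)
e1_kbps = 32 * 64
print(e1_kbps)     # 2048, i.e. 2.048 Mbps
```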
As the demand for residential broadband high-speed services has increased, DSL and cable modem connections have become more popular. Typical residential DSL service can achieve T1/E1 speeds over the telephone line. Cable services use the coaxial cable TV line. A coaxial cable line provides high-speed connectivity that matches or exceeds DSL. DSL and cable modem service will be covered in more detail in a later module.
Students can identify the WAN physical layer components in the Interactive Media Activity.
The next page will describe WAN serial connections.
5.2.2
WAN serial connections
This page will discuss WAN serial connections.
For long distance communication, WANs use serial transmission. This is a process by which bits of data are sent over a single channel. Serial transmission provides reliable long distance communication and uses a specific electromagnetic or optical frequency range.
Frequencies are measured in cycles per second and expressed in Hz. Signals transmitted over voice-grade telephone lines use a bandwidth of 4 kHz. The size of this frequency range is referred to as bandwidth. In networking, bandwidth is instead a measure of the bits per second that are transmitted.
For a Cisco router, physical connectivity at the customer site is provided by one of two types of serial connections. The first type is a 60-pin connector. The second is a more compact 'smart serial' connector. The provider connector will vary depending on the type of service equipment.
If the connection is made directly to a service provider, or to a device that provides signal clocking such as a channel service unit/data service unit (CSU/DSU), the router will be data terminal equipment (DTE) and use a DTE serial cable. This is typically the case. However, there are occasions when the local router is required to provide the clocking rate and will therefore use a data communications equipment (DCE) cable. In the curriculum router labs, one of the connected routers will need to provide the clocking function, so the connection will consist of a DCE cable and a DTE cable.
The next page will discuss routers and serial connections.
5.2.3
Routers and serial connections
This page will describe how routers and serial connections are used in a WAN.
Routers are responsible for routing data packets from source to destination within the LAN, and for providing connectivity to the WAN. Within a LAN environment the router contains broadcasts, provides local address resolution services, such as ARP and RARP, and may segment the network using a subnetwork structure. In order to provide these services the router must be connected to the LAN and WAN.
In addition to determining the cable type, it is necessary to determine whether DTE or DCE connectors are required. The DTE is the endpoint of the user's device on the WAN link. The DCE is typically the point where responsibility for delivering data passes into the hands of the service provider.
When connecting directly to a service provider, or to a device such as a CSU/DSU that will perform signal clocking, the router is a DTE and needs a DTE serial cable. This is typically the case for routers. However, there are cases when the router will need to be the DCE. When performing a back-to-back router scenario in a test environment, one of the routers will be a DTE and the other will be a DCE.
When cabling routers for serial connectivity, the routers will either have fixed or modular ports. The type of port being used will affect the syntax used later to configure each interface.
Interfaces on routers with fixed serial ports are labeled for port type and port number.
Interfaces on routers with modular serial ports are labeled for port type, slot, and port number. The slot is the location of the module. To configure a port on a modular card, it is necessary to specify the interface using the syntax "port type slot number/port number". Use the label "serial 1/0", when the interface is serial, the slot number where the module is installed is slot 1, and the port that is being referenced is port 0.
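The fixed and modular naming conventions can be sketched as a small formatting helper. This is illustrative Python, not IOS configuration syntax:

```python
def interface_name(port_type, slot=None, port=0):
    """Build an interface identifier from its port type, slot, and port.

    Fixed-port routers use "type number"; modular routers use
    "type slot/port", where the slot is the location of the module.
    """
    if slot is None:
        return f"{port_type} {port}"        # fixed port: type and number
    return f"{port_type} {slot}/{port}"     # modular port: type slot/port

print(interface_name("serial", port=0))          # serial 0
print(interface_name("serial", slot=1, port=0))  # serial 1/0
```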
The first Lab Activity will require students to identify the Ethernet or Fast Ethernet interfaces on a router.
In the next two Lab Activities, students will create and troubleshoot a basic WAN.
The next page discusses routers and ISDN BRI connections.
5.2.4
Routers and ISDN BRI connections
This page will help students understand ISDN BRI connections.
With ISDN BRI, two types of interfaces may be used, BRI S/T and BRI U. Determine who is providing the Network Termination 1 (NT1) device in order to determine which interface type is needed.
An NT1 is an intermediate device located between the router and the service provider ISDN switch. The NT1 is used to connect four-wire subscriber wiring to the conventional two-wire local loop. In North America, the customer typically provides the NT1, while in the rest of the world the service provider provides the NT1 device.
It may be necessary to provide an external NT1 if the device is not already integrated into the router. Reviewing the labeling on the router interfaces is usually the easiest way to determine if the router has an integrated NT1. A BRI interface with an integrated NT1 is labeled BRI U. A BRI interface without an integrated NT1 is labeled BRI S/T. Because routers can have multiple ISDN interface types, determine which interface is needed when the router is purchased. The type of BRI interface may be determined by looking at the port label. To interconnect the ISDN BRI port to the service-provider device, use a UTP Category 5 straight-through cable.
CAUTION:
Connect the cable from an ISDN BRI port only to an ISDN jack or an ISDN switch. ISDN BRI uses voltages that can seriously damage non-ISDN devices.
The next page discusses DSL for a router.
5.2
Cabling WANs
5.2.5
Routers and DSL connections
This page describes routers and DSL connections.
The Cisco 827 ADSL router has one asymmetric digital subscriber line (ADSL) interface. To connect an ADSL line to the ADSL port on a router, do the following:
* Connect the phone cable to the ADSL port on the router.
* Connect the other end of the phone cable to the phone jack.
To connect a router for DSL service, use a phone cable with RJ-11 connectors. DSL works over standard telephone lines using pins 3 and 4 on a standard RJ-11 connector.
The next page will discuss cable connections.
5.2
Cabling WANs
5.2.6
Routers and cable connections
This page will explain how routers are connected to cable systems.
The Cisco uBR905 cable access router provides high-speed network access on the cable television system to residential and small office, home office (SOHO) subscribers. The uBR905 router has a coaxial cable, or F-connector, interface that connects directly to the cable system. Coaxial cable and an F connector are used to connect the router and cable system.
Use the following steps to connect the Cisco uBR905 cable access router to the cable system:
* Verify that the router is not connected to power.
* Locate the RF coaxial cable coming from the coaxial cable (TV) wall outlet.
* Install a cable splitter/directional coupler, if needed, to separate signals for TV and computer use. If necessary, also install a high-pass filter to prevent interference between the TV and computer signals.
* Connect the coaxial cable to the F connector of the router. Hand-tighten the connector, making sure that it is finger-tight, and then give it a 1/6 turn with a wrench.
* Make sure that all other coaxial cable connectors, including any intermediate splitters, couplers, or ground blocks, are securely tightened from the distribution tap to the Cisco uBR905 router.
CAUTION:
Do not overtighten the connector. Overtightening may break off the connector. Do not use a torque wrench because of the danger of tightening the connector more than the recommended 1/6 turn after it is finger-tight.
The next page will discuss console connections.
5.2
Cabling WANs
5.2.7
Setting up console connections
This page will explain how console connections are set up.
To configure a Cisco device initially, a management station must be connected directly to the device. For Cisco equipment, this management attachment point is called the console port. The console port allows monitoring and configuration of a Cisco hub, switch, or router.
The cable used between a terminal and a console port is a rollover cable, with RJ-45 connectors. The rollover cable, also known as a console cable, has a different pinout than the straight-through or crossover RJ-45 cables used with Ethernet or the ISDN BRI. The pinout for a rollover is as follows:
1 to 8
2 to 7
3 to 6
4 to 5
5 to 4
6 to 3
7 to 2
8 to 1
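The pinout above has a simple pattern: pin N on one end maps to pin 9 − N on the other, so the wire order is fully reversed. A minimal sketch to verify this (illustration only, not vendor tooling):

```python
# Rollover (console) cable: pin N on one end maps to pin 9 - N on the other.
rollover = {pin: 9 - pin for pin in range(1, 9)}

# A straight-through cable maps every pin to itself; a rollover never does.
assert all(rollover[pin] != pin for pin in rollover)

print(rollover)  # {1: 8, 2: 7, 3: 6, 4: 5, 5: 4, 6: 3, 7: 2, 8: 1}
```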
To set up a connection between the terminal and the Cisco console port, perform two steps. First, connect the devices using a rollover cable from the router console port to the workstation serial port. An RJ-45-to-DB-9 or an RJ-45-to-DB-25 adapter may be required for the PC or terminal. Next, configure the terminal emulation application with the following COM (serial communications) port settings: 9600 bps, 8 data bits, no parity, 1 stop bit, and no flow control.
The AUX port is used to provide out-of-band management through a modem. The AUX port must be configured by way of the console port before it can be used. The AUX port also uses the settings of 9600 bps, 8 data bits, no parity, 1 stop bit, and no flow control.
In the Lab Activity, students will establish a console connection to a router or switch.
The Interactive Media Activity provides a detailed view of a console cable.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
Ethernet is the most widely used LAN technology and can be implemented on a variety of media. Ethernet technologies provide a variety of network speeds, from 10 Mbps to Gigabit Ethernet, which can be applied to appropriate areas of a network. Media and connector requirements differ for various Ethernet implementations.
The connector on a network interface card (NIC) must match the media. A bayonet nut connector (BNC) connector is required to connect to coaxial cable. A fiber connector is required to connect to fiber media. The registered jack (RJ-45) connector used with twisted-pair wire is the most common type of connector used in LAN implementations.
When twisted-pair wire is used to connect devices, the appropriate wire sequence, or pinout, must be determined as well. A crossover cable is used to connect two similar devices, such as two PCs. A straight-through cable is used to connect different devices, such as connections between a switch and a PC. A rollover cable is used to connect a PC to the console port of a router.
Repeaters regenerate and retime network signals and allow them to travel a longer distance on the media. Hubs are multi-port repeaters. Data arriving at a hub port is electrically repeated on all the other ports connected to the same network segment, except for the port on which the data arrived. Sometimes hubs are called concentrators, because hubs often serve as a central connection point for an Ethernet LAN.
A wireless network can be created with much less cabling than other networks. The only permanent cabling might be to the access points for the network. At the core of wireless communication are devices called transmitters and receivers. The transmitter converts source data to electromagnetic (EM) waves that are passed to the receiver. The receiver then converts these electromagnetic waves back into data for the destination. The two most common wireless technologies used for networking are infrared (IR) and radio frequency (RF).
There are times when it is necessary to break up a large LAN into smaller, more easily managed segments. The devices that are used to define and connect network segments include bridges, switches, routers, and gateways.
A bridge uses the destination MAC address to determine whether to filter, flood, or copy the frame onto another segment. If placed strategically, a bridge can greatly improve network performance.
A switch is sometimes described as a multi-port bridge. Although there are some similarities between the two, a switch is a more sophisticated device than a bridge. Switches operate at much higher speeds than bridges and can support new functionality, such as virtual LANs.
Routers are responsible for routing data packets from source to destination within the LAN, and for providing connectivity to the WAN. Within a LAN environment the router controls broadcasts, provides local address resolution services, such as ARP and RARP, and may segment the network using a subnetwork structure.
Computers typically communicate with each other by using request/response protocols. One computer issues a request for a service, and a second computer receives and responds to that request. In a peer-to-peer network, networked computers act as equal partners, or peers. As peers, each computer can take on the client function or the server function. In a client/server arrangement, network services are located on a dedicated computer called a server. The server responds to the requests of clients.
WAN connection types include high-speed serial links, ISDN, DSL, and cable modems. Each of these requires specific media and connectors. To interconnect the ISDN BRI port to the service-provider device, a UTP Category 5 straight-through cable with RJ-45 connectors is used. A phone cable and an RJ-11 connector are used to connect a router for DSL service. Coaxial cable and an F connector are used to connect a router for cable service.
In addition to the connection type, it is necessary to determine whether DTE or DCE connectors are required on internetworking devices. The DTE is the endpoint of the user's private network on the WAN link. The DCE is typically the point where responsibility for delivering data passes to the service provider. When connecting directly to a service provider, or to a device such as a CSU/DSU that will perform signal clocking, the router is a DTE and needs a DTE serial cable. This is typically the case for routers. However, there are cases when the router will need to be the DCE.
ccna module 5 text
Each type of media has advantages and disadvantages. These are based on the following factors:
* Cable length
* Cost
* Ease of installation
* Susceptibility to interference
Coaxial cable, optical fiber, and space can carry network signals. This module will focus on Category 5 UTP, which includes the Category 5e family of cables.
Many topologies support LANs, as well as many different physical media. Figure shows a subset of physical layer implementations that can be deployed to support Ethernet.
The next page explains how Ethernet is implemented in a campus environment.
5.1
Cabling LANs
5.1.2
Ethernet in the campus
This page will discuss Ethernet.
Ethernet is the most widely used LAN technology. Ethernet was first implemented by the Digital, Intel, and Xerox group (DIX). DIX created and implemented the first Ethernet LAN specification, released in 1980, which was used as the basis for the Institute of Electrical and Electronics Engineers (IEEE) 802.3 specification. IEEE later extended 802.3 with supplements known as 802.3u for Fast Ethernet, 802.3z for Gigabit Ethernet over fiber, and 802.3ab for Gigabit Ethernet over UTP.
A network may require an upgrade to one of the faster Ethernet topologies. Most Ethernet networks support speeds of 10 Mbps and 100 Mbps.
The new generation of multimedia, imaging, and database products can easily overwhelm a network that operates at traditional Ethernet speeds of 10 and 100 Mbps. Network administrators may choose to provide Gigabit Ethernet from the backbone to the end user, although the installation costs of new cables and adapters can make this prohibitively expensive.
There are several ways that Ethernet technologies can be used in a campus network:
* An Ethernet speed of 10 Mbps can be used at the user level to provide good performance. Clients or servers that require more bandwidth can use 100-Mbps Ethernet.
* Fast Ethernet can be used as the link between user-level and network devices. It can support the aggregate traffic from each Ethernet segment.
* Fast Ethernet can be used to connect enterprise servers. This will enhance client-server performance across the campus network and help prevent bottlenecks.
* Fast Ethernet or Gigabit Ethernet should be implemented between backbone devices, based on affordability.
The media and connector requirements for an Ethernet implementation are discussed on the next page.
5.1
Cabling LANs
5.1.3
Ethernet media and connector requirements
This page provides important considerations for an Ethernet implementation. These include the media and connector requirements and the level of network performance.
The cables and connector specifications used to support Ethernet implementations are derived from the EIA/TIA standards. The categories of cabling defined for Ethernet are derived from the EIA/TIA-568 SP-2840 Commercial Building Telecommunications Wiring Standards.
Figure compares the cable and connector specifications for the most popular Ethernet implementations. It is important to note the difference in the media used for 10-Mbps Ethernet versus 100-Mbps Ethernet. Networks with a combination of 10- and 100-Mbps traffic use Category 5 UTP to support Fast Ethernet.
The next page will discuss the different connection types.
5.1
Cabling LANs
5.1.4
Connection media
This page describes the different connection types used by each physical layer implementation, as shown in Figure . The RJ-45 connector and jack are the most common. RJ-45 connectors are discussed in more detail in the next section.
The connector on a NIC may not match the media to which it needs to connect. As shown in Figure , an interface may exist for the 15-pin attachment unit interface (AUI) connector. The AUI connector allows different media to connect when used with the appropriate transceiver. A transceiver is an adapter that converts one type of connection to another. A transceiver will usually convert an AUI to an RJ-45, a coax, or a fiber optic connector. On 10BASE5 Ethernet, or Thicknet, a short cable is used to connect the AUI with a transceiver on the main cable.
The next page will discuss UTP cables.
5.1
Cabling LANs
5.1.5
UTP implementation
This page provides detailed information for a UTP implementation.
EIA/TIA specifies an RJ-45 connector for UTP cable. The letters RJ stand for registered jack and the number 45 refers to a specific wiring sequence. The RJ-45 transparent end connector shows eight colored wires. Four of the wires, T1 through T4, carry the voltage and are called tip. The other four wires, R1 through R4, are grounded and are called ring. Tip and ring are terms that originated in the early days of the telephone. Today, these terms refer to the positive and the negative wire in a pair. The wires in the first pair in a cable or a connector are designated as T1 and R1. The second pair is T2 and R2, the third is T3 and R3, and the fourth is T4 and R4.
The RJ-45 connector is the male component, which is crimped on the end of the cable. When a male connector is viewed from the front, the pin locations are numbered from 8 on the left to 1 on the right as seen in Figure .
The jack, as seen in Figure , is the female component in a network device, wall outlet, or patch panel. Figure shows the punch-down connections at the back of the jack where the Ethernet UTP cable connects.
For current to flow between the connector and the jack, the order of the wires must follow the T568A or T568B color code found in the EIA/TIA-568-B.1 standard, as shown in Figure . To determine the EIA/TIA category of cable that should be used to connect a device, refer to the documentation for that device or look for a label on the device near the jack. If no labels or documentation are available, use Category 5e or greater, since higher categories can be used in place of lower ones. Then determine whether to use a straight-through cable or a crossover cable.
If the two RJ-45 connectors of a cable are held side by side in the same orientation, the colored wires will be seen in each. If the order of the colored wires is the same at each end, then the cable is a straight-through, as seen in Figure .
In a crossover cable, the RJ-45 connectors on both ends show that some of the wires are connected to different pins on each side of the cable. Figure shows that pins 1 and 2 on one connector connect to pins 3 and 6 on the other.
Figure shows the guidelines that are used to determine the type of cable that is required to connect Cisco devices.
Use straight-through cables for the following connections:
* Switch to router
* Switch to PC or server
* Hub to PC or server
Use crossover cables for the following connections:
* Switch to switch
* Switch to hub
* Hub to hub
* Router to router
* PC to PC
* Router to PC
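The guidelines above reduce to one rule: devices in the same group (both switch/hub, or both router/PC/server) need a crossover cable, while devices in different groups need a straight-through cable. A minimal sketch of that rule, with the device groupings assumed from the lists above:

```python
# Devices fall into two groups: switch-like ports (switches, hubs) and
# host-like ports (routers, PCs, servers). Like-to-like connections need
# a crossover cable; unlike connections need a straight-through cable.
SWITCH_LIKE = {"switch", "hub"}
HOST_LIKE = {"router", "pc", "server"}

def cable_type(device_a, device_b):
    group = lambda d: "switch-like" if d in SWITCH_LIKE else "host-like"
    if group(device_a) == group(device_b):
        return "crossover"
    return "straight-through"

print(cable_type("switch", "router"))  # straight-through
print(cable_type("router", "pc"))      # crossover
print(cable_type("hub", "hub"))        # crossover
```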
Figure illustrates how a variety of cable types may be required in a given network. The category of UTP cable required is based on the type of Ethernet that is chosen.
The Lab Activity shows the termination process for an RJ-45 jack.
The Interactive Media Activities provide detailed views of a straight-through and crossover cable.
The next page explains how repeaters work.
5.1
Cabling LANs
5.1.6
Repeaters
This page will discuss how a repeater is used on a network.
The term repeater comes from the early days of long distance communication. A repeater was a person on one hill who would repeat the signal that was just received from the person on the previous hill. The process would repeat until the message arrived at its destination. Telegraph, telephone, microwave, and optical communications use repeaters to strengthen signals sent over long distances.
A repeater receives a signal, regenerates it, and passes it on. It can regenerate and retime network signals at the bit level to allow them to travel a longer distance on the media. Ethernet and IEEE 802.3 implement a rule, known as the 5-4-3 rule, for the number of repeaters and segments on shared access Ethernet backbones in a tree topology. The 5-4-3 rule divides the network into two types of physical segments: populated (user) segments, and unpopulated (link) segments. User segments have users' systems connected to them. Link segments are used to connect the network repeaters together. The rule mandates that between any two nodes on the network, there can only be a maximum of five segments, connected through four repeaters, or concentrators, and only three of the five segments may contain user connections.
The Ethernet protocol requires that a signal sent out over the LAN reach every part of the network within a specified length of time. The 5-4-3 rule ensures this. Each repeater that a signal goes through adds a small amount of time to the process, so the rule is designed to minimize transmission times of the signals. Too much latency on the LAN increases the number of late collisions and makes the LAN less efficient.
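The 5-4-3 constraints described above can be expressed as a short check over the ordered segments between two nodes. This is a sketch for illustration; the function name and input format are hypothetical.

```python
def check_5_4_3(segments):
    """Check a repeater path against the Ethernet 5-4-3 rule.

    `segments` is the ordered list of segments between two nodes, each
    marked "populated" (has user connections) or "link" (repeater-to-
    repeater only). The repeater count is one less than the segment count.
    """
    repeaters = len(segments) - 1
    populated = sum(1 for s in segments if s == "populated")
    # At most 5 segments, 4 repeaters, and 3 populated segments.
    return len(segments) <= 5 and repeaters <= 4 and populated <= 3

print(check_5_4_3(["populated", "link", "populated", "link", "populated"]))  # True
print(check_5_4_3(["populated"] * 5))  # False: five populated segments
```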
The next page will discuss hubs.
5.1
Cabling LANs
5.1.7
Hubs
This page will describe the three types of hubs.
Hubs are actually multiport repeaters. The difference between hubs and repeaters is usually the number of ports that each device provides. A typical repeater usually has two ports. A hub generally has from 4 to 24 ports. Hubs are most commonly used in Ethernet 10BASE-T or 100BASE-T networks.
The use of a hub changes the network from a linear bus with each device plugged directly into the wire to a star topology. Data that arrives over the cables to a hub port is electrically repeated on all the other ports connected to the network segment.
Hubs come in three basic types:
* Passive – A passive hub serves as a physical connection point only. It does not manipulate or view the traffic that crosses it. It does not boost or clean the signal. A passive hub is used only to share the physical media. A passive hub does not need electrical power.
* Active – An active hub must be plugged into an electrical outlet because it needs power to amplify a signal before it is sent to the other ports.
* Intelligent – Intelligent hubs are sometimes called smart hubs. They function like active hubs with microprocessor chips and diagnostic capabilities. Intelligent hubs are more expensive than active hubs. They are also more useful in troubleshooting situations.
Devices attached to a hub receive all traffic that travels through the hub. If many devices are attached to the hub, collisions are more likely to occur. A collision occurs when two or more workstations send data over the network wire at the same time. All data is corrupted when this occurs. All devices that are connected to the same network segment are members of the same collision domain.
Sometimes hubs are called concentrators since they are central connection points for Ethernet LANs.
The Lab Activity will teach students about the price of different network components.
The next page discusses wireless networks.
5.1
Cabling LANs
5.1.8
Wireless
This page will explain how a wireless network can be created with much less cabling than other networks.
Wireless signals are electromagnetic waves that travel through the air. Wireless networks use radio frequency (RF), laser, infrared (IR), satellite, or microwaves to carry signals between computers without a permanent cable connection. The only permanent cabling might be to the access points for the network. Workstations within the range of the wireless network can be moved easily without the need to connect and reconnect network cables.
A common application of wireless data communication is for mobile use. Some examples of mobile use include commuters, airplanes, satellites, remote space probes, space shuttles, and space stations.
At the core of wireless communication are devices called transmitters and receivers. The transmitter converts source data to electromagnetic waves that are sent to the receiver. The receiver then converts these electromagnetic waves back into data for the destination. For two-way communication, each device requires a transmitter and a receiver. Many networking device manufacturers build the transmitter and receiver into a single unit called a transceiver or wireless network card. All devices in a WLAN must have the correct wireless network card installed.
The two most common wireless technologies used for networking are IR and RF. IR technology has its weaknesses. Workstations and digital devices must be in the line of sight of the transmitter to work correctly. An infrared-based network can be used when all the digital devices that require network connectivity are in one room. IR networking technology can be installed quickly. However, the data signals can be weakened or obstructed by people who walk across the room or by moisture in the air. New IR technologies will be able to work out of sight.
RF technology allows devices to be in different rooms or buildings. The limited range of radio signals restricts the use of this kind of network. RF technology can be on single or multiple frequencies. A single radio frequency is subject to outside interference and geographic obstructions. It is also easily monitored by others, which makes the transmissions of data insecure. Spread spectrum uses multiple frequencies to increase the immunity to noise and to make it difficult for outsiders to intercept data transmissions.
Two approaches that are used to implement spread spectrum for WLAN transmissions are Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS). The technical details of how these technologies work are beyond the scope of this course.
A large LAN can be broken into smaller segments. The next page will explain how bridges are used to accomplish this.
5.1
Cabling LANs
5.1.9
Bridges
This page will explain the function of bridges in a LAN.
There are times when it is necessary to break up a large LAN into smaller and more easily managed segments. This decreases the amount of traffic on a single LAN and can extend the geographical area past what a single LAN can support. The devices that are used to connect network segments together include bridges, switches, routers, and gateways. Switches and bridges operate at the data link layer of the OSI model. The function of the bridge is to make intelligent decisions about whether or not to pass signals on to the next segment of a network.
When a bridge receives a frame on the network, the destination MAC address is looked up in the bridge table to determine whether to filter, flood, or copy the frame onto another segment. This decision process occurs as follows:
* If the destination device is on the same segment as the frame, the bridge will not send the frame onto other segments. This process is known as filtering.
* If the destination device is on a different segment, the bridge forwards the frame to the appropriate segment.
* If the destination address is unknown to the bridge, the bridge forwards the frame to all segments except the one on which it was received. This process is known as flooding.
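The filter, forward, and flood decisions above can be sketched as a minimal learning bridge. This is an illustration of the decision process only, with hypothetical names and simplified MAC addresses:

```python
# A minimal learning-bridge sketch: the bridge learns which segment each
# source MAC is on, then filters, forwards, or floods each frame.
class Bridge:
    def __init__(self, segments):
        self.segments = segments
        self.table = {}            # MAC address -> segment

    def receive(self, src, dst, segment):
        """Return the list of segments the frame is sent to."""
        self.table[src] = segment  # learn the sender's location
        known = self.table.get(dst)
        if known == segment:
            return []              # filter: destination is on this segment
        if known is not None:
            return [known]         # forward to the known segment
        # flood: send to every segment except the one of arrival
        return [s for s in self.segments if s != segment]

b = Bridge(["A", "B", "C"])
print(b.receive("00:01", "00:02", "A"))  # destination unknown -> flood: ['B', 'C']
b.receive("00:02", "00:01", "B")         # bridge learns 00:02 is on segment B
print(b.receive("00:01", "00:02", "A"))  # destination known -> forward: ['B']
```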
If placed strategically, a bridge can greatly improve network performance.
The next page will describe switches.
5.1
Cabling LANs
5.1.10
Switches
This page will explain the function of switches.
A switch is sometimes described as a multiport bridge. A typical bridge may have only two ports that link two network segments. A switch can have multiple ports based on the number of network segments that need to be linked. Like bridges, switches learn information about the data frames that are received from computers on the network. Switches use this information to build tables to determine the destination of data that is sent between computers on the network.
Although there are some similarities between the two, a switch is a more sophisticated device than a bridge. A bridge determines whether the frame should be forwarded to the other network segment based on the destination MAC address. A switch has many ports with many network segments connected to them. A switch chooses the port to which the destination device or workstation is connected. Ethernet switches are popular connectivity solutions because they improve network speed, bandwidth, and performance.
Switching is a technology that alleviates congestion in Ethernet LANs. Switches reduce traffic and increase bandwidth. Switches can easily replace hubs because switches work with the cable infrastructures that are already in place. This improves performance with minimal changes to a network.
All switches perform two basic operations. The first operation is called switching data frames. This is the process by which a frame is received on an input medium and then transmitted to an output medium. The second is the maintenance of switching operations, in which switches build and maintain switching tables and check for loops.
Switches operate at much higher speeds than bridges and can support new functionality, such as virtual LANs.
An Ethernet switch has many benefits. One benefit is that it allows many users to communicate at the same time through the use of virtual circuits and dedicated network segments in a virtually collision-free environment. This maximizes the bandwidth available on the shared medium. Another benefit is that a switched LAN environment is very cost effective since the hardware and cables in place can be reused.
The Lab activity will help students understand the price of a LAN switch.
The next page will discuss NICs.
5.1
Cabling LANs
5.1.11
Host connectivity
This page will explain how NICs provide network connectivity.
The function of a NIC is to connect a host device to the network medium. A NIC is a printed circuit board that fits into the expansion slot on the motherboard or peripheral device of a computer. The NIC is also referred to as a network adapter. On laptop or notebook computers a NIC is the size of a credit card.
NICs are considered Layer 2 devices because each NIC carries a unique code called a MAC address. This address is used to control data communication for the host on the network. More will be learned about the MAC address later. NICs control host access to the medium.
In some cases the type of connector on the NIC does not match the type of media that needs to be connected to it. A good example is the Cisco 2500 router, which has an AUI connector that needs to connect to a UTP Category 5 Ethernet cable. A transceiver is used to do this. A transceiver converts one type of signal or connector to another. For example, a transceiver can connect a 15-pin AUI interface to an RJ-45 jack. It is considered a Layer 1 device because it only works with bits and not with any address information or higher-level protocols.
NICs have no standardized symbol. It is implied that, when networking devices are attached to network media, there is a NIC or NIC-like device present. A dot on a topology map represents either a NIC interface or port, which acts like a NIC.
The next page discusses peer-to-peer networks.
5.1
Cabling LANs
5.1.12
Peer-to-peer
This page covers peer-to-peer networks.
When LAN and WAN technologies are used, many computers are interconnected to provide services to their users. To accomplish this, networked computers take on different roles or functions in relation to each other. Some types of applications require computers to function as equal partners. Other types of applications distribute their work so that one computer functions to serve a number of others in an unequal relationship.
Two computers generally use request and response protocols to communicate with each other. One computer issues a request for a service, and a second computer receives and responds to that request. The requestor acts like a client and the responder acts like a server.
In a peer-to-peer network, networked computers act as equal partners, or peers. As peers, each computer can take on the client function or the server function. Computer A may request a file from Computer B, which then sends the file to Computer A. Computer A acts like the client and Computer B acts like the server. At a later time, Computers A and B can reverse roles.
In a peer-to-peer network, individual users control their own resources. The users may decide to share certain files with other users. The users may also require passwords before they allow others to access their resources. Since individual users make these decisions, there is no central point of control or administration in the network. In addition, individual users must back up their own systems to be able to recover from data loss in case of failures. When a computer acts as a server, the user of that machine may experience reduced performance as the machine serves the requests made by other systems.
Peer-to-peer networks are relatively easy to install and operate. No additional equipment is necessary beyond a suitable operating system installed on each computer. Since users control their own resources, no dedicated administrators are needed.
As networks grow, peer-to-peer relationships become increasingly difficult to coordinate. A peer-to-peer network works well with ten or fewer computers. Since peer-to-peer networks do not scale well, their efficiency decreases rapidly as the number of computers on the network increases. Also, individual users control access to the resources on their computers, which means security may be difficult to maintain. The client/server model of networking can be used to overcome the limitations of the peer-to-peer network.
Students will create a simple peer-to-peer network in the Lab Activity.
The next page discusses a client/server network.
5.1
Cabling LANs
5.1.13
Client/server
This page will describe a client/server environment.
In a client/server arrangement, network services are located on a dedicated computer called a server. The server responds to the requests of clients. The server is a central computer that is continuously available to respond to requests from clients for file, print, application, and other services. Most network operating systems adopt the form of a client/server relationship. Typically, desktop computers function as clients and one or more computers with additional processing power, memory, and specialized software function as servers.
Servers are designed to handle requests from many clients simultaneously. Before a client can access the server resources, the client must be identified and be authorized to use the resource. Each client is assigned an account name and password that is verified by an authentication service. The authentication service guards access to the network. With the centralization of user accounts, security, and access control, server-based networks simplify the administration of large networks.
The concentration of network resources such as files, printers, and applications on servers also makes it easier to back up and maintain the data. Resources can be located on specialized, dedicated servers for easier access. Most client/server systems also include ways to enhance the network with new services that extend the usefulness of the network.
The centralized functions in a client/server network have substantial advantages and some disadvantages. Although a centralized server enhances security, ease of access, and control, it introduces a single point of failure into the network. Without an operational server, the network cannot function at all. Servers also require trained, expert staff to administer and maintain them, and server systems require additional hardware and specialized software that add to the cost.
The figures summarize the advantages and disadvantages of peer-to-peer and client/server networks.
In the Lab Activities, students will build a hub-based network and a switch-based network.
This page concludes this lesson. The next lesson will discuss cabling WANs. The first page focuses on the WAN physical layer.
5.2
Cabling WANs
5.2.1
WAN physical layer
This page describes the WAN physical layer.
The physical layer implementations vary based on the distance of the equipment from each service, the speed, and the type of service. Serial connections are used to support WAN services such as dedicated leased lines that run PPP or Frame Relay. The speed of these connections ranges from 2400 bps to T1 service at 1.544 Mbps and E1 service at 2.048 Mbps.
ISDN offers dial-on-demand connections or dial backup services. An ISDN Basic Rate Interface (BRI) is composed of two 64 kbps bearer channels (B channels) for data, and one delta channel (D channel) at 16 kbps used for signaling and other link-management tasks. PPP is typically used to carry data over the B channels.
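The BRI channel layout above makes the arithmetic easy to check: two B channels carry user data and the D channel carries signaling.

```python
# ISDN BRI capacity from the channel layout described above:
# two 64 kbps bearer (B) channels plus one 16 kbps delta (D) channel.
b_channels = 2 * 64   # kbps available for user data
d_channel = 16        # kbps used for signaling and link management
total_kbps = b_channels + d_channel

print(f"BRI user data: {b_channels} kbps, total: {total_kbps} kbps")
```

The 128 kbps of user data is what PPP sees when both B channels are bonded; the D channel is not normally available for user traffic.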
As the demand for residential broadband high-speed services has increased, DSL and cable modem connections have become more popular. Typical residential DSL service can achieve T1/E1 speeds over the telephone line. Cable services use the coaxial cable TV line. A coaxial cable line provides high-speed connectivity that matches or exceeds DSL. DSL and cable modem service will be covered in more detail in a later module.
Students can identify the WAN physical layer components in the Interactive Media Activity.
The next page will describe WAN serial connections.
5.2
Cabling WANs
5.2.2
WAN serial connections
This page will discuss WAN serial connections.
For long distance communication, WANs use serial transmission. This is a process by which bits of data are sent one at a time over a single channel. Serial transmission provides reliable long distance communication while using a specific electromagnetic or optical frequency range.
Frequencies are measured in cycles per second and expressed in hertz (Hz). Signals transmitted over voice grade telephone lines use a 4 kHz frequency range. The size of the frequency range is referred to as bandwidth. In networking, bandwidth is a measure of the number of bits per second that can be transmitted.
For a Cisco router, physical connectivity at the customer site is provided by one of two types of serial connections. The first type is a 60-pin connector. The second is a more compact 'smart serial' connector. The provider connector will vary depending on the type of service equipment.
If the connection is made directly to a service provider, or a device that provides signal clocking such as a channel/data service unit (CSU/DSU), the router will be a data terminal equipment (DTE) and use a DTE serial cable. Typically this is the case. However, there are occasions where the local router is required to provide the clocking rate and therefore will use a data communications equipment (DCE) cable. In the curriculum router labs one of the connected routers will need to provide the clocking function. Therefore, the connection will consist of a DCE and a DTE cable.
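The clocking rule above can be reduced to a one-line decision. The helper below is purely illustrative (it is not a Cisco tool or API); it just encodes the rule that whichever side provides clocking is the DCE.

```python
# Hedged helper: pick the serial cable type from the clocking rule above.
# The side that provides signal clocking is the DCE; the other side is the DTE.
def serial_cable_type(router_provides_clocking: bool) -> str:
    """Return which serial cable the local router needs."""
    return "DCE" if router_provides_clocking else "DTE"

# Typical case: a CSU/DSU or the provider supplies clocking, so the
# router is the DTE and uses a DTE cable.
print(serial_cable_type(False))
# Lab case: one of two back-to-back routers must supply the clock,
# so that router uses a DCE cable.
print(serial_cable_type(True))
```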
The next page will discuss routers and serial connections.
5.2
Cabling WANs
5.2.3
Routers and serial connections
This page will describe how routers and serial connections are used in a WAN.
Routers are responsible for routing data packets from source to destination within the LAN, and for providing connectivity to the WAN. Within a LAN environment the router contains broadcasts, provides local address resolution services, such as ARP and RARP, and may segment the network using a subnetwork structure. In order to provide these services the router must be connected to the LAN and WAN.
In addition to determining the cable type, it is necessary to determine whether DTE or DCE connectors are required. The DTE is the endpoint of the user's device on the WAN link. The DCE is typically the point where responsibility for delivering data passes into the hands of the service provider.
When connecting directly to a service provider, or to a device such as a CSU/DSU that will perform signal clocking, the router is a DTE and needs a DTE serial cable. This is typically the case for routers. However, there are cases when the router will need to be the DCE. When performing a back-to-back router scenario in a test environment, one of the routers will be a DTE and the other will be a DCE.
When cabling routers for serial connectivity, the routers will either have fixed or modular ports. The type of port being used will affect the syntax used later to configure each interface.
Interfaces on routers with fixed serial ports are labeled for port type and port number.
Interfaces on routers with modular serial ports are labeled for port type, slot, and port number. The slot is the location of the module. To configure a port on a modular card, it is necessary to specify the interface using the syntax "port type slot number/port number". For example, the label "serial 1/0" refers to a serial interface on a module installed in slot 1, port 0.
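A small parser makes the "port type slot/port" syntax above concrete. This is an illustration written for this page, not Cisco software; it simply pulls the three parts out of a label such as "serial 1/0".

```python
import re

# Illustrative parser for the modular-port labeling scheme described above.
def parse_interface(label: str):
    """Split a label like 'serial 1/0' into (port type, slot, port)."""
    m = re.fullmatch(r"(\w+)\s+(\d+)/(\d+)", label)
    if not m:
        raise ValueError(f"not a modular interface label: {label!r}")
    return m.group(1), int(m.group(2)), int(m.group(3))

print(parse_interface("serial 1/0"))   # serial module in slot 1, port 0
```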
The first Lab Activity will require students to identify the Ethernet or Fast Ethernet interfaces on a router.
In the next two Lab Activities, students will create and troubleshoot a basic WAN.
The next page discusses routers and ISDN BRI connections.
5.2
Cabling WANs
5.2.4
Routers and ISDN BRI connections
This page will help students understand ISDN BRI connections.
With ISDN BRI, two types of interfaces may be used, BRI S/T and BRI U. Determine who is providing the Network Termination 1 (NT1) device in order to determine which interface type is needed.
An NT1 is an intermediate device located between the router and the service provider ISDN switch. The NT1 is used to connect four-wire subscriber wiring to the conventional two-wire local loop. In North America, the customer typically provides the NT1, while in the rest of the world the service provider provides the NT1 device.
It may be necessary to provide an external NT1 if the device is not already integrated into the router. Reviewing the labeling on the router interfaces is usually the easiest way to determine if the router has an integrated NT1. A BRI interface with an integrated NT1 is labeled BRI U. A BRI interface without an integrated NT1 is labeled BRI S/T. Because routers can have multiple ISDN interface types, determine which interface is needed when the router is purchased. The type of BRI interface may be determined by looking at the port label. To interconnect the ISDN BRI port to the service-provider device, use a UTP Category 5 straight-through cable.
CAUTION:
It is important to insert the cable running from an ISDN BRI port only to an ISDN jack or an ISDN switch. ISDN BRI uses voltages that can seriously damage non-ISDN devices.
The next page discusses DSL for a router.
5.2
Cabling WANs
5.2.5
Routers and DSL connections
This page describes routers and DSL connections.
The Cisco 827 ADSL router has one asymmetric digital subscriber line (ADSL) interface. To connect an ADSL line to the ADSL port on a router, do the following:
* Connect the phone cable to the ADSL port on the router.
* Connect the other end of the phone cable to the phone jack.
To connect a router for DSL service, use a phone cable with RJ-11 connectors. DSL works over standard telephone lines using pins 3 and 4 on a standard RJ-11 connector.
The next page will discuss cable connections.
5.2
Cabling WANs
5.2.6
Routers and cable connections
This page will explain how routers are connected to cable systems.
The Cisco uBR905 cable access router provides high-speed network access on the cable television system to residential and small office, home office (SOHO) subscribers. The uBR905 router has a coaxial cable, or F-connector, interface that connects directly to the cable system. Coaxial cable and an F connector are used to connect the router and cable system.
Use the following steps to connect the Cisco uBR905 cable access router to the cable system:
* Verify that the router is not connected to power.
* Locate the RF coaxial cable coming from the coaxial cable (TV) wall outlet.
* Install a cable splitter/directional coupler, if needed, to separate signals for TV and computer use. If necessary, also install a high-pass filter to prevent interference between the TV and computer signals.
* Connect the coaxial cable to the F connector of the router. Hand-tighten the connector, making sure that it is finger-tight, and then give it a 1/6 turn with a wrench.
* Make sure that all other coaxial cable connectors, all intermediate splitters, couplers, or ground blocks, are securely tightened from the distribution tap to the Cisco uBR905 router.
CAUTION:
Do not overtighten the connector. Overtightening may break off the connector. Do not use a torque wrench because of the danger of tightening the connector more than the recommended 1/6 turn after it is finger-tight.
The next page will discuss console connections.
5.2
Cabling WANs
5.2.7
Setting up console connections
This page will explain how console connections are set up.
To initially configure the Cisco device, a management connection must be directly connected to the device. For Cisco equipment this management attachment is called a console port. The console port allows monitoring and configuration of a Cisco hub, switch, or router.
The cable used between a terminal and a console port is a rollover cable, with RJ-45 connectors. The rollover cable, also known as a console cable, has a different pinout than the straight-through or crossover RJ-45 cables used with Ethernet or the ISDN BRI. The pinout for a rollover is as follows:
1 to 8
2 to 7
3 to 6
4 to 5
5 to 4
6 to 3
7 to 2
8 to 1
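The pinout listed above follows a simple pattern: pin n on one end connects to pin 9 - n on the other, so the wire order is completely reversed. A few lines of Python show the whole mapping:

```python
# The rollover (console) cable pinout above: pin n maps to pin 9 - n.
rollover = {pin: 9 - pin for pin in range(1, 9)}

for pin, peer in rollover.items():
    print(f"{pin} to {peer}")

# A rollover mapping is its own inverse: applying it twice returns the pin.
assert all(rollover[rollover[pin]] == pin for pin in rollover)
```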
To set up a connection between the terminal and the Cisco console port, perform two steps. First, connect the devices using a rollover cable from the router console port to the workstation serial port. An RJ-45-to-DB-9 or an RJ-45-to-DB-25 adapter may be required for the PC or terminal. Next, configure the terminal emulation application with the following COM (serial communications) port settings: 9600 bps, 8 data bits, no parity, 1 stop bit, and no flow control.
The AUX port is used to provide out-of-band management through a modem. The AUX port must be configured by way of the console port before it can be used. The AUX port also uses the settings of 9600 bps, 8 data bits, no parity, 1 stop bit, and no flow control.
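The console and AUX port settings above are often quoted in the shorthand "9600 8N1" (speed, data bits, parity, stop bits). Collecting them in one place makes the shorthand explicit; the dictionary keys here are just for illustration.

```python
# Console/AUX settings from the text above, in one place.
console_settings = {
    "speed_bps": 9600,
    "data_bits": 8,
    "parity": "N",       # none
    "stop_bits": 1,
    "flow_control": None,
}

# Build the common "9600 8N1" shorthand from the settings.
shorthand = (f"{console_settings['speed_bps']} "
             f"{console_settings['data_bits']}"
             f"{console_settings['parity']}"
             f"{console_settings['stop_bits']}")
print(shorthand)   # -> 9600 8N1
```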
In the Lab Activity, students will establish a console connection to a router or switch.
The Interactive Media Activity provides a detailed view of a console cable.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
Ethernet is the most widely used LAN technology and can be implemented on a variety of media. Ethernet technologies provide a variety of network speeds, from 10 Mbps to Gigabit Ethernet, which can be applied to appropriate areas of a network. Media and connector requirements differ for various Ethernet implementations.
The connector on a network interface card (NIC) must match the media. A bayonet nut connector (BNC) is required to connect to coaxial cable. A fiber connector is required to connect to fiber media. The registered jack (RJ-45) connector used with twisted-pair wire is the most common type of connector used in LAN implementations.
When twisted-pair wire is used to connect devices, the appropriate wire sequence, or pinout, must be determined as well. A crossover cable is used to connect two similar devices, such as two PCs. A straight-through cable is used to connect different devices, such as connections between a switch and a PC. A rollover cable is used to connect a PC to the console port of a router.
Repeaters regenerate and retime network signals and allow them to travel a longer distance on the media. Hubs are multi-port repeaters. Data arriving at a hub port is electrically repeated on all the other ports connected to the same network segment, except for the port on which the data arrived. Sometimes hubs are called concentrators, because hubs often serve as a central connection point for an Ethernet LAN.
A wireless network can be created with much less cabling than other networks. The only permanent cabling might be to the access points for the network. At the core of wireless communication are devices called transmitters and receivers. The transmitter converts source data to electromagnetic (EM) waves that are passed to the receiver. The receiver then converts these electromagnetic waves back into data for the destination. The two most common wireless technologies used for networking are infrared (IR) and radio frequency (RF).
There are times when it is necessary to break up a large LAN into smaller, more easily managed segments. The devices that are used to define and connect network segments include bridges, switches, routers, and gateways.
A bridge uses the destination MAC address to determine whether to filter, flood, or copy the frame onto another segment. If placed strategically, a bridge can greatly improve network performance.
A switch is sometimes described as a multi-port bridge. Although there are some similarities between the two, a switch is a more sophisticated device than a bridge. Switches operate at much higher speeds than bridges and can support new functionality, such as virtual LANs.
Routers are responsible for routing data packets from source to destination within the LAN, and for providing connectivity to the WAN. Within a LAN environment the router controls broadcasts, provides local address resolution services, such as ARP and RARP, and may segment the network using a subnetwork structure.
Computers typically communicate with each other by using request/response protocols. One computer issues a request for a service, and a second computer receives and responds to that request. In a peer-to-peer network, networked computers act as equal partners, or peers. As peers, each computer can take on the client function or the server function. In a client/server arrangement, network services are located on a dedicated computer called a server. The server responds to the requests of clients.
WAN connection types include high-speed serial links, ISDN, DSL, and cable modems. Each of these requires a specific media and connector. To interconnect the ISDN BRI port to the service-provider device, a UTP Category 5 straight-through cable with RJ-45 connectors is used. A phone cable and an RJ-11 connector are used to connect a router for DSL service. Coaxial cable and an F connector are used to connect a router for cable service.
In addition to the connection type, it is necessary to determine whether DTE or DCE connectors are required on internetworking devices. The DTE is the endpoint of the user's private network on the WAN link. The DCE is typically the point where responsibility for delivering data passes to the service provider. When connecting directly to a service provider, or to a device such as a CSU/DSU that will perform signal clocking, the router is a DTE and needs a DTE serial cable. This is typically the case for routers. However, there are cases when the router will need to be the DCE.
ccna module 5 text
ccna module 6 & 7
Overview
Ethernet is now the dominant LAN technology in the world. Ethernet is a family of LAN technologies that may be best understood with the OSI reference model. All LANs must deal with the basic issue of how individual stations, or nodes, are named. Ethernet specifications support different media, bandwidths, and other Layer 1 and 2 variations. However, the basic frame format and address scheme are the same for all varieties of Ethernet.
Various MAC strategies have been invented to allow multiple stations to access physical media and network devices. It is important to understand how network devices gain access to the network media before students can comprehend and troubleshoot the entire network.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.
Students who complete this module should be able to perform the following tasks:
* Describe the basics of Ethernet technology
* Explain naming rules of Ethernet technology
* Explain how Ethernet relates to the OSI model
* Describe the Ethernet framing process and frame structure
* List Ethernet frame field names and purposes
* Identify the characteristics of CSMA/CD
* Describe Ethernet timing, interframe spacing, and backoff time after a collision
* Define Ethernet errors and collisions
* Explain the concept of auto-negotiation in relation to speed and duplex
6.1
Ethernet Fundamentals
6.1.1
Introduction to Ethernet
This page provides an introduction to Ethernet. Most of the traffic on the Internet originates and ends with Ethernet connections. Since it began in the 1970s, Ethernet has evolved to meet the increased demand for high-speed LANs. When optical fiber media was introduced, Ethernet adapted to take advantage of the superior bandwidth and low error rate that fiber offers. Now the same protocol that transported data at 3 Mbps in 1973 can carry data at 10 Gbps.
The success of Ethernet is due to the following factors:
* Simplicity and ease of maintenance
* Ability to incorporate new technologies
* Reliability
* Low cost of installation and upgrade
The introduction of Gigabit Ethernet has extended the original LAN technology to distances that make Ethernet a MAN and WAN standard.
The original idea for Ethernet was to allow two or more hosts to use the same medium with no interference between the signals. This problem of multiple user access to a shared medium was studied in the early 1970s at the University of Hawaii. A system called Alohanet was developed to allow various stations on the Hawaiian Islands structured access to the shared radio frequency band in the atmosphere. This work later formed the basis for the Ethernet access method known as CSMA/CD.
The first LAN in the world was the original version of Ethernet. Robert Metcalfe and his coworkers at Xerox designed it more than thirty years ago. The first Ethernet standard was published in 1980 by a consortium of Digital Equipment Corporation, Intel, and Xerox (DIX). Metcalfe wanted Ethernet to be a shared standard from which everyone could benefit, so it was released as an open standard. The first products that were developed from the Ethernet standard were sold in the early 1980s. Ethernet transmitted at up to 10 Mbps over thick coaxial cable up to a distance of 2 kilometers (km). This type of coaxial cable was referred to as thicknet and was about the width of a small finger.
In 1985, the IEEE standards committee for Local and Metropolitan Networks published standards for LANs. These standards start with the number 802. The standard for Ethernet is 802.3. The IEEE wanted to make sure that its standards were compatible with the International Standards Organization (ISO) and OSI model. To do this, the IEEE 802.3 standard had to address the needs of Layer 1 and the lower portion of Layer 2 of the OSI model. As a result, some small modifications to the original Ethernet standard were made in 802.3.
The differences between the two standards were so minor that any Ethernet NIC can transmit and receive both Ethernet and 802.3 frames. Essentially, Ethernet and IEEE 802.3 are the same standards.
The 10-Mbps bandwidth of Ethernet was more than enough for the slow PCs of the 1980s. By the early 1990s PCs became much faster, file sizes increased, and data flow bottlenecks occurred. Most were caused by the low availability of bandwidth. In 1995, IEEE announced a standard for a 100-Mbps Ethernet. This was followed by standards for Gigabit Ethernet in 1998 and 1999.
All the standards are essentially compatible with the original Ethernet standard. An Ethernet frame could leave an older coax 10-Mbps NIC in a PC, be placed onto a 10-Gbps Ethernet fiber link, and end up at a 100-Mbps NIC. As long as the frame stays on Ethernet networks it is not changed. For this reason Ethernet is considered very scalable. The bandwidth of the network could be increased many times while the Ethernet technology remains the same.
The original Ethernet standard has been amended many times to manage new media and higher transmission rates. These amendments provide standards for new technologies and maintain compatibility between Ethernet variations.
The next page explains the naming rules for the Ethernet family of networks.
6.1
Ethernet Fundamentals
6.1.2
IEEE Ethernet naming rules
This page focuses on the Ethernet naming rules developed by IEEE.
Ethernet is not one networking technology, but a family of networking technologies that includes Legacy Ethernet, Fast Ethernet, and Gigabit Ethernet. Ethernet speeds can be 10, 100, 1000, or 10,000 Mbps. The basic frame format and the IEEE sublayers of OSI Layers 1 and 2 remain consistent across all forms of Ethernet.
When Ethernet needs to be expanded to add a new medium or capability, the IEEE issues a new supplement to the 802.3 standard. The new supplements are given a one or two letter designation such as 802.3u. An abbreviated description, called an identifier, is also assigned to the supplement.
The abbreviated description consists of the following elements:
* A number that indicates the number of Mbps transmitted
* The word base to indicate that baseband signaling is used
* One or more letters of the alphabet indicating the type of medium used. For example, F = fiber optical cable and T = copper unshielded twisted pair
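The identifier scheme above can be decoded mechanically. The function below is an illustration written for this page (not an official IEEE tool); it splits names such as "10BASE-T" or "10BROAD36" into speed, signaling type, and medium.

```python
import re

# Illustrative decoder for the IEEE identifier scheme described above:
# <Mbps><BASE or BROAD><medium code>, with an optional hyphen.
def decode_identifier(name: str):
    m = re.fullmatch(r"(\d+)(BASE|BROAD)-?(\w+)", name)
    if not m:
        raise ValueError(f"unrecognized identifier: {name!r}")
    return int(m.group(1)), m.group(2), m.group(3)

print(decode_identifier("10BASE-T"))     # 10 Mbps, baseband, twisted pair
print(decode_identifier("100BASE-FX"))   # 100 Mbps, baseband, fiber
print(decode_identifier("10BROAD36"))    # 10 Mbps, broadband, thick coax
```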
Ethernet relies on baseband signaling, which uses the entire bandwidth of the transmission medium. The data signal is transmitted directly over the transmission medium.
In broadband signaling, the data signal is not placed directly on the transmission medium. Instead, an analog carrier signal is modulated by the data signal and then transmitted. Radio broadcasts and cable TV use broadband signaling. Ethernet used broadband signaling in the 10BROAD36 standard, the IEEE standard for an 802.3 Ethernet network using broadband transmission with thick coaxial cable running at 10 Mbps. 10BROAD36 is now considered obsolete.
IEEE cannot force manufacturers to fully comply with any standard. IEEE has two main objectives:
* Supply the information necessary to build devices that comply with Ethernet standards
* Promote innovation among manufacturers
Students will identify the IEEE 802 standards in the Interactive Media Activity.
The next page explains Ethernet and the OSI model.
6.1
Ethernet Fundamentals
6.1.3
Ethernet and the OSI model
This page will explain how Ethernet relates to the OSI model.
Ethernet operates in two areas of the OSI model. These are the lower half of the data link layer, which is known as the MAC sublayer, and the physical layer.
Data that moves from one Ethernet station to another often passes through a repeater. All stations in the same collision domain see traffic that passes through a repeater. A collision domain is a shared resource. Problems that originate in one part of a collision domain will usually impact the entire collision domain.
A repeater forwards traffic to all other ports. A repeater never sends traffic out the same port from which it was received. Any signal detected by a repeater will be forwarded. If the signal is degraded through attenuation or noise, the repeater will attempt to reconstruct and regenerate the signal.
To guarantee minimum bandwidth and operability, standards specify the maximum number of stations per segment, maximum segment length, and maximum number of repeaters between stations. Stations separated by bridges or routers are in different collision domains.
The figure maps a variety of Ethernet technologies to the lower half of OSI Layer 2 and all of Layer 1. Ethernet at Layer 1 involves signals, bit streams that travel on the media, components that put signals on media, and various topologies. Ethernet Layer 1 performs a key role in the communication that takes place between devices, but each of its functions has limitations. Layer 2 addresses these limitations.
Data link sublayers contribute significantly to technological compatibility and computer communications. The MAC sublayer is concerned with the physical components that will be used to communicate the information. The Logical Link Control (LLC) sublayer remains relatively independent of the physical equipment that will be used for the communication process.
While there are other varieties of Ethernet, the ones shown in the figure are the most widely used.
The Interactive Media Activity reviews the layers of the OSI model.
The next page explains the address system used by Ethernet networks.
6.1
Ethernet Fundamentals
6.1.4
Naming
This page will discuss the MAC addresses used by Ethernet networks.
An address system is required to uniquely identify computers and interfaces to allow for local delivery of frames on the Ethernet. Ethernet uses MAC addresses that are 48 bits in length and expressed as 12 hexadecimal digits. The first six hexadecimal digits, which are administered by the IEEE, identify the manufacturer or vendor. This portion of the MAC address is known as the Organizational Unique Identifier (OUI). The remaining six hexadecimal digits represent the interface serial number or another value administered by the manufacturer. MAC addresses are sometimes referred to as burned-in addresses (BIAs) because they are burned into ROM and are copied into RAM when the NIC initializes.
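The OUI/vendor split above is easy to demonstrate. The helper below is an illustration for this page; the example address is made up for the demonstration and does not refer to any particular device.

```python
# Split a MAC address into the IEEE-assigned OUI (first 6 hex digits)
# and the vendor-assigned portion (last 6), as described above.
def split_mac(mac: str):
    digits = mac.replace(":", "").replace("-", "").replace(".", "").upper()
    if len(digits) != 12:
        raise ValueError(f"expected 12 hex digits, got {mac!r}")
    return digits[:6], digits[6:]

# Example address, invented for illustration.
oui, vendor_part = split_mac("00:0c:29:4f:8e:35")
print(oui, vendor_part)   # OUI first, vendor-assigned portion second
```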
At the data link layer MAC headers and trailers are added to upper layer data. The header and trailer contain control information intended for the data link layer in the destination system. The data from upper layers is encapsulated within the data link frame, between the header and trailer, and then sent out on the network.
The NIC uses the MAC address to determine if a message should be passed on to the upper layers of the OSI model. The NIC makes this assessment without using CPU processing time, which improves communication times on an Ethernet network.
When a device sends data on an Ethernet network, it can use the destination MAC address to open a communication pathway to the other device. The source device attaches a header with the MAC address of the intended destination and sends data through the network. As this data travels along the network media the NIC in each device checks to see if the MAC address matches the physical destination address carried by the data frame. If there is no match, the NIC discards the data frame. When the data reaches the destination node, the NIC makes a copy and passes the frame up the OSI layers. On an Ethernet network, all nodes must examine the MAC header.
All devices that are connected to the Ethernet LAN have MAC addressed interfaces. This includes workstations, printers, routers, and switches.
The next page will focus on Layer 2 frames.
6.1
Ethernet Fundamentals
6.1.5
Layer 2 framing
This page will explain how frames are created at Layer 2 of the OSI model.
Encoded bit streams, or data, on physical media represent a tremendous technological accomplishment, but on their own they are not enough to make communication happen. Framing provides essential information that could not be obtained from coded bit streams alone. This information includes the following:
* Which computers are in communication with each other
* When communication between individual computers begins and when it ends
* Which errors occurred while the computers communicated
* Which computer will communicate next
Framing is the Layer 2 encapsulation process. A frame is the Layer 2 protocol data unit.
A voltage versus time graph could be used to visualize bits. However, it may be too difficult to graph address and control information for larger units of data. Another type of diagram that could be used is the frame format diagram, which is based on voltage versus time graphs. Frame format diagrams are read from left to right, just like an oscilloscope graph. The frame format diagram shows different groupings of bits, or fields, that perform different functions.
There are many different types of frames described by various standards. A single generic frame has sections called fields. Each field is composed of bytes. The names of the fields are as follows:
* Start Frame field
* Address field
* Length/Type field
* Data field
* Frame Check Sequence (FCS) field
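The generic field list above can be assembled into bytes to make the layout concrete. The field sizes and the start-delimiter value below are invented for illustration; they do not come from any particular standard.

```python
import struct
import zlib

# Sketch of the generic frame layout listed above.
START = b"\xAB"   # start-frame delimiter (1 byte, value made up)

def build_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
    """Assemble start + addresses + length + data + FCS as raw bytes."""
    header = START + dst + src + struct.pack("!H", len(payload))
    fcs = struct.pack("!I", zlib.crc32(header + payload))  # CRC-32 as the FCS
    return header + payload + fcs

frame = build_frame(b"\x00" * 6, b"\x11" * 6, b"hello")
print(len(frame))   # 1 + 6 + 6 + 2 + 5 + 4 bytes
```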
When computers are connected to a physical medium, there must be a way to inform other computers when they are about to transmit a frame. Various technologies do this in different ways. Regardless of the technology, all frames begin with a sequence of bytes to signal the data transmission.
All frames contain naming information, such as the name of the source node, or source MAC address, and the name of the destination node, or destination MAC address.
Most frames have some specialized fields. In some technologies, a Length field specifies the exact length of a frame in bytes. Some frames have a Type field, which specifies the Layer 3 protocol used by the device that wants to send data.
Frames are used to send upper-layer data and ultimately the user application data from a source to a destination. The data package includes the message to be sent, or user application data. Extra bytes may be added so frames have a minimum length for timing purposes. LLC bytes are also included with the Data field in the IEEE standard frames. The LLC sublayer takes the network protocol data, which is an IP packet, and adds control information to help deliver the packet to the destination node. Layer 2 communicates with the upper layers through LLC.
All frames, and the bits, bytes, and fields contained within them, are susceptible to errors from a variety of sources. The FCS field contains a number that is calculated by the source node based on the data in the frame. This number is added to the end of the frame that is sent. When the destination node receives the frame, the FCS number is recalculated and compared with the FCS number included in the frame. If the two numbers are different, an error is assumed and the frame is discarded.
Because the source cannot detect that the frame has been discarded, retransmission has to be initiated by higher layer connection-oriented protocols providing data flow control. Because these protocols, such as TCP, expect frame acknowledgment, ACK, to be sent by the peer station within a certain time, retransmission usually occurs.
There are three primary ways to calculate the FCS number:
* Cyclic redundancy check (CRC) – performs calculations on the data.
* Two-dimensional parity – places individual bytes in a two-dimensional array and performs redundancy checks vertically and horizontally on the array, creating an extra byte resulting in an even or odd number of binary 1s.
* Internet checksum – adds the values of all of the data bits to arrive at a sum.
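As a rough illustration of two of these error-detection ideas, the following Python sketch shows a simplified sum-based check in the spirit of the Internet checksum and a per-byte even-parity test. These are toy versions for teaching purposes, not the exact algorithms used on the wire.

```python
# Simplified sketches of two error-detection approaches (not wire-accurate).

def simple_checksum(data: bytes) -> int:
    """Internet-checksum style: add the values of all data bytes (mod 2**16)."""
    return sum(data) % 65536

def even_parity_bit(byte: int) -> int:
    """Parity check: 1 if a parity bit is needed to make the count of 1s even."""
    return bin(byte).count("1") % 2

frame_data = b"hello ethernet"
print(simple_checksum(frame_data))
print(even_parity_bit(0b1011))   # three 1 bits, so the parity bit is 1
```

The sender computes the check value and appends it; the receiver repeats the calculation and discards the frame on a mismatch, exactly as described for the FCS above.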
The node that transmits data must get the attention of other devices to start and end a frame. The Length field indicates where the frame ends. The frame ends after the FCS. Sometimes there is a formal byte sequence referred to as an end-frame delimiter.
The next page will discuss the frame structure of an Ethernet network.
6.1
Ethernet Fundamentals
6.1.6
Ethernet frame structure
This page will describe the frame structure of Ethernet networks.
At the data link layer the frame structure is nearly identical for all speeds of Ethernet from 10 Mbps to 10,000 Mbps. However, at the physical layer almost all versions of Ethernet are very different. Each speed has a distinct set of architecture design rules.
In the version of Ethernet that was developed by DIX prior to the adoption of the IEEE 802.3 version of Ethernet, the Preamble and Start-of-Frame (SOF) Delimiter were combined into a single field. The binary pattern was identical. The field labeled Length/Type was only listed as Length in the early IEEE versions and only as Type in the DIX version. These two uses of the field were officially combined in a later IEEE version since both uses were common.
The Ethernet II Type field is incorporated into the current 802.3 frame definition. When a node receives a frame it must examine the Length/Type field to determine which higher-layer protocol is present. If the two-octet value is equal to or greater than 0x0600 hexadecimal, 1536 decimal, then the contents of the Data Field are decoded according to the protocol indicated. Ethernet II is the Ethernet frame format that is used in TCP/IP networks.
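The 0x0600 decision rule can be sketched in a few lines of Python. This is an illustrative decoder, not a full frame parser:

```python
def interpret_length_type(value: int) -> str:
    """Decode the two-octet Length/Type field per the combined 802.3 rule:
    values of 0x0600 (1536 decimal) and above are EtherType codes,
    smaller values give the length of the data in bytes."""
    if value >= 0x0600:
        return f"type 0x{value:04X}"
    return f"length {value} bytes"

print(interpret_length_type(0x0800))  # the IPv4 EtherType
print(interpret_length_type(46))      # a length value
```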
The next page will discuss the information included in a frame.
6.1
Ethernet Fundamentals
6.1.7
Ethernet frame fields
This page defines the fields that are used in a frame.
Some of the fields permitted or required in an 802.3 Ethernet frame are as follows:
* Preamble
* SOF Delimiter
* Destination Address
* Source Address
* Length/Type
* Header and Data
* FCS
* Extension
The preamble is an alternating pattern of ones and zeros used for timing synchronization in 10 Mbps and slower implementations of Ethernet. Faster versions of Ethernet are synchronous, so this timing information is unnecessary but is retained for compatibility.
The SOF delimiter is a one-octet field that marks the end of the timing information and contains the bit sequence 10101011.
The destination address can be unicast, multicast, or broadcast.
The Source Address field contains the MAC source address. The source address is generally the unicast address of the Ethernet node that transmitted the frame. However, many virtual protocols use and sometimes share a specific source MAC address to identify the virtual entity.
The Length/Type field supports two different uses. If the value is less than 1536 decimal, 0x600 hexadecimal, then the value indicates length; otherwise it indicates type. The length interpretation is used when the LLC sublayer provides the protocol identification, and the length indicates the number of bytes of data that follows this field. The type interpretation indicates which upper-layer protocol will receive the data after the Ethernet process is complete.
The Data field, and padding if necessary, may be of any length that does not cause the frame to exceed the maximum frame size. The maximum transmission unit (MTU) for Ethernet is 1500 octets, so the data should not exceed that size. The content of this field is unspecified. An unspecified amount of data is inserted immediately after the user data when there is not enough user data for the frame to meet the minimum frame length. This extra data is called a pad. Ethernet requires each frame to be between 64 and 1518 octets.
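The padding rule can be worked through numerically. With 64 octets as the minimum frame and 18 octets of addressing, Length/Type, and FCS overhead, the Data field must carry at least 46 octets. The sketch below assumes the standard untagged frame layout:

```python
MIN_DATA = 46    # 64-octet minimum frame minus 18 octets of header and FCS
MAX_DATA = 1500  # the Ethernet MTU

def pad_data(data: bytes) -> bytes:
    """Pad the Data field with zero octets so the frame meets the
    minimum frame length (a common choice of pad content)."""
    if len(data) > MAX_DATA:
        raise ValueError("data exceeds the Ethernet MTU")
    if len(data) < MIN_DATA:
        data = data + b"\x00" * (MIN_DATA - len(data))
    return data

print(len(pad_data(b"short")))  # padded up to 46
```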
The FCS field contains a 4-byte CRC value that is created by the device that sends the data and is recalculated by the destination device to check for damaged frames. The corruption of a single bit anywhere from the start of the Destination Address through the end of the FCS field will cause the checksum to be different. Therefore, the coverage of the FCS includes itself. It is not possible to distinguish between corruption of the FCS and corruption of any other field used in the calculation.
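The send-then-verify behavior can be sketched with Python's standard CRC-32 routine, which uses the same generator polynomial as the Ethernet FCS. The byte ordering here is illustrative only; on the wire the FCS bits are serialized per the 802.3 rules.

```python
import zlib

def append_fcs(frame_wo_fcs: bytes) -> bytes:
    """Sender: compute the CRC-32 over the frame and append a 4-byte FCS."""
    fcs = zlib.crc32(frame_wo_fcs) & 0xFFFFFFFF
    return frame_wo_fcs + fcs.to_bytes(4, "little")

def fcs_ok(frame: bytes) -> bool:
    """Receiver: recalculate the CRC over the frame body and compare it
    with the FCS appended by the sender."""
    body, fcs = frame[:-4], int.from_bytes(frame[-4:], "little")
    return (zlib.crc32(body) & 0xFFFFFFFF) == fcs

frame = append_fcs(b"destination+source+type+payload")
print(fcs_ok(frame))                                    # True
print(fcs_ok(frame[:-1] + bytes([frame[-1] ^ 0x01])))   # one flipped bit: False
```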
This page concludes this lesson. The next lesson will discuss the functions of an Ethernet network. The first page will introduce the concept of MAC.
6.2
Ethernet Operation
6.2.1
MAC
This page will define MAC and provide examples of deterministic and non-deterministic MAC protocols.
MAC refers to protocols that determine which computer in a shared-media environment, or collision domain, is allowed to transmit data. MAC and LLC comprise the IEEE version of the OSI Layer 2. MAC and LLC are sublayers of Layer 2. The two broad categories of MAC are deterministic and non-deterministic.
Examples of deterministic protocols include Token Ring and FDDI. In a Token Ring network, hosts are arranged in a ring and a special data token travels around the ring to each host in sequence. When a host wants to transmit, it seizes the token, transmits the data for a limited time, and then forwards the token to the next host in the ring. Token Ring is a collisionless environment since only one host can transmit at a time.
Non-deterministic MAC protocols use a first-come, first-served approach. Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a simple system. The NIC listens for the absence of a signal on the media and then begins to transmit. If two nodes transmit at the same time, a collision occurs and neither transmission succeeds.
Three common Layer 2 technologies are Token Ring, FDDI, and Ethernet. All three specify Layer 2 issues, LLC, naming, framing, and MAC, as well as Layer 1 signaling components and media issues. The specific technologies for each are as follows:
* Ethernet – uses a logical bus topology to control information flow on a linear bus and a physical star or extended star topology for the cables
* Token Ring – uses a logical ring topology to control information flow and a physical star topology
* FDDI – uses a logical ring topology to control information flow and a physical dual-ring topology
The next page explains how collisions are avoided in an Ethernet network.
6.2
Ethernet Operation
6.2.2
MAC rules and collision detection/backoff
This page describes collision detection and avoidance in a CSMA/CD network.
Ethernet is a shared-media broadcast technology. The access method CSMA/CD used in Ethernet performs three functions:
* Transmitting and receiving data frames
* Decoding data frames and checking them for valid addresses before passing them to the upper layers of the OSI model
* Detecting errors within data frames or on the network
In the CSMA/CD access method, networking devices with data to transmit work in a listen-before-transmit mode. This means when a node wants to send data, it must first check to see whether the networking media is busy. If the node determines the network is busy, the node will wait a random amount of time before retrying. If the node determines the networking media is not busy, the node will begin transmitting and listening. The node listens to ensure no other stations are transmitting at the same time. After completing data transmission the device will return to listening mode.
Networking devices detect a collision has occurred when the amplitude of the signal on the networking media increases. When a collision occurs, each node that is transmitting will continue to transmit for a short time to ensure that all nodes detect the collision. When all nodes have detected the collision, the backoff algorithm is invoked and transmission stops. The nodes stop transmitting for a random period of time, determined by the backoff algorithm. When the delay periods expire, each node can attempt to access the networking media. The devices that were involved in the collision do not have transmission priority.
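The listen-transmit-detect sequence above can be sketched as a single attempt in pseudocode-style Python. The `medium_busy` and `collision_detected` callbacks stand in for what the NIC hardware actually senses on the wire:

```python
def csma_cd_transmit(medium_busy, collision_detected) -> str:
    """One transmission attempt in listen-before-transmit mode.
    `medium_busy` and `collision_detected` are placeholders for the
    carrier-sense and collision-sense signals from the hardware."""
    while medium_busy():          # 1. carrier sense: defer while the medium is in use
        pass
    # 2. transmit while listening for a simultaneous transmission
    if collision_detected():
        # 3. continue briefly (jam) so all nodes see it, then run backoff
        return "collision: jam, stop, and back off"
    return "frame sent; return to listening mode"

# A quiet, collision-free medium lets the frame go out immediately.
print(csma_cd_transmit(lambda: False, lambda: False))
```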
The Interactive Media Activity shows the procedure for collision detection in an Ethernet network.
The next page will discuss Ethernet timing.
6.2
Ethernet Operation
6.2.3
Ethernet timing
This page explains the importance of slot times in an Ethernet network.
The basic rules and specifications for proper operation of Ethernet are not particularly complicated, though some of the faster physical layer implementations are becoming so. Despite the basic simplicity, when a problem occurs in Ethernet it is often quite difficult to isolate the source. Because of the common bus architecture of Ethernet, also described as a distributed single point of failure, the scope of the problem usually encompasses all devices within the collision domain. In situations where repeaters are used, this can include devices up to four segments away.
Any station on an Ethernet network wishing to transmit a message first "listens" to ensure that no other station is currently transmitting. If the cable is quiet, the station will begin transmitting immediately. The electrical signal takes time to travel down the cable (delay), and each subsequent repeater introduces a small amount of latency in forwarding the frame from one port to the next. Because of the delay and latency, it is possible for more than one station to begin transmitting at or near the same time. This results in a collision.
If the attached station is operating in full duplex then the station may send and receive simultaneously and collisions should not occur. Full-duplex operation also changes the timing considerations and eliminates the concept of slot time. Full-duplex operation allows for larger network architecture designs since the timing restriction for collision detection is removed.
In half duplex, assuming that a collision does not occur, the sending station will transmit 64 bits of timing synchronization information that is known as the preamble. The sending station will then transmit the following information:
* Destination and source MAC addressing information
* Certain other header information
* The actual data payload
* Checksum (FCS) used to ensure that the message was not corrupted along the way
Stations receiving the frame recalculate the FCS to determine if the incoming message is valid and then pass valid messages to the next higher layer in the protocol stack.
10 Mbps and slower versions of Ethernet are asynchronous. Asynchronous means that each receiving station will use the eight octets of timing information to synchronize the receive circuit to the incoming data, and then discard it. 100 Mbps and higher speed implementations of Ethernet are synchronous. Synchronous means the timing information is not required, however for compatibility reasons the Preamble and Start Frame Delimiter (SFD) are present.
For all speeds of Ethernet transmission at or below 1000 Mbps, the standard requires that a transmission be no smaller than the slot time. Slot time for 10 and 100-Mbps Ethernet is 512 bit-times, or 64 octets. Slot time for 1000-Mbps Ethernet is 4096 bit-times, or 512 octets. Slot time is calculated assuming maximum cable lengths on the largest legal network architecture. All hardware propagation delay times are at the legal maximum and the 32-bit jam signal is used when collisions are detected.
The actual calculated slot time is just longer than the theoretical amount of time required to travel between the furthest points of the collision domain, collide with another transmission at the last possible instant, and then have the collision fragments return to the sending station and be detected. For the system to work the first station must learn about the collision before it finishes sending the smallest legal frame size. To allow 1000-Mbps Ethernet to operate in half duplex the extension field was added when sending small frames purely to keep the transmitter busy long enough for a collision fragment to return. This field is present only on 1000-Mbps, half-duplex links and allows minimum-sized frames to be long enough to meet slot time requirements. Extension bits are discarded by the receiving station.
On 10-Mbps Ethernet one bit at the MAC layer requires 100 nanoseconds (ns) to transmit. At 100 Mbps that same bit requires 10 ns to transmit, and at 1000 Mbps only 1 ns. As a rough estimate, 20.3 cm (8 in) per nanosecond is often used for calculating propagation delay down a UTP cable. For 100 meters of UTP, this means that it takes just under 5 bit-times for a 10BASE-T signal to travel the length of the cable.
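These bit-time and propagation figures can be verified with simple arithmetic, using the 20.3 cm/ns estimate from the text:

```python
def bit_time_ns(mbps: float) -> float:
    """Time to transmit one bit at the MAC layer, in nanoseconds."""
    return 1000.0 / mbps

CM_PER_NS = 20.3   # rough UTP propagation estimate from the text

def utp_delay_bit_times(length_m: float, mbps: float) -> float:
    """Propagation delay down the cable, expressed in bit times."""
    delay_ns = (length_m * 100) / CM_PER_NS
    return delay_ns / bit_time_ns(mbps)

print(bit_time_ns(10))                          # 100.0 ns per bit at 10 Mbps
print(utp_delay_bit_times(100, 10))             # just under 5 bit-times
```

At 1000 Mbps the same 100 m cable costs nearly 500 bit-times, which is why the slot time and extension field adjustments described above become necessary.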
For CSMA/CD Ethernet to operate, the sending station must become aware of a collision before it has completed transmission of a minimum-sized frame. At 100 Mbps the system timing is barely able to accommodate 100 meter cables. At 1000 Mbps special adjustments are required as nearly an entire minimum-sized frame would be transmitted before the first bit reached the end of the first 100 meters of UTP cable. For this reason half duplex is not permitted in 10-Gigabit Ethernet.
The Interactive Media Activity will help students identify the bit time of different Ethernet speeds.
The next page defines interframe spacing and backoff.
6.2
Ethernet Operation
6.2.4
Interframe spacing and backoff
This page explains how spacing is used in an Ethernet network for data transmission.
The minimum spacing between two non-colliding frames is also called the interframe spacing. This is measured from the last bit of the FCS field of the first frame to the first bit of the preamble of the second frame.
After a frame has been sent, all stations on a 10-Mbps Ethernet are required to wait a minimum of 96 bit-times (9.6 microseconds) before any station may legally transmit the next frame. On faster versions of Ethernet the spacing remains the same, 96 bit-times, but the time required for that interval grows correspondingly shorter. This interval is referred to as the spacing gap. The gap is intended to allow slow stations time to process the previous frame and prepare for the next frame.
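The shrinking duration of the fixed 96-bit-time gap can be computed directly. Since one megabit per second is one bit per microsecond, the gap in microseconds is simply 96 divided by the speed in Mbps:

```python
def interframe_gap_us(mbps: float, gap_bits: int = 96) -> float:
    """Duration of the 96-bit-time interframe gap in microseconds."""
    return gap_bits / mbps   # bits divided by (bits per microsecond)

for speed in (10, 100, 1000):
    print(speed, "Mbps ->", interframe_gap_us(speed), "microseconds")
```

This reproduces the 9.6-microsecond figure for 10 Mbps and shows the interval falling to 0.96 and 0.096 microseconds at 100 and 1000 Mbps.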
A repeater is expected to regenerate the full 64 bits of timing information, which is the preamble and SFD, at the start of any frame. This is despite the potential loss of some of the beginning preamble bits because of slow synchronization. Because of this forced reintroduction of timing bits, some minor reduction of the interframe gap is not only possible but expected. Some Ethernet chipsets are sensitive to a shortening of the interframe spacing, and will begin failing to see frames as the gap is reduced. With the increase in processing power at the desktop, it would be very easy for a personal computer to saturate an Ethernet segment with traffic and to begin transmitting again before the interframe spacing delay time is satisfied.
After a collision occurs and all stations allow the cable to become idle (each waits the full interframe spacing), then the stations that collided must wait an additional and potentially progressively longer period of time before attempting to retransmit the collided frame. The waiting period is intentionally designed to be random so that two stations do not delay for the same amount of time before retransmitting, which would result in more collisions. This is accomplished in part by expanding the interval from which the random retransmission time is selected on each retransmission attempt. The waiting period is measured in increments of the parameter slot time.
If the MAC layer is unable to send the frame after sixteen attempts, it gives up and generates an error to the network layer. Such an occurrence is fairly rare and would happen only under extremely heavy network loads, or when a physical problem exists on the network.
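The random, progressively expanding wait described above is the truncated binary exponential backoff of 802.3. A minimal sketch, with the interval doubling each attempt, capped after attempt 10, and the MAC giving up after 16 attempts:

```python
import random

def backoff_slots(attempt: int) -> int:
    """Truncated binary exponential backoff: pick a random wait, in slot
    times, after the `attempt`-th consecutive collision (1-based).
    The selection interval doubles each attempt up to a cap of 2**10 - 1
    slots; after 16 failed attempts the MAC reports an error upward."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: error passed to network layer")
    k = min(attempt, 10)
    return random.randint(0, 2 ** k - 1)

print(backoff_slots(1))   # 0 or 1 slot times after the first collision
print(backoff_slots(3))   # 0 to 7 slot times after the third
```

Because each station draws its delay independently from a widening interval, repeat collisions between the same two stations become increasingly unlikely.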
The next page will discuss collisions.
6.2
Ethernet Operation
6.2.5
Error handling
This page will describe collisions and how they are handled on a network.
The most common error condition on Ethernet networks is the collision. Collisions are the mechanism for resolving contention for network access. A few collisions provide a smooth, simple, low-overhead way for network nodes to arbitrate contention for the network resource. When network contention becomes too great, collisions can become a significant impediment to useful network operation.
Collisions result in network bandwidth loss equal to the initial transmission plus the collision jam signal. This consumption delay affects all network nodes and can cause a significant reduction in network throughput.
The vast majority of collisions occur very early in the frame, often before the SFD. Collisions occurring before the SFD are usually not reported to the higher layers, as if the collision did not occur. As soon as a collision is detected, the sending stations transmit a 32-bit "jam" signal that will enforce the collision. This is done so that any data being transmitted is thoroughly corrupted and all stations have a chance to detect the collision.
In the figure, two stations listen to ensure that the cable is idle, then transmit. Station 1 was able to transmit a significant percentage of the frame before the signal even reached the last cable segment. Station 2 had not received the first bit of the transmission prior to beginning its own transmission and was only able to send several bits before the NIC sensed the collision. Station 2 immediately truncated the current transmission, substituted the 32-bit jam signal and ceased all transmissions. During the collision and jam event that Station 2 was experiencing, the collision fragments were working their way back through the repeated collision domain toward Station 1. Station 2 completed transmission of the 32-bit jam signal and became silent before the collision propagated back to Station 1, which was still unaware of the collision and continued to transmit. When the collision fragments finally reached Station 1, it also truncated the current transmission and substituted a 32-bit jam signal in place of the remainder of the frame it was transmitting. Upon sending the 32-bit jam signal Station 1 ceased all transmissions.
A jam signal may be composed of any binary data so long as it does not form a proper checksum for the portion of the frame already transmitted. The most commonly observed data pattern for a jam signal is simply a repeating one, zero, one, zero pattern, the same as Preamble. When viewed by a protocol analyzer this pattern appears as either a repeating hexadecimal 5 or A sequence. The corrupted, partially transmitted messages are often referred to as collision fragments or runts. Normal collisions are less than 64 octets in length and therefore fail both the minimum length test and the FCS checksum test.
The next page will define different types of collisions.
6.2
Ethernet Operation
6.2.6
Types of collisions
This page covers the different types of collisions and their characteristics.
Collisions typically take place when two or more Ethernet stations transmit simultaneously within a collision domain. A single collision is a collision that was detected while trying to transmit a frame, but on the next attempt the frame was transmitted successfully. Multiple collisions indicate that the same frame collided repeatedly before being successfully transmitted. The results of collisions, collision fragments, are partial or corrupted frames that are less than 64 octets and have an invalid FCS. Three types of collisions are:
* Local
* Remote
* Late
A local collision on coax cable (10BASE2 and 10BASE5) occurs when a signal travels down the cable and encounters a signal from the other station. The waveforms then overlap, canceling out some parts of the signal and reinforcing or doubling other parts. The doubling of the signal pushes the voltage level of the signal beyond the allowed maximum. This over-voltage condition is then sensed by all of the stations on the local cable segment as a collision.
At the beginning of the sample, the waveform in the figure represents normal Manchester-encoded data. A few cycles into the sample the amplitude of the wave doubles. That is the beginning of the collision, where the two waveforms are overlapping. Just prior to the end of the sample the amplitude returns to normal. This happens when the first station to detect the collision quits transmitting, while the jam signal from the second colliding station is still observed.
On UTP cable, such as 10BASE-T, 100BASE-TX and 1000BASE-T, a collision is detected on the local segment only when a station detects a signal on the RX pair at the same time it is sending on the TX pair. Since the two signals are on different pairs there is no characteristic change in the signal. Collisions are only recognized on UTP when the station is operating in half duplex. The only functional difference between half and full duplex operation in this regard is whether or not the transmit and receive pairs are permitted to be used simultaneously. If the station is not engaged in transmitting it cannot detect a local collision. Conversely, a cable fault such as excessive crosstalk can cause a station to perceive its own transmission as a local collision.
The characteristics of a remote collision are a frame that is less than the minimum length, has an invalid FCS checksum, but does not exhibit the local collision symptom of over-voltage or simultaneous RX/TX activity. This sort of collision usually results from collisions occurring on the far side of a repeated connection. A repeater will not forward an over-voltage state, and cannot cause a station to have both the TX and RX pairs active at the same time. The station would have to be transmitting to have both pairs active, and that would constitute a local collision. On UTP networks this is the most common sort of collision observed.
There is no possibility remaining for a normal or legal collision after the first 64 octets of data has been transmitted by the sending stations. Collisions occurring after the first 64 octets are called "late collisions". The most significant difference between late collisions and collisions occurring before the first 64 octets is that the Ethernet NIC will retransmit a normally collided frame automatically, but will not automatically retransmit a frame that was collided late. As far as the NIC is concerned everything went out fine, and the upper layers of the protocol stack must determine that the frame was lost. Other than retransmission, a station detecting a late collision handles it in exactly the same way as a normal collision.
The Interactive Media Activity will require students to identify the different types of collisions.
The next page will discuss the sources of Ethernet errors.
6.2
Ethernet Operation
6.2.7
Ethernet errors
This page will define common Ethernet errors.
Knowledge of typical errors is invaluable for understanding both the operation and troubleshooting of Ethernet networks.
The following are the sources of Ethernet error:
* Collision or runt – Simultaneous transmission occurring before slot time has elapsed
* Late collision – Simultaneous transmission occurring after slot time has elapsed
* Jabber, long frame and range errors – Excessively or illegally long transmission
* Short frame, collision fragment or runt – Illegally short transmission
* FCS error – Corrupted transmission
* Alignment error – Insufficient or excessive number of bits transmitted
* Range error – Actual and reported number of octets in frame do not match
* Ghost or jabber – Unusually long Preamble or Jam event
While local and remote collisions are considered to be a normal part of Ethernet operation, late collisions are considered to be an error. The presence of errors on a network always suggests that further investigation is warranted. The severity of the problem indicates the troubleshooting urgency related to the detected errors. A handful of errors detected over many minutes or over hours would be a low priority. Thousands detected over a few minutes suggest that urgent attention is warranted.
Jabber is defined in several places in the 802.3 standard as being a transmission of at least 20,000 to 50,000 bit times in duration. However, most diagnostic tools report jabber whenever a detected transmission exceeds the maximum legal frame size, which is considerably smaller than 20,000 to 50,000 bit times. Most references to jabber are more properly called long frames.
A long frame is one that is longer than the maximum legal size, and takes into consideration whether or not the frame was tagged. It does not consider whether or not the frame had a valid FCS checksum. This error usually means that jabber was detected on the network.
A short frame is a frame smaller than the minimum legal size of 64 octets, with a good frame check sequence. Some protocol analyzers and network monitors call these frames "runts". In general the presence of short frames is not a guarantee that the network is failing.
The term runt is generally an imprecise slang term that means something less than a legal frame size. It may refer to short frames with a valid FCS checksum although it usually refers to collision fragments.
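The length and FCS tests that separate these error categories can be summarized in a small classifier. This is a simplified sketch of the terminology above, not a full 802.3 decoder, and it assumes the untagged 1518-octet maximum:

```python
MIN_FRAME = 64     # octets, Destination Address through FCS
MAX_FRAME = 1518   # untagged maximum frame size

def classify_frame(length_octets: int, fcs_valid: bool) -> str:
    """Rough classification of a received frame using the minimum-length,
    maximum-length, and FCS tests described in the text."""
    if length_octets < MIN_FRAME:
        # short frames with a good FCS are "runts"; with a bad FCS they
        # are usually collision fragments
        return "short frame (runt)" if fcs_valid else "collision fragment"
    if length_octets > MAX_FRAME:
        return "long frame (often reported as jabber)"
    return "valid size" if fcs_valid else "FCS error"

print(classify_frame(32, False))    # collision fragment
print(classify_frame(1600, True))   # long frame (often reported as jabber)
print(classify_frame(512, False))   # FCS error
```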
The Interactive Media Activity will help students become familiar with Ethernet errors.
The next page will continue the discussion of Ethernet frame errors.
6.2
Ethernet Operation
6.2.8
FCS and beyond
This page will focus on additional errors that occur on an Ethernet network.
A received frame that has a bad Frame Check Sequence, also referred to as a checksum or CRC error, differs from the original transmission by at least one bit. In an FCS error frame the header information is probably correct, but the checksum calculated by the receiving station does not match the checksum appended to the end of the frame by the sending station. The frame is then discarded.
High numbers of FCS errors from a single station usually indicate a faulty NIC, faulty or corrupted software drivers, or a bad cable connecting that station to the network. If FCS errors are associated with many stations, they are generally traceable to bad cabling, a faulty version of the NIC driver, a faulty hub port, or induced noise in the cable system.
A message that does not end on an octet boundary is known as an alignment error. Instead of the correct number of binary bits forming complete octet groupings, there are additional bits left over (less than eight). Such a frame is truncated to the nearest octet boundary, and if the FCS checksum fails, then an alignment error is reported. This is often caused by bad software drivers, or a collision, and is frequently accompanied by a failure of the FCS checksum.
A frame with a valid value in the Length field that does not match the actual number of octets counted in the Data field of the received frame is known as a range error. This error also appears when the Length field value is less than the minimum legal unpadded size of the Data field. A similar error, Out of Range, is reported when the value in the Length field indicates a data size that is too large to be legal.
Fluke Networks has coined the term ghost to mean energy (noise) detected on the cable that appears to be a frame, but is lacking a valid SFD. To qualify as a ghost, the frame must be at least 72 octets long, including the preamble. Otherwise, it is classified as a remote collision. Because of the peculiar nature of ghosts, it is important to note that test results are largely dependent upon where on the segment the measurement is made.
Ground loops and other wiring problems are usually the cause of ghosting. Most network monitoring tools do not recognize the existence of ghosts for the same reason that they do not recognize preamble collisions. The tools rely entirely on what the chipset tells them. Software-only protocol analyzers, many hardware-based protocol analyzers, hand held diagnostic tools, as well as most remote monitoring (RMON) probes do not report these events.
The Interactive Media Activity will help students become familiar with the terms and definitions of Ethernet errors.
The next page will describe Auto-Negotiation.
6.2
Ethernet Operation
6.2.9
Ethernet auto-negotiation
This page explains auto-negotiation and how it is accomplished.
As Ethernet grew from 10 to 100 and 1000 Mbps, one requirement was to make each technology interoperable, even to the point that 10, 100, and 1000 Mbps interfaces could be directly connected. A process called Auto-Negotiation of speeds at half or full duplex was developed. Specifically, at the time that Fast Ethernet was introduced, the standard included a method of automatically configuring a given interface to match the speed and capabilities of the link partner. This process defines how two link partners may automatically negotiate a configuration offering the best common performance level. It has the additional advantage of only involving the lowest part of the physical layer.
10BASE-T required each station to transmit a link pulse about every 16 milliseconds, whenever the station was not engaged in transmitting a message. Auto-Negotiation adopted this signal and renamed it a Normal Link Pulse (NLP). When a series of NLPs are sent in a group for the purpose of Auto-Negotiation, the group is called a Fast Link Pulse (FLP) burst. Each FLP burst is sent at the same timing interval as an NLP, and is intended to allow older 10BASE-T devices to operate normally in the event they should receive an FLP burst.
Auto-Negotiation is accomplished by transmitting a burst of 10BASE-T Link Pulses from each of the two link partners. The burst communicates the capabilities of the transmitting station to its link partner. After both stations have interpreted what the other partner is offering, both switch to the highest performance common configuration and establish a link at that speed. If anything interrupts communications and the link is lost, the two link partners first attempt to link again at the last negotiated speed. If that fails, or if it has been too long since the link was lost, the Auto-Negotiation process starts over. The link may be lost due to external influences, such as a cable fault, or due to one of the partners issuing a reset.
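The "highest performance common configuration" step can be sketched as a priority resolution over the advertised capabilities. The priority ordering below is illustrative (faster speeds first, full duplex before half); the mode names are placeholders for what the FLP bursts actually encode:

```python
# Illustrative priority list: fastest common speed wins, full before half.
PRIORITY = [
    "1000BASE-T full", "1000BASE-T half",
    "100BASE-TX full", "100BASE-TX half",
    "10BASE-T full",   "10BASE-T half",
]

def negotiate(local: set, partner: set) -> str:
    """Pick the highest-priority mode advertised by both link partners."""
    common = local & partner
    for mode in PRIORITY:          # first match is the best common mode
        if mode in common:
            return mode
    raise RuntimeError("no common configuration")

print(negotiate({"100BASE-TX full", "100BASE-TX half", "10BASE-T half"},
                {"1000BASE-T full", "100BASE-TX half", "10BASE-T half"}))
# -> 100BASE-TX half
```

Both ends run the same resolution over the same two advertisements, so they arrive at the same answer independently and can bring the link up at that speed.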
The next page will discuss half and full duplex modes.
6.2
Ethernet Operation
6.2.10
Link establishment and full and half duplex
This page will explain how links are established through Auto-Negotiation and introduce the two duplex modes.
Link partners are allowed to skip offering configurations of which they are capable. This allows the network administrator to force ports to a selected speed and duplex setting, without disabling Auto-Negotiation.
Auto-Negotiation is optional for most Ethernet implementations. Gigabit Ethernet requires its implementation, though the user may disable it. Auto-Negotiation was originally defined for UTP implementations of Ethernet and has been extended to work with other fiber optic implementations.
When an Auto-Negotiating station first attempts to link, it is supposed to enable 100BASE-TX in order to try to establish a link immediately. If 100BASE-TX signaling is present, and the station supports 100BASE-TX, it will attempt to establish a link without negotiating. If either that signaling produces a link or FLP bursts are received, the station will proceed with that technology. If a link partner does not offer an FLP burst, but instead offers NLPs, that device is automatically assumed to be a 10BASE-T station. During this initial interval of testing for other technologies, the transmit path sends FLP bursts. The standard does not permit parallel detection of any other technologies.
If a link is established through parallel detection, it is required to be half duplex. There are only two methods of achieving a full-duplex link: a completed cycle of Auto-Negotiation, or administratively forcing both link partners to full duplex. If one link partner is forced to full duplex while the other partner attempts to Auto-Negotiate, a duplex mismatch is certain. This will result in collisions and errors on that link. Therefore, if one end is forced to full duplex, the other must be forced as well. The exception is 10-Gigabit Ethernet, which does not support half duplex.
Many vendors implement hardware in such a way that it cycles through the various possible states. It transmits FLP bursts to Auto-Negotiate for a while, then it configures for Fast Ethernet, attempts to link for a while, and then just listens. Some vendors do not offer any transmitted attempt to link until the interface first hears an FLP burst or some other signaling scheme.
There are two duplex modes, half and full. For shared media, the half-duplex mode is mandatory. All coaxial implementations are half duplex in nature and cannot operate in full duplex. UTP and fiber implementations may be operated in either half duplex or full duplex. 10-Gbps implementations are specified for full duplex only.
In half duplex only one station may transmit at a time. For the coaxial implementations a second station transmitting will cause the signals to overlap and become corrupted. Since UTP and fiber generally transmit on separate pairs the signals have no opportunity to overlap and become corrupted. Ethernet has established arbitration rules for resolving conflicts arising from instances when more than one station attempts to transmit at the same time. Both stations in a point-to-point full-duplex link are permitted to transmit at any time, regardless of whether the other station is transmitting.
Auto-Negotiation avoids most situations where one station in a point-to-point link is transmitting under half-duplex rules and the other under full-duplex rules.
In the event that link partners are capable of sharing more than one common technology, refer to the list in Figure . This list is used to determine which technology should be chosen from the offered configurations.
Fiber-optic Ethernet implementations are not included in this priority resolution list because the interface electronics and optics do not permit easy reconfiguration between implementations. It is assumed that the interface configuration is fixed. If the two interfaces are able to Auto-Negotiate then they are already using the same Ethernet implementation. However, there remain a number of configuration choices such as the duplex setting, or which station will act as the Master for clocking purposes, that must be determined.
The Interactive Media Activity will help students understand the link establishment process.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
Ethernet is not one networking technology, but a family of LAN technologies that includes Legacy Ethernet, Fast Ethernet, and Gigabit Ethernet. When Ethernet needs to be expanded to add a new medium or capability, the IEEE issues a new supplement to the 802.3 standard. New supplements are given a one- or two-letter designation such as 802.3u. Ethernet relies on baseband signaling, which uses the entire bandwidth of the transmission medium. Ethernet operates at two layers of the OSI model: the physical layer and the lower half of the data link layer, known as the MAC sublayer. Ethernet at Layer 1 involves interfacing with media, signals, bit streams that travel on the media, components that put signals on media, and various physical topologies. Layer 1 bits need structure, so OSI Layer 2 frames are used. The MAC sublayer of Layer 2 determines the type of frame appropriate for the physical media.
The one thing common to all forms of Ethernet is the frame structure. This is what allows the interoperability of the different types of Ethernet.
Some of the fields permitted or required in an 802.3 Ethernet Frame are:
* Preamble
* Start Frame Delimiter
* Destination Address
* Source Address
* Length/Type
* Data and Pad
* Frame Check Sequence
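As a quick check of the field sizes, the list above can be written out with the octet counts from the 802.3 frame description (7-byte Preamble, 1-byte SFD, 6-byte addresses, 2-byte Length/Type, a 46-octet minimum for Data and Pad, and a 4-byte FCS):

```python
# Field sizes of an 802.3 frame, in octets. The Preamble and SFD precede
# the frame proper; Data is padded to at least 46 octets so that the
# frame (Destination Address through FCS) is never shorter than 64 octets.
FIELDS = [
    ("Preamble", 7),
    ("Start Frame Delimiter", 1),
    ("Destination Address", 6),
    ("Source Address", 6),
    ("Length/Type", 2),
    ("Data and Pad", 46),   # minimum; the data field maximum is 1500 octets
    ("Frame Check Sequence", 4),
]

frame_len = sum(size for name, size in FIELDS
                if name not in ("Preamble", "Start Frame Delimiter"))
print(frame_len)  # -> 64, the minimum legal frame size
```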
In 10-Mbps and slower versions of Ethernet, the Preamble provides timing information that the receiving node needs in order to interpret the electrical signals it is receiving. The Start Frame Delimiter marks the end of the timing information. 10-Mbps and slower versions of Ethernet are asynchronous; that is, they use the preamble timing information to synchronize the receive circuit to the incoming data. 100-Mbps and higher-speed implementations of Ethernet are synchronous, which means the timing information is not required; however, the Preamble and SFD are retained for compatibility.
The address fields of the Ethernet frame contain Layer 2, or MAC, addresses.
All frames are susceptible to errors from a variety of sources. The Frame Check Sequence (FCS) field of an Ethernet frame contains a number that is calculated by the source node based on the data in the frame. At the destination it is recalculated and compared to determine that the data received is complete and error free.
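The FCS check can be sketched with Python's zlib.crc32, which uses the same CRC-32 generator polynomial as Ethernet; bit- and byte-ordering details of the actual wire format are glossed over here:

```python
import zlib

# Sketch of the FCS mechanism: the sender computes a CRC-32 over the
# frame contents and appends it; the receiver recomputes the CRC over
# the received bytes and compares it against the appended value.
def append_fcs(frame: bytes) -> bytes:
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(frame_with_fcs: bytes) -> bool:
    frame, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(frame).to_bytes(4, "little") == fcs

wire = append_fcs(b"example payload")
print(fcs_ok(wire))                           # -> True
corrupted = bytes([wire[0] ^ 0xFF]) + wire[1:]  # flip bits in the first octet
print(fcs_ok(corrupted))                      # -> False
```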
Once the data is framed, the Media Access Control (MAC) sublayer is also responsible for determining which computer on a shared-medium environment, or collision domain, is allowed to transmit the data. There are two broad categories of Media Access Control: deterministic (taking turns) and non-deterministic (first come, first served).
Examples of deterministic protocols include Token Ring and FDDI. The carrier sense multiple access with collision detection (CSMA/CD) access method is a simple non-deterministic system. The NIC listens for an absence of a signal on the media and then starts transmitting. If two or more nodes transmit at the same time, a collision occurs. If a collision is detected, the nodes wait a random amount of time and retransmit.
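The "wait a random amount of time" step is truncated binary exponential backoff. A minimal sketch, using the usual 802.3 constants (16 attempts maximum, backoff range capped at 2^10 slot times):

```python
import random

# After the nth collision, a station waits a random number of slot times
# drawn from 0 .. 2**min(n, 10) - 1, and discards the frame after 16
# failed attempts.
def backoff_slots(attempt: int) -> int:
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(attempt, 10)
    return random.randrange(2 ** k)   # 0 .. 2**k - 1 slot times

print(backoff_slots(1) in range(2))   # first collision: wait 0 or 1 slot times
print(backoff_slots(3) in range(8))   # third collision: 0 .. 7 slot times
```

The growing range is the design point: the more collisions a frame suffers, the wider the spread of retry times, so repeated simultaneous retransmissions become increasingly unlikely.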
The minimum spacing between two non-colliding frames is called the interframe spacing. Interframe spacing is required to ensure that all stations have time to process the previous frame and prepare for the next frame.
Collisions can occur at various points during transmission. A collision in which a signal is detected on the receive and transmit circuits at the same time is referred to as a local collision. A collision that occurs on the far side of a repeater, which appears as a frame shorter than the minimum length with an invalid FCS but without the local collision signature, is called a remote collision. A collision that occurs after the first sixty-four octets of data have been sent is considered a late collision. The NIC will not automatically retransmit after a late collision.
While local and remote collisions are considered a normal part of Ethernet operation, late collisions are considered an error. Ethernet errors also result from the detection of frame sizes longer or shorter than the standards allow, and from excessively long or illegal transmissions called jabber. Runt is a slang term for a frame shorter than the legal minimum size.
Auto-Negotiation detects the speed and duplex mode, half-duplex or full-duplex, of the device on the other end of the wire and adjusts to match those settings.
Overview
Ethernet has been the most successful LAN technology mainly because of how easy it is to implement. Ethernet has also been successful because it is a flexible technology that has evolved as needs and media capabilities have changed. This module will provide details about the most important types of Ethernet. The goal is to help students understand what is common to all forms of Ethernet.
Changes in Ethernet have resulted in major improvements over the 10-Mbps Ethernet of the early 1980s. The 10-Mbps Ethernet standard remained virtually unchanged until 1995 when IEEE announced a standard for a 100-Mbps Fast Ethernet. In recent years, an even more rapid growth in media speed has moved the transition from Fast Ethernet to Gigabit Ethernet. The standards for Gigabit Ethernet emerged in only three years. A faster Ethernet version called 10-Gigabit Ethernet is now widely available and faster versions will be developed.
MAC addresses, CSMA/CD, and the frame format have not been changed from earlier versions of Ethernet. However, other aspects of the MAC sublayer, physical layer, and medium have changed. Copper-based NICs capable of 10, 100, or 1000 Mbps are now common. Gigabit switch and router ports are becoming the standard for wiring closets. Optical fiber to support Gigabit Ethernet is considered a standard for backbone cables in most new installations.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.
Students who complete this module should be able to perform the following tasks:
* Describe the differences and similarities among 10BASE5, 10BASE2, and 10BASE-T Ethernet
* Define Manchester encoding
* List the factors that affect Ethernet timing limits
* List 10BASE-T wiring parameters
* Describe the key characteristics and varieties of 100-Mbps Ethernet
* Describe the evolution of Ethernet
* Explain the MAC methods, frame formats, and transmission process of Gigabit Ethernet
* Describe the uses of specific media and encoding with Gigabit Ethernet
* Identify the pinouts and wiring typical to the various implementations of Gigabit Ethernet
* Describe the similarities and differences between Gigabit and 10-Gigabit Ethernet
* Describe the basic architectural considerations of Gigabit and 10-Gigabit Ethernet
7.1
10-Mbps and 100-Mbps Ethernet
7.1.1
10-Mbps Ethernet
This page will discuss 10-Mbps Ethernet technologies.
10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four common features of Legacy Ethernet are timing parameters, the frame format, transmission processes, and a basic design rule.
Figure displays the parameters for 10-Mbps Ethernet operation. 10-Mbps Ethernet and slower versions are asynchronous. Each receiving station uses eight octets of timing information to synchronize its receive circuit to the incoming data. 10BASE5, 10BASE2, and 10BASE-T all share the same timing parameters. For example, 1 bit time at 10 Mbps = 100 nanoseconds (ns) = 0.1 microseconds = 1 10-millionth of a second. This means that on a 10-Mbps Ethernet network, 1 bit at the MAC sublayer requires 100 ns to transmit.
For all speeds of Ethernet transmission at 1000 Mbps or slower, a transmission can be no shorter than the slot time. Slot time is just longer than the time it could theoretically take to go from one extreme end of the largest legal Ethernet collision domain to the other, collide with another transmission at the last possible instant, and then have the collision fragments return to the sending station to be detected.
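The timing arithmetic above is easy to verify; the 512-bit-time slot time shown here applies to 10- and 100-Mbps Ethernet (Gigabit Ethernet extends the slot time):

```python
# At 10 Mbps one bit time is 100 ns, so the 512-bit slot time works
# out to 51.2 microseconds.
BITS_PER_SECOND = 10_000_000
bit_time_ns = 1e9 / BITS_PER_SECOND
slot_time_us = 512 * bit_time_ns / 1000

print(bit_time_ns)    # -> 100.0 (ns), i.e. 0.1 microseconds
print(slot_time_us)   # -> 51.2 (microseconds)
```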
10BASE5, 10BASE2, and 10BASE-T also have a common frame format.
The Legacy Ethernet transmission process is identical until the lower part of the OSI physical layer. As the frame passes from the MAC sublayer to the physical layer, other processes occur before the bits move from the physical layer onto the medium. One important process is the signal quality error (SQE) signal. The SQE is a transmission sent by a transceiver back to the controller to let the controller know whether the collision circuitry is functional. The SQE is also called a heartbeat. The SQE signal is designed to fix the problem in earlier versions of Ethernet where a host does not know if a transceiver is connected. SQE is always used in half-duplex. SQE can be used in full-duplex operation but is not required. SQE is active in the following instances:
* Within 4 to 8 microseconds after a normal transmission to indicate that the outbound frame was successfully transmitted
* Whenever there is a collision on the medium
* Whenever there is an improper signal on the medium, such as jabber, or reflections that result from a cable short
* Whenever a transmission has been interrupted
All 10-Mbps forms of Ethernet take octets received from the MAC sublayer and perform a process called line encoding. Line encoding describes how the bits are actually signaled on the wire. The simplest encodings have undesirable timing and electrical characteristics, so line codes have been designed with desirable transmission properties. The form of encoding used in 10-Mbps systems is called Manchester encoding.
Manchester encoding uses the transition in the middle of the timing window to determine the binary value for that bit period. In Figure , the top waveform moves to a lower position so it is interpreted as a binary zero. The second waveform moves to a higher position and is interpreted as a binary one. The third waveform has an alternating binary sequence. When binary data alternates, there is no need to return to the previous voltage level before the next bit period. The wave forms in the graphic show that the binary bit values are determined based on the direction of change in a bit period. The voltage levels at the start or end of any bit period are not used to determine binary values.
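The mid-bit transition rule can be sketched directly. Following the convention in the figure, a falling edge in the middle of the bit period is a zero and a rising edge is a one; each bit becomes two half-period signal levels:

```python
# Manchester encoding sketch: every bit period has a transition in the
# middle, and its direction carries the value (high->low = 0,
# low->high = 1, matching the waveforms described in the figure).
def manchester_encode(bits):
    levels = []
    for b in bits:
        levels += [1, 0] if b == 0 else [0, 1]   # first half, second half
    return levels

def manchester_decode(levels):
    pairs = zip(levels[0::2], levels[1::2])      # one pair per bit period
    return [0 if first > second else 1 for first, second in pairs]

data = [1, 0, 1, 1, 0]
wire = manchester_encode(data)
print(manchester_decode(wire) == data)   # -> True
```

Because every bit period contains a transition, the receiver can recover the clock from the signal itself; as the text notes, the levels at the edges of the bit period carry no information.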
Legacy Ethernet has common architectural features. Networks usually contain multiple types of media. The standard ensures that interoperability is maintained. The overall architectural design is most important in mixed-media networks. It becomes easier to violate maximum delay limits as the network grows. The timing limits are based on the following types of parameters:
* Cable length and propagation delay
* Delay of repeaters
* Delay of transceivers
* Interframe gap shrinkage
* Delays within the station
10-Mbps Ethernet operates within the timing limits for a series of up to five segments separated by up to four repeaters. This is known as the 5-4-3 rule. No more than four repeaters can be used in series between any two stations. There can also be no more than three populated segments between any two stations.
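The 5-4-3 rule can be expressed as a simple check on the path between two stations; this sketch models the path as a list of segments, each flagged as populated or not:

```python
# 5-4-3 rule: between any two stations a 10-Mbps path may cross at most
# five segments and four repeaters, and at most three of those segments
# may be populated with stations.
def path_is_legal(segments):
    """segments: list of booleans, True if the segment is populated."""
    repeaters = len(segments) - 1          # one repeater joins each pair
    return (len(segments) <= 5
            and repeaters <= 4
            and sum(segments) <= 3)

print(path_is_legal([True, False, True, False, True]))   # -> True: classic 5-4-3
print(path_is_legal([True, True, True, True, False]))    # -> False: 4 populated
```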
The next page will describe 10BASE5.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.2
10BASE5
This page will discuss the original 1980 Ethernet product, which is 10BASE5. 10BASE5 transmitted 10 Mbps over a single thick coaxial cable bus.
10BASE5 is important because it was the first medium used for Ethernet. 10BASE5 was part of the original 802.3 standard. The primary benefit of 10BASE5 was length. 10BASE5 may be found in legacy installations. It is not recommended for new installations. 10BASE5 systems are inexpensive and require no configuration. Two disadvantages are that basic components like NICs are very difficult to find and it is sensitive to signal reflections on the cable. 10BASE5 systems also represent a single point of failure.
10BASE5 uses Manchester encoding. It has a solid central conductor. Each segment of thick coax may be up to 500 m (1640.4 ft) in length. The cable is large, heavy, and difficult to install. However, the distance limitations were favorable and this prolonged its use in certain applications.
When the medium is a single coaxial cable, only one station can transmit at a time or a collision will occur. Therefore, 10BASE5 only runs in half-duplex with a maximum transmission rate of 10 Mbps.
Figure illustrates a configuration for an end-to-end collision domain with the maximum number of segments and repeaters. Remember that only three segments can have stations connected to them. The other two repeated segments are used to extend the network.
The Lab Activity will help students decode a waveform.
The Interactive Media Activity will help students learn the features of 10BASE5 technology.
The next page will discuss 10BASE2.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.3
10BASE2
This page covers 10BASE2, which was introduced in 1985.
Installation was easier because of its smaller size, lighter weight, and greater flexibility. 10BASE2 still exists in legacy networks. Like 10BASE5, it is no longer recommended for network installations. It has a low cost and does not require hubs.
10BASE2 also uses Manchester encoding. Computers on a 10BASE2 LAN are linked together by an unbroken series of coaxial cable lengths. These lengths are attached to a T-shaped connector on the NIC with BNC connectors.
10BASE2 has a stranded central conductor. Each of the maximum five segments of thin coaxial cable may be up to 185 m (607 ft) long and each station is connected directly to the BNC T-shaped connector on the coaxial cable.
Only one station can transmit at a time or a collision will occur. 10BASE2 also uses half-duplex. The maximum transmission rate of 10BASE2 is 10 Mbps.
There may be up to 30 stations on a 10BASE2 segment. Only three out of five consecutive segments between any two stations can be populated.
The Interactive Media Activity will help students learn the features of 10BASE2 technology.
The next page will discuss 10BASE-T.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.4
10BASE-T
This page covers 10BASE-T, which was introduced in 1990.
10BASE-T used cheaper and easier-to-install Category 3 UTP copper cable instead of coaxial cable. The cable plugged into a central connection device that contained the shared bus. This device was a hub. The hub was at the center of a set of cables that radiated out to the PCs like the spokes of a wheel, an arrangement referred to as a star topology. As additional stars were added and the cable distances grew, this formed an extended star topology. Originally 10BASE-T was a half-duplex protocol, but full-duplex features were added later. It was during the explosion in Ethernet's popularity in the mid-to-late 1990s that Ethernet came to dominate LAN technology.
10BASE-T also uses Manchester encoding. A 10BASE-T UTP cable has a solid conductor for each wire. The maximum horizontal cable run is 90 m (295 ft), which leaves headroom for patch cords within the overall 100-m (328 ft) link limit. UTP cable uses eight-pin RJ-45 connectors. Though Category 3 cable is adequate for 10BASE-T networks, new cable installations should be made with Category 5e or better. All four pairs of wires should be used, with either the T568-A or the T568-B cable pinout arrangement. This type of cable installation supports the use of multiple protocols without the need to rewire. Figure shows the pinout arrangement for a 10BASE-T connection. The pair that transmits data on one device is connected to the pair that receives data on the other device.
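The transmit/receive pairing can be sketched as a small table. This assumes the conventional 10BASE-T station (MDI) pinout, with pins 1 and 2 transmitting and pins 3 and 6 receiving; hub and switch ports are wired the opposite way internally, which is why like devices need a crossover cable:

```python
# Station-side (MDI) pin assignments for 10BASE-T; pins 4, 5, 7, and 8
# are unused by 10BASE-T itself.
PINOUT = {1: "Tx+", 2: "Tx-", 3: "Rx+", 6: "Rx-"}

def crossover(pin):
    """Map a pin to the pin it must reach so that Tx meets Rx."""
    return {1: 3, 2: 6, 3: 1, 6: 2}[pin]

print(PINOUT[1], "->", PINOUT[crossover(1)])   # -> Tx+ -> Rx+
print(PINOUT[2], "->", PINOUT[crossover(2)])   # -> Tx- -> Rx-
```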
Half duplex or full duplex is a configuration choice. 10BASE-T carries 10 Mbps of traffic in half-duplex mode and 20 Mbps in full-duplex mode.
The Interactive Media Activity will help students learn the features of 10BASE-T technology.
The next page describes the wiring and architecture of 10BASE-T.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.5
10BASE-T wiring and architecture
This page explains the wiring and architecture of 10BASE-T.
A 10BASE-T link generally connects a station to a hub or switch. Hubs are multi-port repeaters and count toward the limit on repeaters between distant stations. Hubs do not divide network segments into separate collision domains. Bridges and switches divide segments into separate collision domains. The maximum distance between bridges and switches is based on media limitations.
Although hubs may be linked, it is best to avoid this arrangement. A network with linked hubs may exceed the limit for maximum delay between stations. Multiple hubs should be arranged in hierarchical order like a tree structure. Performance is better if fewer repeaters are used between stations.
An architectural example is shown in Figure . The distance from one end of the network to the other places the architecture at its limit. The most important aspect to consider is how to keep the delay between distant stations to a minimum, regardless of the architecture and media types involved. A shorter maximum delay will provide better overall performance.
10BASE-T links can have unrepeated distances of up to 100 m (328 ft). While this may seem like a long distance, it is typically maximized when wiring an actual building. Hubs can solve the distance issue but will allow collisions to propagate. The widespread introduction of switches has made the distance limitation less important. If workstations are located within 100 m (328 ft) of a switch, the 100-m distance starts over at the switch.
The next page will describe Fast Ethernet.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.6
100-Mbps Ethernet
This page will discuss 100-Mbps Ethernet, which is also known as Fast Ethernet. The two technologies that have become important are 100BASE-TX, which uses a copper UTP medium, and 100BASE-FX, which uses a multimode optical fiber medium.
Three characteristics common to 100BASE-TX and 100BASE-FX are the timing parameters, the frame format, and parts of the transmission process. 100BASE-TX and 100BASE-FX share the same timing parameters. Note that one bit time at 100 Mbps = 10 ns = 0.01 microseconds = one 100-millionth of a second.
The 100-Mbps frame format is the same as the 10-Mbps frame.
Fast Ethernet is ten times faster than 10BASE-T. The bits that are sent are shorter in duration and occur more frequently. These higher frequency signals are more susceptible to noise. In response to these issues, two separate encoding steps are used by 100-Mbps Ethernet. The first part of the encoding uses a technique called 4B/5B, the second part of the encoding is the actual line encoding specific to copper or fiber.
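The 4B/5B step can be illustrated with a few entries of its translation table; each 4-bit data nibble becomes a 5-bit code group chosen to guarantee enough signal transitions for clock recovery, which is why 100-Mbps Ethernet actually signals at 125 Mbaud. Only four of the sixteen data codes are shown here for illustration:

```python
# A few entries from the standard 4B/5B data code table.
CODES_4B5B = {
    0b0000: 0b11110, 0b0001: 0b01001,
    0b0010: 0b10100, 0b0011: 0b10101,
}

def encode_nibbles(nibbles):
    return [CODES_4B5B[n] for n in nibbles]

symbols = encode_nibbles([0b0000, 0b0011])
print([f"{s:05b}" for s in symbols])   # -> ['11110', '10101']

# The 5/4 expansion means 125 Mbaud on the wire carries 100 Mbps of data.
print(125 * 4 / 5)                     # -> 100.0
```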
The next page will discuss the 100BASE-TX standard.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.7
100BASE-TX
This page will describe 100BASE-TX.
In 1995, 100BASE-TX, which uses Category 5 UTP cable, became the standard and a commercial success.
The original coaxial Ethernet used half-duplex transmission so only one device could transmit at a time. In 1997, Ethernet was expanded to include a full-duplex capability that allowed more than one PC on a network to transmit at the same time. Switches replaced hubs in many networks. These switches had full-duplex capabilities and could handle Ethernet frames quickly.
100BASE-TX uses 4B/5B encoding, which is then scrambled and converted to Multi-Level Transmit (MLT-3) line encoding. Figure shows four waveform examples. The top waveform has no transition in the center of the timing window; no transition indicates a binary zero. The second waveform shows a transition in the center of the timing window; a transition represents a binary one. The third waveform shows an alternating binary sequence. The fourth waveform shows that signal changes indicate ones and horizontal lines indicate zeros.
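The MLT-3 behavior described above can be sketched as a three-level cycle: a one advances the signal to the next level in the cycle, and a zero holds the current level.

```python
# MLT-3 line coding sketch: the signal steps through the level cycle
# 0, +1, 0, -1. A binary one moves to the next level ("signal changes
# indicate ones"); a binary zero holds the level ("horizontal lines
# indicate zeros").
CYCLE = [0, +1, 0, -1]

def mlt3_encode(bits):
    state, out = 0, []                 # start at index 0 (level 0)
    for b in bits:
        if b == 1:
            state = (state + 1) % 4    # transition on a one
        out.append(CYCLE[state])       # level held on a zero
    return out

print(mlt3_encode([1, 1, 1, 1, 0, 1]))   # -> [1, 0, -1, 0, 0, 1]
```

Cycling through three levels rather than two keeps most of the signal energy at lower frequencies, which is what lets 125 Mbaud fit on Category 5 copper.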
Figure shows the pinout for a 100BASE-TX connection. Notice that the two separate transmit-receive paths exist. This is identical to the 10BASE-T configuration.
100BASE-TX carries 100 Mbps of traffic in half-duplex mode. In full-duplex mode, 100BASE-TX can exchange 200 Mbps of traffic. The concept of full duplex will become more important as Ethernet speeds increase.
The next page will discuss the fiber optic version of Fast Ethernet.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.8
100BASE-FX
This page covers 100BASE-FX.
When copper-based Fast Ethernet was introduced, a fiber version was also desired. A fiber version could be used for backbone applications, connections between floors, buildings where copper is less desirable, and also in high-noise environments. 100BASE-FX was introduced to satisfy this desire. However, 100BASE-FX was never adopted successfully. This was due to the introduction of Gigabit Ethernet copper and fiber standards. Gigabit Ethernet standards are now the dominant technology for backbone installations, high-speed cross-connects, and general infrastructure needs.
The timing, frame format, and transmission are the same in both copper and fiber versions of 100-Mbps Fast Ethernet. 100BASE-FX, however, uses NRZI encoding, which is shown in Figure . The top waveform has no transition, which indicates a binary zero. In the second waveform, the transition in the center of the timing window indicates a binary one. The third waveform shows an alternating binary sequence, and the third and fourth waveforms make it clear that no transition indicates a binary zero while the presence of a transition indicates a binary one.
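The NRZI rule, where a transition signals a one and the absence of a transition signals a zero, can be sketched as follows:

```python
# NRZI sketch: the encoder toggles the signal level for each one and
# holds it for each zero; the decoder recovers bits by comparing each
# level with the previous one.
def nrzi_encode(bits, level=0):
    out = []
    for b in bits:
        if b == 1:
            level ^= 1      # transition signals a one
        out.append(level)   # unchanged level signals a zero
    return out

def nrzi_decode(levels, start=0):
    bits, prev = [], start
    for lv in levels:
        bits.append(1 if lv != prev else 0)
        prev = lv
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
print(nrzi_decode(nrzi_encode(data)) == data)   # -> True
```

One consequence visible in the sketch: long runs of zeros produce no transitions at all, which is why NRZI is paired with 4B/5B code groups that bound the run length.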
Figure summarizes a 100BASE-FX link and pinouts. A fiber pair with either ST or SC connectors is most commonly used.
The separate Transmit (Tx) and Receive (Rx) paths in 100BASE-FX optical fiber allow for 200-Mbps transmission.
The next page will explain the Fast Ethernet architecture.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.9
Fast Ethernet architecture
This page describes the architecture of Fast Ethernet.
Fast Ethernet links generally consist of a connection between a station and a hub or switch. Hubs are considered multi-port repeaters and switches are considered multi-port bridges. These are subject to the 100-m (328 ft) UTP media distance limitation.
A Class I repeater may introduce up to 140 bit-times latency. Any repeater that changes between one Ethernet implementation and another is a Class I repeater. A Class II repeater is restricted to smaller timing delays, 92 bit times, because it immediately repeats the incoming signal to all other ports without a translation process. To achieve a smaller timing delay, Class II repeaters can only connect to segment types that use the same signaling technique.
As with 10-Mbps versions, it is possible to modify some of the architecture rules for 100-Mbps versions. Modification of the architecture rules is strongly discouraged for 100BASE-TX. 100BASE-TX cable between Class II repeaters may not exceed 5 m (16 ft). Links that operate in half duplex are not uncommon in Fast Ethernet. However, half duplex is undesirable because the signaling scheme is inherently full duplex.
Figure shows architecture configuration cable distances. 100BASE-TX links can have unrepeated distances up to 100 m. Switches have made this distance limitation less important. Most Fast Ethernet implementations are switched.
This page concludes this lesson. The next lesson will discuss Gigabit and 10-Gigabit Ethernet. The first page describes 1000-Mbps Ethernet standards.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.1
1000-Mbps Ethernet
This page covers the 1000-Mbps Ethernet or Gigabit Ethernet standards. These standards specify both fiber and copper media for data transmissions. The 1000BASE-T standard, IEEE 802.3ab, uses Category 5, or higher, balanced copper cabling. The 1000BASE-X standard, IEEE 802.3z, specifies 1 Gbps full duplex over optical fiber.
1000BASE-T, 1000BASE-SX, and 1000BASE-LX use the same timing parameters, as shown in Figure . They use a bit time of 1 ns, which is 0.000000001 of a second, or one billionth of a second. The Gigabit Ethernet frame has the same format as the frame used for 10- and 100-Mbps Ethernet. Some implementations of Gigabit Ethernet may use different processes to convert frames to bits on the cable. Figure shows the Ethernet frame fields.
The differences between standard Ethernet, Fast Ethernet and Gigabit Ethernet occur at the physical layer. Due to the increased speeds of these newer standards, the shorter duration bit times require special considerations. Since the bits are introduced on the medium for a shorter duration and more often, timing is critical. This high-speed transmission requires higher frequencies. This causes the bits to be more susceptible to noise on copper media.
These issues require Gigabit Ethernet to use two separate encoding steps. Data transmission is more efficient when codes are used to represent the binary bit stream. The encoded data provides synchronization, efficient usage of bandwidth, and improved signal-to-noise ratio characteristics.
At the physical layer, the bit patterns from the MAC layer are converted into symbols. The symbols may also be control information, such as start of frame, end of frame, and idle conditions on a link. The frame is coded into control symbols and data symbols to increase network throughput.
Fiber-based Gigabit Ethernet, or 1000BASE-X, uses 8B/10B encoding, which is similar to the 4B/5B concept. This is followed by the simple nonreturn to zero (NRZ) line encoding of light on optical fiber. This encoding process is possible because the fiber medium can carry higher bandwidth signals.
The next page will discuss the 1000BASE-T standard.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.2
1000BASE-T
This page will describe 1000BASE-T.
As Fast Ethernet was installed to increase bandwidth to workstations, bottlenecks began to appear upstream in the network. The 1000BASE-T standard, which is IEEE 802.3ab, was developed to provide additional bandwidth to help alleviate these bottlenecks. It provided more throughput for devices such as intra-building backbones, inter-switch links, server farms, and other wiring closet applications, as well as connections for high-end workstations. 1000BASE-T was designed to function over Category 5 copper cable that passes the Category 5e test. Most installed Category 5 cable can pass the Category 5e certification if properly terminated. It is important that the 1000BASE-T standard be interoperable with 10BASE-T and 100BASE-TX.
Since Category 5e cable can reliably carry up to 125 Mbps of traffic, obtaining 1000 Mbps, or 1 gigabit, of bandwidth was a design challenge. The first step in accomplishing 1000BASE-T is to use all four pairs of wires instead of the two pairs used by 10BASE-T and 100BASE-TX. This requires complex circuitry that allows full-duplex transmission on the same wire pair, which provides 250 Mbps per pair. With all four wire pairs, this provides the desired 1000 Mbps. Since the information travels simultaneously across the four paths, the circuitry has to divide frames at the transmitter and reassemble them at the receiver.
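The bandwidth arithmetic above checks out: each pair signals at 125 Mbaud (the rate Category 5e is rated to carry) and each PAM-5 symbol carries 2 information bits per pair, with the extra signal levels supporting error correction.

```python
# 1000BASE-T throughput: 4 pairs x 125 Mbaud x 2 bits per symbol.
PAIRS = 4
SYMBOL_RATE_MBAUD = 125
BITS_PER_SYMBOL = 2

per_pair_mbps = SYMBOL_RATE_MBAUD * BITS_PER_SYMBOL
print(per_pair_mbps)            # -> 250 Mbps per pair
print(per_pair_mbps * PAIRS)    # -> 1000 Mbps total
```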
1000BASE-T uses 4D-PAM5 line encoding on Category 5e, or better, UTP. This means that the transmission and reception of data happen in both directions on the same wires at the same time. As might be expected, this results in a permanent collision on the wire pairs, which produces complex voltage patterns. With complex integrated circuits that use techniques such as echo cancellation, Layer 1 Forward Error Correction (FEC), and prudent selection of voltage levels, the system achieves 1-gigabit throughput.
In idle periods there are nine voltage levels found on the cable, and during data transmission periods there are 17 voltage levels found on the cable. With this large number of states and the effects of noise, the signal on the wire looks more analog than digital. Like analog, the system is more susceptible to noise due to cable and termination problems.
The data from the sending station is carefully divided into four parallel streams, encoded, transmitted and detected in parallel, and then reassembled into one received bit stream. Figure represents the simultaneous full duplex on four-wire pairs. 1000BASE-T supports both half-duplex as well as full-duplex operation. The use of full-duplex 1000BASE-T is widespread.
The next page will introduce 1000BASE-SX and 1000BASE-LX.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.3
1000BASE-SX and LX
This page will discuss single-mode and multimode optical fiber.
The IEEE 802.3 standard recommends that Gigabit Ethernet over fiber be the preferred backbone technology.
The timing, frame format, and transmission are common to all versions of 1000 Mbps. Two signal-encoding schemes are defined at the physical layer. The 8B/10B scheme is used for optical fiber and shielded copper media, and the pulse amplitude modulation 5 (PAM5) is used for UTP.
1000BASE-X uses 8B/10B encoding converted to non-return to zero (NRZ) line encoding. NRZ encoding relies on the signal level found in the timing window to determine the binary value for that bit period. Unlike most of the other encoding schemes described, this encoding system is level driven instead of edge driven. That is, the determination of whether a bit is a zero or a one is made by the level of the signal rather than by when the signal changes levels.
The NRZ signals are then pulsed into the fiber using either short-wavelength or long-wavelength light sources. The short-wavelength version uses an 850 nm laser or LED source in multimode optical fiber (1000BASE-SX). It is the lower-cost option but supports shorter distances. The long-wavelength 1310 nm laser source uses either single-mode or multimode optical fiber (1000BASE-LX). Laser sources used with single-mode fiber can achieve distances of up to 5000 meters. Because of the time it takes to turn the LED or laser completely on and off, the light is instead pulsed using low and high power. A logic zero is represented by low power, and a logic one by high power.
The Media Access Control method treats the link as point-to-point. Since separate fibers are used for transmitting (Tx) and receiving (Rx) the connection is inherently full duplex. Gigabit Ethernet permits only a single repeater between two stations. Figure is a 1000BASE Ethernet media comparison chart.
The next page describes the architecture of Gigabit Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.4
Gigabit Ethernet architecture
This page will discuss the architecture of Gigabit Ethernet.
The distance limitations of full-duplex links are only limited by the medium, and not the round-trip delay. Since most Gigabit Ethernet is switched, the values in Figures and are the practical limits between devices. Daisy-chaining, star, and extended star topologies are all allowed. The issue then becomes one of logical topology and data flow, not timing or distance limitations.
A 1000BASE-T UTP cable is the same as 10BASE-T and 100BASE-TX cable, except that link performance must meet the higher quality Category 5e or ISO Class D (2000) requirements.
Modification of the architecture rules is strongly discouraged for 1000BASE-T. At 100 meters, 1000BASE-T is operating close to the edge of the ability of the hardware to recover the transmitted signal. Any cabling problems or environmental noise could render an otherwise compliant cable inoperable even at distances that are within the specification.
It is recommended that all links between a station and a hub or switch be configured for Auto-Negotiation to permit the highest common performance. This will avoid accidental misconfiguration of the other required parameters for proper Gigabit Ethernet operation.
The next page will discuss 10-Gigabit Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.5
10-Gigabit Ethernet
This page will describe 10-Gigabit Ethernet and compare it to other versions of Ethernet.
IEEE 802.3 was adapted, through the 802.3ae supplement, to include 10 Gbps full-duplex transmission over fiber optic cable. The basic similarities between 802.3ae and 802.3, the original Ethernet standard, are remarkable. This 10-Gigabit Ethernet (10GbE) is evolving for use not only in LANs, but also in MANs and WANs.
With the frame format and other Ethernet Layer 2 specifications compatible with previous standards, 10GbE can provide increased bandwidth needs that are interoperable with existing network infrastructure.
A major conceptual change for Ethernet is emerging with 10GbE. Ethernet is traditionally thought of as a LAN technology, but 10GbE physical layer standards allow both an extension in distance to 40 km over single-mode fiber and compatibility with synchronous optical network (SONET) and synchronous digital hierarchy (SDH) networks. Operation at a 40 km distance makes 10GbE a viable MAN technology. Compatibility with SONET/SDH networks operating up to OC-192 speeds (9.584640 Gbps) makes 10GbE a viable WAN technology as well. 10GbE may also compete with ATM for certain applications.
To summarize, how does 10GbE compare to other varieties of Ethernet?
* Frame format is the same, allowing interoperability among all varieties of Ethernet: legacy, Fast, Gigabit, and 10-Gigabit, with no reframing or protocol conversions.
* Bit time is now 0.1 nanoseconds. All other time variables scale accordingly.
* Since only full-duplex fiber connections are used, CSMA/CD is not necessary.
* The IEEE 802.3 sublayers within OSI Layers 1 and 2 are mostly preserved, with a few additions to accommodate 40 km fiber links and interoperability with SONET/SDH technologies.
* Flexible, efficient, reliable, relatively low cost end-to-end Ethernet networks become possible.
* TCP/IP can run over LANs, MANs, and WANs with one Layer 2 transport method.
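The bit-time point above is simple arithmetic: bit time is the reciprocal of the data rate, so each tenfold speed increase shrinks the bit by a factor of ten.

```python
# Bit time in nanoseconds for a given Ethernet data rate: 1 / rate.

def bit_time_ns(rate_bps):
    return 1e9 / rate_bps

assert bit_time_ns(10e6) == 100.0   # 10 Mbps Ethernet: 100 ns per bit
assert bit_time_ns(100e6) == 10.0   # Fast Ethernet: 10 ns
assert bit_time_ns(1e9) == 1.0      # Gigabit Ethernet: 1 ns
assert bit_time_ns(10e9) == 0.1     # 10GbE: 0.1 ns, as stated above
```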
The basic standard governing CSMA/CD is IEEE 802.3. An IEEE 802.3 supplement, entitled 802.3ae, governs the 10GbE family. As is typical for new technologies, a variety of implementations are being considered, including:
* 10GBASE-SR – Intended for short distances over already-installed multimode fiber, supports a range of 26 m to 82 m
* 10GBASE-LX4 – Uses wavelength division multiplexing (WDM), supports 240 m to 300 m over already-installed multimode fiber and 10 km over single-mode fiber
* 10GBASE-LR and 10GBASE-ER – Support 10 km and 40 km over single-mode fiber
* 10GBASE-SW, 10GBASE-LW, and 10GBASE-EW – Known collectively as 10GBASE-W, intended to work with OC-192 synchronous transport module SONET/SDH WAN equipment
The IEEE 802.3ae Task force and the 10-Gigabit Ethernet Alliance (10 GEA) are working to standardize these emerging technologies.
10-Gbps Ethernet (IEEE 802.3ae) was standardized in June 2002. It is a full-duplex protocol that uses only optical fiber as a transmission medium. The maximum transmission distances depend on the type of fiber being used. When single-mode fiber is used as the transmission medium, the maximum transmission distance is 40 kilometers (25 miles). Some discussions among IEEE members have begun that suggest the possibility of standards for 40, 80, and even 100 Gbps Ethernet.
The next page will discuss the architecture of 10-Gigabit Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.6
10-Gigabit Ethernet architectures
This page describes the 10-Gigabit Ethernet architectures.
As with the development of Gigabit Ethernet, the increase in speed comes with extra requirements. The shorter bit time duration that results from the increased speed requires special considerations. For 10 GbE transmissions, each data bit duration is 0.1 nanosecond. This means there would be 1,000 10 GbE data bits in the same bit time as one data bit in a 10-Mbps Ethernet data stream. Because of the short duration of the 10 GbE data bit, it is often difficult to separate a data bit from noise. 10 GbE data transmissions rely on exact bit timing to separate the data from the effects of noise on the physical layer. This is the purpose of synchronization.
In response to these issues of synchronization, bandwidth, and Signal-to-Noise Ratio, 10-Gigabit Ethernet uses two separate encoding steps. By using codes to represent the user data, transmission is made more efficient. The encoded data provides synchronization, efficient usage of bandwidth, and improved Signal-to-Noise Ratio characteristics.
Complex serial bit streams are used for all versions of 10GbE except for 10GBASE-LX4, which uses Wide Wavelength Division Multiplexing (WWDM) to multiplex four simultaneous bit streams as four wavelengths of light launched into the fiber at one time.
Figure represents the particular case of using four laser sources of slightly different wavelengths. Upon receipt from the medium, the optical signal stream is demultiplexed into four separate optical signal streams. The four optical signal streams are then converted back into four electronic bit streams as they travel, in approximately the reverse process, back up through the sublayers to the MAC layer.
Currently, most 10GbE products are in the form of modules, or line cards, for addition to high-end switches and routers. As the 10GbE technologies evolve, an increasing diversity of signaling components can be expected. As optical technologies evolve, improved transmitters and receivers will be incorporated into these products, taking further advantage of modularity. All 10GbE varieties use optical fiber media. Fiber types include 10 µm single-mode fiber, and 50 µm and 62.5 µm multimode fibers. A range of fiber attenuation and dispersion characteristics is supported, but they limit operating distances.
Even though support is limited to fiber optic media, some of the maximum cable lengths are surprisingly short. No repeater is defined for 10-Gigabit Ethernet since half duplex is explicitly not supported.
As with 10 Mbps, 100 Mbps and 1000 Mbps versions, it is possible to modify some of the architecture rules slightly. Possible architecture adjustments are related to signal loss and distortion along the medium. Due to dispersion of the signal and other issues the light pulse becomes undecipherable beyond certain distances.
The next page will discuss the future of Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.7
Future of Ethernet
This page will teach students about the future of Ethernet.
Ethernet has gone through an evolution from Legacy to Fast to Gigabit to MultiGigabit technologies. While other LAN technologies are still in place (legacy installations), Ethernet dominates new LAN installations, so much so that some have referred to Ethernet as the LAN "dial tone". Ethernet is now the standard for horizontal, vertical, and inter-building connections. Recently developed versions of Ethernet are blurring the distinction between LANs, MANs, and WANs.
While 1-Gigabit Ethernet is now widely available and 10-Gigabit products are becoming more available, the IEEE and the 10-Gigabit Ethernet Alliance are working on 40, 100, or even 160 Gbps standards. The technologies that are adopted will depend on a number of factors, including the rate of maturation of the technologies and standards, the rate of adoption in the market, and cost.
Proposals for Ethernet arbitration schemes other than CSMA/CD have been made. However, the collision problems of the physical bus topologies of 10BASE5 and 10BASE2, and of 10BASE-T and 100BASE-TX hubs, are no longer common. The use of UTP and optical fiber with separate Tx and Rx paths, along with the decreasing cost of switches, has made single shared-media, half-duplex connections much less important.
The future of networking media is three-fold:
1. Copper (up to 1000 Mbps, perhaps more)
2. Wireless (approaching 100 Mbps, perhaps more)
3. Optical fiber (currently at 10,000 Mbps and soon to be more)
Copper and wireless media have certain physical and practical limitations on the highest frequency signals that can be transmitted. This is not a limiting factor for optical fiber in the foreseeable future. The bandwidth limitations on optical fiber are extremely large and are not yet being threatened. In fiber systems, it is the electronics technology (such as emitters and detectors) and fiber manufacturing processes that most limit the speed. Upcoming developments in Ethernet are likely to be heavily weighted towards Laser light sources and single-mode optical fiber.
When Ethernet was slower, half duplex, subject to collisions, and governed by a "democratic" process for prioritization, it was not considered to have the Quality of Service (QoS) capabilities required to handle certain types of traffic. This included such things as IP telephony and video multicast.
The full-duplex, high-speed Ethernet technologies that now dominate the market are proving to be sufficient to support even QoS-intensive applications. This makes the potential applications of Ethernet even wider. Ironically, end-to-end QoS capability helped drive the push for ATM to the desktop and to the WAN in the mid-1990s, but now it is Ethernet, not ATM, that is approaching this goal.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
Ethernet is a technology that has increased in speed one thousand times, from 10 Mbps to 10,000 Mbps, in less than a decade. All forms of Ethernet share a similar frame structure and this leads to excellent interoperability. Most Ethernet copper connections are now switched full duplex, and the fastest copper-based Ethernet is 1000BASE-T, or Gigabit Ethernet. 10 Gigabit Ethernet and faster are exclusively optical fiber-based technologies.
10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four common features of Legacy Ethernet are timing parameters, frame format, transmission process, and a basic design rule.
Legacy Ethernet encodes data on an electrical signal. The form of encoding used in 10 Mbps systems is called Manchester encoding. Manchester encoding uses a change in voltage to represent the binary numbers zero and one. An increase or decrease in voltage during a timed period, called the bit period, determines the binary value of the bit.
In addition to a standard bit period, Ethernet standards set limits for slot time and interframe spacing. Different types of media can affect transmission timing and timing standards ensure interoperability. 10 Mbps Ethernet operates within the timing limits offered by a series of no more than five segments separated by no more than four repeaters.
A single thick coaxial cable was the first medium used for Ethernet. 10BASE2, using a thinner coaxial cable, was introduced in 1985. 10BASE-T, using twisted-pair copper wire, was introduced in 1990. Because it used multiple wire pairs, 10BASE-T offered the option of full-duplex signaling. 10BASE-T carries 10 Mbps of traffic in half-duplex mode and 20 Mbps in full-duplex mode.
10BASE-T links can have unrepeated distances of up to 100 m. Beyond that, network devices such as repeaters, hubs, bridges, and switches are used to extend the scope of the LAN. With the advent of switches, the 4-repeater rule is not so relevant. You can extend the LAN indefinitely by daisy-chaining switches. Each switch-to-switch connection, with a maximum length of 100 m, is essentially a point-to-point connection without the media contention or timing issues of repeaters and hubs.
100-Mbps Ethernet, also known as Fast Ethernet, can be implemented using twisted-pair copper wire, as in 100BASE-TX, or fiber media, as in 100BASE-FX. 100 Mbps forms of Ethernet can transmit 200 Mbps in full duplex.
Because the higher frequency signals used in Fast Ethernet are more susceptible to noise, two separate encoding steps are used by 100-Mbps Ethernet to enhance signal integrity.
Gigabit Ethernet over copper wire is accomplished by the following:
* Category 5e UTP cable and careful improvements in electronics are used to boost 100 Mbps per wire pair to 125 Mbps per wire pair.
* All four wire pairs are used instead of just two. This allows 125 Mbps per wire pair, or 500 Mbps across the four wire pairs.
* Sophisticated electronics allow permanent collisions on each wire pair and run signals in full duplex, doubling the 500 Mbps to 1000 Mbps.
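The three steps above reduce to a line of arithmetic, reproduced here exactly as the module frames it:

```python
# The module's 1000BASE-T arithmetic: 125 Mbps per pair, four pairs,
# doubled by sending in both directions on each pair simultaneously.

per_pair_mbps = 125
one_direction_mbps = per_pair_mbps * 4      # all four wire pairs: 500 Mbps
total_mbps = one_direction_mbps * 2         # dual duplex on each pair
assert total_mbps == 1000                   # the Gigabit Ethernet target
```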
On Gigabit Ethernet networks bit signals occur in one tenth of the time of 100 Mbps networks and 1/100 of the time of 10 Mbps networks. With signals occurring in less time the bits become more susceptible to noise. The issue becomes how fast the network adapter or interface can change voltage levels to signal bits and still be detected reliably one hundred meters away at the receiving NIC or interface. At this speed encoding and decoding data becomes even more complex.
The fiber versions of Gigabit Ethernet, 1000BASE-SX and 1000BASE-LX, offer the following advantages: noise immunity, small size, and increased unrepeated distances and bandwidth. The IEEE 802.3 standard recommends that Gigabit Ethernet over fiber be the preferred backbone technology.
CCNA module text
Ethernet is now the dominant LAN technology in the world. Ethernet is a family of LAN technologies that may be best understood with the OSI reference model. All LANs must deal with the basic issue of how individual stations, or nodes, are named. Ethernet specifications support different media, bandwidths, and other Layer 1 and 2 variations. However, the basic frame format and address scheme is the same for all varieties of Ethernet.
Various MAC strategies have been invented to allow multiple stations to access physical media and network devices. It is important to understand how network devices gain access to the network media before students can comprehend and troubleshoot the entire network.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.
Students who complete this module should be able to perform the following tasks:
* Describe the basics of Ethernet technology
* Explain naming rules of Ethernet technology
* Explain how Ethernet relates to the OSI model
* Describe the Ethernet framing process and frame structure
* List Ethernet frame field names and purposes
* Identify the characteristics of CSMA/CD
* Describe Ethernet timing, interframe spacing, and backoff time after a collision
* Define Ethernet errors and collisions
* Explain the concept of auto-negotiation in relation to speed and duplex
6.1
Ethernet Fundamentals
6.1.1
Introduction to Ethernet
This page provides an introduction to Ethernet. Most of the traffic on the Internet originates and ends with Ethernet connections. Since it began in the 1970s, Ethernet has evolved to meet the increased demand for high-speed LANs. When optical fiber media was introduced, Ethernet adapted to take advantage of the superior bandwidth and low error rate that fiber offers. Now the same protocol that transported data at 3 Mbps in 1973 can carry data at 10 Gbps.
The success of Ethernet is due to the following factors:
* Simplicity and ease of maintenance
* Ability to incorporate new technologies
* Reliability
* Low cost of installation and upgrade
The introduction of Gigabit Ethernet has extended the original LAN technology to distances that make Ethernet a MAN and WAN standard.
The original idea for Ethernet was to allow two or more hosts to use the same medium with no interference between the signals. This problem of multiple user access to a shared medium was studied in the early 1970s at the University of Hawaii. A system called Alohanet was developed to allow various stations on the Hawaiian Islands structured access to the shared radio frequency band in the atmosphere. This work later formed the basis for the Ethernet access method known as CSMA/CD.
The first LAN in the world was the original version of Ethernet. Robert Metcalfe and his coworkers at Xerox designed it more than thirty years ago. The first Ethernet standard was published in 1980 by a consortium of Digital Equipment Corporation, Intel, and Xerox (DIX). Metcalfe wanted Ethernet to be a shared standard from which everyone could benefit, so it was released as an open standard. The first products that were developed from the Ethernet standard were sold in the early 1980s. Ethernet transmitted at up to 10 Mbps over thick coaxial cable up to a distance of 2 kilometers (km). This type of coaxial cable was referred to as thicknet and was about the width of a small finger.
In 1985, the IEEE standards committee for Local and Metropolitan Networks published standards for LANs. These standards start with the number 802. The standard for Ethernet is 802.3. The IEEE wanted to make sure that its standards were compatible with those of the International Organization for Standardization (ISO) and the OSI model. To do this, the IEEE 802.3 standard had to address the needs of Layer 1 and the lower portion of Layer 2 of the OSI model. As a result, some small modifications to the original Ethernet standard were made in 802.3.
The differences between the two standards were so minor that any Ethernet NIC can transmit and receive both Ethernet and 802.3 frames. Essentially, Ethernet and IEEE 802.3 are the same standard.
The 10-Mbps bandwidth of Ethernet was more than enough for the slow PCs of the 1980s. By the early 1990s PCs became much faster, file sizes increased, and data flow bottlenecks occurred. Most were caused by the low availability of bandwidth. In 1995, IEEE announced a standard for a 100-Mbps Ethernet. This was followed by standards for Gigabit Ethernet in 1998 and 1999.
All the standards are essentially compatible with the original Ethernet standard. An Ethernet frame could leave an older coax 10-Mbps NIC in a PC, be placed onto a 10-Gbps Ethernet fiber link, and end up at a 100-Mbps NIC. As long as the frame stays on Ethernet networks it is not changed. For this reason Ethernet is considered very scalable. The bandwidth of the network could be increased many times while the Ethernet technology remains the same.
The original Ethernet standard has been amended many times to manage new media and higher transmission rates. These amendments provide standards for new technologies and maintain compatibility between Ethernet variations.
The next page explains the naming rules for the Ethernet family of networks.
6.1
Ethernet Fundamentals
6.1.2
IEEE Ethernet naming rules
This page focuses on the Ethernet naming rules developed by IEEE.
Ethernet is not one networking technology, but a family of networking technologies that includes Legacy, Fast Ethernet, and Gigabit Ethernet. Ethernet speeds can be 10, 100, 1000, or 10,000 Mbps. The basic frame format and the IEEE sublayers of OSI Layers 1 and 2 remain consistent across all forms of Ethernet.
When Ethernet needs to be expanded to add a new medium or capability, the IEEE issues a new supplement to the 802.3 standard. The new supplements are given a one or two letter designation such as 802.3u. An abbreviated description, called an identifier, is also assigned to the supplement.
The abbreviated description consists of the following elements:
* A number that indicates the number of Mbps transmitted
* The word base to indicate that baseband signaling is used
* One or more letters of the alphabet indicating the type of medium used. For example, F = fiber optical cable and T = copper unshielded twisted pair
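As an illustration, a hypothetical helper (not part of any IEEE standard) can split such an identifier into its three elements:

```python
import re

# Hypothetical parser for IEEE-style identifiers such as "100BASE-TX":
# speed in Mbps, baseband/broadband signaling, and a medium designation.

def parse_identifier(name):
    match = re.fullmatch(r"(\d+)(BASE|BROAD)-?(\w+)", name.upper())
    if not match:
        raise ValueError("not an IEEE-style identifier: " + name)
    speed, signaling, medium = match.groups()
    return int(speed), signaling.lower() + "band", medium

assert parse_identifier("10BASE-T") == (10, "baseband", "T")
assert parse_identifier("100BASE-FX") == (100, "baseband", "FX")
assert parse_identifier("10BROAD36") == (10, "broadband", "36")
```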
Ethernet relies on baseband signaling, which uses the entire bandwidth of the transmission medium. The data signal is transmitted directly over the transmission medium.
In broadband signaling, the data signal is not placed directly on the transmission medium. Instead, an analog carrier signal is modulated by the data signal and then transmitted. Radio broadcasts and cable TV use broadband signaling. Ethernet used broadband signaling in the 10BROAD36 standard, the IEEE standard for an 802.3 Ethernet network using broadband transmission with thick coaxial cable running at 10 Mbps. 10BROAD36 is now considered obsolete.
IEEE cannot force manufacturers to fully comply with any standard. IEEE has two main objectives:
* Supply the information necessary to build devices that comply with Ethernet standards
* Promote innovation among manufacturers
Students will identify the IEEE 802 standards in the Interactive Media Activity.
The next page explains Ethernet and the OSI model.
6.1
Ethernet Fundamentals
6.1.3
Ethernet and the OSI model
This page will explain how Ethernet relates to the OSI model.
Ethernet operates in two areas of the OSI model. These are the lower half of the data link layer, which is known as the MAC sublayer, and the physical layer.
Data that moves from one Ethernet station to another often passes through a repeater. All stations in the same collision domain see traffic that passes through a repeater. A collision domain is a shared resource. Problems that originate in one part of a collision domain will usually impact the entire collision domain.
A repeater forwards traffic to all other ports. A repeater never sends traffic out the same port from which it was received. Any signal detected by a repeater will be forwarded. If the signal is degraded through attenuation or noise, the repeater will attempt to reconstruct and regenerate the signal.
To guarantee minimum bandwidth and operability, standards specify the maximum number of stations per segment, maximum segment length, and maximum number of repeaters between stations. Stations separated by bridges or routers are in different collision domains.
Figure maps a variety of Ethernet technologies to the lower half of OSI Layer 2 and all of Layer 1. Ethernet at Layer 1 involves signals, bit streams that travel on the media, components that put signals on media, and various topologies. Ethernet Layer 1 performs a key role in the communication that takes place between devices, but each of its functions has limitations. Layer 2 addresses these limitations.
Data link sublayers contribute significantly to technological compatibility and computer communications. The MAC sublayer is concerned with the physical components that will be used to communicate the information. The Logical Link Control (LLC) sublayer remains relatively independent of the physical equipment that will be used for the communication process.
While there are other varieties of Ethernet, the ones shown in the figure are the most widely used.
The Interactive Media Activity reviews the layers of the OSI model.
The next page explains the address system used by Ethernet networks.
6.1
Ethernet Fundamentals
6.1.4
Naming
This page will discuss the MAC addresses used by Ethernet networks.
An address system is required to uniquely identify computers and interfaces to allow for local delivery of frames on the Ethernet. Ethernet uses MAC addresses that are 48 bits in length and expressed as 12 hexadecimal digits. The first six hexadecimal digits, which are administered by the IEEE, identify the manufacturer or vendor. This portion of the MAC address is known as the Organizational Unique Identifier (OUI). The remaining six hexadecimal digits represent the interface serial number or another value administered by the manufacturer. MAC addresses are sometimes referred to as burned-in addresses (BIAs) because they are burned into ROM and are copied into RAM when the NIC initializes.
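The OUI/vendor split can be illustrated with a small helper. The example address uses Cisco's well-known OUI, 00:00:0C:

```python
# Split a MAC address written as 12 hex digits into its two halves:
# the IEEE-administered OUI and the vendor-administered portion.

def split_mac(mac):
    octets = mac.lower().replace("-", ":").split(":")
    if len(octets) != 6:
        raise ValueError("a MAC address has 6 octets (12 hex digits)")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, vendor_part = split_mac("00:00:0C:12:34:56")
assert oui == "00:00:0c"          # identifies the manufacturer (Cisco)
assert vendor_part == "12:34:56"  # serial number or other vendor value
```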
At the data link layer MAC headers and trailers are added to upper layer data. The header and trailer contain control information intended for the data link layer in the destination system. The data from upper layers is encapsulated within the data link frame, between the header and trailer, and then sent out on the network.
The NIC uses the MAC address to determine if a message should be passed on to the upper layers of the OSI model. The NIC does not use CPU processing time to make this assessment. This enables better communication times on an Ethernet network.
When a device sends data on an Ethernet network, it can use the destination MAC address to open a communication pathway to the other device. The source device attaches a header with the MAC address of the intended destination and sends data through the network. As this data travels along the network media the NIC in each device checks to see if the MAC address matches the physical destination address carried by the data frame. If there is no match, the NIC discards the data frame. When the data reaches the destination node, the NIC makes a copy and passes the frame up the OSI layers. On an Ethernet network, all nodes must examine the MAC header.
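That filtering decision can be sketched as follows. This is simplified: real NICs also accept subscribed multicast addresses, but only the broadcast case is shown here:

```python
# Simplified NIC filter: pass a frame up the stack only when the
# destination MAC is this interface's address or the broadcast address.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def nic_accepts(frame_dest, my_mac):
    dest = frame_dest.lower()
    return dest == my_mac.lower() or dest == BROADCAST

assert nic_accepts("00:0C:29:AA:BB:CC", "00:0c:29:aa:bb:cc")      # ours: keep
assert not nic_accepts("00:0c:29:11:22:33", "00:0c:29:aa:bb:cc")  # discarded
assert nic_accepts(BROADCAST, "00:0c:29:aa:bb:cc")                # broadcast
```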
All devices that are connected to the Ethernet LAN have MAC addressed interfaces. This includes workstations, printers, routers, and switches.
The next page will focus on Layer 2 frames.
6.1
Ethernet Fundamentals
6.1.5
Layer 2 framing
This page will explain how frames are created at Layer 2 of the OSI model.
Encoded bit streams, or data, on physical media represent a tremendous technological accomplishment, but they alone are not enough to make communication happen. Framing provides essential information that could not be obtained from coded bit streams alone. This information includes the following:
* Which computers are in communication with each other
* When communication between individual computers begins and when it ends
* Which errors occurred while the computers communicated
* Which computer will communicate next
Framing is the Layer 2 encapsulation process. A frame is the Layer 2 protocol data unit.
A voltage versus time graph could be used to visualize bits. However, it may be too difficult to graph address and control information for larger units of data. Another type of diagram that could be used is the frame format diagram, which is based on voltage versus time graphs. Frame format diagrams are read from left to right, just like an oscilloscope graph. The frame format diagram shows different groupings of bits, or fields, that perform specific functions.
There are many different types of frames described by various standards. A single generic frame has sections called fields. Each field is composed of bytes. The names of the fields are as follows:
* Start Frame field
* Address field
* Length/Type field
* Data field
* Frame Check Sequence (FCS) field
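A toy model of this generic frame, with each field held as bytes. The field sizes and the sum-based checksum here are invented for illustration; real frame formats fix both:

```python
# Generic frame as a dictionary of byte-string fields. The one-octet start
# marker and the additive "FCS" are placeholders, not any real standard.

def build_generic_frame(dest, src, payload):
    return {
        "start": b"\xab",                            # start-of-frame signal
        "address": dest + src,                       # destination, then source
        "length_type": len(payload).to_bytes(2, "big"),
        "data": payload,
        "fcs": (sum(payload) & 0xFFFFFFFF).to_bytes(4, "big"),
    }

frame = build_generic_frame(b"\x01\x02\x03", b"\x04\x05\x06", b"hi")
assert frame["length_type"] == b"\x00\x02"           # payload is 2 octets
```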
When computers are connected to a physical medium, there must be a way to inform other computers when they are about to transmit a frame. Various technologies do this in different ways. Regardless of the technology, all frames begin with a sequence of bytes to signal the data transmission.
All frames contain naming information, such as the name of the source node, or source MAC address, and the name of the destination node, or destination MAC address.
Most frames have some specialized fields. In some technologies, a Length field specifies the exact length of a frame in bytes. Some frames have a Type field, which specifies the Layer 3 protocol used by the device that wants to send data.
Frames are used to send upper-layer data and ultimately the user application data from a source to a destination. The data package includes the message to be sent, or user application data. Extra bytes may be added so frames have a minimum length for timing purposes. LLC bytes are also included with the Data field in the IEEE standard frames. The LLC sublayer takes the network protocol data, which is an IP packet, and adds control information to help deliver the packet to the destination node. Layer 2 communicates with the upper layers through LLC.
All frames, and the bits, bytes, and fields contained within them, are susceptible to errors from a variety of sources. The FCS field contains a number that is calculated by the source node based on the data in the frame. This number is added to the end of the frame that is sent. When the destination node receives the frame, the FCS number is recalculated and compared with the FCS number included in the frame. If the two numbers are different, an error is assumed and the frame is discarded.
Because the source cannot detect that the frame has been discarded, retransmission has to be initiated by higher-layer, connection-oriented protocols that provide data flow control. Because these protocols, such as TCP, expect a frame acknowledgment (ACK) to be sent by the peer station within a certain time, retransmission usually occurs.
There are three primary ways to calculate the FCS number:
* Cyclic redundancy check (CRC) – performs calculations on the data.
* Two-dimensional parity – places individual bytes in a two-dimensional array and performs redundancy checks vertically and horizontally on the array, creating an extra byte resulting in an even or odd number of binary 1s.
* Internet checksum – adds the values of all of the data bits to arrive at a sum.
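The first method, CRC, is what Ethernet actually uses for its FCS. A sketch of the send-and-verify round trip using Python's CRC-32 (the bit ordering and register handling of real hardware are glossed over):

```python
import zlib

# Sender appends a 4-octet checksum; receiver recomputes it and compares.

def add_fcs(data):
    return data + zlib.crc32(data).to_bytes(4, "big")

def fcs_ok(frame):
    data, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(data).to_bytes(4, "big") == fcs

frame = add_fcs(b"some payload bytes")
assert fcs_ok(frame)                          # intact frame passes the check
corrupted = b"S" + frame[1:]                  # flip bits in the first octet
assert not fcs_ok(corrupted)                  # mismatch: frame is discarded
```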
The node that transmits data must get the attention of other devices to start and end a frame. The Length field indicates where the frame ends. The frame ends after the FCS. Sometimes there is a formal byte sequence referred to as an end-frame delimiter.
The next page will discuss the frame structure of an Ethernet network.
6.1
Ethernet Fundamentals
6.1.6
Ethernet frame structure
This page will describe the frame structure of Ethernet networks.
At the data link layer the frame structure is nearly identical for all speeds of Ethernet from 10 Mbps to 10,000 Mbps. However, at the physical layer almost all versions of Ethernet are very different. Each speed has a distinct set of architecture design rules.
In the version of Ethernet that was developed by DIX prior to the adoption of the IEEE 802.3 version of Ethernet, the Preamble and Start-of-Frame (SOF) Delimiter were combined into a single field. The binary pattern was identical. The field labeled Length/Type was only listed as Length in the early IEEE versions and only as Type in the DIX version. These two uses of the field were officially combined in a later IEEE version since both uses were common.
The Ethernet II Type field is incorporated into the current 802.3 frame definition. When a node receives a frame it must examine the Length/Type field to determine which higher-layer protocol is present. If the two-octet value is equal to or greater than 0x0600 hexadecimal, 1536 decimal, then the contents of the Data Field are decoded according to the protocol indicated. Ethernet II is the Ethernet frame format that is used in TCP/IP networks.
The next page will discuss the information included in a frame.
6.1
Ethernet Fundamentals
6.1.7
Ethernet frame fields
This page defines the fields that are used in a frame.
Some of the fields permitted or required in an 802.3 Ethernet frame are as follows:
* Preamble
* SOF Delimiter
* Destination Address
* Source Address
* Length/Type
* Header and Data
* FCS
* Extension
The preamble is an alternating pattern of ones and zeros used for timing synchronization in 10 Mbps and slower implementations of Ethernet. Faster versions of Ethernet are synchronous, so this timing information is unnecessary but retained for compatibility.
A SOF delimiter consists of a one-octet field that marks the end of the timing information and contains the bit sequence 10101011.
The destination address can be unicast, multicast, or broadcast.
The Source Address field contains the MAC source address. The source address is generally the unicast address of the Ethernet node that transmitted the frame. However, many virtual protocols use and sometimes share a specific source MAC address to identify the virtual entity.
The Length/Type field supports two different uses. If the value is less than 1536 decimal, 0x600 hexadecimal, then the value indicates length: the number of bytes of data that follow the field. The length interpretation is used when the LLC layer provides the protocol identification. If the value is 1536 decimal or greater, the value indicates type: which upper-layer protocol will receive the data after the Ethernet process is complete.
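Using the 1536 (0x0600) threshold quoted above, the decision a receiver makes can be sketched in a few lines of Python:

```python
def interpret_length_type(value: int) -> str:
    # Values below 1536 (0x0600) give the data length; the LLC header
    # then identifies the protocol. Values of 0x0600 and above are an
    # EtherType naming the upper-layer protocol directly.
    if value < 0x0600:
        return f"length: {value} octets of data follow"
    return f"type: 0x{value:04X}"

print(interpret_length_type(0x0800))  # 0x0800 is the EtherType for IPv4
print(interpret_length_type(46))      # minimum unpadded data length
```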
The Data field, and padding if necessary, may be of any length that does not cause the frame to exceed the maximum frame size. The maximum transmission unit (MTU) for Ethernet is 1500 octets, so the data should not exceed that size. The content of this field is unspecified. An unspecified amount of data is inserted immediately after the user data when there is not enough user data for the frame to meet the minimum frame length. This extra data is called a pad. Ethernet requires each frame to be between 64 and 1518 octets.
The FCS contains a 4-byte CRC value that is created by the sending device and recalculated by the destination device to check for damaged frames. The corruption of a single bit anywhere from the start of the Destination Address through the end of the FCS field will cause the checksum to be different. Therefore, the coverage of the FCS includes itself. It is not possible to distinguish between corruption of the FCS and corruption of any other field used in the calculation.
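The padding and FCS rules above can be pulled together in a hypothetical frame builder. This is a sketch, not a working network stack: the preamble and SFD are omitted, and the byte ordering of the appended FCS is simplified (real Ethernet specifies a particular bit order on the wire).

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, eth_type: int, payload: bytes) -> bytes:
    # Minimum frame is 64 octets: 6 (dst) + 6 (src) + 2 (Length/Type)
    # + 46 (data) + 4 (FCS), so short payloads are padded to 46 octets
    if len(payload) < 46:
        payload += bytes(46 - len(payload))       # the pad
    frame = dst + src + struct.pack("!H", eth_type) + payload
    # CRC covers everything from Destination Address through the data
    fcs = zlib.crc32(frame) & 0xFFFFFFFF
    return frame + struct.pack("<I", fcs)         # byte order simplified

frame = build_frame(b"\xff" * 6, b"\x02" * 6, 0x0800, b"hi")
print(len(frame))  # 64, the minimum legal frame size
```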
This page concludes this lesson. The next lesson will discuss the functions of an Ethernet network. The first page will introduce the concept of MAC.
6.2
Ethernet Operation
6.2.1
MAC
This page will define MAC and provide examples of deterministic and non-deterministic MAC protocols.
MAC refers to protocols that determine which computer in a shared-media environment, or collision domain, is allowed to transmit data. MAC and LLC comprise the IEEE version of the OSI Layer 2. MAC and LLC are sublayers of Layer 2. The two broad categories of MAC are deterministic and non-deterministic.
Examples of deterministic protocols include Token Ring and FDDI. In a Token Ring network, hosts are arranged in a ring and a special data token travels around the ring to each host in sequence. When a host wants to transmit, it seizes the token, transmits the data for a limited time, and then forwards the token to the next host in the ring. Token Ring is a collisionless environment since only one host can transmit at a time.
Non-deterministic MAC protocols use a first-come, first-served approach. Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a simple system. The NIC listens for the absence of a signal on the media and begins to transmit. If two nodes transmit at the same time, a collision occurs and neither node is able to transmit successfully.
Three common Layer 2 technologies are Token Ring, FDDI, and Ethernet. All three specify Layer 2 issues, LLC, naming, framing, and MAC, as well as Layer 1 signaling components and media issues. The specific technologies for each are as follows:
* Ethernet – uses a logical bus topology to control information flow on a linear bus and a physical star or extended star topology for the cables
* Token Ring – uses a logical ring topology to control information flow and a physical star topology
* FDDI – uses a logical ring topology to control information flow and a physical dual-ring topology
The next page explains how collisions are avoided in an Ethernet network.
6.2
Ethernet Operation
6.2.2
MAC rules and collision detection/backoff
This page describes collision detection and avoidance in a CSMA/CD network.
Ethernet is a shared-media broadcast technology. The access method CSMA/CD used in Ethernet performs three functions:
* Transmitting and receiving data frames
* Decoding data frames and checking them for valid addresses before passing them to the upper layers of the OSI model
* Detecting errors within data frames or on the network
In the CSMA/CD access method, networking devices with data to transmit work in a listen-before-transmit mode. This means when a node wants to send data, it must first check to see whether the networking media is busy. If the node determines the network is busy, the node will wait a random amount of time before retrying. If the node determines the networking media is not busy, the node will begin transmitting and listening. The node listens to ensure no other stations are transmitting at the same time. After completing data transmission the device will return to listening mode.
Networking devices detect a collision has occurred when the amplitude of the signal on the networking media increases. When a collision occurs, each node that is transmitting will continue to transmit for a short time to ensure that all nodes detect the collision. When all nodes have detected the collision, the backoff algorithm is invoked and transmission stops. The nodes stop transmitting for a random period of time, determined by the backoff algorithm. When the delay periods expire, each node can attempt to access the networking media. The devices that were involved in the collision do not have transmission priority.
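The listen-before-transmit behavior described above can be sketched as a small Python function. The three callbacks are hypothetical stand-ins for the NIC's carrier-sense, transmit, and collision-detect circuitry, not a real driver API.

```python
def try_transmit(medium_busy, send_frame, collision_detected) -> bool:
    # Carrier sense: defer while the medium is busy
    # (a real station re-checks after a delay rather than spinning)
    while medium_busy():
        pass
    # Transmit while continuing to listen for other stations
    send_frame()
    if collision_detected():
        return False   # caller invokes the backoff algorithm, then retries
    return True        # success; device returns to listening mode

# Idle medium and no collision: the frame goes out on the first try
print(try_transmit(lambda: False, lambda: None, lambda: False))
```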
The Interactive Media Activity shows the procedure for collision detection in an Ethernet network.
The next page will discuss Ethernet timing.
6.2
Ethernet Operation
6.2.3
Ethernet timing
This page explains the importance of slot times in an Ethernet network.
The basic rules and specifications for proper operation of Ethernet are not particularly complicated, though some of the faster physical layer implementations are becoming so. Despite the basic simplicity, when a problem occurs in Ethernet it is often quite difficult to isolate the source. Because of the common bus architecture of Ethernet, also described as a distributed single point of failure, the scope of the problem usually encompasses all devices within the collision domain. In situations where repeaters are used, this can include devices up to four segments away.
Any station on an Ethernet network wishing to transmit a message first "listens" to ensure that no other station is currently transmitting. If the cable is quiet, the station will begin transmitting immediately. The electrical signal takes time to travel down the cable (delay), and each subsequent repeater introduces a small amount of latency in forwarding the frame from one port to the next. Because of the delay and latency, it is possible for more than one station to begin transmitting at or near the same time. This results in a collision.
If the attached station is operating in full duplex then the station may send and receive simultaneously and collisions should not occur. Full-duplex operation also changes the timing considerations and eliminates the concept of slot time. Full-duplex operation allows for larger network architecture designs since the timing restriction for collision detection is removed.
In half duplex, assuming that a collision does not occur, the sending station will transmit 64 bits of timing synchronization information that is known as the preamble. The sending station will then transmit the following information:
* Destination and source MAC addressing information
* Certain other header information
* The actual data payload
* Checksum (FCS) used to ensure that the message was not corrupted along the way
Stations receiving the frame recalculate the FCS to determine if the incoming message is valid and then pass valid messages to the next higher layer in the protocol stack.
10 Mbps and slower versions of Ethernet are asynchronous. Asynchronous means that each receiving station will use the eight octets of timing information to synchronize the receive circuit to the incoming data, and then discard it. 100 Mbps and higher speed implementations of Ethernet are synchronous. Synchronous means the timing information is not required, however for compatibility reasons the Preamble and Start Frame Delimiter (SFD) are present.
For all speeds of Ethernet transmission at or below 1000 Mbps, the standard requires that a transmission be no smaller than the slot time. Slot time for 10 and 100-Mbps Ethernet is 512 bit-times, or 64 octets. Slot time for 1000-Mbps Ethernet is 4096 bit-times, or 512 octets. Slot time is calculated assuming maximum cable lengths on the largest legal network architecture. All hardware propagation delay times are at the legal maximum and the 32-bit jam signal is used when collisions are detected.
The actual calculated slot time is just longer than the theoretical amount of time required to travel between the furthest points of the collision domain, collide with another transmission at the last possible instant, and then have the collision fragments return to the sending station and be detected. For the system to work the first station must learn about the collision before it finishes sending the smallest legal frame size. To allow 1000-Mbps Ethernet to operate in half duplex the extension field was added when sending small frames purely to keep the transmitter busy long enough for a collision fragment to return. This field is present only on 1000-Mbps, half-duplex links and allows minimum-sized frames to be long enough to meet slot time requirements. Extension bits are discarded by the receiving station.
On 10-Mbps Ethernet one bit at the MAC layer requires 100 nanoseconds (ns) to transmit. At 100 Mbps that same bit requires 10 ns to transmit and at 1000 Mbps only takes 1 ns. As a rough estimate, 20.3 cm (8 in) per nanosecond is often used for calculating propagation delay down a UTP cable. For 100 meters of UTP, this means that it takes just under 5 bit-times for a 10BASE-T signal to travel the length of the cable.
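These figures can be reproduced as a quick calculation (speeds in Mbps, propagation approximated as 20.3 cm per nanosecond, as above):

```python
# Bit time at the MAC layer for each Ethernet speed
for mbps in (10, 100, 1000):
    bit_time_ns = 1000 / mbps
    print(f"{mbps} Mbps: {bit_time_ns:g} ns per bit")

# Propagation delay down 100 m of UTP at 20.3 cm (0.203 m) per ns
cable_m = 100
delay_ns = cable_m / 0.203
print(f"{cable_m} m of UTP: {delay_ns:.0f} ns "
      f"= {delay_ns / 100:.2f} bit-times at 10 Mbps")  # just under 5
```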
For CSMA/CD Ethernet to operate, the sending station must become aware of a collision before it has completed transmission of a minimum-sized frame. At 100 Mbps the system timing is barely able to accommodate 100 meter cables. At 1000 Mbps special adjustments are required as nearly an entire minimum-sized frame would be transmitted before the first bit reached the end of the first 100 meters of UTP cable. For this reason half duplex is not permitted in 10-Gigabit Ethernet.
The Interactive Media Activity will help students identify the bit time of different Ethernet speeds.
The next page defines interframe spacing and backoff.
6.2
Ethernet Operation
6.2.4
Interframe spacing and backoff
This page explains how spacing is used in an Ethernet network for data transmission.
The minimum spacing between two non-colliding frames is also called the interframe spacing. This is measured from the last bit of the FCS field of the first frame to the first bit of the preamble of the second frame.
After a frame has been sent, all stations on a 10-Mbps Ethernet are required to wait a minimum of 96 bit-times (9.6 microseconds) before any station may legally transmit the next frame. On faster versions of Ethernet the spacing remains the same, 96 bit-times, but the time required for that interval grows correspondingly shorter. This interval is referred to as the spacing gap. The gap is intended to allow slow stations time to process the previous frame and prepare for the next frame.
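Converting the fixed 96 bit-time gap into wall-clock time at each speed shows how the interval shrinks as the bit rate rises:

```python
# Interframe gap is always 96 bit-times; one bit-time is 1000/Mbps ns
for mbps in (10, 100, 1000):
    gap_ns = 96 * (1000 / mbps)
    print(f"{mbps} Mbps: {gap_ns:g} ns")
# 10 Mbps  -> 9600 ns (9.6 microseconds)
# 100 Mbps -> 960 ns
# 1000 Mbps -> 96 ns
```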
A repeater is expected to regenerate the full 64 bits of timing information, which is the preamble and SFD, at the start of any frame. This is despite the potential loss of some of the beginning preamble bits because of slow synchronization. Because of this forced reintroduction of timing bits, some minor reduction of the interframe gap is not only possible but expected. Some Ethernet chipsets are sensitive to a shortening of the interframe spacing, and will begin failing to see frames as the gap is reduced. With the increase in processing power at the desktop, it would be very easy for a personal computer to saturate an Ethernet segment with traffic and to begin transmitting again before the interframe spacing delay time is satisfied.
After a collision occurs and all stations allow the cable to become idle (each waits the full interframe spacing), then the stations that collided must wait an additional and potentially progressively longer period of time before attempting to retransmit the collided frame. The waiting period is intentionally designed to be random so that two stations do not delay for the same amount of time before retransmitting, which would result in more collisions. This is accomplished in part by expanding the interval from which the random retransmission time is selected on each retransmission attempt. The waiting period is measured in increments of the parameter slot time.
If the MAC layer is unable to send the frame after sixteen attempts, it gives up and generates an error to the network layer. Such an occurrence is fairly rare and would happen only under extremely heavy network loads, or when a physical problem exists on the network.
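This escalating random wait is the truncated binary exponential backoff algorithm. The sketch below follows the usual 802.3 parameters, stated here from general knowledge rather than the text above: on the nth attempt the station waits r slot times with r drawn from 0 to 2^k - 1, where k is capped at 10, and the MAC layer gives up after 16 attempts.

```python
import random

def backoff_slots(attempt: int) -> int:
    # attempt: 1-based count of retransmission attempts so far
    if attempt > 16:
        # After sixteen attempts the MAC layer gives up and reports
        # an error to the network layer
        raise RuntimeError("excessive collisions")
    k = min(attempt, 10)                 # interval growth is truncated at 2^10
    return random.randrange(2 ** k)      # slot times to wait: 0 .. 2^k - 1
```

Because each retry doubles the interval the random delay is drawn from, two colliding stations become progressively less likely to pick the same delay and collide again.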
The next page will discuss collisions.
6.2
Ethernet Operation
6.2.5
Error handling
This page will describe collisions and how they are handled on a network.
Collisions are the most common error condition on Ethernet networks. They are the mechanism for resolving contention for network access. A few collisions provide a smooth, simple, low-overhead way for network nodes to arbitrate contention for the network resource. When network contention becomes too great, collisions can become a significant impediment to useful network operation.
Collisions result in network bandwidth loss equal to the initial transmission plus the collision jam signal. This is called consumption delay; it affects all network nodes and can cause a significant reduction in network throughput.
The considerable majority of collisions occur very early in the frame, often before the SFD. Collisions occurring before the SFD are usually not reported to the higher layers, as if the collision did not occur. As soon as a collision is detected, the sending stations transmit a 32-bit "jam" signal that will enforce the collision. This is done so that any data being transmitted is thoroughly corrupted and all stations have a chance to detect the collision.
In Figure two stations listen to ensure that the cable is idle, then transmit. Station 1 was able to transmit a significant percentage of the frame before the signal even reached the last cable segment. Station 2 had not received the first bit of the transmission prior to beginning its own transmission and was only able to send several bits before the NIC sensed the collision. Station 2 immediately truncated the current transmission, substituted the 32-bit jam signal and ceased all transmissions. During the collision and jam event that Station 2 was experiencing, the collision fragments were working their way back through the repeated collision domain toward Station 1. Station 2 completed transmission of the 32-bit jam signal and became silent before the collision propagated back to Station 1 which was still unaware of the collision and continued to transmit. When the collision fragments finally reached Station 1, it also truncated the current transmission and substituted a 32-bit jam signal in place of the remainder of the frame it was transmitting. Upon sending the 32-bit jam signal Station 1 ceased all transmissions.
A jam signal may be composed of any binary data so long as it does not form a proper checksum for the portion of the frame already transmitted. The most commonly observed data pattern for a jam signal is simply a repeating one, zero, one, zero pattern, the same as Preamble. When viewed by a protocol analyzer this pattern appears as either a repeating hexadecimal 5 or A sequence. The corrupted, partially transmitted messages are often referred to as collision fragments or runts. Normal collisions are less than 64 octets in length and therefore fail both the minimum length test and the FCS checksum test.
The next page will define different types of collisions.
6.2
Ethernet Operation
6.2.6
Types of collisions
This page covers the different types of collisions and their characteristics.
Collisions typically take place when two or more Ethernet stations transmit simultaneously within a collision domain. A single collision is a collision that was detected while trying to transmit a frame, but on the next attempt the frame was transmitted successfully. Multiple collisions indicate that the same frame collided repeatedly before being successfully transmitted. The results of collisions, collision fragments, are partial or corrupted frames that are less than 64 octets and have an invalid FCS. Three types of collisions are:
* Local
* Remote
* Late
To create a local collision on coax cable (10BASE2 and 10BASE5), the signal travels down the cable until it encounters a signal from the other station. The waveforms then overlap, canceling some parts of the signal out and reinforcing or doubling other parts. The doubling of the signal pushes the voltage level of the signal beyond the allowed maximum. This over-voltage condition is then sensed by all of the stations on the local cable segment as a collision.
In the beginning the waveform in Figure represents normal Manchester encoded data. A few cycles into the sample the amplitude of the wave doubles. That is the beginning of the collision, where the two waveforms are overlapping. Just prior to the end of the sample the amplitude returns to normal. This happens when the first station to detect the collision quits transmitting, and the jam signal from the second colliding station is still observed.
On UTP cable, such as 10BASE-T, 100BASE-TX and 1000BASE-T, a collision is detected on the local segment only when a station detects a signal on the RX pair at the same time it is sending on the TX pair. Since the two signals are on different pairs there is no characteristic change in the signal. Collisions are only recognized on UTP when the station is operating in half duplex. The only functional difference between half and full duplex operation in this regard is whether or not the transmit and receive pairs are permitted to be used simultaneously. If the station is not engaged in transmitting it cannot detect a local collision. Conversely, a cable fault such as excessive crosstalk can cause a station to perceive its own transmission as a local collision.
The characteristics of a remote collision are a frame that is less than the minimum length, has an invalid FCS checksum, but does not exhibit the local collision symptom of over-voltage or simultaneous RX/TX activity. This sort of collision usually results from collisions occurring on the far side of a repeated connection. A repeater will not forward an over-voltage state, and cannot cause a station to have both the TX and RX pairs active at the same time. The station would have to be transmitting to have both pairs active, and that would constitute a local collision. On UTP networks this is the most common sort of collision observed.
There is no possibility remaining for a normal or legal collision after the first 64 octets of data have been transmitted by the sending stations. Collisions occurring after the first 64 octets are called "late collisions". The most significant difference between late collisions and collisions occurring before the first 64 octets is that the Ethernet NIC will retransmit a normally collided frame automatically, but will not automatically retransmit a frame that was collided late. As far as the NIC is concerned, everything went out fine, and the upper layers of the protocol stack must determine that the frame was lost. Other than retransmission, a station detecting a late collision handles it in exactly the same way as a normal collision.
The Interactive Media Activity will require students to identify the different types of collisions.
The next page will discuss the sources of Ethernet errors.
6.2
Ethernet Operation
6.2.7
Ethernet errors
This page will define common Ethernet errors.
Knowledge of typical errors is invaluable for understanding both the operation and troubleshooting of Ethernet networks.
The following are the sources of Ethernet error:
* Collision or runt – Simultaneous transmission occurring before slot time has elapsed
* Late collision – Simultaneous transmission occurring after slot time has elapsed
* Jabber, long frame and range errors – Excessively or illegally long transmission
* Short frame, collision fragment or runt – Illegally short transmission
* FCS error – Corrupted transmission
* Alignment error – Insufficient or excessive number of bits transmitted
* Range error – Actual and reported number of octets in frame do not match
* Ghost or jabber – Unusually long Preamble or Jam event
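The length- and FCS-based entries in the list lend themselves to a rough classifier sketch. This is illustrative only: real analyzers also use signal-level evidence (over-voltage, preamble activity, tagging) that a length-and-checksum view like this cannot capture, and the 1518-octet maximum assumed here is the untagged legal limit.

```python
def classify_frame(length_octets: int, fcs_ok: bool,
                   max_legal: int = 1518) -> str:
    # Illegally short transmissions: a bad FCS suggests a collision
    # fragment; a good FCS is a short frame, often called a runt
    if length_octets < 64:
        return "short frame (runt)" if fcs_ok else "collision fragment"
    # Excessively long transmissions are usually reported as jabber
    if length_octets > max_legal:
        return "long frame"
    if not fcs_ok:
        return "FCS error"
    return "ok"
```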
While local and remote collisions are considered to be a normal part of Ethernet operation, late collisions are considered to be an error. The presence of errors on a network always suggests that further investigation is warranted. The severity of the problem indicates the troubleshooting urgency related to the detected errors. A handful of errors detected over many minutes or over hours would be a low priority. Thousands detected over a few minutes suggest that urgent attention is warranted.
Jabber is defined in several places in the 802.3 standard as being a transmission of at least 20,000 to 50,000 bit times in duration. However, most diagnostic tools report jabber whenever a detected transmission exceeds the maximum legal frame size, which is considerably smaller than 20,000 to 50,000 bit times. Most references to jabber are more properly called long frames.
A long frame is one that is longer than the maximum legal size, and takes into consideration whether or not the frame was tagged. It does not consider whether or not the frame had a valid FCS checksum. This error usually means that jabber was detected on the network.
A short frame is a frame smaller than the minimum legal size of 64 octets, with a good frame check sequence. Some protocol analyzers and network monitors call these frames "runts". In general the presence of short frames is not a guarantee that the network is failing.
The term runt is generally an imprecise slang term that means something less than a legal frame size. It may refer to short frames with a valid FCS checksum although it usually refers to collision fragments.
The Interactive Media Activity will help students become familiar with Ethernet errors.
The next page will continue the discussion of Ethernet frame errors.
6.2
Ethernet Operation
6.2.8
FCS and beyond
This page will focus on additional errors that occur on an Ethernet network.
A received frame that has a bad Frame Check Sequence, also referred to as a checksum or CRC error, differs from the original transmission by at least one bit. In an FCS error frame the header information is probably correct, but the checksum calculated by the receiving station does not match the checksum appended to the end of the frame by the sending station. The frame is then discarded.
High numbers of FCS errors from a single station usually indicate a faulty NIC, faulty or corrupted software drivers, or a bad cable connecting that station to the network. If FCS errors are associated with many stations, they are generally traceable to bad cabling, a faulty version of the NIC driver, a faulty hub port, or induced noise in the cable system.
A message that does not end on an octet boundary is known as an alignment error. Instead of the correct number of binary bits forming complete octet groupings, there are additional bits left over (less than eight). Such a frame is truncated to the nearest octet boundary, and if the FCS checksum fails, then an alignment error is reported. This is often caused by bad software drivers, or a collision, and is frequently accompanied by a failure of the FCS checksum.
A frame whose Length field contains a valid value that does not match the actual number of octets counted in the data field of the received frame is known as a range error. This error also appears when the Length field value is less than the minimum legal unpadded size of the data field. A similar error, Out of Range, is reported when the value in the Length field indicates a data size that is too large to be legal.
Fluke Networks has coined the term ghost to mean energy (noise) detected on the cable that appears to be a frame, but is lacking a valid SFD. To qualify as a ghost, the frame must be at least 72 octets long, including the preamble. Otherwise, it is classified as a remote collision. Because of the peculiar nature of ghosts, it is important to note that test results are largely dependent upon where on the segment the measurement is made.
Ground loops and other wiring problems are usually the cause of ghosting. Most network monitoring tools do not recognize the existence of ghosts for the same reason that they do not recognize preamble collisions. The tools rely entirely on what the chipset tells them. Software-only protocol analyzers, many hardware-based protocol analyzers, hand held diagnostic tools, as well as most remote monitoring (RMON) probes do not report these events.
The Interactive Media Activity will help students become familiar with the terms and definitions of Ethernet errors.
The next page will describe Auto-Negotiation.
6.2
Ethernet Operation
6.2.9
Ethernet auto-negotiation
This page explains auto-negotiation and how it is accomplished.
As Ethernet grew from 10 to 100 and 1000 Mbps, one requirement was to make each technology interoperable, even to the point that 10, 100, and 1000 interfaces could be directly connected. A process called Auto-Negotiation of speeds at half or full duplex was developed. Specifically, at the time that Fast Ethernet was introduced, the standard included a method of automatically configuring a given interface to match the speed and capabilities of the link partner. This process defines how two link partners may automatically negotiate a configuration offering the best common performance level. It has the additional advantage of only involving the lowest part of the physical layer.
10BASE-T required each station to transmit a link pulse about every 16 milliseconds, whenever the station was not engaged in transmitting a message. Auto-Negotiation adopted this signal and renamed it a Normal Link Pulse (NLP). When a series of NLPs are sent in a group for the purpose of Auto-Negotiation, the group is called a Fast Link Pulse (FLP) burst. Each FLP burst is sent at the same timing interval as an NLP, and is intended to allow older 10BASE-T devices to operate normally in the event they should receive an FLP burst.
Auto-Negotiation is accomplished by transmitting a burst of 10BASE-T Link Pulses from each of the two link partners. The burst communicates the capabilities of the transmitting station to its link partner. After both stations have interpreted what the other partner is offering, both switch to the highest performance common configuration and establish a link at that speed. If anything interrupts communications and the link is lost, the two link partners first attempt to link again at the last negotiated speed. If that fails, or if it has been too long since the link was lost, the Auto-Negotiation process starts over. The link may be lost due to external influences, such as a cable fault, or due to one of the partners issuing a reset.
The next page will discuss half and full duplex modes.
6.2
Ethernet Operation
6.2.10
Link establishment and full and half duplex
This page will explain how links are established through Auto-Negotiation and introduce the two duplex modes.
Link partners are allowed to skip offering configurations of which they are capable. This allows the network administrator to force ports to a selected speed and duplex setting, without disabling Auto-Negotiation.
Auto-Negotiation is optional for most Ethernet implementations. Gigabit Ethernet requires its implementation, though the user may disable it. Auto-Negotiation was originally defined for UTP implementations of Ethernet and has been extended to work with other fiber optic implementations.
When an Auto-Negotiating station first attempts to link, it is supposed to enable 100BASE-TX so that it can attempt to establish a link immediately. If 100BASE-TX signaling is present, and the station supports 100BASE-TX, it will attempt to establish a link without negotiating. If either signaling produces a link or FLP bursts are received, the station will proceed with that technology. If a link partner does not offer an FLP burst, but instead offers NLPs, then that device is automatically assumed to be a 10BASE-T station. During this initial interval of testing for other technologies, the transmit path is sending FLP bursts. The standard does not permit parallel detection of any other technologies.
If a link is established through parallel detection, it is required to be half duplex. There are only two methods of achieving a full-duplex link. One method is through a completed cycle of Auto-Negotiation, and the other is to administratively force both link partners to full duplex. If one link partner is forced to full duplex, but the other partner attempts to Auto-Negotiate, then there is certain to be a duplex mismatch. This will result in collisions and errors on that link. Additionally if one end is forced to full duplex the other must also be forced. The exception to this is 10-Gigabit Ethernet, which does not support half duplex.
Many vendors implement hardware in such a way that it cycles through the various possible states. It transmits FLP bursts to Auto-Negotiate for a while, then it configures for Fast Ethernet, attempts to link for a while, and then just listens. Some vendors do not offer any transmitted attempt to link until the interface first hears an FLP burst or some other signaling scheme.
There are two duplex modes, half and full. For shared media, the half-duplex mode is mandatory. All coaxial implementations are half duplex in nature and cannot operate in full duplex. UTP and fiber implementations may be operated in half duplex. 10-Gbps implementations are specified for full duplex only.
In half duplex only one station may transmit at a time. For the coaxial implementations a second station transmitting will cause the signals to overlap and become corrupted. Since UTP and fiber generally transmit on separate pairs the signals have no opportunity to overlap and become corrupted. Ethernet has established arbitration rules for resolving conflicts arising from instances when more than one station attempts to transmit at the same time. Both stations in a point-to-point full-duplex link are permitted to transmit at any time, regardless of whether the other station is transmitting.
Auto-Negotiation avoids most situations where one station in a point-to-point link is transmitting under half-duplex rules and the other under full-duplex rules.
In the event that link partners are capable of sharing more than one common technology, refer to the list in Figure . This list is used to determine which technology should be chosen from the offered configurations.
Fiber-optic Ethernet implementations are not included in this priority resolution list because the interface electronics and optics do not permit easy reconfiguration between implementations. It is assumed that the interface configuration is fixed. If the two interfaces are able to Auto-Negotiate then they are already using the same Ethernet implementation. However, there remain a number of configuration choices such as the duplex setting, or which station will act as the Master for clocking purposes, that must be determined.
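The priority resolution step can be sketched in code. This is an illustrative helper, not part of any standard tool; the ordering follows the IEEE 802.3 priority list for twisted-pair media, from the fastest full-duplex technology down to half-duplex 10BASE-T.

```python
# Technologies from highest to lowest Auto-Negotiation priority
# (IEEE 802.3 ordering for twisted-pair media).
PRIORITY = [
    "1000BASE-T full duplex",
    "1000BASE-T half duplex",
    "100BASE-T2 full duplex",
    "100BASE-TX full duplex",
    "100BASE-T2 half duplex",
    "100BASE-T4 half duplex",
    "100BASE-TX half duplex",
    "10BASE-T full duplex",
    "10BASE-T half duplex",
]

def resolve(local_abilities, partner_abilities):
    """Return the highest-priority technology both link partners offer."""
    common = set(local_abilities) & set(partner_abilities)
    for tech in PRIORITY:
        if tech in common:
            return tech
    return None  # no common technology, so no link is established

# Example: a Gigabit NIC negotiating with a Fast Ethernet switch port.
print(resolve(
    ["1000BASE-T full duplex", "100BASE-TX full duplex", "10BASE-T full duplex"],
    ["100BASE-TX full duplex", "100BASE-TX half duplex", "10BASE-T half duplex"],
))  # 100BASE-TX full duplex
```

Both partners simply pick the first entry in the shared priority list that appears in both advertisement sets, so they always agree on the result.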
The Interactive Media Activity will help students understand the link establishment process.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
Ethernet is not one networking technology, but a family of LAN technologies that includes Legacy, Fast Ethernet, and Gigabit Ethernet. When Ethernet needs to be expanded to add a new medium or capability, the IEEE issues a new supplement to the 802.3 standard. The new supplements are given a one or two letter designation such as 802.3u. Ethernet relies on baseband signaling, which uses the entire bandwidth of the transmission medium. Ethernet operates at two layers of the OSI model, the lower half of the data link layer, known as the MAC sublayer and the physical layer. Ethernet at Layer 1 involves interfacing with media, signals, bit streams that travel on the media, components that put signals on media, and various physical topologies. Layer 1 bits need structure so OSI Layer 2 frames are used. The MAC sublayer of Layer 2 determines the type of frame appropriate for the physical media.
The one thing common to all forms of Ethernet is the frame structure. This is what allows the interoperability of the different types of Ethernet.
Some of the fields permitted or required in an 802.3 Ethernet Frame are:
* Preamble
* Start Frame Delimiter
* Destination Address
* Source Address
* Length/Type
* Data and Pad
* Frame Check Sequence
In 10 Mbps and slower versions of Ethernet, the Preamble provides timing information the receiving node needs in order to interpret the electrical signals it is receiving. The Start Frame Delimiter marks the end of the timing information. 10 Mbps and slower versions of Ethernet are asynchronous. That is, they will use the preamble timing information to synchronize the receive circuit to the incoming data. 100 Mbps and higher speed implementations of Ethernet are synchronous. Synchronous means the timing information is not required; however, for compatibility reasons the Preamble and SFD are still present.
The address fields of the Ethernet frame contain Layer 2, or MAC, addresses.
All frames are susceptible to errors from a variety of sources. The Frame Check Sequence (FCS) field of an Ethernet frame contains a number that is calculated by the source node based on the data in the frame. At the destination it is recalculated and compared to determine that the data received is complete and error free.
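The FCS idea can be demonstrated with a short sketch. Ethernet uses the same CRC-32 polynomial as Python's `zlib.crc32` (the MAC handles bit-ordering and complementing details in hardware); the frame bytes below are only a stand-in for real frame contents.

```python
import zlib

# Sender computes a CRC-32 over the frame contents and appends it as the FCS.
frame = b"destination+source+type+payload"
fcs = zlib.crc32(frame)

# Receiver recomputes the CRC over what arrived and compares it to the FCS.
received = frame
ok = (zlib.crc32(received) == fcs)
print(ok)  # True: frame accepted

# A single flipped character changes the CRC, so the error is detected.
corrupted = b"destination+source+type+payl0ad"
print(zlib.crc32(corrupted) == fcs)  # False: frame discarded
```

Because the check value depends on every bit of the frame, almost any corruption on the wire produces a mismatch at the destination.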
Once the data is framed the Media Access Control (MAC) sublayer is also responsible to determine which computer on a shared-medium environment, or collision domain, is allowed to transmit the data. There are two broad categories of Media Access Control, deterministic (taking turns) and non-deterministic (first come, first served).
Examples of deterministic protocols include Token Ring and FDDI. The carrier sense multiple access with collision detection (CSMA/CD) access method is a simple non-deterministic system. The NIC listens for an absence of a signal on the media and then starts transmitting. If two or more nodes transmit at the same time, a collision occurs. If a collision is detected, the nodes wait a random amount of time and retransmit.
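The "wait a random amount of time" step is the truncated binary exponential backoff of 802.3. A minimal sketch, in slot-time units, with the standard attempt limit of 16 and the exponent capped at 10:

```python
import random

def backoff_slots(attempt):
    """Slot times to wait after the nth consecutive collision (1-based)."""
    if attempt > 16:
        # After 16 consecutive collisions the frame is discarded.
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(attempt, 10)                # exponent is capped at 10
    return random.randint(0, 2**k - 1)  # uniform pick from 0..2^k - 1

# The number of possible wait choices doubles with each collision,
# up to the cap: after collisions 1, 3, 10, and 16 the choices are:
print([2**min(n, 10) for n in (1, 3, 10, 16)])  # [2, 8, 1024, 1024]
```

Doubling the range after each collision spreads the retransmissions of contending stations apart, so repeated collisions become progressively less likely.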
The minimum spacing between two non-colliding frames is also called the interframe spacing. Interframe spacing is required to ensure that all stations have time to process the previous frame and prepare for the next frame.
Collisions can occur at various points during transmission. A collision where a signal is detected on the receive and transmit circuits at the same time is referred to as a local collision. A collision detected on the far side of a repeater, seen as a frame shorter than the minimum length with an invalid FCS but without simultaneous activity on the local transmit and receive circuits, is called a remote collision. A collision that occurs after the first sixty-four octets of data have been sent is considered a late collision. The NIC will not automatically retransmit after this type of collision.
While local and remote collisions are considered to be a normal part of Ethernet operation, late collisions are considered to be an error. Ethernet errors result from detection of frame sizes that are longer or shorter than standards allow, or from excessively long or illegal transmissions called jabber. Runt is a slang term that refers to a frame smaller than the legal minimum size.
Auto-Negotiation detects the speed and duplex mode, half-duplex or full-duplex, of the device on the other end of the wire and adjusts to match those settings.
Overview
Ethernet has been the most successful LAN technology mainly because of how easy it is to implement. Ethernet has also been successful because it is a flexible technology that has evolved as needs and media capabilities have changed. This module will provide details about the most important types of Ethernet. The goal is to help students understand what is common to all forms of Ethernet.
Changes in Ethernet have resulted in major improvements over the 10-Mbps Ethernet of the early 1980s. The 10-Mbps Ethernet standard remained virtually unchanged until 1995 when IEEE announced a standard for a 100-Mbps Fast Ethernet. In recent years, an even more rapid growth in media speed has moved the transition from Fast Ethernet to Gigabit Ethernet. The standards for Gigabit Ethernet emerged in only three years. A faster Ethernet version called 10-Gigabit Ethernet is now widely available and faster versions will be developed.
MAC addresses, CSMA/CD, and the frame format have not been changed from earlier versions of Ethernet. However, other aspects of the MAC sublayer, physical layer, and medium have changed. Copper-based NICs capable of 10, 100, or 1000 Mbps are now common. Gigabit switch and router ports are becoming the standard for wiring closets. Optical fiber to support Gigabit Ethernet is considered a standard for backbone cables in most new installations.
This module covers some of the objectives for the CCNA 640-801, INTRO 640-821, and ICND 640-811 exams.
Students who complete this module should be able to perform the following tasks:
* Describe the differences and similarities among 10BASE5, 10BASE2, and 10BASE-T Ethernet
* Define Manchester encoding
* List the factors that affect Ethernet timing limits
* List 10BASE-T wiring parameters
* Describe the key characteristics and varieties of 100-Mbps Ethernet
* Describe the evolution of Ethernet
* Explain the MAC methods, frame formats, and transmission process of Gigabit Ethernet
* Describe the uses of specific media and encoding with Gigabit Ethernet
* Identify the pinouts and wiring typical to the various implementations of Gigabit Ethernet
* Describe the similarities and differences between Gigabit and 10-Gigabit Ethernet
* Describe the basic architectural considerations of Gigabit and 10-Gigabit Ethernet
7.1
10-Mbps and 100-Mbps Ethernet
7.1.1
10-Mbps Ethernet
This page will discuss 10-Mbps Ethernet technologies.
10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four common features of Legacy Ethernet are timing parameters, the frame format, transmission processes, and a basic design rule.
Figure displays the parameters for 10-Mbps Ethernet operation. 10-Mbps Ethernet and slower versions are asynchronous. Each receiving station uses eight octets of timing information to synchronize its receive circuit to the incoming data. 10BASE5, 10BASE2, and 10BASE-T all share the same timing parameters. For example, 1 bit time at 10 Mbps = 100 nanoseconds (ns) = 0.1 microseconds = one ten-millionth of a second. This means that on a 10-Mbps Ethernet network, 1 bit at the MAC sublayer requires 100 ns to transmit.
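The bit-time arithmetic above is simply the reciprocal of the data rate, which a few lines of code can confirm:

```python
def bit_time_ns(mbps):
    """Nanoseconds per bit at a given data rate in Mbps."""
    return 1e9 / (mbps * 1e6)

print(bit_time_ns(10))    # 100.0 ns for 10-Mbps Ethernet
print(bit_time_ns(100))   # 10.0 ns for Fast Ethernet
print(bit_time_ns(1000))  # 1.0 ns for Gigabit Ethernet
```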
For all speeds of Ethernet transmission 1000 Mbps or slower, a transmission can be no shorter than the slot time. Slot time is just longer than the time it theoretically can take to go from one extreme end of the largest legal Ethernet collision domain to the other extreme end, collide with another transmission at the last possible instant, and then have the collision fragments return to the sending station to be detected.
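As a concrete check on these timing parameters: the slot time is 512 bit times for 10- and 100-Mbps Ethernet and 4096 bit times for Gigabit Ethernet, so it can be computed directly from the data rate.

```python
def slot_time_us(mbps):
    """Slot time in microseconds for a given Ethernet speed in Mbps."""
    # 512 bit times up to 100 Mbps; 4096 bit times for Gigabit Ethernet.
    bits = 4096 if mbps >= 1000 else 512
    return bits / mbps  # Mbps is also bits per microsecond

print(slot_time_us(10))    # 51.2 microseconds
print(slot_time_us(100))   # 5.12 microseconds
print(slot_time_us(1000))  # 4.096 microseconds
```

Notice that the slot time shrinks with speed, which is why the maximum collision-domain diameter shrinks as well: the collision fragments must still make the round trip within one slot time.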
10BASE5, 10BASE2, and 10BASE-T also have a common frame format.
The Legacy Ethernet transmission process is identical down to the lower part of the OSI physical layer. As the frame passes from the MAC sublayer to the physical layer, other processes occur before the bits move from the physical layer onto the medium. One important process is the signal quality error (SQE) signal. The SQE is a transmission sent by a transceiver back to the controller to let the controller know whether the collision circuitry is functional. The SQE is also called a heartbeat. The SQE signal is designed to fix the problem in earlier versions of Ethernet where a host does not know if a transceiver is connected. SQE is always used in half-duplex operation. SQE can be used in full-duplex operation but is not required. SQE is active in the following instances:
* Within 4 to 8 microseconds after a normal transmission to indicate that the outbound frame was successfully transmitted
* Whenever there is a collision on the medium
* Whenever there is an improper signal on the medium, such as jabber, or reflections that result from a cable short
* Whenever a transmission has been interrupted
All 10-Mbps forms of Ethernet take octets received from the MAC sublayer and perform a process called line encoding. Line encoding describes how the bits are actually signaled on the wire. The simplest encodings have undesirable timing and electrical characteristics. Therefore, line codes have been designed with desirable transmission properties. The form of encoding used in 10-Mbps systems is called Manchester encoding.
Manchester encoding uses the transition in the middle of the timing window to determine the binary value for that bit period. In Figure , the top waveform moves to a lower position so it is interpreted as a binary zero. The second waveform moves to a higher position and is interpreted as a binary one. The third waveform has an alternating binary sequence. When binary data alternates, there is no need to return to the previous voltage level before the next bit period. The wave forms in the graphic show that the binary bit values are determined based on the direction of change in a bit period. The voltage levels at the start or end of any bit period are not used to determine binary values.
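The encoding described above can be sketched as a toy model in code. Each bit period is represented as a pair of half-bit levels ("L" and "H"); a rising mid-bit edge carries a one and a falling edge carries a zero, matching the waveforms in the figure.

```python
def manchester_encode(bits):
    """Each bit becomes two half-bit levels; the mid-bit edge is the value."""
    # Falling edge (H then L) = 0, rising edge (L then H) = 1.
    return [("H", "L") if b == 0 else ("L", "H") for b in bits]

def manchester_decode(halves):
    """Recover bits from the direction of the mid-bit transition."""
    return [0 if pair == ("H", "L") else 1 for pair in halves]

signal = manchester_encode([1, 0, 1, 1, 0])
print(signal)                     # [('L','H'), ('H','L'), ('L','H'), ...]
print(manchester_decode(signal))  # [1, 0, 1, 1, 0]
```

Because every bit period contains a transition, the receiver can recover the clock from the signal itself, which is exactly why Manchester encoding was chosen for 10-Mbps Ethernet.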
Legacy Ethernet has common architectural features. Networks usually contain multiple types of media. The standard ensures that interoperability is maintained. The overall architectural design is most important in mixed-media networks. It becomes easier to violate maximum delay limits as the network grows. The timing limits are based on the following types of parameters:
* Cable length and propagation delay
* Delay of repeaters
* Delay of transceivers
* Interframe gap shrinkage
* Delays within the station
10-Mbps Ethernet operates within the timing limits for a series of up to five segments separated by up to four repeaters. This is known as the 5-4-3 rule. No more than four repeaters can be used in series between any two stations. There can also be no more than three populated segments between any two stations.
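The 5-4-3 rule lends itself to a toy validity check. This is an illustrative helper, not a real planning tool: a path between two stations is modeled as a list of segments, each flagged populated or not.

```python
def path_is_legal(segments):
    """Check the 5-4-3 rule for one path between two distant stations.

    segments: list of booleans, True if that segment has stations attached.
    """
    repeaters = len(segments) - 1   # one repeater joins each adjacent pair
    populated = sum(segments)
    return len(segments) <= 5 and repeaters <= 4 and populated <= 3

# Classic maximum layout: 5 segments, 4 repeaters, 3 populated.
print(path_is_legal([True, False, True, False, True]))  # True
# Four populated segments violate the rule.
print(path_is_legal([True, True, True, True, False]))   # False
```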
The next page will describe 10BASE5.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.2
10BASE5
This page will discuss the original 1980 Ethernet product, which is 10BASE5. 10BASE5 transmitted 10 Mbps over a single thick coaxial cable bus.
10BASE5 is important because it was the first medium used for Ethernet. 10BASE5 was part of the original 802.3 standard. The primary benefit of 10BASE5 was length. 10BASE5 may be found in legacy installations. It is not recommended for new installations. 10BASE5 systems are inexpensive and require no configuration. Two disadvantages are that basic components like NICs are very difficult to find and it is sensitive to signal reflections on the cable. 10BASE5 systems also represent a single point of failure.
10BASE5 uses Manchester encoding. It has a solid central conductor. Each segment of thick coax may be up to 500 m (1640.4 ft) in length. The cable is large, heavy, and difficult to install. However, the distance limitations were favorable and this prolonged its use in certain applications.
When the medium is a single coaxial cable, only one station can transmit at a time or a collision will occur. Therefore, 10BASE5 only runs in half-duplex with a maximum transmission rate of 10 Mbps.
Figure illustrates a configuration for an end-to-end collision domain with the maximum number of segments and repeaters. Remember that only three segments can have stations connected to them. The other two repeated segments are used to extend the network.
The Lab Activity will help students decode a waveform.
The Interactive Media Activity will help students learn the features of 10BASE5 technology.
The next page will discuss 10BASE2.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.3
10BASE2
This page covers 10BASE2, which was introduced in 1985.
Installation was easier because of its smaller size, lighter weight, and greater flexibility. 10BASE2 still exists in legacy networks. Like 10BASE5, it is no longer recommended for network installations. It has a low cost and does not require hubs.
10BASE2 also uses Manchester encoding. Computers on a 10BASE2 LAN are linked together by an unbroken series of coaxial cable lengths. These lengths are attached to a T-shaped connector on the NIC with BNC connectors.
10BASE2 has a stranded central conductor. Each of the maximum five segments of thin coaxial cable may be up to 185 m (607 ft) long and each station is connected directly to the BNC T-shaped connector on the coaxial cable.
Only one station can transmit at a time or a collision will occur. 10BASE2 also uses half-duplex. The maximum transmission rate of 10BASE2 is 10 Mbps.
There may be up to 30 stations on a 10BASE2 segment. Only three out of five consecutive segments between any two stations can be populated.
The Interactive Media Activity will help students learn the features of 10BASE2 technology.
The next page will discuss 10BASE-T.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.4
10BASE-T
This page covers 10BASE-T, which was introduced in 1990.
10BASE-T used cheaper and easier to install Category 3 UTP copper cable instead of coax cable. The cable plugged into a central connection device that contained the shared bus. This device was a hub. It was at the center of a set of cables that radiated out to the PCs like the spokes on a wheel. This is referred to as a star topology. As additional stars were added and the cable distances grew, this formed an extended star topology. Originally 10BASE-T was a half-duplex protocol, but full-duplex features were added later. Ethernet came to dominate LAN technology during its explosion in popularity in the mid-to-late 1990s.
10BASE-T also uses Manchester encoding. A 10BASE-T UTP cable has a solid conductor for each wire. The horizontal cable run is limited to 90 m (295 ft), which allows a total link length of 100 m (328 ft) once patch cords are included. UTP cable uses eight-pin RJ-45 connectors. Though Category 3 cable is adequate for 10BASE-T networks, new cable installations should be made with Category 5e or better. All four pairs of wires should be used either with the T568-A or T568-B cable pinout arrangement. This type of cable installation supports the use of multiple protocols without the need to rewire. Figure shows the pinout arrangement for a 10BASE-T connection. The pair that transmits data on one device is connected to the pair that receives data on the other device.
Half duplex or full duplex is a configuration choice. 10BASE-T carries 10 Mbps of traffic in half-duplex mode and 20 Mbps in full-duplex mode.
The Interactive Media Activity will help students learn the features of 10BASE-T technology.
The next page describes the wiring and architecture of 10BASE-T.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.5
10BASE-T wiring and architecture
This page explains the wiring and architecture of 10BASE-T.
A 10BASE-T link generally connects a station to a hub or switch. Hubs are multi-port repeaters and count toward the limit on repeaters between distant stations. Hubs do not divide network segments into separate collision domains. Bridges and switches divide segments into separate collision domains. The maximum distance between bridges and switches is based on media limitations.
Although hubs may be linked, it is best to avoid this arrangement. A network with linked hubs may exceed the limit for maximum delay between stations. Multiple hubs should be arranged in hierarchical order like a tree structure. Performance is better if fewer repeaters are used between stations.
An architectural example is shown in Figure . The distance from one end of the network to the other places the architecture at its limit. The most important aspect to consider is how to keep the delay between distant stations to a minimum, regardless of the architecture and media types involved. A shorter maximum delay will provide better overall performance.
10BASE-T links can have unrepeated distances of up to 100 m (328 ft). While this may seem like a long distance, it is typically maximized when wiring an actual building. Hubs can solve the distance issue but will allow collisions to propagate. The widespread introduction of switches has made the distance limitation less important. If workstations are located within 100 m (328 ft) of a switch, the 100-m distance starts over at the switch.
The next page will describe Fast Ethernet.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.6
100-Mbps Ethernet
This page will discuss 100-Mbps Ethernet, which is also known as Fast Ethernet. The two technologies that have become important are 100BASE-TX, which is a copper UTP medium and 100BASE-FX, which is a multimode optical fiber medium.
Three characteristics common to 100BASE-TX and 100BASE-FX are the timing parameters, the frame format, and parts of the transmission process. 100BASE-TX and 100BASE-FX both share timing parameters. Note that one bit time at 100 Mbps = 10 ns = 0.01 microseconds = one hundred-millionth of a second.
The 100-Mbps frame format is the same as the 10-Mbps frame.
Fast Ethernet is ten times faster than 10BASE-T. The bits that are sent are shorter in duration and occur more frequently. These higher frequency signals are more susceptible to noise. In response to these issues, two separate encoding steps are used by 100-Mbps Ethernet. The first step uses a technique called 4B/5B; the second step is the actual line encoding specific to copper or fiber.
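The 4B/5B step can be sketched directly: each 4-bit nibble is substituted with a 5-bit code group from a fixed table, chosen so that every legal code has enough transitions for the receiver to recover the clock. The table below is the standard 4B/5B data code set used by FDDI and 100BASE-X.

```python
# Standard 4B/5B data code groups: 4-bit nibble -> 5-bit code group.
FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data):
    """Encode bytes into a 4B/5B bit string, high nibble first."""
    out = []
    for byte in data:
        out.append(FOUR_B_FIVE_B[byte >> 4])    # high nibble
        out.append(FOUR_B_FIVE_B[byte & 0x0F])  # low nibble
    return "".join(out)

encoded = encode_4b5b(b"\x5E")
print(encoded)       # '0101111100' : 0x5 -> 01011, 0xE -> 11100
print(len(encoded))  # 10 bits on the wire for every 8 bits of data
```

The 5-bit output for 4 bits of input is why a 100-Mbps data rate needs a 125-Mbaud signaling rate on the medium: 25 percent of the wire capacity is spent on the code overhead.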
The next page will discuss the 100BASE-TX standard.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.7
100BASE-TX
This page will describe 100BASE-TX.
In 1995, 100BASE-TX, which uses Category 5 UTP cable, became the standard and was commercially successful.
The original coaxial Ethernet used half-duplex transmission, so only one device could transmit at a time. In 1997, Ethernet was expanded to include a full-duplex capability that allowed a station to transmit and receive at the same time. Switches replaced hubs in many networks. These switches had full-duplex capabilities and could handle Ethernet frames quickly.
100BASE-TX uses 4B/5B encoding, which is then scrambled and converted to Multi-Level Transmit (MLT-3) encoding. Figure shows four waveform examples. The top waveform has no transition in the center of the timing window. No transition indicates a binary zero. The second waveform shows a transition in the center of the timing window. A transition represents a binary one. The third waveform shows an alternating binary sequence. The fourth waveform shows that signal changes indicate ones and horizontal lines indicate zeros.
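A small sketch of MLT-3 shows why it keeps the signal frequency low: the signal steps through the three levels in the repeating cycle 0, +1, 0, -1, advancing one step for each binary one and holding still for each binary zero.

```python
CYCLE = [0, +1, 0, -1]  # MLT-3 level sequence

def mlt3_encode(bits):
    """Encode bits as MLT-3 levels: a one advances the cycle, a zero holds."""
    state, out = 0, []
    for b in bits:
        if b == 1:
            state = (state + 1) % 4  # transition on a one
        out.append(CYCLE[state])     # no transition on a zero
    return out

print(mlt3_encode([1, 1, 1, 1, 0, 1]))  # [1, 0, -1, 0, 0, 1]
```

Even a run of all ones takes four bit periods to complete one full cycle of the waveform, so the fundamental frequency on the wire is a quarter of the bit rate.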
Figure shows the pinout for a 100BASE-TX connection. Notice that two separate transmit and receive paths exist. This is identical to the 10BASE-T configuration.
100BASE-TX carries 100 Mbps of traffic in half-duplex mode. In full-duplex mode, 100BASE-TX can exchange 200 Mbps of traffic. The concept of full duplex will become more important as Ethernet speeds increase.
The next page will discuss the fiber optic version of Fast Ethernet.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.8
100BASE-FX
This page covers 100BASE-FX.
When copper-based Fast Ethernet was introduced, a fiber version was also desired. A fiber version could be used for backbone applications, connections between floors, buildings where copper is less desirable, and also in high-noise environments. 100BASE-FX was introduced to satisfy this desire. However, 100BASE-FX was never widely adopted. This was due to the introduction of Gigabit Ethernet copper and fiber standards. Gigabit Ethernet standards are now the dominant technology for backbone installations, high-speed cross-connects, and general infrastructure needs.
The timing, frame format, and transmission are the same in both copper and fiber versions of 100-Mbps Fast Ethernet. 100BASE-FX, however, uses NRZI encoding, which is shown in Figure . The top waveform has no transition, which indicates a binary 0. In the second waveform, the transition in the center of the timing window indicates a binary 1. In the third waveform, there is an alternating binary sequence. In the third and fourth waveforms it is more obvious that no transition indicates a binary zero and the presence of a transition is a binary one.
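A short sketch makes the NRZI rule concrete: a change of level at a bit boundary signals a one, and the absence of a change signals a zero.

```python
def nrzi_encode(bits, level=0):
    """Encode bits as NRZI levels: toggle on a one, hold on a zero."""
    out = []
    for b in bits:
        if b == 1:
            level ^= 1    # transition signals a one
        out.append(level) # no transition signals a zero
    return out

def nrzi_decode(levels, start=0):
    """Recover bits by comparing each level to the previous one."""
    out, prev = [], start
    for level in levels:
        out.append(1 if level != prev else 0)
        prev = level
    return out

waveform = nrzi_encode([1, 0, 1, 1, 0, 0])
print(waveform)               # [1, 1, 0, 1, 1, 1]
print(nrzi_decode(waveform))  # [1, 0, 1, 1, 0, 0]
```

Because only the presence or absence of a transition matters, NRZI is insensitive to an inverted signal, which is a useful property on optical links.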
Figure summarizes a 100BASE-FX link and pinouts. A fiber pair with either ST or SC connectors is most commonly used.
The separate Transmit (Tx) and Receive (Rx) paths in 100BASE-FX optical fiber allow for an aggregate 200-Mbps transmission in full-duplex operation.
The next page will explain the Fast Ethernet architecture.
7.1
10-Mbps and 100-Mbps Ethernet
7.1.9
Fast Ethernet architecture
This page describes the architecture of Fast Ethernet.
Fast Ethernet links generally consist of a connection between a station and a hub or switch. Hubs are considered multi-port repeaters and switches are considered multi-port bridges. These are subject to the 100-m (328 ft) UTP media distance limitation.
A Class I repeater may introduce up to 140 bit-times latency. Any repeater that changes between one Ethernet implementation and another is a Class I repeater. A Class II repeater is restricted to smaller timing delays, 92 bit times, because it immediately repeats the incoming signal to all other ports without a translation process. To achieve a smaller timing delay, Class II repeaters can only connect to segment types that use the same signaling technique.
As with 10-Mbps versions, it is possible to modify some of the architecture rules for 100-Mbps versions. Modification of the architecture rules is strongly discouraged for 100BASE-TX. 100BASE-TX cable between Class II repeaters may not exceed 5 m (16 ft). Links that operate in half duplex are not uncommon in Fast Ethernet. However, half duplex is undesirable because the signaling scheme is inherently full duplex.
Figure shows architecture configuration cable distances. 100BASE-TX links can have unrepeated distances up to 100 m. Switches have made this distance limitation less important. Most Fast Ethernet implementations are switched.
This page concludes this lesson. The next lesson will discuss Gigabit and 10-Gigabit Ethernet. The first page describes 1000-Mbps Ethernet standards.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.1
1000-Mbps Ethernet
This page covers the 1000-Mbps Ethernet or Gigabit Ethernet standards. These standards specify both fiber and copper media for data transmissions. The 1000BASE-T standard, IEEE 802.3ab, uses Category 5, or higher, balanced copper cabling. The 1000BASE-X standard, IEEE 802.3z, specifies 1 Gbps full duplex over optical fiber.
1000BASE-T, 1000BASE-SX, and 1000BASE-LX use the same timing parameters, as shown in Figure . They use a 1-ns (0.000000001 second, or one billionth of a second) bit time. The Gigabit Ethernet frame has the same format as is used for 10- and 100-Mbps Ethernet. Some implementations of Gigabit Ethernet may use different processes to convert frames to bits on the cable. Figure shows the Ethernet frame fields.
The differences between standard Ethernet, Fast Ethernet and Gigabit Ethernet occur at the physical layer. Due to the increased speeds of these newer standards, the shorter duration bit times require special considerations. Since the bits are introduced on the medium for a shorter duration and more often, timing is critical. This high-speed transmission requires higher frequencies. This causes the bits to be more susceptible to noise on copper media.
These issues require Gigabit Ethernet to use two separate encoding steps. Data transmission is more efficient when codes are used to represent the binary bit stream. The encoded data provides synchronization, efficient usage of bandwidth, and improved signal-to-noise ratio characteristics.
At the physical layer, the bit patterns from the MAC layer are converted into symbols. The symbols may also be control information such as start frame, end frame, and idle conditions on a link. The frame is coded into control symbols and data symbols to increase network throughput.
Fiber-based Gigabit Ethernet, or 1000BASE-X, uses 8B/10B encoding, which is similar to the 4B/5B concept. This is followed by the simple nonreturn to zero (NRZ) line encoding of light on optical fiber. This encoding process is possible because the fiber medium can carry higher bandwidth signals.
The next page will discuss the 1000BASE-T standard.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.2
1000BASE-T
This page will describe 1000BASE-T.
As Fast Ethernet was installed to increase bandwidth to workstations, this began to create bottlenecks upstream in the network. The 1000BASE-T standard, which is IEEE 802.3ab, was developed to provide additional bandwidth to help alleviate these bottlenecks. It provided more throughput for devices such as intra-building backbones, inter-switch links, server farms, and other wiring closet applications as well as connections for high-end workstations. 1000BASE-T was designed to function over Category 5 copper cable that passes the Category 5e test. Most installed Category 5 cable can pass the Category 5e certification if properly terminated. It is important for the 1000BASE-T standard to be interoperable with 10BASE-T and 100BASE-TX.
Since Category 5e cable can reliably carry up to 125 Mbps of traffic per pair, 1000 Mbps or 1 Gigabit of bandwidth was a design challenge. The first step to accomplish 1000BASE-T is to use all four pairs of wires instead of the traditional two pairs of wires used by 10BASE-T and 100BASE-TX. This requires complex circuitry that allows full-duplex transmissions on the same wire pair. This provides 250 Mbps per pair. With all four wire pairs, this provides the desired 1000 Mbps. Since the information travels simultaneously across the four paths, the circuitry has to divide frames at the transmitter and reassemble them at the receiver.
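The bandwidth arithmetic above can be checked in a few lines. Each pair runs at the same 125-megabaud symbol rate as 100BASE-TX, and PAM-5 carries two data bits per symbol, giving 250 Mbps per pair in each direction.

```python
PAIRS = 4
SYMBOL_RATE_MBAUD = 125   # same signaling clock as 100BASE-TX
BITS_PER_SYMBOL = 2       # PAM-5 carries two data bits per symbol

mbps_per_pair = SYMBOL_RATE_MBAUD * BITS_PER_SYMBOL
print(mbps_per_pair)           # 250 Mbps per pair, each direction
print(mbps_per_pair * PAIRS)   # 1000 Mbps per direction over four pairs
```

Keeping the symbol rate at 125 megabaud is what lets 1000BASE-T reuse cabling certified for Fast Ethernet; the extra throughput comes from the denser PAM-5 code and the use of all four pairs, not from a faster clock.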
1000BASE-T uses 4D-PAM5 line encoding on Category 5e, or better, UTP. That means the transmission and reception of data happens in both directions on the same wire at the same time. As might be expected, this results in a permanent collision on the wire pairs. These collisions result in complex voltage patterns. With the complex integrated circuits using techniques such as echo cancellation, Layer 1 Forward Error Correction (FEC), and prudent selection of voltage levels, the system achieves the 1-Gigabit throughput.
In idle periods there are nine voltage levels found on the cable, and during data transmission periods there are 17 voltage levels found on the cable. With this large number of states and the effects of noise, the signal on the wire looks more analog than digital. Like analog, the system is more susceptible to noise due to cable and termination problems.
The data from the sending station is carefully divided into four parallel streams, encoded, transmitted and detected in parallel, and then reassembled into one received bit stream. Figure represents the simultaneous full duplex on four-wire pairs. 1000BASE-T supports both half-duplex as well as full-duplex operation. The use of full-duplex 1000BASE-T is widespread.
The next page will introduce 1000BASE-SX and LX.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.3
1000BASE-SX and LX
This page will discuss single-mode and multimode optical fiber.
The IEEE 802.3 standard recommends that Gigabit Ethernet over fiber be the preferred backbone technology.
The timing, frame format, and transmission are common to all versions of 1000 Mbps. Two signal-encoding schemes are defined at the physical layer. The 8B/10B scheme is used for optical fiber and shielded copper media, and the pulse amplitude modulation 5 (PAM5) is used for UTP.
1000BASE-X uses 8B/10B encoding converted to non-return to zero (NRZ) line encoding. NRZ encoding relies on the signal level found in the timing window to determine the binary value for that bit period. Unlike most of the other encoding schemes described, this encoding system is level driven instead of edge driven. That is, the determination of whether a bit is a zero or a one is made by the level of the signal rather than when the signal changes levels.
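The level-driven versus edge-driven distinction is easiest to see by decoding the same sequence of signal levels both ways: under NRZ each level is the bit, while under NRZI only a change of level counts as a one.

```python
def decode_nrz(levels):
    """NRZ is level driven: the level in each timing window IS the bit."""
    return list(levels)

def decode_nrzi(levels, start=0):
    """NRZI is edge driven: a change from the previous level signals a one."""
    out, prev = [], start
    for level in levels:
        out.append(1 if level != prev else 0)
        prev = level
    return out

levels = [1, 1, 0, 1]
print(decode_nrz(levels))   # [1, 1, 0, 1]
print(decode_nrzi(levels))  # [1, 0, 1, 1]
```

The same waveform yields different bit streams under the two rules, which is why the receiver must know which scheme the transmitter is using.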
The NRZ signals are then pulsed into the fiber using either short-wavelength or long-wavelength light sources. The short-wavelength version uses an 850 nm laser or LED source with multimode optical fiber (1000BASE-SX). It is the lower-cost option but supports shorter distances. The long-wavelength version uses a 1310 nm laser source with either single-mode or multimode optical fiber (1000BASE-LX). Laser sources used with single-mode fiber can achieve distances of up to 5000 meters. Because of the length of time needed to completely turn the LED or laser on and off each time, the light is pulsed using low and high power. A logic zero is represented by low power, and a logic one by high power.
The Media Access Control method treats the link as point-to-point. Since separate fibers are used for transmitting (Tx) and receiving (Rx) the connection is inherently full duplex. Gigabit Ethernet permits only a single repeater between two stations. Figure is a 1000BASE Ethernet media comparison chart.
The next page describes the architecture of Gigabit Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.4
Gigabit Ethernet architecture
This page will discuss the architecture of Gigabit Ethernet.
Full-duplex link distances are limited only by the medium, not by round-trip delay. Since most Gigabit Ethernet is switched, the values in the figures are the practical limits between devices. Daisy-chain, star, and extended star topologies are all allowed. The issue then becomes one of logical topology and data flow, not timing or distance limitations.
A 1000BASE-T UTP cable is the same as 10BASE-T and 100BASE-TX cable, except that link performance must meet the higher quality Category 5e or ISO Class D (2000) requirements.
Modification of the architecture rules is strongly discouraged for 1000BASE-T. At 100 meters, 1000BASE-T is operating close to the edge of the ability of the hardware to recover the transmitted signal. Any cabling problems or environmental noise could render an otherwise compliant cable inoperable even at distances that are within the specification.
It is recommended that all links between a station and a hub or switch be configured for Auto-Negotiation to permit the highest common performance. This will avoid accidental misconfiguration of the other required parameters for proper Gigabit Ethernet operation.
The next page will discuss 10-Gigabit Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.5
10-Gigabit Ethernet
This page will describe 10-Gigabit Ethernet and compare it to other versions of Ethernet.
IEEE 802.3ae was adopted to include 10 Gbps full-duplex transmission over fiber optic cable. The basic similarities between 802.3ae and 802.3, the original Ethernet standard, are remarkable. 10-Gigabit Ethernet (10GbE) is evolving not only for LANs, but also for MANs and WANs.
With the frame format and other Ethernet Layer 2 specifications compatible with previous standards, 10GbE can provide the increased bandwidth needed while remaining interoperable with existing network infrastructure.
A major conceptual change for Ethernet is emerging with 10GbE. Ethernet is traditionally thought of as a LAN technology, but 10GbE physical layer standards allow both an extension in distance to 40 km over single-mode fiber and compatibility with synchronous optical network (SONET) and synchronous digital hierarchy (SDH) networks. Operation at 40 km distances makes 10GbE a viable MAN technology. Compatibility with SONET/SDH networks operating up to OC-192 speeds (9.58464 Gbps) makes 10GbE a viable WAN technology. 10GbE may also compete with ATM for certain applications.
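The OC-192 figure is easy to sanity-check: SONET/SDH line rates scale linearly with the OC level, at 51.84 Mbps per STS-1. A small illustrative calculation (the helper name is our own):

```python
# SONET/SDH line rates scale linearly: OC-n runs at n x 51.84 Mbps.
STS1_MBPS = 51.84  # OC-1 / STS-1 base rate

def oc_line_rate_gbps(n):
    """Line rate of an OC-n SONET signal, in Gbps."""
    return n * STS1_MBPS / 1000.0

print(f"OC-192 line rate: {oc_line_rate_gbps(192):.5f} Gbps")  # ~9.95 Gbps
# The 9.58464 Gbps figure quoted in the text is the OC-192 *payload* rate,
# i.e. the data rate actually carried by the 10GBASE-W WAN PHY after
# SONET framing overhead is accounted for.
```
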
To summarize, how does 10GbE compare to other varieties of Ethernet?
* Frame format is the same, allowing interoperability between all varieties of legacy, Fast, Gigabit, and 10-Gigabit Ethernet, with no reframing or protocol conversions.
* Bit time is now 0.1 nanoseconds. All other time variables scale accordingly.
* Since only full-duplex fiber connections are used, CSMA/CD is not necessary.
* The IEEE 802.3 sublayers within OSI Layers 1 and 2 are mostly preserved, with a few additions to accommodate 40 km fiber links and interoperability with SONET/SDH technologies.
* Flexible, efficient, reliable, relatively low cost end-to-end Ethernet networks become possible.
* TCP/IP can run over LANs, MANs, and WANs with one Layer 2 transport method.
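The bit-time scaling mentioned above is simple arithmetic: bit time is the reciprocal of the bit rate. A short sketch (names are illustrative) reproduces the values across Ethernet generations:

```python
# Bit time = 1 / bit rate. Each tenfold speed increase shrinks the bit
# time tenfold, down to 0.1 ns at 10 Gbps.

ETHERNET_RATES_BPS = {
    "10 Mbps Ethernet": 10_000_000,
    "100 Mbps Fast Ethernet": 100_000_000,
    "1000 Mbps Gigabit Ethernet": 1_000_000_000,
    "10 Gbps 10GbE": 10_000_000_000,
}

for name, bps in ETHERNET_RATES_BPS.items():
    bit_time_ns = 1e9 / bps  # convert seconds to nanoseconds
    print(f"{name}: {bit_time_ns:g} ns per bit")
# -> 100 ns, 10 ns, 1 ns, 0.1 ns respectively
```
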
The basic standard governing CSMA/CD is IEEE 802.3. An IEEE 802.3 supplement, entitled 802.3ae, governs the 10GbE family. As is typical for new technologies, a variety of implementations are being considered, including:
* 10GBASE-SR – Intended for short distances over already-installed multimode fiber, supports distances from 26 m to 82 m
* 10GBASE-LX4 – Uses wavelength division multiplexing (WDM), supports 240 m to 300 m over already-installed multimode fiber and 10 km over single-mode fiber
* 10GBASE-LR and 10GBASE-ER – Support 10 km and 40 km over single-mode fiber
* 10GBASE-SW, 10GBASE-LW, and 10GBASE-EW – Known collectively as 10GBASE-W, intended to work with OC-192 synchronous transport module SONET/SDH WAN equipment
The IEEE 802.3ae Task Force and the 10-Gigabit Ethernet Alliance (10 GEA) are working to standardize these emerging technologies.
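The PHY variants listed above can be summarized as a small lookup table. This is a sketch only; the structure and strings are ours, with distances as given in the text:

```python
# 10GbE PHY variants from the list above, as a lookup table
# (variant -> (medium, reach)). Illustrative data structure only.

TEN_GBE_PHYS = {
    "10GBASE-SR":  ("installed multimode fiber",       "26-82 m"),
    "10GBASE-LX4": ("multimode / single-mode (WDM)",   "240-300 m / 10 km"),
    "10GBASE-LR":  ("single-mode fiber",               "10 km"),
    "10GBASE-ER":  ("single-mode fiber",               "40 km"),
    "10GBASE-W":   ("SONET/SDH OC-192 WAN equipment",  "varies by fiber"),
}

def describe(phy):
    medium, reach = TEN_GBE_PHYS[phy]
    return f"{phy}: {medium}, reach {reach}"

print(describe("10GBASE-ER"))  # -> 10GBASE-ER: single-mode fiber, reach 40 km
```
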
10-Gbps Ethernet (IEEE 802.3ae) was standardized in June 2002. It is a full-duplex protocol that uses only optical fiber as a transmission medium. The maximum transmission distances depend on the type of fiber being used. When using single-mode fiber as the transmission medium, the maximum transmission distance is 40 kilometers (25 miles). Some discussions between IEEE members have begun that suggest the possibility of standards for 40, 80, and even 100-Gbps Ethernet.
The next page will discuss the architecture of 10-Gigabit Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.6
10-Gigabit Ethernet architectures
This page describes the 10-Gigabit Ethernet architectures.
As with the development of Gigabit Ethernet, the increase in speed comes with extra requirements. The shorter bit time duration caused by the increased speed requires special considerations. For 10GbE transmissions, each data bit duration is 0.1 nanosecond. This means there would be 1,000 10GbE data bits in the same bit time as one data bit in a 10-Mbps Ethernet data stream. Because of the short duration of the 10GbE data bit, it is often difficult to separate a data bit from noise. 10GbE data transmissions rely on exact bit timing to separate the data from the effects of noise on the physical layer. This is the purpose of synchronization.
In response to these issues of synchronization, bandwidth, and Signal-to-Noise Ratio, 10-Gigabit Ethernet uses two separate encoding steps. By using codes to represent the user data, transmission is made more efficient. The encoded data provides synchronization, efficient usage of bandwidth, and improved Signal-to-Noise Ratio characteristics.
Complex serial bit streams are used for all versions of 10GbE except for 10GBASE-LX4, which uses wide wavelength division multiplexing (WWDM) to multiplex four simultaneous bit streams as four wavelengths of light launched into the fiber at one time.
The figure represents the particular case of using four laser sources of slightly different wavelengths. Upon receipt from the medium, the optical signal stream is demultiplexed into four separate optical signal streams. The four optical signal streams are then converted back into four electronic bit streams as they travel, in approximately the reverse process, back up through the sublayers to the MAC layer.
Currently, most 10GbE products are in the form of modules, or line cards, for addition to high-end switches and routers. As the 10GbE technologies evolve, an increasing diversity of signaling components can be expected. As optical technologies evolve, improved transmitters and receivers will be incorporated into these products, taking further advantage of modularity. All 10GbE varieties use optical fiber media. Fiber types include 10 µm single-mode fiber, and 50 µm and 62.5 µm multimode fibers. A range of fiber attenuation and dispersion characteristics is supported, but they limit operating distances.
Even though support is limited to fiber optic media, some of the maximum cable lengths are surprisingly short. No repeater is defined for 10-Gigabit Ethernet since half duplex is explicitly not supported.
As with the 10 Mbps, 100 Mbps, and 1000 Mbps versions, it is possible to modify some of the architecture rules slightly. Possible architecture adjustments are related to signal loss and distortion along the medium. Due to dispersion of the signal and other issues, the light pulse becomes undecipherable beyond certain distances.
The next page will discuss the future of Ethernet.
7.2
Gigabit and 10-Gigabit Ethernet
7.2.7
Future of Ethernet
This page will teach students about the future of Ethernet.
Ethernet has gone through an evolution from Legacy —> Fast —> Gigabit —> MultiGigabit technologies. While other LAN technologies are still in place (legacy installations), Ethernet dominates new LAN installations, so much so that some have referred to Ethernet as the LAN "dial tone". Ethernet is now the standard for horizontal, vertical, and inter-building connections. Newer versions of Ethernet are blurring the distinction between LANs, MANs, and WANs.
While 1-Gigabit Ethernet is now widely available and 10-Gigabit products are becoming more available, the IEEE and the 10-Gigabit Ethernet Alliance are working on 40, 100, and even 160 Gbps standards. The technologies that are adopted will depend on a number of factors, including the rate of maturation of the technologies and standards, the rate of adoption in the market, and cost.
Proposals for Ethernet arbitration schemes other than CSMA/CD have been made. The collision problems of the physical bus topologies of 10BASE5 and 10BASE2, and of 10BASE-T and 100BASE-TX hubs, are no longer common. The use of UTP and optical fiber with separate Tx and Rx paths, along with the decreasing cost of switches, makes shared-media, half-duplex connections much less important.
The future of networking media is three-fold:
1. Copper (up to 1000 Mbps, perhaps more)
2. Wireless (approaching 100 Mbps, perhaps more)
3. Optical fiber (currently at 10,000 Mbps and soon to be more)
Copper and wireless media have certain physical and practical limitations on the highest frequency signals that can be transmitted. This is not a limiting factor for optical fiber in the foreseeable future. The bandwidth limitations on optical fiber are extremely large and are not yet being threatened. In fiber systems, it is the electronics technology (such as emitters and detectors) and fiber manufacturing processes that most limit the speed. Upcoming developments in Ethernet are likely to be heavily weighted towards Laser light sources and single-mode optical fiber.
When Ethernet was slower, half-duplex, subject to collisions, and governed by a "democratic" process for prioritization, it was not considered to have the Quality of Service (QoS) capabilities required to handle certain types of traffic, such as IP telephony and video multicast.
The full-duplex high-speed Ethernet technologies that now dominate the market are proving sufficient to support even QoS-intensive applications. This makes the potential applications of Ethernet even wider. Ironically, end-to-end QoS capability helped drive a push for ATM to the desktop and to the WAN in the mid-1990s, but now it is Ethernet, not ATM, that is approaching this goal.
This page concludes this lesson. The next page will summarize the main points from the module.
Summary
This page summarizes the topics discussed in this module.
Ethernet is a technology that has increased in speed one thousand times, from 10 Mbps to 10,000 Mbps, in less than a decade. All forms of Ethernet share a similar frame structure and this leads to excellent interoperability. Most Ethernet copper connections are now switched full duplex, and the fastest copper-based Ethernet is 1000BASE-T, or Gigabit Ethernet. 10 Gigabit Ethernet and faster are exclusively optical fiber-based technologies.
10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four common features of Legacy Ethernet are timing parameters, frame format, transmission process, and a basic design rule.
Legacy Ethernet encodes data on an electrical signal. The form of encoding used in 10 Mbps systems is called Manchester encoding. Manchester encoding uses a change in voltage to represent the binary numbers zero and one. An increase or decrease in voltage during a timed period, called the bit period, determines the binary value of the bit.
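The edge-driven scheme described above can be sketched in a few lines. This is a toy model, assuming the IEEE 802.3 convention that a low-to-high transition in mid-bit-period represents a one and a high-to-low transition a zero; the function names are our own:

```python
# Toy Manchester codec: each bit period is split into two half-periods,
# and the mid-period transition carries the value (0 = low, 1 = high level).

def manchester_encode(bits):
    """Return two half-period line levels per input bit."""
    out = []
    for b in bits:
        out += [0, 1] if b == 1 else [1, 0]  # 1: low->high, 0: high->low
    return out

def manchester_decode(levels):
    """Recover one bit from each pair of half-period samples."""
    return [1 if levels[i] < levels[i + 1] else 0
            for i in range(0, len(levels), 2)]

data = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(data)) == data
```

Because every bit period contains a transition, the receiver can recover the sender's clock from the signal itself, which is the property that made Manchester encoding attractive for 10 Mbps Ethernet.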
In addition to a standard bit period, Ethernet standards set limits for slot time and interframe spacing. Different types of media can affect transmission timing and timing standards ensure interoperability. 10 Mbps Ethernet operates within the timing limits offered by a series of no more than five segments separated by no more than four repeaters.
A single thick coaxial cable was the first medium used for Ethernet. 10BASE2, using a thinner coax cable, was introduced in 1985. 10BASE-T, using twisted-pair copper wire, was introduced in 1990. Because it used multiple wires 10BASE-T offered the option of full-duplex signaling. 10BASE-T carries 10 Mbps of traffic in half-duplex mode and 20 Mbps in full-duplex mode.
10BASE-T links can have unrepeated distances up to 100 m. Beyond that, network devices such as repeaters, hubs, bridges, and switches are used to extend the scope of the LAN. With the advent of switches, the 4-repeater rule is not so relevant. You can extend the LAN indefinitely by daisy-chaining switches. Each switch-to-switch connection, with a maximum length of 100 m, is essentially a point-to-point connection without the media contention or timing issues of using repeaters and hubs.
100-Mbps Ethernet, also known as Fast Ethernet, can be implemented using twisted-pair copper wire, as in 100BASE-TX, or fiber media, as in 100BASE-FX. 100 Mbps forms of Ethernet can transmit 200 Mbps in full duplex.
Because the higher frequency signals used in Fast Ethernet are more susceptible to noise, two separate encoding steps are used by 100-Mbps Ethernet to enhance signal integrity.
Gigabit Ethernet over copper wire is accomplished by the following:
* Category 5e UTP cable and careful improvements in electronics are used to boost 100 Mbps per wire pair to 125 Mbps per wire pair.
* All four wire pairs are used instead of just two. This allows 125 Mbps per wire pair, or 500 Mbps for the four wire pairs.
* Sophisticated electronics allow each wire pair to transmit and receive simultaneously (permanent "collisions" are permitted by design), so the signals run in full duplex, doubling the 500 Mbps to 1000 Mbps.
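The arithmetic in the steps above can be checked in a few lines (variable names are illustrative):

```python
# The 1000BASE-T throughput arithmetic, following the steps above.
per_pair_mbps = 125            # Cat 5e electronics push each pair to 125 Mbps
pairs = 4                      # all four wire pairs carry data
one_direction = per_pair_mbps * pairs   # 500 Mbps across the four pairs
full_duplex = one_direction * 2         # simultaneous Tx and Rx on each pair
print(full_duplex)  # -> 1000 (Mbps, i.e. Gigabit Ethernet)
```
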
On Gigabit Ethernet networks bit signals occur in one tenth of the time of 100 Mbps networks and 1/100 of the time of 10 Mbps networks. With signals occurring in less time the bits become more susceptible to noise. The issue becomes how fast the network adapter or interface can change voltage levels to signal bits and still be detected reliably one hundred meters away at the receiving NIC or interface. At this speed encoding and decoding data becomes even more complex.
The fiber versions of Gigabit Ethernet, 1000BASE-SX and 1000BASE-LX offer the following advantages: noise immunity, small size, and increased unrepeated distances and bandwidth. The IEEE 802.3 standard recommends that Gigabit Ethernet over fiber be the preferred backbone technology.