Ethernet is a family of computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN). It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since been refined to support higher bit rates and longer link distances (but see 'Parameter contention' below).
The experimental Ethernet, built at Xerox PARC, operated at 3 Mbits/second; the first commercial version, defined by a consortium of DEC, Intel and Xerox, operated at 10 Mbits/second. Follow-on versions have been produced at 100 Mbits/second and 1000 Mbits/second ('Gigabit Ethernet').
The first two generations of Ethernet (at PARC, and the original DIX Ethernet) used coaxial cable, which has good high-frequency propagation (see images above, and at the top).
To connect a transceiver (or 'station') to the network, a 'vampire tap' was used. A special connector was clamped to the cable; this connector had a threaded hole in it, with the center of the hole aligned with the center conductor of the cable. A cutting tool was then screwed in, which cut a hole through the outer ground conductor (the woven metallic layer in the images). The tool was then withdrawn, and the transceiver was screwed in; the transceiver's connector had a sharp pin which penetrated the insulation around the center conductor and made contact with it.
For a variety of reasons, this technology (called 10BASE5) was not the best approach for wide-scale service: it was replaced first with i) 10BASE2 (so-called 'thin net'), a smaller diameter coaxial cable with standard coaxial connectors pre-fabricated onto it; and then with ii) twisted-pair (10BASE-T), using RJ45 connectors.
Originally, Ethernet was a Carrier-Sense Multiple Access with Collision Detection (CSMA-CD) bus (i.e. a shared broadcast channel on which all stations can hear any station which is transmitting). What this means is that a station wanting to transmit waits until it hears silence ('carrier sense'), and then starts transmitting. If it 'hears' another station start to transmit while doing so (a 'collision' - hence 'collision detection'), it stops, 'backs off' a random delay, and re-tries. (This is essentially identical to what a group of people chatting at a party will do.)
With both stations performing the random backoff after a collision, the chances are that one will pick a shorter delay, start before the other, and succeed in acquiring the channel: i.e. its new transmission will arrive at the other station, suppressing that station's transmission, before the second station's random time-out has run out.
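The transmit loop described above can be sketched in a few lines. The sketch below is illustrative only, not a real driver: `channel_idle` and `detect_collision` are hypothetical stand-ins for the hardware carrier-sense and collision-detect signals, and the backoff follows the truncated binary exponential scheme classic Ethernet used (a random number of slot times, with the range doubling after each collision).

```python
import random

def backoff_slots(attempt, max_exponent=10):
    """Truncated binary exponential backoff: after the n-th collision,
    wait a random number of slot times drawn uniformly from
    [0, 2^min(n, max_exponent) - 1]."""
    return random.randint(0, 2 ** min(attempt, max_exponent) - 1)

def try_transmit(channel_idle, detect_collision, max_attempts=16):
    """Sketch of the CSMA-CD transmit loop.  channel_idle() and
    detect_collision() are hypothetical callables standing in for the
    transceiver's carrier-sense and collision-detect outputs."""
    for attempt in range(1, max_attempts + 1):
        while not channel_idle():      # carrier sense: wait for silence
            pass
        # ... start transmitting ...
        if not detect_collision():     # nobody else started: success
            return attempt
        # Collision: abort the frame, then wait a random number of
        # slot times before retrying.
        slots = backoff_slots(attempt)
        # ... delay for slots * slot_time, then loop and retry ...
    raise RuntimeError("excessive collisions; giving up")
```

The randomness is the key design point: if both stations waited a fixed delay after a collision, they would collide again indefinitely.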
Two of the critical parameters for an Ethernet are the network's physical size (and hence end-to-end delay), and its speed. This is because if the network is large enough, and the bit rate high enough, it is possible for two stations, at opposite ends of the network, to each send a short message which appears to each of them to have been successfully transmitted, but which produces a collision in the middle of the network between them.
(I.e. the second station's message will only start to arrive at the first after the first's message has finished being sent, so no collision is detected there; and similarly at the second station.)
To prevent this happening, the 10 Mbits/second spec called for a minimum message size, and a maximum physical size, so that one station's message (in the situation above) was guaranteed to start arriving at the other station before the other station finished its message - i.e. guaranteeing a collision detectable at the transmitter.
However, this produces an unfortunate conflict between two parameters which it is desirable to increase: the speed, and the maximum network size. If the network is made larger, then to guarantee that collisions will be detected, either i) the minimum message size must be made larger, or ii) the speed must be slowed. Similarly, if the speed is raised, to keep the collision detection guarantee, either i) the minimum message size must be increased, or ii) the maximum size of the network must be reduced.
Since making the minimum message size larger will reduce the network's maximum throughput in messages per unit time, that fix is generally undesirable, leaving speed and network size as directly competing parameters.
Along with the issue above, the analog engineering of a very high-speed, long bus presents significant challenges. (This was one of the reasons for the initial popularity of rings, which are constructed of a number of point-to-point links, inherently a simpler analog problem domain - and one susceptible to easy conversion to fiber optic technology.)
This has led to a gradual conversion to a wholly different kind of structure in the internals of the network, one which the attached stations cannot see.
The 'network' overall has been reduced to a series of two-node physical 'Ethernets', in which one end is the station, and the other is a 'hub' - a packet switch which acts as a bridge. A number of these hubs may be connected together (usually by two-node 'Ethernet' links). Thus, a typical 'Ethernet' is no longer a CSMA-CD bus, but rather a classic packet-switching network, with the hubs as the packet switches.
As in many similar cases, only the interface between the two parts (the station and the network) has been retained; the equipment on either side has been free to transform internally, and on the network side it has done so in a major way.