
FIRST DRAFT

== VAXcluster Terminology and Key Components ==

The following five subchapters are quoted from "ED-26065-98 VAXcluster Technical Summary 1984"; they reflect the state of clustering as of VMS 4.0 in 1984.

=== What is a VAXcluster? ===

A VAXcluster configuration is a system that combines two or more VAX processors, and mass storage servers if desired, in a loosely coupled manner. Clustering is possible with VAX-11/750, VAX-11/780, VAX-11/782, VAX-11/785, and VAX 8600 processors. The mass storage server is a free-standing, high-speed, intelligent device designed to the specifications of the Digital Storage Architecture and known as the Hierarchical Storage Controller, or HSC50. Each VAX processor or HSC50 in a cluster is called a node. Up to 16 nodes may be connected in a VAXcluster via a high-speed bus known as the Computer Interconnect, or CI.

=== The Computer Interconnect ===

The CI (Computer Interconnect) is a high-speed, fault-tolerant, dual-path bus. Using a CI bus, any combination of up to 16 processor nodes and intelligent I/O subsystem nodes can be loosely coupled in a computer-room environment.

Nodes in a VAXcluster use a multiaccess-bus topology that allows any VAX node in the cluster to talk to any other VAX node. Either a VAX processor or an intelligent I/O subsystem such as an HSC50 can serve as a node. The VMS operating system uses a new system-level protocol to communicate among cluster nodes.

The CI bus, which has a bandwidth of 70 Mbits per second, features an immediate acknowledgment scheme by which channel time is reserved at the end of each message to allow the destination device to acknowledge receipt of the message. The loss of any node from the cluster does not block communication among the remaining nodes because no single node is bus master.
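To make the acknowledgment scheme concrete, here is a minimal sketch in Python. It models only the idea that channel time is reserved at the end of each message so the destination can acknowledge immediately; the function names, retry limit, and simulated failure rate are illustrative assumptions, not DEC's actual CI protocol.

<syntaxhighlight lang="python">
import random

RETRY_LIMIT = 3   # assumed value, for illustration only

def deliver(message, destination):
    """Stand-in for the physical bus transfer; fails ~10% of the time here."""
    return random.random() > 0.1

def transmit(message, destination):
    """Send one message; the ACK arrives in the reserved slot at its end."""
    for attempt in range(1, RETRY_LIMIT + 1):
        received = deliver(message, destination)
        # Channel time is reserved here, so the destination can acknowledge
        # at once without arbitrating for the bus to send a separate packet.
        if received:
            return f"ACK on attempt {attempt}"
    return "no ACK: retries exhausted"

print(transmit("MSCP read request", "HSC50"))
</syntaxhighlight>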

=== The CI780 and CI750 Interfaces ===

The CI interfaces are microcoded intelligent controllers that connect VAX-11/780, VAX-11/782, VAX-11/785, and VAX-11/750 processors to the CI bus. Each interface attaches to one CI bus, which consists of two transmit cables and two receive cables. Under normal operating conditions, both sets of cables are available to meet traffic demands. If one path becomes unavailable, then all traffic uses the remaining path. Meanwhile, the VMS operating system periodically tests a failed path. As soon as the path becomes available again, it is automatically used for normal traffic.
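The failover behavior can be sketched as follows; this is a toy model under assumed names, not VMS driver code. It captures three points from the paragraph above: both paths carry traffic when healthy, all traffic moves to the surviving path on a failure, and a failed path that passes its periodic test automatically rejoins normal service.

<syntaxhighlight lang="python">
class CIPath:
    def __init__(self, name):
        self.name = name
        self.available = True

    def test(self):
        # Stand-in for the periodic probe of a failed path; assume it passes.
        return True

class DualPathLink:
    def __init__(self):
        self.paths = [CIPath("Path A"), CIPath("Path B")]
        self.turn = 0

    def pick_path(self):
        healthy = [p for p in self.paths if p.available]
        if not healthy:
            raise RuntimeError("both CI paths down: node unreachable")
        self.turn = (self.turn + 1) % len(healthy)   # share the traffic load
        return healthy[self.turn]

    def periodic_path_test(self):
        for p in self.paths:
            if not p.available and p.test():
                p.available = True   # rejoins normal traffic automatically

link = DualPathLink()
link.paths[0].available = False                    # one path fails
print(link.pick_path().name)                       # -> Path B (all traffic on one path)
link.periodic_path_test()                          # failed path retested and restored
print(link.pick_path().name, link.pick_path().name)  # -> Path B Path A (both in use again)
</syntaxhighlight>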

=== The SC008 Star Coupler ===

The Star Coupler is the common connection point for all cluster nodes linked to the CI. It connects all CI cables from the individual nodes, creating a radial or star arrangement that has a maximum radius of 45 meters. The Star Coupler can be configured to support VAXcluster systems of up to 16 nodes.

The Star Coupler provides passive coupling of the signals from all cluster nodes by means of power splitter/combiner transformers, permitting the removal of any node from the cluster without affecting the operation of any other node. In addition, the dual paths of the CI bus are electrically separated from each other.

For each node, transmit and receive connectors are provided for each CI path. A signal received from a transmit cable is distributed to all receive cables. The Star Coupler terminates all cables with their characteristic impedance. This allows connection or disconnection of nodes during normal cluster operations without affecting the rest of the cluster.

=== The HSC50 ===

The HSC50 (Hierarchical Storage Controller) is a self-contained, intelligent mass storage subsystem that connects one or more host processors to a set of mass storage disks or tapes. The HSC50, itself a cluster node, communicates with host CPUs by way of the CI, using Digital's MSCP (Mass Storage Control Protocol) for host communications. Communication between the HSC50 and the mass storage drives is through the Standard Disk Interface (SDI) and the Standard Tape Interface (STI).

To maximize throughput, the HSC50 handles two or more concurrent operations on several drives in order to optimize physical operations such as track seeks and rotational positioning.
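The summary does not say which scheduling algorithm the HSC50 used; purely as an illustration of optimizing track seeks across queued operations, the sketch below applies a classic elevator-style (SCAN) ordering, so the head sweeps across cylinders instead of seeking back and forth.

<syntaxhighlight lang="python">
def elevator_order(pending_cylinders, head_position):
    """Serve requests in one upward sweep from the head, then downward."""
    upward = sorted(c for c in pending_cylinders if c >= head_position)
    downward = sorted((c for c in pending_cylinders if c < head_position),
                      reverse=True)
    return upward + downward

# Five queued requests with the head at cylinder 50:
print(elevator_order([95, 3, 40, 77, 12], head_position=50))
# -> [77, 95, 40, 12, 3]
</syntaxhighlight>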

The HSC50 subsystem has a comprehensive set of autonomous diagnostics and redundant components. It can also continue operating in a degraded mode with minimal reduction in throughput if a disk or tape interface module fails. In addition, the SDI and STI also use passive coupling, so they can be disconnected and reconnected without disrupting the operation of other VAXcluster devices.

Each HSC50 can support up to six interfaces made up of any combination of SDIs and STIs. An SDI can support four HSC50-compatible disks, yielding a maximum of 24 disks per HSC50. An STI can support four masters plus three slaves per master, resulting in a maximum of 96 tape drives per HSC50.
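The arithmetic behind these limits is straightforward; the short calculation below simply restates the figures from the paragraph above.

<syntaxhighlight lang="python">
INTERFACES = 6                 # SDI/STI slots per HSC50

disks_per_sdi = 4
max_disks = INTERFACES * disks_per_sdi                        # 6 * 4 = 24

masters_per_sti = 4
drives_per_master = 1 + 3                                     # master + 3 slaves
max_tapes = INTERFACES * masters_per_sti * drives_per_master  # 6 * 4 * 4 = 96

print(max_disks, max_tapes)    # -> 24 96
</syntaxhighlight>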

== The VMS V3 Era ==

DEC VAXclusters were introduced with VMS V3.0 in 1982.

The CI780 was the first interface to connect VAX-11/780 computers to a cluster.

Chapter 2.1 "CI780 AND COMPUTER CLUSTERS" of "AA-D015D-TE VAX-VMS Release Notes Version 3.0" describes ...

Chapter 1.5 "CI780 SOFTWARE SUPPORT PREREQUISITE" of "AA-N472A-TE VAX-VMS Release Notes Version 3.1" ...

VMS V3.3 introduced the HSC50 Hierarchical Storage Controller, although its support was still "restricted".

The HSC50 supported RA60, RA80, and RA81 disks in conjunction with the CI780 connected to VAX-11/780 or VAX-11/782 computers.

Chapters 1.9 "INSTALLING VERSION 3.3 ON AN HSC50 DISK" and 2.2 "HSC50 SUPPORT" of "AA-P764A-TE VAX-VMS Release Notes Version 3.3" describe usage and support of the HSC50. (+SC008)

Chapters 2.8 "HSC50S AND COMPATIBILITY MODE UTILITIES RESTRICTION" and 2.11 "CI PORT DRIVER (PADRIVER) CHANGE" of "AA-P765A-TE VAX-VMS Release Notes Version 3.4" describe possible problems with device names and some changes introduced with VMS V3.4.

VMS V3.5 brought a new CI device, the CI750 interface for the VAX-11/750.

Chapter 2.2 "CI 750", 2.3 "CI PORT REINITIALIZATION", and Appendix A "TECHNICAL NOTE ON VAX/VMS CI" of "AA-P766A-TE VAX-VMS Release Notes Version 3.5" ...

Chapter 3.1 "CI PORT REVISION LEVELS" of "AA-CJ33A-TE VAX-VMS Release Notes Version 3.7" ...

== The VMS V4 Era ==

* VMS V4.0: VAX-11/785, VAX 8600
* VMS V4.3: VAX 8650, HSC70
* VMS V4.3A: VAX 8200 & CIBCI (VAXBI CI Interface) ???
* VMS V4.4: CIBCI (VAXBI CI Interface) ???
* Volume Shadowing
* CLUSTER PACKAGE 8974
* CLUSTER PACKAGE 8978
* VAX 8300
* VAX 8500
* VAX 8550
* VAX 8700
* VAX 8800

Related documents:

* VAXCluster System Revision F1 Jul86 O.pdf
* EJ-28788-78 Announcing Digital's New HSC70 VAXcluster Mass Storage Controller And Volume Shadowing.pdf
* EY-9222E-SA-0001 VAXcluster Maintenance Student Workbook Volume 1 CI.pdf
* EY-9222E-SB-0001 VAXcluster Maintenance Student Workbook Volume 2 HSC.pdf
* EY-9222E-SC-0001 VAXcluster Maintenance Student Workbook Volume 3 Cluster.pdf
* dtj_v01-05_sep1987.pdf
* AA-Y513A-TE Guide to VAXclusters V4.0

== VAXcluster Evolution ==

(taken from: VAXcluster Systems Quorum, Volume 8, Issue 2, November 1992)

The evolution of VAXcluster technology led to increasing degrees of resource availability.

VMS Version 4.0 supported both the dual-pathed CI and the dual porting of disk devices. Thus, a single point of failure for access to a disk device could be avoided when VAXcluster configurations were first implemented.

Volume shadowing, introduced in Version 4.4 of VMS, improved information availability by replicating data on multiple disk volumes. Because volume shadowing can be applied to system disks as well, it also improved system availability.
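As a minimal sketch of the idea (the class and device names are illustrative; this is not the VMS shadowing implementation): every write is replicated to all members of a shadow set, any surviving member can satisfy a read, and losing one member loses no data.

<syntaxhighlight lang="python">
class ShadowSet:
    def __init__(self, members):
        self.members = {name: {} for name in members}   # name -> block store

    def write(self, block, data):
        for store in self.members.values():             # replicate everywhere
            store[block] = data

    def read(self, block):
        for store in self.members.values():             # any member will do
            if block in store:
                return store[block]
        raise KeyError(block)

    def fail(self, name):
        del self.members[name]                          # member drops out

shadow = ShadowSet(["DUA0", "DUA1"])                    # hypothetical volumes
shadow.write(0, b"SYSBOOT")
shadow.fail("DUA0")                                     # one volume fails...
print(shadow.read(0))                                   # -> b'SYSBOOT'
</syntaxhighlight>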

Version 5.0 introduced the use of multiple interconnects in VAXcluster configurations. It also supported failover of VAXcluster communication from one interconnect to another.

Version 5.4 introduced support for a VMS system using multiple CI adapters.

Version 5.4-3 introduced support for a VMS system using multiple local area network (LAN) adapters for VAXcluster communication. Thus, there need not be a single point of failure for communication among OpenVMS systems in a VAXcluster.

This same evolution also led to VAXcluster configurations whose nodes are distributed over increasingly larger geographies. The original VAXcluster interconnect (CI) limited the maximum supported distance between any two nodes in a VAXcluster to 90 meters. Hence, clustered VMS systems and storage were typically confined to a single computer room.

Ethernet permitted nodes to be spread over distances measured in hundreds of meters or, perhaps, a few thousand meters. Thus, Ethernet led to the clustering of nodes throughout a building or multiple buildings in close proximity to each other.

With FDDI (Fiber Distributed Data Interface), the geographic distribution of nodes in a VAXcluster configuration can be measured in tens of kilometers. When bridges are used to combine FDDI and Ethernet, even larger VAXcluster configurations can be achieved. Support for clustering over even larger geographies is planned.

Support for FDDI as a VAXcluster interconnect leads to another form of resource availability in the VAXcluster computing environment - disaster tolerance through site redundancy. This concept is based on duplicating critical hardware and software components of a VAXcluster configuration in two distinct and widely separated locations. Even though there is a great distance between these two locations, the OpenVMS systems and storage in both locations function as a single VAXcluster configuration. These systems all satisfy the Rule of Total Connectivity. If disaster strikes one location, the other location continues to provide the critical elements of the computing environment.
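The Rule of Total Connectivity requires that every node in a VAXcluster be able to communicate directly with every other node. A hypothetical check of that property over a reachability table might look like this (node names and data are invented for the example):

<syntaxhighlight lang="python">
def totally_connected(reachable):
    """Return (True, None) if every node can reach every other node,
    otherwise (False, first_missing_pair)."""
    nodes = set(reachable)
    for a in nodes:
        for b in nodes - {a}:
            if b not in reachable[a]:
                return False, (a, b)
    return True, None

# Two-site configuration bridged over FDDI: each node reaches all the others.
cluster = {
    "SITE1A": {"SITE1B", "SITE2A", "SITE2B"},
    "SITE1B": {"SITE1A", "SITE2A", "SITE2B"},
    "SITE2A": {"SITE1A", "SITE1B", "SITE2B"},
    "SITE2B": {"SITE1A", "SITE1B", "SITE2A"},
}
print(totally_connected(cluster))   # -> (True, None)
</syntaxhighlight>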

==External links==

* [https://research.cs.wisc.edu/areas/os/Qual/papers/vaxclusters.pdf VAXclusters: A Closely-Coupled Distributed System] - a very good paper on VAXclusters

[[Category: VAXen]]