Computer System Architecture
The computer system may be organized in a number of different ways, which we can categorize roughly according to the number of general-purpose processors used.
Single-Processor Systems
Most systems use a single processor. The variety of single-processor systems may be surprising, however, since these systems range from PDAs to mainframes. On a single-processor system, there is one main CPU capable of executing a general-purpose instruction set, including instructions from user processes. Almost all systems have other special-purpose processors as well. They may come in the form of device-specific processors, such as disk, keyboard, and graphics controllers; or, on mainframes, they may come in the form of more general-purpose processors, such as I/O processors that move data rapidly among the components of the system. All of these special-purpose processors run a limited instruction set and do not run user processes.
Sometimes they are managed by the operating system, in that the operating system sends them information about their next task and monitors their status. For example, a disk-controller microprocessor receives a sequence of requests from the main CPU and implements its own disk queue and scheduling algorithm. This arrangement relieves the main CPU of the overhead of disk scheduling. PCs contain a microprocessor in the keyboard to convert the keystrokes into codes to be sent to the CPU.
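To make the idea concrete, the following sketch models the kind of request queue a disk-controller microprocessor might maintain on behalf of the main CPU. It is a minimal illustration in C; the structure names and the simple FIFO policy are assumptions, not the firmware of any real controller.

    /* Minimal sketch of a disk controller's private request queue.
     * All names here are hypothetical illustrations. */
    #include <stdio.h>
    #include <stdlib.h>

    struct disk_request {
        int block;                      /* disk block to read or write */
        struct disk_request *next;
    };

    static struct disk_request *head = NULL, *tail = NULL;

    /* The main CPU hands off a request and continues with other work. */
    void enqueue_request(int block) {
        struct disk_request *r = malloc(sizeof *r);
        r->block = block;
        r->next = NULL;
        if (tail) tail->next = r; else head = r;
        tail = r;
    }

    /* The controller drains the queue on its own schedule (FIFO here;
     * real controllers reorder requests to reduce seek time). */
    void service_requests(void) {
        while (head) {
            struct disk_request *r = head;
            head = r->next;
            if (!head) tail = NULL;
            printf("servicing block %d\n", r->block);
            free(r);
        }
    }

    int main(void) {
        enqueue_request(42);
        enqueue_request(7);
        service_requests();
        return 0;
    }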
In other systems or circumstances, special-purpose processors are low-level components built into the hardware. The operating system cannot communicate with these processors; they do their jobs autonomously. The use of special-purpose microprocessors is common and does not turn a single-processor system into a multiprocessor. If there is only one general-purpose CPU, then the system is a single-processor system.
Multiprocessor Systems
Although single-processor systems are the most common, multiprocessor systems (also known as parallel systems or tightly coupled systems) are growing in importance. Such systems have two or more processors in close communication, sharing the computer bus and sometimes the clock, memory, and peripheral devices. Multiprocessor systems have three main advantages:
1. Increased throughput
By increasing the number of processors, we expect to get more work done in less time. The speed-up ratio with N processors is not N, however; rather, it is less than N. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors (a short code sketch following this discussion makes the effect concrete). Similarly, N programmers working closely together do not produce N times the amount of work a single programmer would produce.
2. Economy of scale
Multiprocessor systems can cost less than equivalent multiple single-processor systems because they can share peripherals, mass storage, and power supplies. If several programs operate on the same set of data, it is cheaper to store those data on one disk and to have all the processors share them than to have many computers with local disks and many copies of the data.
3. Increased reliability
If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, only slow it down. If we have ten processors and one fails, then each of the remaining nine processors can pick up a share of the work of the failed processor. Thus, the entire system runs only 10 percent slower rather than failing altogether. Increased reliability of a computer system is crucial in many applications. The ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Some systems go beyond graceful degradation and are called fault tolerant, because they can suffer a failure of any single component and still continue operation.
Note that fault tolerance requires a mechanism to allow the failure to be detected, diagnosed, and, if possible, corrected. The HP NonStop system (formerly Tandem) uses both hardware and software duplication to ensure continued operation despite faults. The system consists of multiple pairs of CPUs, working in lockstep. Both processors in the pair execute each instruction and compare the results.
If the results differ, then one CPU of the pair is at fault, and both are halted. The process that was being executed is then moved to another pair of CPUs, and the instruction that failed is restarted. This solution is expensive, since it involves special hardware and considerable hardware duplication.
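The less-than-N speed-up mentioned above can be illustrated with a toy model. Assuming, purely for illustration, that every additional processor contributes a fixed fraction k of coordination overhead, a sketch in C might compute the expected speed-up as N / (1 + k(N - 1)); both the formula and the 5 percent overhead figure are assumptions, not measurements.

    /* Toy model of multiprocessor speed-up: assume each extra processor
     * adds a fixed coordination overhead k, so speed-up stays below N.
     * Both the formula and k = 0.05 are illustrative assumptions. */
    #include <stdio.h>

    double speedup(int n, double k) {
        return n / (1.0 + k * (n - 1));
    }

    int main(void) {
        double k = 0.05;            /* assumed 5% overhead per extra CPU */
        for (int n = 1; n <= 16; n *= 2)
            printf("N = %2d  speed-up = %.2f\n", n, speedup(n, k));
        return 0;
    }

With these assumed numbers, sixteen processors yield a speed-up of roughly 9, not 16, which is the gap the text describes.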
The multiple-processor systems in use today are of two types. Some systems use asymmetric multiprocessing, in which each processor is assigned a specific task. A master processor controls the system; the other processors either look to the master for instruction or have predefined tasks. This scheme defines a master-slave relationship. The master processor schedules and allocates work to the slave processors.
The most common systems use symmetric multiprocessing (SMP), in which each processor performs all tasks within the operating system. SMP means that all processors are peers; no master-slave relationship exists between processors.
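A minimal sketch of the SMP idea follows, using POSIX threads to stand in for CPUs: all "processors" are peers that pull work from one shared queue, with no master assigning tasks, in contrast to the master-slave scheme above. The queue, the task representation, and the counts are simplified assumptions.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int next_task = 0;
    static const int num_tasks = 8;

    /* Each thread plays the role of one peer CPU in an SMP system:
     * it takes whatever task is next in the shared queue. */
    static void *cpu(void *arg) {
        long id = (long)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            if (next_task >= num_tasks) {   /* shared queue is empty */
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            int task = next_task++;         /* no master assigns work */
            pthread_mutex_unlock(&lock);
            printf("cpu %ld runs task %d\n", id, task);
        }
    }

    int main(void) {
        pthread_t cpus[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&cpus[i], NULL, cpu, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(cpus[i], NULL);
        return 0;
    }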
A recent trend in CPU design is to include multiple compute cores on a single chip. In essence, these are multiprocessor chips. Two-way chips are becoming mainstream, while N-way chips are going to be common in high-end systems. Aside from architectural considerations such as cache, memory, and bus contention, these multicore CPUs look to the operating system just like N standard processors.
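One can observe this directly on a POSIX-style system: the operating system reports each core as a logical processor. The sketch below uses sysconf with _SC_NPROCESSORS_ONLN, a widely supported (though not strictly standard) query.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Each core of a multicore chip counts as one logical CPU. */
        long n = sysconf(_SC_NPROCESSORS_ONLN);
        printf("operating system sees %ld processors\n", n);
        return 0;
    }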
Lastly, blade servers are a recent development in which multiple processor boards and networking boards are placed in the same chassis. The difference between these and traditional multiprocessor systems is that each blade-processor board boots independently and runs its own operating system. Some blade-server boards are multiprocessors as well, which blurs the lines between types of computers. In essence, these servers consist of multiple independent multiprocessor systems.
Clustered Systems
Another type of multiple-CPU system is the clustered system. Like multiprocessor systems, clustered systems gather together multiple CPUs to accomplish computational work. Clustered systems differ from multiprocessor systems, however, in that they are composed of two or more individual systems coupled together. The definition of the term clustered is not concrete; many commercial packages wrestle with what a clustered system is and why one form is better than another. The generally accepted definition is that clustered computers share storage and are closely linked via a local area network (LAN) or a faster interconnect such as InfiniBand.
Clustering is usually used to provide high-availability service; that is, service will continue even if one or more systems in the cluster fail. High availability is generally obtained by adding a level of redundancy to the system. A layer of cluster software runs on the cluster nodes.
Each node monitors one or more of the others (over the LAN). If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine. The users and clients of the applications see only a brief interruption of service.
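This kind of monitoring is commonly implemented with periodic heartbeat messages. The sketch below shows only the failure-detection logic; the timeout value and the commented takeover steps are hypothetical placeholders, not any particular cluster product's interface.

    #include <stdio.h>
    #include <time.h>

    #define HEARTBEAT_TIMEOUT 5        /* assumed: 5 s of silence = failure */

    static time_t last_heartbeat;      /* updated when a heartbeat arrives */

    void on_heartbeat(void) { last_heartbeat = time(NULL); }

    /* Run periodically by the monitoring node. */
    void check_peer(void) {
        if (time(NULL) - last_heartbeat > HEARTBEAT_TIMEOUT) {
            printf("peer failed: take ownership of storage, restart apps\n");
            /* take_over_storage(); restart_applications();  -- hypothetical */
        }
    }

    int main(void) {
        on_heartbeat();                /* peer reports in */
        check_peer();                  /* within timeout: no action */
        last_heartbeat -= 10;          /* simulate 10 s without a heartbeat */
        check_peer();                  /* timeout exceeded: failover fires */
        return 0;
    }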
Clustering can be structured asymmetrically or symmetrically. In asymmetric clustering, one machine is in hot-standby mode while the other is running the applications. The hot-standby host machine does nothing but monitor the active server. If that server fails, the hot-standby host becomes the active server.
In symmetric mode, two or more hosts are running applications and monitoring each other. This mode is obviously more efficient, as it uses all of the available hardware. It does require that more than one application be available to run.
Other forms of clusters include parallel clusters and clustering over a wide-area network (WAN). Parallel clusters allow multiple hosts to access the same data on the shared storage. Because most operating systems lack support for simultaneous data access by multiple hosts, parallel clusters are usually accomplished by the use of special versions of software and special releases of applications.
For example, Oracle Parallel Server is a version of Oracle's database that has been designed to run on a parallel cluster. Each machine runs Oracle, and a layer of software tracks access to the shared disk. Each machine has full access to all data in the database. To provide this shared access to data, the system must also supply access control and locking to ensure that no conflicting operations occur. This function, commonly known as a distributed lock manager (DLM), is included in some clustering technology.
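As a rough illustration of what a lock manager must decide, the following single-process sketch grants or denies shared and exclusive requests on a resource. A real DLM coordinates this state across machines over the network; none of this reflects Oracle's actual implementation.

    #include <stdio.h>

    /* One entry in a lock table: a resource may be held shared by many
     * readers or exclusively by a single writer, never both. */
    struct dlm_lock {
        int readers;                   /* number of shared holders */
        int writer;                    /* 1 if held exclusively */
    };

    int acquire_shared(struct dlm_lock *l) {
        if (l->writer) return 0;       /* conflicts with exclusive holder */
        l->readers++;
        return 1;
    }

    int acquire_exclusive(struct dlm_lock *l) {
        if (l->writer || l->readers) return 0;   /* any holder conflicts */
        l->writer = 1;
        return 1;
    }

    int main(void) {
        struct dlm_lock block = {0, 0};
        printf("host A, shared lock: %s\n",
               acquire_shared(&block) ? "granted" : "denied");
        printf("host B, exclusive lock: %s\n",
               acquire_exclusive(&block) ? "granted" : "denied");
        return 0;
    }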
Clustering technology is changing rapidly. Some cluster products support dozens of systems in a cluster, as well as clustered nodes that are separated by miles. Many of these improvements are made possible by storage-area networks (SANs), which allow many systems to attach to a pool of storage.
If the applications and their data are stored on the SAN, then the cluster software can assign the application to run on any host that is attached to the SAN. If the host fails, then any other host can take over. In a database cluster, dozens of hosts can share the same database, greatly increasing performance and reliability.