Computer Systems Architecture


Département Informatique, Cnam Paris. Foreword.

Hypercube

DSM (Distributed Shared Memory)

Symmetric multiprocessing

Symmetric multiprocessing (SMP) is a multiprocessor computer hardware and software architecture in which two or more identical processors are connected to a single, shared main memory, have full access to all I/O devices, and are controlled by a single OS instance that treats all processors equally, reserving none for special purposes. Most multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors. SMP systems are tightly coupled multiprocessor systems with a pool of homogeneous processors running independently: each processor executes different programs and works on different data, while being able to share common resources (memory, I/O devices, the interrupt system, and so on) over a common system bus or crossbar.
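As an illustration of the cores being treated as interchangeable processors, here is a minimal Python sketch (the square worker and the input range are invented for the example) that lets the operating system spread identical tasks across whatever processors are available:

    import multiprocessing as mp

    def square(n):
        # Under SMP, any available core may pick up any of these tasks.
        return n * n

    if __name__ == "__main__":
        # One worker process per logical processor reported by the OS.
        with mp.Pool(processes=mp.cpu_count()) as pool:
            results = pool.map(square, range(16))
        print(results)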

Asymmetric multiprocessing

Multiprocessing is the use of more than one CPU in a computer system. Asymmetric multiprocessing (AMP) was a software stopgap for handling multiple CPUs before symmetric multiprocessing (SMP) was available. It has also been used to provide less expensive options[1] on systems where SMP was available. In an asymmetric multiprocessing system, not all CPUs are treated equally; for example, a system might allow (either at the hardware or operating-system level) only one CPU to execute operating system code, or only one CPU to perform I/O operations. Other AMP systems allow any CPU to execute operating system code and perform I/O operations, so that they are symmetric with regard to processor roles, but attach some or all peripherals to particular CPUs, making them asymmetric with regard to peripheral attachment.
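As a loose software-level analogy of reserving a processor for a special purpose, the sketch below (Linux-specific, standard library only) pins the current process to CPU 0, much as an AMP design might confine operating system or I/O work to a single CPU:

    import os

    # Linux only: restrict the current process (pid 0 = self) to CPU 0.
    os.sched_setaffinity(0, {0})
    print("Allowed CPUs:", os.sched_getaffinity(0))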

Supercomputer

A supercomputer is a computer at the frontline of contemporary processing capacity, particularly in its speed of calculation. The Blue Gene/P supercomputer at Argonne National Lab, for example, runs over 250,000 processors using normal data-center air conditioning, grouped in 72 racks/cabinets connected by a high-speed optical network.[1]

Simultaneous multithreading

Multithreading is similar in concept to preemptive multitasking but is implemented at the thread level of execution in modern superscalar processors. Simultaneous multithreading (SMT) is one of the two main implementations of multithreading, the other form being temporal multithreading. In temporal multithreading, only one thread of instructions can execute in any given pipeline stage at a time.
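One visible consequence of SMT is that the operating system sees more logical processors than there are physical cores. A small sketch of how to observe this from Python, assuming the third-party psutil package is installed:

    import os
    import psutil  # third-party: pip install psutil

    logical = os.cpu_count()                    # hardware threads visible to the OS
    physical = psutil.cpu_count(logical=False)  # physical cores
    print(f"{physical} physical cores, {logical} logical processors")
    if logical and physical and logical > physical:
        print("SMT appears to be enabled (more than one thread per core).")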

Superscalar

A superscalar CPU architecture implements a form of parallelism called instruction-level parallelism within a single processor, and therefore allows higher CPU throughput than would otherwise be possible at a given clock rate. In a simple superscalar pipeline, fetching and dispatching two instructions at a time allows a maximum of two instructions per cycle to be completed (IF = instruction fetch, ID = instruction decode, EX = execute, MEM = memory access, WB = register write-back, i = instruction number, t = clock cycle, i.e. time).
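Laid out cycle by cycle, the two-wide pipeline described above looks roughly like this, with two instructions entering each stage per clock:

    t =    1    2    3    4    5    6    7
    i=0    IF   ID   EX   MEM  WB
    i=1    IF   ID   EX   MEM  WB
    i=2         IF   ID   EX   MEM  WB
    i=3         IF   ID   EX   MEM  WB
    i=4              IF   ID   EX   MEM  WB
    i=5              IF   ID   EX   MEM  WB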

Vector processor

Other CPU designs include multiple instructions for vector processing on multiple (vectorised) data sets, an approach typically known as MIMD (Multiple Instruction, Multiple Data) and realized with VLIW. Such designs are usually dedicated to a particular application and not commonly marketed for general-purpose computing. In the Fujitsu FR-V VLIW/vector processor, both technologies are combined.
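As a software-level analogy only (NumPy on an ordinary CPU rather than a hardware vector unit), the point of vectorised operation is that a single operation is applied to whole data sets at once instead of element by element:

    import numpy as np

    a = np.arange(1_000_000, dtype=np.float64)
    b = np.arange(1_000_000, dtype=np.float64)

    # Scalar style: one element handled per step of a Python-level loop.
    c_scalar = [x + y for x, y in zip(a, b)]

    # Vector style: a single operation over the whole arrays.
    c_vector = a + b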

Beowulf (computing)

A Beowulf cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network, with libraries and programs installed that allow processing to be shared among them. The result is a high-performance parallel computing cluster built from inexpensive personal computer hardware. The Borg, a 52-node Beowulf cluster used by the McGill University pulsar group, is one example, used to search for pulsations from binary pulsars.
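The libraries in question are typically a message-passing layer such as MPI. A minimal sketch, assuming an MPI implementation and the mpi4py package are installed on the nodes:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's identity within the job
    size = comm.Get_size()   # total number of cooperating processes
    print(f"Hello from process {rank} of {size}")

Launched with something like mpirun -np 52 python hello.py, each commodity machine runs one or more of these processes, and they coordinate over the local network.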

Massively parallel

In computing, massively parallel refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel. In one approach, e.g. grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is used opportunistically whenever a computer is available;[1] an example is BOINC, a volunteer-based, opportunistic grid system.[2] In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster.

Amdahl's law

Amdahl's law, also known as Amdahl's argument,[1] is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors: the speedup of a program using multiple processors is limited by the sequential fraction of the program. For example, if 95% of the program can be parallelized, the theoretical maximum speedup is 20×, no matter how many processors are used.
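A minimal sketch of the arithmetic behind that 20× figure (the function name and sample values are illustrative):

    def amdahl_speedup(parallel_fraction, n_processors):
        # S = 1 / ((1 - P) + P / N): speedup of a program whose fraction P
        # is parallelized across N processors.
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_processors)

    # 95% parallelizable: the speedup approaches 1 / 0.05 = 20x as N grows.
    print(amdahl_speedup(0.95, 16))      # ~9.1
    print(amdahl_speedup(0.95, 1024))    # ~19.6
    print(amdahl_speedup(0.95, 10**9))   # ~20.0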

Flynn's taxonomy

Flynn's taxonomy is a classification of computer architectures, proposed by Michael J. Flynn in 1966.[1][2] The four classifications defined by Flynn are based upon the number of concurrent instruction (or control) streams and data streams available in the architecture: Single Instruction, Single Data stream (SISD); Single Instruction, Multiple Data streams (SIMD); Multiple Instruction, Single Data stream (MISD); and Multiple Instruction, Multiple Data streams (MIMD).
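The machines conventionally used to illustrate each class are not part of the taxonomy itself, but they make the grid concrete:

    SISD: one instruction stream, one data stream   - a classic uniprocessor
    SIMD: one instruction stream, many data streams - vector processors, GPUs
    MISD: many instruction streams, one data stream - rare; redundant fault-tolerant designs
    MIMD: many instruction streams, many data streams - multiprocessors, clusters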

Speedup

In parallel computing, speedup refers to how much a parallel algorithm is faster than a corresponding sequential algorithm. It is defined as the ratio of the sequential execution time to the parallel execution time: S_p = T_1 / T_p, where T_1 is the execution time of the sequential algorithm and T_p the execution time of the parallel algorithm on p processors.

Cost efficiency

Cost efficiency (or cost optimality), in the context of parallel computer algorithms, is a measure of how effectively parallel computing can be used to solve a particular problem. A parallel algorithm is considered cost efficient if its asymptotic running time multiplied by the number of processing units involved in the computation is comparable to the running time of the best sequential algorithm. For example, if the best sequential algorithm runs in O(n log n) time, a parallel algorithm running in O(log n) time on n processors has cost n · O(log n) = O(n log n) and is therefore cost efficient.
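A small numeric illustration of both definitions (the timings are invented for the example):

    # Invented timings for illustration.
    t_sequential = 120.0     # seconds for the best sequential algorithm
    t_parallel = 7.5         # seconds on p processors
    p = 32

    speedup = t_sequential / t_parallel   # S_p = T_1 / T_p = 16.0
    cost = p * t_parallel                 # 240 processor-seconds
    efficiency = speedup / p              # 0.5, i.e. 50% of ideal linear speedup
    print(speedup, cost, efficiency)

Here the cost (240 processor-seconds) stays within a small constant factor of the sequential time (120 seconds), which is what cost efficiency asks for.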

Gustafson's law

Gustafson's law (also known as Gustafson-Barsis' law) is a law in computer science which says that computations involving arbitrarily large data sets can be efficiently parallelized. It provides a counterpoint to Amdahl's law, which describes a limit on the speedup that parallelization can provide for a fixed data set size. Gustafson's law was first described[1] by John L. Gustafson and his colleague Edwin H. Barsis: the scaled speedup on N processors is S(N) = N - s(N - 1), where s is the serial fraction of the workload as measured on the parallel system.

Parallel computing

Traditionally, computer software has been written for serial computation: to solve a problem, an algorithm is constructed and implemented as a serial stream of instructions, and these instructions are executed on the central processing unit of one computer. The maximum possible speedup of a single program as a result of parallelization is given by Amdahl's law.

Karp–Flatt metric

The Karp–Flatt metric is a measure of the parallelization of code in parallel processor systems. It exists in addition to Amdahl's law and Gustafson's law as an indication of the extent to which a particular computer code is parallelized. It was proposed by Alan H. Karp and Horace P. Flatt.

Cesames

UeNSY104.pdf (PDF)
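A minimal sketch of both formulas (function names and sample values are illustrative): Gustafson's scaled speedup S(N) = N - s(N - 1), and the Karp–Flatt experimentally determined serial fraction e = (1/psi - 1/p) / (1 - 1/p), where psi is the measured speedup on p processors:

    def gustafson_speedup(n_processors, serial_fraction):
        # Scaled speedup for a workload that grows with the machine size.
        return n_processors - serial_fraction * (n_processors - 1)

    def karp_flatt(measured_speedup, n_processors):
        # Experimentally determined serial fraction from a measured speedup.
        return ((1.0 / measured_speedup) - (1.0 / n_processors)) / (1.0 - 1.0 / n_processors)

    print(gustafson_speedup(64, 0.05))   # ~60.85
    print(karp_flatt(16.0, 32))          # ~0.032

A Karp–Flatt value that grows with the number of processors signals that parallel overhead, not just the inherently serial part of the code, is limiting the speedup.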