Architecture des systèmes informatiques
The goal of this course is to give an idea of the programming, algorithmics, and semantics of parallel and distributed systems. To get there, we take an indirect route at first, which will let us cover the main themes of the field: we use Java's "threads" mechanism to simulate parallel and distributed processes. Later, we will use other Java features, such as RMI, to actually carry out computations in parallel on several machines. Threads as such are a first approach to parallelism, one that can even reach "true parallelism" on a multiprocessor machine. For a somewhat more complete picture of parallelism and distribution, one must be able to make several machines (possibly very far apart from one another; think of the Internet) work at the same time.
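As a minimal sketch of this thread-based approach, the Java program below starts two threads that each print a few interleaved messages; the class name, thread names, and loop bounds are invented for the example.

public class HelloThreads {
    public static void main(String[] args) throws InterruptedException {
        // Each Runnable plays the role of an independent "process".
        Runnable worker = () -> {
            String name = Thread.currentThread().getName();
            for (int i = 0; i < 3; i++) {
                System.out.println(name + ": step " + i);
            }
        };
        Thread t1 = new Thread(worker, "worker-1");
        Thread t2 = new Thread(worker, "worker-2");
        t1.start();   // from here on the two threads run concurrently
        t2.start();
        t1.join();    // wait for both to terminate
        t2.join();
    }
}

Running it several times typically shows different interleavings of the two threads' output, which is exactly the nondeterminism that simulating parallel processes with threads exposes.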
Symmetric multiprocessing (SMP) is a multiprocessor computer hardware architecture in which two or more identical processors are connected to a single shared main memory and are controlled by a single OS instance. Most common multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors. SMP systems are tightly coupled multiprocessor systems with a pool of homogeneous processors running independently, each executing different programs and working on different data, with the capability of sharing common resources (memory, I/O devices, the interrupt system, and so on), connected by a system bus or a crossbar.
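From software, an SMP machine mostly shows up as a processor count: the JVM reports how many logical processors the OS exposes, and a program can size a thread pool to match. The sketch below uses only the standard Runtime and java.util.concurrent APIs; the task bodies are placeholders.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SmpPool {
    public static void main(String[] args) {
        // Logical processors the OS presents: cores, or hardware
        // threads if SMT is enabled.
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors: " + cpus);

        // One worker thread per processor; on an SMP machine the OS
        // scheduler can place each one on a different identical CPU.
        ExecutorService pool = Executors.newFixedThreadPool(cpus);
        for (int i = 0; i < cpus; i++) {
            final int id = i;
            pool.submit(() ->
                System.out.println("task " + id + " on "
                        + Thread.currentThread().getName()));
        }
        pool.shutdown();   // let submitted tasks finish, then exit
    }
}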
Asymmetric multiprocessing (AMP) was a software stopgap for handling multiple CPUs before symmetric multiprocessing (SMP) was available. It has also been used to provide less expensive options [1] on systems where SMP was available. Multiprocessing is the use of more than one CPU in a computer system. The CPU is the arithmetic and logic engine that executes user applications; an I/O interface such as a GPU, even if it is implemented using an embedded processor, does not constitute a CPU because it does not run the user's application program. With multiple CPUs, more than one set of program instructions can be executed at the same time.
A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. (The Blue Gene/P supercomputer at Argonne National Lab, for example, runs over 250,000 processors using normal data-center air conditioning, grouped in 72 racks/cabinets connected by a high-speed optical network. [1]) The term "Super Computing" was first used in the New York World in 1929 to refer to large custom-built tabulators that IBM had made for Columbia University. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), and later at Cray Research.
Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar CPUs with hardware multithreading. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures. Multithreading is similar in concept to preemptive multitasking but is implemented at the thread level of execution in modern superscalar processors.
A superscalar CPU architecture implements a form of parallelism called instruction-level parallelism within a single processor. It therefore allows faster CPU throughput than would otherwise be possible at a given clock rate. In a simple superscalar pipeline, fetching and dispatching two instructions at a time allows a maximum of two instructions per cycle to be completed, each instruction passing through the stages IF (instruction fetch), ID (instruction decode), EX (execute), MEM (memory access), and WB (register write-back).
A vector processor, or array processor, is a central processing unit (CPU) that implements an instruction set containing instructions that operate on one-dimensional arrays of data called vectors. This is in contrast to a scalar processor, whose instructions operate on single data items. Vector processors can greatly improve performance on certain workloads, notably numerical simulation and similar tasks. Vector machines appeared in the early 1970s and dominated supercomputer design from the 1970s into the 1990s, notably in the various Cray platforms. The rapid improvement in the price-to-performance of conventional microprocessor designs led to the vector supercomputer's demise in the later 1990s.
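The kind of code a vector processor accelerates is an elementwise array loop. The plain-Java sketch below computes the classic "axpy" operation; on a vector machine the loop would be executed as a few vector load/multiply-add/store instructions over whole chunks of the arrays rather than one scalar instruction per element (names and data are invented for the example).

public class AxpyExample {
    // y[i] = a * x[i] + y[i]  ("axpy"): a textbook vectorizable loop,
    // since every iteration is independent of the others.
    static void axpy(double a, double[] x, double[] y) {
        for (int i = 0; i < x.length; i++) {
            y[i] = a * x[i] + y[i];
        }
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4};
        double[] y = {10, 20, 30, 40};
        axpy(2.0, x, y);
        // Prints [12.0, 24.0, 36.0, 48.0]
        System.out.println(java.util.Arrays.toString(y));
    }
}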
A Beowulf cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network, with libraries and programs installed which allow processing to be shared among them. The result is a high-performance parallel computing cluster built from inexpensive personal computer hardware. (One example is the Borg, a 52-node Beowulf cluster used by the McGill University pulsar group to search for pulsations from binary pulsars.)
In computing, massively parallel refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel. In one approach, e.g. grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is used opportunistically whenever a computer is available. [1] An example is BOINC, a volunteer-based, opportunistic grid system. [2] In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster.
The speedup of a program using multiple processors in parallel computing is limited by the sequential fraction of the program: for example, if 95% of the program can be parallelized, the theoretical maximum speedup using parallel computing is 20×, no matter how many processors are used. Amdahl's law, also known as Amdahl's argument, [1] is named after computer architect Gene Amdahl, and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.
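In symbols, with $P$ the fraction of the program that can be parallelized and $N$ the number of processors, the law's standard statement is

$$S(N) = \frac{1}{(1 - P) + \dfrac{P}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - P}.$$

With $P = 0.95$ the limit is $1 / 0.05 = 20$, which is where the 20× ceiling above comes from.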
Flynn's taxonomy is a classification of computer architectures, proposed by Michael J. Flynn in 1966. [1] [2] The four classifications defined by Flynn are based upon the number of concurrent instruction (or control) and data streams available in the architecture: Single Instruction, Single Data stream (SISD); Single Instruction, Multiple Data streams (SIMD); Multiple Instruction, Single Data stream (MISD); and Multiple Instruction, Multiple Data streams (MIMD).
In parallel computing, speedup refers to how much a parallel algorithm is faster than a corresponding sequential algorithm. It is defined by the formula $S_p = \frac{T_1}{T_p}$, where $p$ is the number of processors, $T_1$ is the execution time of the sequential algorithm, and $T_p$ is the execution time of the parallel algorithm with $p$ processors.
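As a small worked example: a program that takes $T_1 = 80$ seconds sequentially and $T_4 = 20$ seconds on four processors achieves a speedup of $S_4 = 80 / 20 = 4$; the corresponding efficiency $E_p = S_p / p$ is $4 / 4 = 1$, the (rarely attained) ideal of linear speedup.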
Cost efficiency (or cost optimality), in the context of parallel computer algorithms, refers to a measure of how effectively parallel computing can be used to solve a particular problem. A parallel algorithm is considered cost efficient if its asymptotic running time multiplied by the number of processing units involved in the computation is comparable to the running time of the best sequential algorithm. For example, if a problem can be solved in $O(n)$ time by the best known sequential algorithm, a parallel algorithm for it that runs in $O(\log n)$ time on $n / \log n$ processors is cost efficient (total cost $O(n)$), while one that runs in $O(\log n)$ time on $n$ processors is not (total cost $O(n \log n)$).
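Written out, with $T_p$ the parallel running time on $p$ processors and $T^*$ the running time of the best sequential algorithm, the cost of the parallel algorithm is $C(p) = p \cdot T_p$, and the algorithm is cost efficient precisely when $C(p) = O(T^*)$.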
Gustafson's Law (also known as Gustafson-Barsis' law) is a law in computer science which says that computations involving arbitrarily large data sets can be efficiently parallelized. Gustafson's Law provides a counterpoint to Amdahl's law, which describes a limit on the speed-up that parallelization can provide, given a fixed data set size. Gustafson's law was first described [1] by John L. Gustafson and his colleague Edwin H. Barsis.
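In its usual form, with $P$ processors and $\alpha$ the non-parallelizable (serial) fraction of the scaled workload, the law reads

$$S(P) = P - \alpha \, (P - 1).$$

For fixed $\alpha$ the speedup grows linearly with $P$: unlike in Amdahl's setting, the problem size is assumed to grow with the machine, so the serial part does not come to dominate.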
Parallel computing is a form of computation in which many calculations are carried out simultaneously, [1] operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. [2] As power consumption (and consequently heat generation) by computers has become a concern in recent years, [3] parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multicore processors. [4]