Architecture des systèmes informatiques (Computer Systems Architecture)

Département Informatique Cnam Paris
Foreword

The aim of this course is to give an idea of the programming, algorithmics, and semantics of parallel and distributed systems. To get there, we first take an indirect route that lets us approach the main themes of the field: we use Java's "thread" mechanism to simulate parallel and distributed processes. Later on, we will use other Java facilities, such as RMI, to actually carry out computations in parallel on several machines. Threads as such are a first approach to parallelism, one that can even extend to "true parallelism" on a multiprocessor machine. For a somewhat more complete panorama of parallelism and distribution, one must be able to make several machines (possibly very far apart from one another; think of the Internet) work at the same time.
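As a first taste of the thread-based approach, here is a minimal sketch (the class name and messages are illustrative, not taken from the course material) of two simulated "processes" running concurrently as Java threads:

    // Minimal sketch: two simulated "processes" as Java threads.
    // The class name and output are illustrative only.
    public class TwoProcesses {
        public static void main(String[] args) throws InterruptedException {
            Runnable process = () -> {
                String name = Thread.currentThread().getName();
                for (int i = 0; i < 3; i++) {
                    System.out.println(name + " performs step " + i);
                }
            };
            Thread p1 = new Thread(process, "process-1");
            Thread p2 = new Thread(process, "process-2");
            p1.start();   // both threads now run concurrently
            p2.start();
            p1.join();    // wait for both to finish before exiting
            p2.join();
        }
    }

Running it shows the two name-tagged outputs interleaving nondeterministically, which is exactly the scheduling behaviour used here to simulate parallel processes.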
Hypercube

DSM (Distributed Shared Memory)

Symmetric multiprocessing
[Figure: diagram of a symmetric multiprocessing system.]
Symmetric multiprocessing (SMP) involves a multiprocessor computer hardware and software architecture where two or more identical processors connect to a single, shared main memory, have full access to all I/O devices, and are controlled by a single OS instance that treats all processors equally, reserving none for special purposes. Most multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors. SMP systems are tightly coupled multiprocessor systems with a pool of homogeneous processors running independently, each executing different programs on different data, capable of sharing common resources (memory, I/O devices, interrupt system, and so on), and connected by a system bus or a crossbar.
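In Java, the cores of an SMP machine can all be put to work on the same shared main memory simply by starting one thread per processor; a minimal sketch (class and variable names are illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Sketch: one task per core, all sharing one array in main memory.
    public class SmpSum {
        public static void main(String[] args) throws InterruptedException {
            int cores = Runtime.getRuntime().availableProcessors();
            long[] partial = new long[cores];   // shared memory, one slot per core
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            for (int c = 0; c < cores; c++) {
                final int id = c;
                pool.submit(() -> {             // each core sums its own stride
                    for (long i = id; i < 1_000_000; i += cores) partial[id] += i;
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            long total = 0;
            for (long p : partial) total += p;  // combine the per-core results
            System.out.println("sum = " + total);
        }
    }

Each thread writes only its own slot of the shared array, so no locking is needed; the single OS instance schedules the threads across the identical processors.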
Asymmetric multiprocessing
Multiprocessing is the use of more than one CPU in a computer system. Asymmetric multiprocessing (AMP) was a software stopgap for handling multiple CPUs before symmetric multiprocessing (SMP) was available. It has also been used to provide less expensive options[1] on systems where SMP was available. In an asymmetric multiprocessing system, not all CPUs are treated equally; for example, a system might only allow (either at the hardware or operating system level) one CPU to execute operating system code, or might only allow one CPU to perform I/O operations. Other AMP systems would allow any CPU to execute operating system code and perform I/O operations, so that they were symmetric with regard to processor roles but attached some or all peripherals to particular CPUs, making them asymmetric with regard to peripheral attachment.
Supercomputer
[Figure: the Blue Gene/P supercomputer at Argonne National Laboratory, with over 250,000 processors grouped in 72 racks/cabinets connected by a high-speed optical network, cooled by normal data-center air conditioning.[1]]
A supercomputer is a computer at the frontline of contemporary processing capacity, particularly speed of calculation, with individual operations completing in nanoseconds.
Simultaneous multithreading
Multithreading is similar in concept to preemptive multitasking but is implemented at the thread level of execution in modern superscalar processors. Simultaneous multithreading (SMT) is one of the two main implementations of multithreading, the other form being temporal multithreading. In temporal multithreading, only one thread of instructions can execute in any given pipeline stage at a time.
Superscalar
[Figure: a simple superscalar pipeline. By fetching and dispatching two instructions at a time, a maximum of two instructions per cycle can be completed. IF = Instruction Fetch, ID = Instruction Decode, EX = Execute, MEM = Memory access, WB = Register write back, i = instruction number, t = clock cycle.]
A superscalar CPU architecture implements a form of parallelism called instruction-level parallelism within a single processor. It therefore allows faster CPU throughput than would otherwise be possible at a given clock rate.
Vector processor
Other CPU designs may include some multiple instructions for vector processing on multiple (vectorised) data sets, typically known as MIMD (Multiple Instruction, Multiple Data) and realized with VLIW. Such designs are usually dedicated to a particular application and not commonly marketed for general-purpose computing. In the Fujitsu FR-V VLIW/vector processor, both technologies are combined.
Beowulf (computing)
[Figure: the Borg, a 52-node Beowulf cluster used by the McGill University pulsar group to search for pulsations from binary pulsars.]
A Beowulf cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network, with libraries and programs installed that allow processing to be shared among them. The result is a high-performance parallel computing cluster built from inexpensive personal computer hardware.
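The course's later RMI material targets exactly this kind of networked setup. As a rough sketch (the interface, names, and port are illustrative, not taken from the course), a compute service that one cluster node exports and others call remotely could look like:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Illustrative sketch: a tiny compute service a cluster node could export via RMI.
    interface SquareService extends Remote {
        long square(long x) throws RemoteException;
    }

    public class Node implements SquareService {
        public long square(long x) { return x * x; }

        public static void main(String[] args) throws Exception {
            SquareService stub =
                (SquareService) UnicastRemoteObject.exportObject(new Node(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("square", stub);  // other machines can now look this up
            System.out.println("SquareService ready on port 1099");
        }
    }

A client on another machine would call LocateRegistry.getRegistry(host), look up "square", and invoke the method as if it were local, with RMI shipping arguments and results over the network.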
Massively parallel
In computing, massively parallel refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel. In one approach, e.g. grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available.[1] An example is BOINC, a volunteer-based, opportunistic grid system.[2] In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster.
Amdahl's law
Amdahl's law, also known as Amdahl's argument,[1] is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors: the speedup of a program using multiple processors is limited by the sequential fraction of the program. For example, if 95% of the program can be parallelized, the theoretical maximum speedup is 20x, no matter how many processors are used.
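In formula form (the standard statement, with p the parallelizable fraction and n the number of processors):

\[
  S(n) = \frac{1}{(1-p) + \frac{p}{n}},
  \qquad
  \lim_{n \to \infty} S(n) = \frac{1}{1-p}.
\]

For p = 0.95 this gives a ceiling of 1/0.05 = 20, the 20x bound quoted above.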


Flynn's taxonomy
Flynn's taxonomy is a classification of computer architectures, proposed by Michael J. Flynn in 1966.[1][2] The four classifications defined by Flynn are based upon the number of concurrent instruction (or control) streams and data streams available in the architecture:
Single Instruction, Single Data stream (SISD)
Single Instruction, Multiple Data streams (SIMD)
Multiple Instruction, Single Data stream (MISD)
Multiple Instruction, Multiple Data streams (MIMD)
Speedup
In parallel computing, speedup refers to how much a parallel algorithm is faster than a corresponding sequential algorithm. It is defined by the formula S_p = T_1 / T_p, where p is the number of processors, T_1 is the execution time of the sequential algorithm, and T_p is the execution time of the parallel algorithm on p processors.
Cost efficiency
Cost efficiency (or cost optimality), in the context of parallel computer algorithms, refers to a measure of how effectively parallel computing can be used to solve a particular problem. A parallel algorithm is considered cost efficient if its asymptotic running time multiplied by the number of processing units involved in the computation is comparable to the running time of the best sequential algorithm. For example, an algorithm whose best known sequential running time is O(n log n), and which runs in O(log n) time on a parallel computer with n processing units, is cost efficient, since n x O(log n) = O(n log n).
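Written out as a criterion (standard notation; T_1^* denotes the best sequential running time and p the number of processing units):

\[
  C(p) = p \cdot T_p,
  \qquad
  \text{cost efficient} \iff p \cdot T_p = O(T_1^{*}).
\]

In the example above, C(n) = n \cdot O(log n) = O(n log n), matching the best sequential bound.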
Gustafson's law
Gustafson's law (also known as Gustafson–Barsis's law) is a law in computer science which says that computations involving arbitrarily large data sets can be efficiently parallelized. Gustafson's law provides a counterpoint to Amdahl's law, which describes a limit on the speed-up that parallelization can provide given a fixed data-set size. Gustafson's law was first described[1] by John L. Gustafson and his colleague Edwin H. Barsis.
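The law itself (standard form, with P the number of processors and alpha the non-parallelizable fraction of the work):

\[
  S(P) = P - \alpha \,(P - 1).
\]

For example, with alpha = 0.05 and P = 100 processors, S(100) = 100 - 0.05 x 99 = 95.05: by growing the problem with the machine, the achievable speedup scales nearly linearly instead of saturating at Amdahl's 1/alpha = 20 ceiling.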
Parallel computing
Karp–Flatt metric
Cesames
ueNSY104.pdf (PDF document)