Introduction to Parallel and Distributed Computing (1) Indicators for measuring the quality of parallel programs

Introduction

For general readers: this article and the rest of the series form a detailed introductory tutorial on parallel and distributed computing with OpenMP and MPI.

For students of P University: because the instructor's English in lectures is hard to follow, and my own academic English is weak, I was always on the verge of zoning out in class. So after studying, I have summarized and organized the main content of the lectures (drawn from the slides and the textbook) for future students to use. From this series you will get:

  • All the knowledge points covered by the instructor (with a separate post organizing the exam points)
  • All assignments with detailed explanations
  • The content of a two-hour English lecture, absorbed in half an hour of reading

In particular, since Lecture 1 contains nothing solid except speedup, I won't cover it separately here.

For convenience, the book *MPI and OpenMP Parallel Programming: C Language Version* (by Michael J. Quinn) is referred to below as the Textbook.

In particular, P University students are advised to skip ahead to Chapter 17 to learn OpenMP after finishing Chapter 3 of the Textbook.

Section 1 Indicators for measuring the quality of parallel programs

In a nutshell, the point of parallel and distributed computing is that splitting the work of moving bricks among 10 people gets it done faster than 1 person working alone.

Naturally, we care about how much faster: this is the question of **Speedup**.

  • Amdahl's Law:
    $$Speedup = \frac{one\ thread\ execution\ time}{n\ thread\ execution\ time} = \frac{1}{(1-p) + p/n}$$
    where $p$ is the part of the program that can be parallelized (the *parallel fraction*) and $1-p$ is the part that can only run serially (the *sequential fraction*).

The meaning of this formula is straightforward: the serial fraction limits the speedup no matter how many threads you add, since as $n \to \infty$ the speedup approaches $1/(1-p)$.

In addition, we naturally care about another question: if one person finishes the job in 24 hours, and a thousand people finish it in 23 hours, was hiring them worth it? This is the question of **Parallel Efficiency**.

$$Parallel\ Efficiency = \frac{one\ thread\ execution\ time}{n \times (n\ thread\ execution\ time)} = \frac{1}{n(1-p) + p}$$
In simple terms, this formula says that as a boss you must consider not only how much time a crew of brick movers saves, but also how many people you have to pay to get that saving.

Origin blog.csdn.net/Kaiser_syndrom/article/details/105059391