Reviews on Parallel Programming (并行计算)

Exam scope and question types for the English-taught class (Chapters 1-10):
1. Explanation of basic concepts; translation (Chinese).
2. Questions and answers.
3. Graphical description of algorithms.
4. Programming (algorithms).

1. Basic concepts to explain / translate:
SMP, MPP, cluster of workstations, parallelism, pipelining, network topology, diameter of a network, bisection width, data decomposition, task dependency graph, granularity, concurrency, process, processor, linear array, mesh, hypercube, reduction, prefix-sum, gather, scatter, threads, mutual exclusion, shared address space, synchronization, the degree of concurrency, dual of a communication operation.

2. Questions and answers
Chapter 1
1) Why do we need parallel computing?
Answer:
2) What are the main differences between parallel computing and sequential computing in algorithm design?
Answer:

Chapter 2
1) What do SIMD, SPMD and MIMD denote?
Answer:
2) Draw a typical SIMD architecture and a typical MIMD architecture, and explain them.
Answer:
3) What are the two typical communication models of parallel platforms? Give a short introduction to the message-passing and the shared-address-space models.
Answer:
4) In the ideal parallel random access machine (PRAM) model, what do EREW, CREW and CRCW mean?
Answer:

Chapter 3
1) Be able to explain at least two of the basic decomposition techniques, i.e., recursive decomposition, data decomposition, exploratory decomposition and speculative decomposition.
(1) Recursive decomposition, e.g. quicksort.
(2) Data decomposition, e.g. matrix multiplication and matrix-vector multiplication, partitioning the data by rows or by grid blocks.
(3) Exploratory decomposition, e.g. state-space search problems in artificial intelligence, such as the 15-puzzle.
(4) Speculative decomposition: exploit the fact that processors are idle much of the time by computing later tasks ahead of schedule, which in many cases speeds up the program; for example, evaluating the branches of a case or if statement concurrently before the condition is known.
Answer:
2) When the load balance of tasks scheduled by data decomposition becomes bad, which methods can improve it? (Block-cyclic distribution, randomized block distribution and graph partitioning.)
For sparse matrices, or computations whose operation density varies over the same data set, the scheduling-balance problem can be addressed by:
(1) block-cyclic distribution (cycle the assignment of blocks of the matrix over the processes);
(2) randomized block distribution (apply a random mapping to the matrix indices);
(3) graph partitioning.
Answer:

Chapter 4
1) Be familiar with the basic communication operations as well as their implementations on the typical models: hypercube, linear array and mesh (graphical description).
one-to-all broadcast; all-to-one reduction
all-to-all broadcast; all-to-all reduction
scatter, gather, all-reduce, prefix sum
all-to-all personalized communication; circular shift
(Personal note: the first four operations are the most important; there is no need to memorize the algorithm implementations, but you should know concretely how each operation works.)
Answer:
2) Be able to use the basic communication operations to design parallel algorithms, e.g. matrix-vector multiplication, shortest-path trees and minimum spanning trees.
Answer:

Chapter 5
1) The formulae for the speedup, efficiency and cost of a parallel algorithm; be able to give a short explanation.
(1) Know the performance measures of a parallel algorithm and the corresponding formulae.
(2) Speedup: S = Ts/Tp, where Ts and Tp are the running times of the serial and the parallel algorithm, respectively.
(3) Efficiency: E = S/p = Ts/(Tp*p).
(4) Cost: cost = p * Tp.
Answer:
2) The proof of Amdahl's law: if a problem of size W has a serial component Ws, prove that W/Ws is an upper bound on its speedup, no matter how many processing elements are used. (Notation: W = Ws + Wp is the total work of an algorithm A, where Ws is the work that must be computed serially and Wp the work that can be computed in parallel.)
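One way to write the requested proof, assuming unit-speed processing elements so that Ts = W:

```latex
T_p \;\ge\; W_s + \frac{W_p}{p} \;=\; W_s + \frac{W - W_s}{p}
\qquad\Longrightarrow\qquad
S \;=\; \frac{T_s}{T_p} \;=\; \frac{W}{T_p}
\;\le\; \frac{W}{W_s + (W - W_s)/p}
\;\le\; \frac{W}{W_s}.
```

The last inequality holds because (W - Ws)/p >= 0 regardless of p, and the bound W/Ws is approached as p grows to infinity.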
Answer:

Chapter 6
1) Be able to describe the program architecture of an MPI program.
#include <mpi.h>

int main(int argc, char *argv[])
{
    int npes, myrank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    /* parallel program code */
    /* closing communication */
    MPI_Finalize();
    return 0;
}

2) Fundamental MPI functions:
a) MPI_Init      establishes the parallel computing environment
b) MPI_Finalize  closes all parallel tasks
c) MPI_Comm_size gets the number of available processes
d) MPI_Comm_rank gets the id of the calling process
e) MPI_Send      message-sending function
f) MPI_Recv      message-receiving function

3) Blocking and non-blocking message passing in MPI.
a) Blocking: the send returns only after the matching MPI_Recv has been issued and the message has been completely delivered to the receiver.
b) Non-blocking (with a buffer): initiate the copy of the message toward a buffer at the destination and return immediately, without waiting for the data transfer to finish.
Be able to explain the meaning of blocking and non-blocking message passing and to give the corresponding MPI calls: the blocking pair is MPI_Send/MPI_Recv, while the non-blocking counterparts are MPI_Isend/MPI_Irecv, whose completion is checked later with MPI_Wait or MPI_Test.