
The difference between SIMD and MIMD #23

Open
laixintao opened this issue Dec 26, 2017 · 5 comments


@laixintao
Owner

Why does it say that SIMD also has multiple processors?

A SIMD computer consists of n identical processors, each with its own local memory, where it is possible to store data. All processors work under the control of a single instruction stream; in addition to this, there are n data streams, one for each processor. The processors work simultaneously on each step and execute the same instruction, but on different data elements. This is an example of data-level parallelism. The SIMD architectures are much more versatile than MISD architectures. Numerous problems covering a wide range of applications can be solved by parallel algorithms on SIMD computers. Another interesting feature is that the algorithms for these computers are relatively easy to design, analyze, and implement. The limitation is that only problems that can be divided into a number of subproblems (which are all identical, each of which will then be solved contemporaneously, through the same set of instructions) can be addressed with the SIMD computer. Among the supercomputers developed according to this paradigm, we must mention the Connection Machine (Thinking Machines, 1985) and the MPP (NASA, 1983). As we will see in Chapter 6, GPU Programming with Python, the advent of the modern graphics processing unit (GPU), built with many embedded SIMD units, has led to a more widespread use of this computational paradigm.
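The "single instruction stream, multiple data streams" idea above can be seen even on a CPU through array operations. Below is a minimal Python sketch (my own illustration using NumPy, not an example from the book): one vectorized expression applies the same operation to every element, which NumPy typically dispatches to the CPU's SIMD vector units.

```python
import numpy as np

# Data-level parallelism in the SIMD spirit: a single operation
# ("add 1, then square") is applied to every element of the array.
# NumPy typically maps this onto the CPU's vector (SIMD) instructions,
# so several elements are processed per instruction.
data = np.arange(1_000_000, dtype=np.float64)
result = (data + 1.0) ** 2

# The scalar equivalent executes the same logic once per element,
# one data item at a time.
scalar_result = [(x + 1.0) ** 2 for x in data[:10]]
```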

@rtrobin

rtrobin commented Jul 25, 2020

SIMD means single instruction, multiple data streams: the same instruction stream is applied to every data stream in parallel. In terms of concrete hardware, taking NVIDIA CUDA as an example, the multiple data streams run on multiple CUDA SMs, where an SM (streaming multiprocessor) is an independent processor compute unit.
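To make "many data streams under one instruction stream" concrete, here is a small sketch of my own (using Numba's CUDA support rather than the book's examples; it assumes a CUDA-capable GPU and the `numba` package, and strictly speaking NVIDIA calls this execution model SIMT): every GPU thread executes the same kernel body, but each thread indexes a different element.

```python
from numba import cuda
import numpy as np

@cuda.jit
def scale(x, out, factor):
    # Every thread runs this identical instruction stream,
    # but each works on its own element of x.
    i = cuda.grid(1)
    if i < x.size:
        out[i] = x[i] * factor

x = np.arange(1024, dtype=np.float32)
out = np.zeros_like(x)
threads_per_block = 128
blocks = (x.size + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](x, out, 2.0)
print(out[:4])  # [0. 2. 4. 6.]
```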

Thanks for translating this book; it's a great introduction to parallel programming in Python. I know a bit about GPU programming and can help translate part of the last chapter.

@laixintao
Owner Author

Hi, thanks for offering to help.

So does "single instruction, multiple data streams" here mean that a single identical instruction runs on multiple processors, processing multiple pieces of data at the same time?

I've never worked with GPUs, which is why this chapter has been on hold. If you have time to help translate it, I'll review and merge as soon as possible ~ thanks 🥂

@rtrobin

rtrobin commented Jul 25, 2020

> So does "single instruction, multiple data streams" here mean that a single identical instruction runs on multiple processors, processing multiple pieces of data at the same time?

Simply put, yes, that's a reasonable way to understand it.
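Since the issue title asks about SIMD vs. MIMD, here is a rough Python analogy (my own sketch, not a hardware-level demonstration): the SIMD flavour applies one operation to many data elements at once, while the MIMD flavour runs independent workers that may each execute a different instruction stream.

```python
import numpy as np
from multiprocessing import Pool

# SIMD flavour: one instruction stream, many data elements.
a = np.arange(8)
simd_like = a * 2  # the same "multiply by 2" applied to every element

# MIMD flavour: independent processes, each free to run its own code path.
def square(x):
    return x * x

def cube(x):
    return x * x * x

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        r1 = pool.apply_async(square, (3,))
        r2 = pool.apply_async(cube, (3,))
        mimd_like = (r1.get(), r2.get())  # (9, 27)
```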

I haven't done this kind of translation work before, so I'm not sure whether I can keep up with the schedule. I see that only the first section of the last chapter has been translated so far; has anyone claimed the later sections?

@rtrobin

rtrobin commented Jul 25, 2020

Also, I flipped through the original book: it still uses VS2008 and CUDA 4.2, while CUDA is already at version 11 now... I'm not sure whether the Python wrapper for that ancient version is still used the same way as the current one...

@laixintao
Owner Author

> I see that only the first section of the last chapter has been translated so far; has anyone claimed the later sections?

Not at the moment. When you start translating, please open a work-in-progress PR as early as possible (either put "[WIP]" in the title or use GitHub's "Create draft" option); this avoids duplicate claims. If there is no open PR right now, then no translation work is in progress.

> Also, I flipped through the original book: it still uses VS2008 and CUDA 4.2, while CUDA is already at version 11 now... I'm not sure whether the Python wrapper for that ancient version is still used the same way as the current one...

For the translation, I think following the original text is fine; if you can also update the examples and the outdated parts to what is currently popular, that would be even better ~
