parallel computing vs. distributed computing

I think the explanation below, which someone gave on the following page, is worth sharing:

http://arstechnica.com/civis/viewtopic.php?f=18&t=185623

Distributed computing has to be far less bandwidth-intensive, because the links between nodes are slow compared with the memory bus inside a single machine. The tasks therefore have to be less co-dependent, with little cross-node communication, if any.
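
To make that co-dependence point concrete, here is a minimal Python sketch (the function names and data shapes are made up for illustration) contrasting a task that distributes well with one that does not:

```python
# Distributes well: each call needs only its own slice of the input and
# returns a single number, so cross-node traffic stays tiny.
def independent_chunk(values):
    return sum(v * v for v in values)

# Distributes poorly: every updated cell needs its neighbours' current
# values, so each iteration would force another round of cross-node
# communication.
def stencil_step(grid):
    return [(grid[i - 1] + grid[i] + grid[i + 1]) / 3.0
            for i in range(1, len(grid) - 1)]
```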

Parallel processing is faster and has much higher bandwidth between processors, but it is harder to scale: you generally max out at 32 sockets in a single server, and only 2-4 socket servers are really affordable. At 32 sockets, a quad-core arrangement tops out at 128 cores.
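
On a single multi-socket box all the cores sit in one machine, so fanning work out to them is cheap. A rough sketch using Python's standard multiprocessing module (the problem size is a placeholder, and the core count is whatever the machine reports):

```python
import os
from multiprocessing import Pool

def partial_sum(bounds):
    # Each worker computes its own range; nothing is shared between workers.
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    cores = os.cpu_count() or 4       # e.g. 128 on a 32-socket quad-core server
    step = -(-n // cores)             # ceiling division so the chunks cover all of n
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with Pool(processes=cores) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == n * (n - 1) // 2)  # sanity check: sums 0..n-1
```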

By contrast, it's much cheaper to buy 32 quad-core single-socket computers (or 128 single-core computers) and connect them via GigE or whatever. A 128-core distributed setup is also harder to break than a single massive server: losing one node costs you a fraction of the capacity rather than the whole machine.
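
A distributed version of the same idea can be as simple as a shared task queue that workers on other machines pull from over the network. Here is a sketch with Python's multiprocessing.managers; the address 192.168.1.10:50000 and the authkey are placeholders, and in practice the two halves would live in separate scripts, one per role:

```python
from multiprocessing.managers import BaseManager
from queue import Queue, Empty

class QueueManager(BaseManager):
    pass

def run_head_node():
    # Run this on the machine that owns the work.
    tasks, results = Queue(), Queue()
    QueueManager.register("get_tasks", callable=lambda: tasks)
    QueueManager.register("get_results", callable=lambda: results)
    for lo in range(0, 1_000_000, 100_000):
        tasks.put((lo, lo + 100_000))
    manager = QueueManager(address=("", 50000), authkey=b"demo")
    manager.get_server().serve_forever()

def run_worker():
    # Run this on each cheap or borrowed box, pointed at the head node's IP.
    QueueManager.register("get_tasks")
    QueueManager.register("get_results")
    manager = QueueManager(address=("192.168.1.10", 50000), authkey=b"demo")
    manager.connect()
    tasks, results = manager.get_tasks(), manager.get_results()
    while True:
        try:
            lo, hi = tasks.get(timeout=5)   # small messages in...
        except Empty:
            break
        results.put(sum(range(lo, hi)))     # ...and a single number back out
```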

Finally, distributed computing often lets you get people to lend you computing time. If someone offers you a Core 2 Duo core for free for 6-8 hours, you can jump on it.

Reposted from standalone.iteye.com/blog/1575649