A case study in programming for parallel-processors
By Jack L. Rosenfeld
Communications of the ACM, Vol. 12, No. 12, Pages 645-655
10.1145/363626.363628
An affirmative partial answer is provided to the question of whether it is possible to program parallel-processor computing systems to decrease execution time efficiently for useful problems. Parallel-processor systems are multiprocessor systems in which several of the processors can simultaneously execute separate tasks of a single job, thus cooperating to decrease the solution time of a computational problem. The processors have independent instruction counters, meaning that each processor executes its own task program relatively independently of the other processors. Communication between cooperating processors is by means of data in storage shared by all processors.

A program for the determination of the distribution of current in an electrical network was written for a parallel-processor computing system, and execution of this program was simulated. The data gathered from the simulation runs demonstrate the efficient solution of this problem, which is typical of a large class of important problems. It is shown that, with proper programming, the solution time with NP processors approaches 1/NP times the solution time for a single processor, whereas improper programming can actually lead to an increase of solution time with the number of processors. Storage interference and other measures of performance are discussed. Stability of the method of solution was also investigated.
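The speedup claim can be read as a simple relation, sketched below under the assumption that parallel overhead is dominated by storage interference and task coordination; the symbols T(1), T(NP), c(NP), and S(NP) are illustrative and are not notation taken from the paper.

% Illustrative speedup relation (assumed form; symbols are not from the paper).
% T(1): single-processor solution time; T(N_P): solution time with N_P processors;
% c(N_P): overhead from storage interference and coordination among processors.
\[
  T(N_P) \;\approx\; \frac{T(1)}{N_P} + c(N_P),
  \qquad
  S(N_P) \;=\; \frac{T(1)}{T(N_P)} \;\rightarrow\; N_P \quad \text{as } c(N_P) \to 0 .
\]

Under proper task partitioning the overhead term stays small and the speedup approaches the ideal NP; if c(NP) grows faster than T(1)/NP shrinks, adding processors lengthens the solution time, which is the behavior the abstract attributes to improper programming.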