Managing Parallelism in Parallel Systems

Authors

  • Y. M. Teo

Abstract

A fundamental problem of parallel computing is that applications often require large instances of an algorithm, while parallel systems are generally hardwired architectures that cannot easily be reconfigured to match program size or structure. The unfolding of parallelism in an application can result in very large temporary storage requirements during periods in which results are produced faster than instructions are executed. This is a major concern in the design of parallel systems. Strategies for managing parallelism have the basic aim of generating enough parallelism to utilize the machine fully, while at the same time keeping resource demands low enough to allow programs to execute without exhausting storage resources. We first examine the parallelism management approaches adopted in a number of parallel computers, such as the NEC µPD7281 Dataflow Chip, the FLAGSHIP computer, the NYU Ultracomputer, the Zero Assignment Parallel Processor, the Rediflow Multiprocessor, the ETL SIGMA-1 Dataflow Machine and the MIT Tagged-Token Dataflow Machine. We then discuss the design of the throttling mechanism for managing parallelism in the Multi-Ring Manchester Dataflow Machine and analyze the simulation results obtained. We conclude by refuting the criticism that parallel systems require excessive amounts of memory for executing programs.
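
The abstract's central idea is throttling: bounding how much parallelism is allowed to unfold at once so that temporary storage for pending results stays limited. The paper's actual mechanism for the Multi-Ring Manchester Dataflow Machine is described in the full text; the sketch below is only a generic illustration of that idea under assumed names (maxActive as the throttle limit, a counting semaphore to cap outstanding tasks), not the author's design.

```go
// Minimal sketch (not the paper's mechanism): a counting semaphore throttles
// the unfolding of parallelism so that at most maxActive tasks are
// outstanding, bounding the temporary storage held by pending results.
package main

import (
	"fmt"
	"sync"
)

func main() {
	const maxActive = 4                        // assumed throttle limit, for illustration only
	throttle := make(chan struct{}, maxActive) // counting semaphore
	var wg sync.WaitGroup

	results := make([]int, 16)
	for i := range results {
		throttle <- struct{}{} // blocks once maxActive tasks are already active
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			defer func() { <-throttle }() // release a slot when the task completes
			results[i] = i * i            // stand-in for a unit of parallel work
		}(i)
	}
	wg.Wait()
	fmt.Println(results)
}
```

Raising maxActive exposes more parallelism (better machine utilization) at the cost of more simultaneously live intermediate results; lowering it does the reverse, which is the trade-off the abstract describes.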

Published

2012-01-26

How to Cite

Teo, Y. M. (2012). Managing Parallelism in Parallel Systems. Computing and Informatics, 14(1), 57–91. Retrieved from https://www.cai.sk/ojs/index.php/cai/article/view/221