Resource theory

Traditional synthesis has developed from single processors to multiprocessors, and the characteristics of the former have been inherited by the latter. Based on the Turing machine, the mechanism behind sequential execution has been very simple. The mechanisms of multiprocessors have been based on different architectures, such as shared memory or message passing. Because of their great complexity, no real theories of resource utilization have been developed; instead, many experiments have been carried out.

There has been work on special models for parallel execution. [Eager 89] has formulated a theory for communicating tasks where the size of the task graph is constant. It has been shown that task allocation is crucial and cannot be fixed.

This report is intended to present:

  • that under certain circumstances executing any program on general, arbitrarily small or large multiprocessor systems is possible. The processors need not be tuned for the programs, nor vice versa.
  • a model in which the latency and the storage volume used can be computed for a general executing program. The model uses three sets of parameters, describing the program, the multiprocessor architecture, and their speeds.
  • that multiprocessors can be optimized using the presented model. It can be shown that in certain cases the granularity of the processors is very important for performance.

In this paper a general theory of execution is presented. The theory is based on the three main execution characteristics: storage, transport and execution. It presumes that such resources are consumed, and it therefore differs from the traditional sequential mechanism.

The theory is general and can be applied to most existing semantics, such as imperative, declarative, and also indeterministic semantics. This makes the theory applicable both to conventional multiprocessors and to new ones.

In this theory there are several allocation (or mapping) functions, specific to different architectures. In this paper very general functions, based on random allocation, are used. More specific algorithms can be introduced later, with the random algorithms serving as references.
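The report does not spell out the random allocation functions at this point; the following is a minimal sketch of what such a reference allocator could look like, assuming tasks are simply mapped to processors uniformly at random (the function name and parameters are illustrative, not taken from the report):

```python
import random

def random_allocation(tasks, num_processors, seed=None):
    """Map each task to a processor chosen uniformly at random.

    Illustrative reference allocator: architecture-specific
    mapping algorithms can later be compared against it.
    """
    rng = random.Random(seed)  # seeded for reproducible experiments
    return {task: rng.randrange(num_processors) for task in tasks}

# Example: allocate eight tasks onto four processors.
mapping = random_allocation(range(8), 4, seed=1)
```

The point of such a function is not efficiency but neutrality: it embodies no knowledge of the program or the architecture, which is what makes it usable as a baseline.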

For imperative semantics the theory is straightforward but can be hard to apply in detail. To solve this, the analysis can be based on coarse grains; the granularity then grows to roughly correspond to the objects of object-oriented semantics. For declarative semantics even the smallest details can be analysed.

Several mathematical expressions, modelling the execution, are derived from the theory. The model yields the execution frequency (corresponding to an execution rate), the load on the processors and links, and the storage volumes. When the implementation of the units is known, architectures can be optimized using these parameters. Such optimization will be presented in a subsequent report.
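The report's actual expressions are derived later; purely as an illustration of the kind of quantities the model produces, one can imagine aggregating a program's demands (work, transport, storage) against an architecture's capacities (processors, links, speeds). All names and the bottleneck formula below are assumptions for this sketch, not the report's model:

```python
def execution_model(work, transport, storage,
                    processors, links,
                    proc_speed, link_speed):
    """Toy sketch: per-unit loads and a bottleneck-limited
    execution frequency from program and architecture parameters.
    All parameter names and formulas are illustrative only."""
    proc_load = work / (processors * proc_speed)    # time spent computing
    link_load = transport / (links * link_speed)    # time spent transporting
    storage_per_proc = storage / processors         # storage volume per processor
    frequency = 1.0 / max(proc_load, link_load)     # rate set by the bottleneck
    return proc_load, link_load, storage_per_proc, frequency

# Example: 100 work units, 50 transport units, 40 storage units
# on 4 processors and 2 links, each of speed 5.
loads = execution_model(100, 50, 40, 4, 2, 5.0, 5.0)
```

Even this crude form shows why granularity matters: shifting the balance between `work` and `transport` moves the bottleneck between processors and links.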

The theory is the basis for the computer concept rp8601.


