Fusion of Neural Networks, Fuzzy Systems and Genetic Algorithms: Industrial Applications
by Lakhmi C. Jain; N.M. Martin
CRC Press LLC
ISBN: 0849398045   Pub Date: 11/01/98



2.2 Scheduling Problem Definition

Each call/service processing task consists of a number of processes. The question is what should be considered the basic call/service grain: what an elementary task (ET) is, and how simple or complex it should be. It is well known that telecommunication applications are programmed as if they were composed of a large number of small agents.

A solution with higher parallelism can generally be obtained with smaller ETs, but the proper boundary also depends on how well the concurrent programming language supports the creation and destruction of parallel activities, as well as interprocess communication. These programming considerations argue against unconstrained parallelism, i.e., parallel activities at the instruction level. In fact, an individual ET consists of several sequential activities; it describes a sequential behaviour, but it may be executed concurrently with other ETs [4].

This is why the internal structure of call/service tasks has to be defined, both to arrange the parallel system and to analyze it. It is assumed that all ETs have the same duration, i.e., each one can be handled in an elementary interval Δt. A task pattern associated with a processing request ri is defined by the following parameters:

ni the number of ETs,
pi the maximum number of ETs which can be processed in parallel,
mi the number of ET partitions which must be processed in sequence,

and an internal structure

(s1, s2, ..., smi),

where the partition index j is referred to as the height, sj represents the number of ETs in partition j (1 ≤ sj ≤ pi), and Σ sj = ni. In this way, the serial-parallel ordering of the ETs defines a task pattern for a specific request. If mi = ni and pi = 1, the ETs are evaluated in sequence; for mi = 1 and pi = ni, all ETs can be executed simultaneously. All other cases define partial parallelism for a given set of ETs. The total request processing time cannot be decreased below mi × Δt; this bound is determined by the inherent parallelism of the ETs. An example is shown in Figure 4.


Figure 4  Task pattern example.
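To make these definitions concrete, the parameters ni, pi, and mi can all be derived from the list of partition sizes alone. The following is a minimal Python sketch; the TaskPattern name and the encoding are illustrative assumptions, not part of the system described here.

    from dataclasses import dataclass

    @dataclass
    class TaskPattern:
        # Internal structure: partitions[j-1] = s_j, the number of ETs at height j.
        partitions: list

        def n(self):  # n_i: total number of ETs
            return sum(self.partitions)

        def p(self):  # p_i: maximum number of ETs processable in parallel
            return max(self.partitions)

        def m(self):  # m_i: number of partitions processed in sequence
            return len(self.partitions)

    # A hypothetical pattern with 10 ETs in partitions of sizes (2, 4, 3, 1):
    # minimum processing time m * Δt = 4 Δt, peak parallelism p = 4.
    pattern = TaskPattern([2, 4, 3, 1])
    assert (pattern.n(), pattern.p(), pattern.m()) == (10, 4, 4)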

Note that the graphic representation of a task pattern corresponds to a Gantt chart. The task pattern can also be defined as a set of partially ordered ETs and represented by a directed acyclic graph G = (V, E). V is a finite set of vertices representing ETs. The set of edges E, where eij denotes a directed edge from i to j, describes the partial ordering or precedence relation between ETs, denoted by >. If i > j, then i must be completed before j starts. We consider a graph with identical ET times (each equal to 1 Δt). The length of the critical path of the graph, mi, is the minimum finishing time, i.e., the minimum time to complete the set of ETs.
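Since all ET times equal 1 Δt, the critical-path length mi can be computed from G by a longest-path pass in topological order. A minimal sketch follows, assuming a simple edge-list encoding of the graph; the function name and encoding are illustrative.

    from collections import defaultdict, deque

    def critical_path_length(n_ets, edges):
        """Minimum finishing time (in units of Δt) for n_ets unit-time ETs
        under the precedence relation given by directed edges (i, j): i > j."""
        succ = defaultdict(list)
        indeg = [0] * n_ets
        for i, j in edges:
            succ[i].append(j)
            indeg[j] += 1
        # The 'level' of an ET corresponds to its height: the earliest
        # partition it can occupy.
        level = [1] * n_ets
        queue = deque(v for v in range(n_ets) if indeg[v] == 0)
        while queue:
            v = queue.popleft()
            for w in succ[v]:
                level[w] = max(level[w], level[v] + 1)
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        return max(level)  # length of the critical path

    # Hypothetical 5-ET graph: ET0 precedes ET1 and ET2; ET1 and ET2 precede
    # ET3; ET4 is independent. Critical path ET0 -> ET1 -> ET3 gives m = 3.
    assert critical_path_length(5, [(0, 1), (0, 2), (1, 3), (2, 3)]) == 3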

The problem of determining the optimum task pattern, i.e., the task pattern with minimum pi that still achieves the minimum processing time mi, is equivalent to the problem of determining the minimum number of processors required to evaluate the program in the shortest possible time. The problem of determining the task pattern with the shortest finishing time for a given pi corresponds to the classical scheduling problem based on a deterministic model, because the relationships between ETs and the ET execution times are known [5, 6]. The goal is to assign ETs to the processors so that the precedence relations are preserved and the set of ETs is completed in the shortest possible time. The problem is known to be computationally intractable, so a heuristic algorithm must be used.
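As a point of reference, one classical heuristic of this kind is greedy list scheduling: at each Δt, assign any ready ETs (those whose predecessors have all completed) to the available processors. The sketch below uses a deliberately naive priority rule and is an illustrative assumption, not the genetic approach developed in this chapter.

    from collections import defaultdict

    def list_schedule(n_ets, edges, n_procs):
        succ = defaultdict(list)
        pred_count = [0] * n_ets
        for i, j in edges:
            succ[i].append(j)
            pred_count[j] += 1
        done, schedule = set(), []       # schedule: one list of ETs per Δt slot
        while len(done) < n_ets:
            ready = [v for v in range(n_ets)
                     if v not in done and pred_count[v] == 0]
            slot = ready[:n_procs]       # at most one ET per processor per Δt
            for v in slot:
                done.add(v)
                for w in succ[v]:
                    pred_count[w] -= 1
            schedule.append(slot)
        return schedule                  # finishing time = len(schedule) Δt

    # Two processors on the hypothetical graph from the previous sketch:
    assert list_schedule(5, [(0, 1), (0, 2), (1, 3), (2, 3)], 2) == \
        [[0, 4], [1, 2], [3]]            # finishing time 3 Δt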

A graph representing a task pattern with ET1 to ET10, similar to the previous one, is shown in Figure 5 (both patterns are equal with respect to the number of ETs, maximum parallelism, and finishing time, but their precedence relations differ).


Figure 5  Graph representation of a task pattern.

Applying a genetic algorithm to scheduling requires a careful definition of the GA constructs [7], especially the following.

Schedule representation

The representation of a schedule must accommodate the precedence relation between ETs. The schedule is represented by means of several lists of ETs, one for each processor. The order of tasks in a list corresponds to the order of execution. Every task appears exactly once in the schedule. A schedule satisfying all these conditions is called a legal schedule. For the two processors P1 and P2 and the task pattern graph of Figure 5, one can construct legal schedules A and B, each with a finishing time FT of 8 Δt.

Evidently, this schedule representation is more complex than the string representation used in the SGA, as described in Section 2.1. The genetic operators must maintain the precedence relation in each ET list (processor) and ensure a unique appearance of every ET in the schedule.
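A legality check for this representation can be sketched as follows. The dictionary encoding (one ET list per processor) and the use of nondecreasing heights within each list as the precedence test are illustrative assumptions, consistent with the height-based operator rules below.

    def is_legal(schedule, height, all_ets):
        # Every ET must appear exactly once across all processor lists.
        seen = [et for ets in schedule.values() for et in ets]
        if sorted(seen) != sorted(all_ets):
            return False
        # Within each list, heights must never decrease, so no ET is
        # scheduled ahead of a predecessor from an earlier partition.
        return all(height[a] <= height[b]
                   for ets in schedule.values()
                   for a, b in zip(ets, ets[1:]))

    # Hypothetical five-ET pattern with heights 1, 1, 2, 2, 3:
    height = {"ET1": 1, "ET2": 1, "ET3": 2, "ET4": 2, "ET5": 3}
    A = {"P1": ["ET1", "ET3", "ET5"], "P2": ["ET2", "ET4"]}
    assert is_legal(A, height, set(height))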

Crossover

The crossover operator will produce a legal schedule if the crossover sites always lie between ETs with different heights, and if the heights of the tasks in the first position to the right of the crossover sites are equal. As an example, consider a crossover operation between schedules A and B that produces new schedules C and D, with the crossover sites (*) placed between the following pairs of ETs: in processor P1 of schedule A, (ET4, ET7); in P1 of B, (ET4, ET6) (note that ET7 and ET6 have equal heights); in P2 of A, (ET8, ET9); and in P2 of B, (ET8, ET9).

In this example, schedule C has a better fitness value than the parent schedules.
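Under these conditions, the crossover reduces to swapping list tails at the chosen sites. A minimal sketch follows, with a hypothetical six-ET example in place of the schedules A through D above; the function name, encoding, and example are assumptions.

    def crossover(parent_a, parent_b, sites):
        # sites[proc] = (cut_a, cut_b): cut positions in each parent's list
        # for that processor, with equal ET heights just right of both cuts.
        child_c, child_d = {}, {}
        for proc in parent_a:
            cut_a, cut_b = sites[proc]
            child_c[proc] = parent_a[proc][:cut_a] + parent_b[proc][cut_b:]
            child_d[proc] = parent_b[proc][:cut_b] + parent_a[proc][cut_a:]
        return child_c, child_d

    # Hypothetical parents over six ETs (heights: ET1, ET2 = 1; ET3, ET4 = 2;
    # ET5, ET6 = 3), cut before position 1 in every list (right ETs: height 2):
    A = {"P1": ["ET1", "ET3", "ET5"], "P2": ["ET2", "ET4", "ET6"]}
    B = {"P1": ["ET2", "ET3", "ET5"], "P2": ["ET1", "ET4", "ET6"]}
    C, D = crossover(A, B, {"P1": (1, 1), "P2": (1, 1)})
    assert C == {"P1": ["ET1", "ET3", "ET5"], "P2": ["ET2", "ET4", "ET6"]}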

Mutation

The mutation operator will produce a legal schedule if ETs with identical heights are mutated by exchanging their positions in the schedule. For example, applying the mutation operator to ET7 and ET6 of schedule A produces a new schedule E.

In this example, the new schedule E has a better fitness value than the old schedule A.
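In the same spirit, the mutation operator reduces to swapping two ETs of identical height wherever they occur in the schedule. The sketch below and its small example are illustrative assumptions, not the schedules of the text.

    def mutate(schedule, et_x, et_y, height):
        # Only ETs of identical height may exchange positions.
        assert height[et_x] == height[et_y]
        swap = {et_x: et_y, et_y: et_x}
        return {proc: [swap.get(et, et) for et in ets]
                for proc, ets in schedule.items()}

    # Swapping the same-height ETs ET3 and ET4 in a hypothetical schedule:
    height = {"ET1": 1, "ET2": 1, "ET3": 2, "ET4": 2}
    A = {"P1": ["ET1", "ET3"], "P2": ["ET2", "ET4"]}
    E = mutate(A, "ET3", "ET4", height)
    assert E == {"P1": ["ET1", "ET4"], "P2": ["ET2", "ET3"]}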


