The most important issue in parallelizing sequential programs is the efficient assignment of computations to different processing elements. The most time-consuming parts of a program are its nested loops. Many approaches have been devoted to parallelizing nested loops and to assigning the concurrent partitions of such loops to different processors. In the past, methods have focused on linear schedules produced by manipulating the reduced dependence graph, which in some cases achieve near-optimal solutions. This paper presents a new method for the free scheduling of loop computations in time, based on task-graph scheduling techniques. It is shown that this schedule is optimal in terms of time, outperforming all linear schedules. Furthermore, to reduce the total number of processors, the presented method includes a heuristic refinement of the free schedule that 'shuffles' computations in time, without loss of the optimal time performance, so as to increase the mean processor utilization. In all cases, the proposed method uses fewer processors while preserving the optimal total execution time. The 'shuffling' of computations is based on graph-theoretic approaches and uses techniques from the PERT problem. Such scheduling is convenient for parallelizing tools (such as compilers), and is of practical interest for shared-memory multiprocessor systems, where the communication delay imposed by such non-regular scheduling is negligible.
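As background to the notion of a free schedule, the sketch below computes the earliest execution time of each iteration of a uniform-dependence loop nest, i.e. one plus the length of the longest dependence chain reaching that iteration. The loop bounds and dependence vectors are illustrative assumptions, not taken from this paper; this is only a minimal sketch of the standard free-schedule idea, not the paper's full method (which additionally shuffles computations to reduce processor count).

```python
from itertools import product

def free_schedule(bounds, deps):
    """Assign each iteration point the earliest time step at which all of
    its dependence predecessors have completed (longest dependence chain)."""
    time = {}
    # Lexicographic enumeration suffices because uniform dependence vectors
    # are lexicographically positive, so predecessors are visited first.
    for point in product(*(range(b) for b in bounds)):
        preds = []
        for d in deps:
            p = tuple(x - dx for x, dx in zip(point, d))
            if p in time:
                preds.append(time[p])
        time[point] = 1 + max(preds, default=0)
    return time

# Hypothetical example: a 4x4 two-dimensional loop nest with
# dependence vectors (1,0) and (0,1).
sched = free_schedule((4, 4), [(1, 0), (0, 1)])
# Iterations assigned the same time value may execute concurrently.
```

Here the schedule reduces to the familiar wavefront: iteration (i, j) executes at time i + j + 1, and each anti-diagonal of the iteration space forms one parallel step.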