The average overhead required to produce a scalar result may be reduced by vectorizing dataflow computations. This reduction can eliminate or mitigate pipeline starvation during periods of low parallelism. A dataflow model based on a vector queueing scheme is presented. The model supports parallelism between different activations of vector nodes, and requires less overhead than conventional dynamic dataflow models.
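The overhead argument can be illustrated with a toy cost model: if token matching and scheduling impose a fixed cost per node firing, then firing once per vector token amortizes that cost over many scalar results. The sketch below is only illustrative; the cost constants, function names, and queue representation are assumptions, not details taken from the model described above.

```python
from collections import deque

# Illustrative sketch only: costs and names are assumptions, not the paper's model.
MATCH_OVERHEAD = 10   # per-firing cost of token matching and scheduling
OP_COST = 1           # cost of one scalar operation

def scalar_firing_cost(n):
    """One dataflow node firing per scalar result: overhead paid n times."""
    tokens = deque((i, 1) for i in range(n))        # n scalar tokens
    total = 0
    while tokens:
        _, length = tokens.popleft()
        total += MATCH_OVERHEAD + OP_COST * length
    return total

def vector_firing_cost(n, vlen):
    """One firing per queued vector token: overhead amortized over vlen results."""
    tokens = deque((s, min(vlen, n - s)) for s in range(0, n, vlen))
    total = 0
    while tokens:
        _, length = tokens.popleft()
        total += MATCH_OVERHEAD + OP_COST * length
    return total

n = 1000
print(scalar_firing_cost(n) / n)       # 11.0 average cost per scalar result
print(vector_firing_cost(n, 64) / n)   # 1.16 with overhead amortized per vector
```

With these assumed costs, vector tokens of length 64 cut the average per-result cost by roughly a factor of ten, which is the effect the vectorized model exploits to keep the pipeline fed when parallelism is low.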