Motion information is regarded as one of the most important cues for developing semantics in video data. Yet it is extremely challenging to build semantics of video clips, particularly when they involve the interactive motion of multiple objects. Most existing research has focused on capturing and modelling the motion of each object individually, thus losing the interaction information. Such approaches yield low precision-recall ratios and limited indexing and retrieval performance. This paper presents a novel framework for compact representation of multi-object motion trajectories. Three efficient multi-trajectory indexing and retrieval algorithms based on multilinear algebraic representations are proposed: (i) geometrical multiple-trajectory indexing and retrieval (GMIR), (ii) unfolded multiple-trajectory indexing and retrieval (UMIR), and (iii) concentrated multiple-trajectory indexing and retrieval (CMIR). The proposed tensor-based representations not only significantly reduce the dimensionality of the indexing space but also enable the realization of fast retrieval systems. The proposed representations and algorithms can be robustly applied to both full and partial (segmented) multiple motion trajectories with varying numbers of objects, trajectory lengths, and sampling rates. The proposed algorithms have been implemented and evaluated on real video datasets. Experimental results demonstrate that the CMIR algorithm provides superior precision-recall metrics and shorter query processing times than the other approaches.
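To make the tensor-based idea concrete, the sketch below shows one way multiple object trajectories from a clip could be stacked into a third-order tensor, unfolded, and reduced to a compact descriptor for retrieval. This is a minimal illustration under assumed conventions (fixed-length resampled trajectories, a mode-2 unfolding, truncated SVD, and Euclidean matching); it is not the paper's GMIR, UMIR, or CMIR implementation, and all function names and parameters are illustrative.

```python
# Minimal sketch (assumption, not the authors' implementation): stack the
# trajectories of several objects into an objects x time x coordinates tensor,
# unfold it along the time mode, and keep a low-rank descriptor for indexing.
import numpy as np

def clip_tensor(trajectories):
    """Stack per-object trajectories (each a T x 2 array of x,y samples,
    assumed resampled to a common length T) into a (O, T, 2) tensor."""
    return np.stack(trajectories, axis=0)

def unfolded_index(tensor, rank=4):
    """Unfold the tensor along the time mode and keep a compact
    fixed-size descriptor via truncated SVD."""
    n_obj, n_time, n_coord = tensor.shape
    unfolded = tensor.transpose(1, 0, 2).reshape(n_time, n_obj * n_coord)
    U, s, _ = np.linalg.svd(unfolded, full_matrices=False)
    return (U[:, :rank] * s[:rank]).ravel()

def retrieve(query_desc, database, k=5):
    """Return the k database clips whose descriptors are closest to the
    query descriptor under Euclidean distance."""
    dists = [(np.linalg.norm(query_desc - desc), name) for name, desc in database]
    return sorted(dists)[:k]

# Toy usage: clips with two objects and 30 trajectory samples each.
rng = np.random.default_rng(0)
make_clip = lambda: clip_tensor(
    [rng.standard_normal((30, 2)).cumsum(axis=0) for _ in range(2)])
database = [("clip%d" % i, unfolded_index(make_clip())) for i in range(10)]
print(retrieve(unfolded_index(make_clip()), database, k=3))
```

In this sketch the descriptor size depends only on the trajectory length and the chosen rank, which is the sense in which an unfolding-based representation can shrink the indexing space while preserving inter-object structure in a single joint tensor.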