This paper provides new probabilistic guarantees for recovering the common support of jointly sparse vectors in multiple measurement vector (MMV) models. Recently, Bayesian approaches to sparse signal recovery (such as sparse Bayesian learning and correlation-aware LASSO) have offered preliminary evidence that, under appropriate conditions (such as access to the ideal covariance matrix of the measurements or a certain restrictive orthogonality condition on the signals), it is possible to recover supports of size $K$ larger than the dimension $M$ of each measurement vector. However, no existing results characterize the probability with which this can be achieved for a finite number $L$ of measurement vectors. This paper bridges this gap by formulating the support recovery problem within a multiple hypothesis testing framework. Chernoff-type upper bounds on the probability of error are established, and new sufficient conditions are derived that guarantee its exponential decay with respect to $L$ even when $K=O(M^2)$. Our sufficient conditions are based on the properties of the so-called Khatri–Rao product of the measurement matrix and underscore the importance of the sampler design. Negative results are also established, indicating that when $K$ exceeds a certain threshold (in terms of $M$), there exists a class of measurement matrices for which any support recovery algorithm will fail. Using results from geometric probability, we characterize the probability with which a randomly generated measurement matrix belongs to this class and show that this probability tends to one asymptotically in the size $N$ of the sparse vectors.
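To make the $K=O(M^2)$ scaling concrete, the following sketch recalls the column-wise self Khatri–Rao product; the notation below is an illustrative assumption on our part, not a definition quoted from the paper. Writing the measurement matrix as $\mathbf{A} = [\mathbf{a}_1, \ldots, \mathbf{a}_N] \in \mathbb{R}^{M \times N}$,
\[
\mathbf{A} \odot \mathbf{A}
\;=\;
\begin{bmatrix}
\mathbf{a}_1 \otimes \mathbf{a}_1 & \mathbf{a}_2 \otimes \mathbf{a}_2 & \cdots & \mathbf{a}_N \otimes \mathbf{a}_N
\end{bmatrix}
\;\in\; \mathbb{R}^{M^2 \times N},
\]
where $\otimes$ denotes the Kronecker product. Since $\mathbf{A} \odot \mathbf{A}$ has $M^2$ rows, its columns can remain linearly independent for support sizes on the order of $M^2$, which gives the intuition (under this assumed construction) for why covariance-based methods may distinguish supports of size $K = O(M^2)$ rather than $O(M)$.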