Randomized sampling has recently been shown to be a highly efficient technique for computing approximate factorizations of matrices that have low numerical rank. This paper describes an extension of such techniques to a wider class of matrices that are not themselves rank-deficient but have off-diagonal blocks that are; specifically, the class of so-called hierarchically semiseparable (HSS) matrices. HSS matrices arise frequently in numerical analysis and signal processing, particularly in the construction of fast methods for solving differential and integral equations numerically. The HSS structure allows algebraic operations (matrix-vector multiplications, matrix factorizations, matrix inversion, etc.) to be performed very rapidly, but only once the HSS representation of the matrix has been constructed. How to rapidly compute this representation in the first place is much less well understood. The present paper demonstrates that if an $N\times N$ matrix can be applied to a vector in $O(N)$ time, and if individual entries of the matrix can be computed rapidly, then provided that an HSS representation of the matrix exists, it can be constructed in $O(N\,k^{2})$ operations, where $k$ is an upper bound for the numerical rank of the off-diagonal blocks. The point is that when legacy codes (based on, e.g., the fast multipole method) can be used for the fast matrix-vector multiply, the proposed algorithm can be used to obtain the HSS representation of the matrix, and then well-established techniques for HSS matrices can be used to invert or factor the matrix.
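To fix ideas, the following minimal Python/NumPy sketch illustrates the basic randomized range finder that underlies this kind of sampling-based compression of a numerically rank-deficient block: a handful of matrix-vector products against a random test matrix are enough to capture the range of the block. It is an illustration of the sampling primitive only, not the HSS construction algorithm of the paper; the function name `randomized_range_finder`, the oversampling parameter `p`, and the toy low-rank matrix are assumptions introduced here for the example, and in the actual HSS setting the samples would be processed hierarchically (e.g., via interpolative decompositions combined with entry evaluation) rather than by the dense step shown at the end.

```python
import numpy as np

def randomized_range_finder(apply_A, n, k, p=10, rng=None):
    """Basic randomized range finder (illustrative sketch).

    apply_A : callable returning A @ W for an n x (k+p) block of test vectors W
    n       : number of columns of A
    k       : target numerical rank
    p       : small oversampling parameter (name/value assumed for this example)
    Returns an orthonormal matrix Q whose columns approximately span the range of A.
    """
    rng = np.random.default_rng() if rng is None else rng
    Omega = rng.standard_normal((n, k + p))   # Gaussian random test matrix
    Y = apply_A(Omega)                        # k+p applications of A to vectors
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for the sampled range
    return Q

if __name__ == "__main__":
    # Toy usage: compress a block whose numerical rank is far below its dimensions,
    # as is the case for the off-diagonal blocks of an HSS matrix.
    rng = np.random.default_rng(0)
    m, n, true_rank = 500, 400, 15
    A = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))

    Q = randomized_range_finder(lambda W: A @ W, n, k=true_rank, rng=rng)
    B = Q.T @ A   # formed densely here for illustration only; not how the HSS algorithm proceeds
    print("relative approximation error:",
          np.linalg.norm(A - Q @ B) / np.linalg.norm(A))
```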