The article considers linear algebra libraries such as BLAS, LAPACK, ScaLAPACK, MKL, and ATLAS, which support high-performance computing (HPC) on modern architectures and are used both in well-known benchmarks and in various applications. In the majority of applications, the most time-consuming stages of a computation are implemented by calling subroutines from such libraries; the optimal choice of a library is therefore an important issue in setting up computations. The main aim of this review is to describe the invariant characteristics of the libraries that allow applications to achieve high performance. High-performance computations used in different fields of knowledge are briefly reviewed, and a classification of linear algebra libraries in terms of their functionality and target HPC architectures is suggested.

The basic low-level BLAS library, implemented for all HPC architectures, is described. On shared-memory systems, BLAS supports dividing the computation into several parallel threads; for such systems, tools such as OpenMP or OpenACC are used. On distributed-memory systems, the parallel version of this library, called PBLAS, is used, which exchanges messages between nodes via the MPI standard.

Higher-level libraries built on top of BLAS are then described, e.g., LAPACK, which contains a large set of routines for dense linear algebra. The ScaLAPACK library for the distributed-memory model, which is built on the LAPACK and PBLAS libraries, is presented, as well as the Intel MKL library, which continues this line of development. To support efficient operation of hybrid systems, the fundamentally new MAGMA and PLASMA libraries, which include features for optimizing linear algebra operations on matrices of small dimension, are analyzed.

Libraries supporting the solution of eigenvalue problems, such as EISPACK, PeigS, and a number of others, are examined. It is pointed out that the new ELPA library, oriented toward supercomputers, can use both OpenMP and MPI. It is also noted that operations on sparse matrices, especially matrix multiplication, are highly relevant to many applied fields of science; in this respect, the SparseBLAS library can be considered the basic standard for such operations. It is concluded that the optimal choice of a library depends essentially both on the particular application and on the computing architecture used. To make the discussion concrete, short illustrative code sketches for several of these libraries are given below.
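As a concrete illustration of the low-level BLAS interface discussed above, the following minimal sketch calls the Level-3 routine dgemm through the portable CBLAS interface. The header name cblas.h and the thread-control mechanism (e.g., the OMP_NUM_THREADS environment variable in OpenMP-threaded builds such as OpenBLAS or MKL) are implementation details and are assumptions here, not part of the BLAS standard itself.

```c
/* Minimal sketch of a Level-3 BLAS call: C = alpha*A*B + beta*C.
   Assumes a CBLAS-providing implementation (OpenBLAS, MKL, ATLAS, ...);
   link, e.g., with -lopenblas or the vendor's equivalent. */
#include <stdio.h>
#include <cblas.h>

int main(void) {
    const int n = 2;                     /* square matrices for brevity */
    double A[] = {1.0, 2.0, 3.0, 4.0};   /* row-major 2x2 */
    double B[] = {5.0, 6.0, 7.0, 8.0};
    double C[] = {0.0, 0.0, 0.0, 0.0};

    /* dgemm: double-precision general matrix-matrix multiply.
       In threaded builds the library splits this single call across
       threads, typically controlled via OMP_NUM_THREADS. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);

    for (int i = 0; i < n; i++)
        printf("%6.1f %6.1f\n", C[2 * i], C[2 * i + 1]);
    return 0;
}
```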
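One level up, LAPACK provides driver routines such as dgesv, which solves a dense linear system by LU factorization with partial pivoting. The sketch below uses the standard LAPACKE C interface (header lapacke.h); linking details again depend on the implementation.

```c
/* Sketch: solve A x = b with LAPACK's dgesv via the LAPACKE C interface. */
#include <stdio.h>
#include <lapacke.h>

int main(void) {
    double A[] = {4.0, 1.0,     /* row-major 2x2 coefficient matrix */
                  1.0, 3.0};
    double b[] = {1.0, 2.0};    /* right-hand side, overwritten by x */
    lapack_int ipiv[2];         /* pivot indices from the LU factorization */

    /* dgesv = LU factorization with partial pivoting + triangular solves */
    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 2, 1, A, 2, ipiv, b, 1);
    if (info != 0) {
        fprintf(stderr, "dgesv failed: %d\n", (int)info);
        return 1;
    }

    printf("x = (%g, %g)\n", b[0], b[1]);
    return 0;
}
```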
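For the distributed-memory model, ScaLAPACK routines operate on block-cyclically distributed matrices described by BLACS descriptors, with PBLAS and MPI underneath. The following sketch outlines the typical call sequence for pdgesv on a 2x2 process grid (run with exactly 4 MPI ranks). The extern C prototypes for the Fortran routines (trailing underscore, pass-by-pointer) and the Cblacs_* helpers follow a widespread convention but vary between builds, so treat them as assumptions.

```c
/* Sketch: distributed solve A x = b with ScaLAPACK's pdgesv_ on 4 MPI ranks. */
#include <mpi.h>
#include <stdlib.h>
#include <stdio.h>

extern void Cblacs_get(int ctxt, int what, int *val);
extern void Cblacs_gridinit(int *ctxt, const char *order, int nprow, int npcol);
extern void Cblacs_gridinfo(int ctxt, int *nprow, int *npcol, int *myrow, int *mycol);
extern void Cblacs_gridexit(int ctxt);
extern int  numroc_(const int *n, const int *nb, const int *iproc,
                    const int *isrc, const int *nprocs);
extern void descinit_(int *desc, const int *m, const int *n, const int *mb,
                      const int *nb, const int *irsrc, const int *icsrc,
                      const int *ctxt, const int *lld, int *info);
extern void pdgesv_(const int *n, const int *nrhs, double *a, const int *ia,
                    const int *ja, const int *desca, int *ipiv, double *b,
                    const int *ib, const int *jb, const int *descb, int *info);

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int n = 8, nb = 2, nrhs = 1, izero = 0, ione = 1, info;
    int ctxt, nprow = 2, npcol = 2, myrow, mycol;

    /* Create a 2x2 BLACS process grid on top of MPI. */
    Cblacs_get(-1, 0, &ctxt);
    Cblacs_gridinit(&ctxt, "Row-major", nprow, npcol);
    Cblacs_gridinfo(ctxt, &nprow, &npcol, &myrow, &mycol);

    /* Size of this rank's local piece of the block-cyclic matrix. */
    int mloc  = numroc_(&n, &nb, &myrow, &izero, &nprow);
    int nloc  = numroc_(&n, &nb, &mycol, &izero, &npcol);
    int nbloc = numroc_(&nrhs, &nb, &mycol, &izero, &npcol);
    int lld   = mloc > 1 ? mloc : 1;

    int desca[9], descb[9];
    descinit_(desca, &n, &n, &nb, &nb, &izero, &izero, &ctxt, &lld, &info);
    descinit_(descb, &n, &nrhs, &nb, &nb, &izero, &izero, &ctxt, &lld, &info);

    double *a = malloc((size_t)mloc * nloc * sizeof *a);
    double *b = malloc((size_t)mloc * (nbloc > 0 ? nbloc : 1) * sizeof *b);
    int *ipiv = malloc((size_t)(mloc + nb) * sizeof *ipiv);

    /* Fill the LOCAL blocks of a diagonally dominant global matrix, using
       the standard local-to-global index mapping for a block-cyclic layout;
       local storage is column-major with leading dimension mloc. */
    for (int j = 0; j < nloc; j++) {
        int gj = (j / nb) * npcol * nb + mycol * nb + j % nb;
        for (int i = 0; i < mloc; i++) {
            int gi = (i / nb) * nprow * nb + myrow * nb + i % nb;
            a[i + (size_t)j * mloc] = (gi == gj) ? 2.0 * n : 1.0;
        }
    }
    for (int i = 0; i < mloc * (nbloc > 0 ? nbloc : 1); i++) b[i] = 1.0;

    /* Distributed LU factorization and solve; messages flow via MPI. */
    pdgesv_(&n, &nrhs, a, &ione, &ione, desca, ipiv,
            b, &ione, &ione, descb, &info);
    if (myrow == 0 && mycol == 0)
        printf("pdgesv info = %d\n", info);

    free(a); free(b); free(ipiv);
    Cblacs_gridexit(ctxt);
    MPI_Finalize();
    return 0;
}
```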
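MAGMA and PLASMA target, among other things, workloads consisting of very many small matrix operations, a regime that a single large BLAS call handles poorly. Their actual APIs (e.g., MAGMA's batched GPU kernels or PLASMA's tile algorithms) are not shown here; the sketch below only illustrates the batched workload itself, using an OpenMP loop over independent small dgemm calls as a CPU stand-in.

```c
/* Sketch of a "batched" workload: many independent small GEMMs.
   Libraries such as MAGMA expose dedicated batched kernels for this
   pattern; here a plain OpenMP loop over CBLAS calls stands in for them. */
#include <stdlib.h>
#include <cblas.h>

#define BATCH 1024
#define M 4    /* tiny matrices: the regime MAGMA and PLASMA optimize */

int main(void) {
    double *A = malloc(BATCH * M * M * sizeof *A);
    double *B = malloc(BATCH * M * M * sizeof *B);
    double *C = calloc(BATCH * M * M, sizeof *C);
    for (int i = 0; i < BATCH * M * M; i++) { A[i] = 1.0; B[i] = 0.5; }

    /* Each dgemm is too small to parallelize internally, so the
       parallelism comes from distributing the batch across threads. */
    #pragma omp parallel for
    for (int k = 0; k < BATCH; k++)
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    M, M, M, 1.0, A + k * M * M, M,
                    B + k * M * M, M, 0.0, C + k * M * M, M);

    free(A); free(B); free(C);
    return 0;
}
```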
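For the eigenvalue libraries mentioned above, the common building block is a dense symmetric eigensolver. ELPA and PeigS provide their own distributed interfaces, which are not reproduced here; the sketch below shows the equivalent single-node LAPACK call, dsyev, via LAPACKE.

```c
/* Sketch: all eigenvalues/eigenvectors of a symmetric matrix with dsyev. */
#include <stdio.h>
#include <lapacke.h>

int main(void) {
    double A[] = {2.0, 1.0,    /* row-major symmetric 2x2 */
                  1.0, 2.0};
    double w[2];               /* eigenvalues, in ascending order */

    /* 'V' = also compute eigenvectors (returned in A);
       'U' = the upper triangle of A is referenced. */
    lapack_int info = LAPACKE_dsyev(LAPACK_ROW_MAJOR, 'V', 'U', 2, A, 2, w);
    if (info != 0) return 1;

    printf("eigenvalues: %g %g\n", w[0], w[1]);   /* expected: 1 and 3 */
    return 0;
}
```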
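Finally, for sparse matrices the central kernels are sparse matrix-vector and matrix-matrix products, which SparseBLAS standardizes; concrete APIs differ between implementations (the NIST reference implementation, MKL's sparse routines, and others). Rather than assuming any particular API, the sketch below hand-codes the canonical compressed-sparse-row (CSR) matrix-vector product that those interfaces specify.

```c
/* Sketch: y = A*x for a sparse matrix in compressed sparse row (CSR) form,
   the storage format underlying most Sparse BLAS implementations. */
#include <stdio.h>

/* 3x3 example:  [ 2 0 1 ]
                 [ 0 3 0 ]
                 [ 4 0 5 ]   -- only the 5 nonzeros are stored. */
static const double val[]    = {2.0, 1.0, 3.0, 4.0, 5.0};
static const int    colind[] = {0, 2, 1, 0, 2};
static const int    rowptr[] = {0, 2, 3, 5}; /* row i: rowptr[i]..rowptr[i+1]-1 */

static void csr_matvec(int n, const double *val, const int *colind,
                       const int *rowptr, const double *x, double *y) {
    for (int i = 0; i < n; i++) {
        double s = 0.0;
        for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
            s += val[k] * x[colind[k]];   /* touch only stored nonzeros */
        y[i] = s;
    }
}

int main(void) {
    double x[] = {1.0, 1.0, 1.0}, y[3];
    csr_matvec(3, val, colind, rowptr, x, y);
    printf("y = (%g, %g, %g)\n", y[0], y[1], y[2]);  /* (3, 3, 9) */
    return 0;
}
```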