Algorithmic methods are developed that guarantee efficient complexity estimates for strongly convex-concave saddle-point problems in the case when one group of variables has a high dimension and the other has a rather low dimension (up to 100). These methods are based on reducing problems of this type to the problem of minimizing (maximizing) a convex (concave) functional with respect to one of the variables, for which an approximate value of the gradient at an arbitrary point can be computed to the required accuracy by solving an auxiliary optimization subproblem with respect to the other variable. It is proposed to use the ellipsoid method and Vaidya's method for the low-dimensional problems and accelerated gradient methods with inexact information about the gradient or subgradient for the high-dimensional problems. In the case when one group of variables, ranging over a hypercube, has a very low dimension (up to five), another proposed approach to strongly convex-concave saddle-point problems turns out to be quite efficient. This approach is based on a new version of a multidimensional analogue of Nesterov's method on a square (the multidimensional dichotomy method), which admits inexact values of the gradient of the objective functional. Bibliography: 28 titles.
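To make the first scheme concrete, the following is a minimal Python sketch, not the paper's implementation: for a toy bilinear-quadratic saddle function S(x, y) (an illustrative assumption, as are the matrix A, the step sizes, and the inner iteration count), the outer low-dimensional minimization of g(x) = max_y S(x, y) is carried out by the ellipsoid method, while an inexact gradient of g is obtained, via Danskin's theorem, by approximately solving the inner maximization in y with gradient ascent.

```python
import numpy as np

# Illustrative toy saddle function (not from the paper):
#   S(x, y) = (mu_x/2)||x||^2 + x^T A y - (mu_y/2)||y||^2,
# strongly convex in the low-dimensional x, strongly concave in the
# high-dimensional y.
rng = np.random.default_rng(0)
n_x, n_y = 3, 200
mu_x, mu_y = 1.0, 1.0
A = rng.standard_normal((n_x, n_y)) / np.sqrt(n_y)

def S(x, y):
    return 0.5 * mu_x * x @ x + x @ (A @ y) - 0.5 * mu_y * y @ y

def inner_max(x, steps=200, lr=0.5):
    # Approximately solve the auxiliary subproblem max_y S(x, y) by gradient
    # ascent; the accuracy is controlled by the number of steps.
    y = np.zeros(n_y)
    for _ in range(steps):
        y += lr * (A.T @ x - mu_y * y)   # grad_y S(x, y)
    return y

def inexact_grad_g(x):
    # By Danskin's theorem, grad_x S(x, y*(x)) is the gradient of
    # g(x) = max_y S(x, y); with an approximate y it is an inexact gradient.
    y = inner_max(x)
    return mu_x * x + A @ y, y

def ellipsoid_method(x0, R, iters=200):
    # Minimize g over the ball ||x - x0|| <= R using inexact gradients.
    # Ellipsoid {x : (x - c)^T P^{-1} (x - c) <= 1}; the update below
    # requires the outer dimension n >= 2.
    n = x0.size
    c, P = x0.astype(float).copy(), R ** 2 * np.eye(n)
    best_x, best_val = c.copy(), np.inf
    for _ in range(iters):
        w, y = inexact_grad_g(c)
        val = S(c, y)                    # approximate value of g(c)
        if val < best_val:
            best_x, best_val = c.copy(), val
        denom = np.sqrt(w @ P @ w)
        if denom < 1e-12:                # (near-)zero gradient: stop early
            break
        wn = w / denom
        Pw = P @ wn
        c = c - Pw / (n + 1)
        P = n ** 2 / (n ** 2 - 1.0) * (P - 2.0 / (n + 1) * np.outer(Pw, Pw))
    return best_x

x_hat = ellipsoid_method(rng.standard_normal(n_x), R=10.0)
print(np.linalg.norm(x_hat))  # for this toy problem the minimizer is x = 0
```

In this sketch the inner accuracy (the `steps` parameter) governs the gradient inexactness that the outer ellipsoid iterations must tolerate, which mirrors the trade-off between inner and outer complexity discussed above.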