Recent progress in the understanding of rank-structured tensor decompositions in $\mathbb{R}^d$ and the development of related tensor numerical methods enable efficient techniques for the solution of multidimensional problems in scientific computing and data science, avoiding the curse of dimensionality. The novel tensor numerical methods are based on the nonlinear rank-structured tensor representation/approximation of $d$-variate functions and operators discretized on large $\underbrace{n\times n\times\cdots\times n}_{d}$ grids. The rank-structured canonical, Tucker, and tensor train (TT) data formats, combined with advanced multilinear algebra, offer $O(dn)$ complexity of numerical calculations, in contrast to the $O(n^d)$ exponential scaling of traditional grid-based approaches. The recent quantized tensor train (QTT) approximation technique paves the way to $O(d\log n)$ logarithmic complexity of the corresponding numerical algorithms. Tensor computations have been progressively and successfully used in a wide range of real-life applications in the fields of computational quantum chemistry, stochastic computations and uncertainty quantification, optimal control problems, multiparticle dynamics, bio-molecular modeling and drug design, multidimensional data modeling, and many others. However, the application of the advantageous tensor-structured numerical methods to particular problems in scientific computing requires interdisciplinary cooperation and nontrivial bridging of tensor approaches with many other special numerical techniques, for example, domain-specific rank-structured parametrizations/formats, low-rank approximation of operators and functions, adaptation to geometries, preconditioned iteration on rank-structured tensor manifolds, or matching to stochastic/parametric features of the problem.
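The compression behind these complexity estimates can be illustrated by a minimal numpy sketch (not taken from any of the papers in this issue; the helper names `tt_svd` and `tt_to_full` are ours). A length-$2^d$ vector is "quantized" into a $2\times 2\times\cdots\times 2$ tensor and decomposed by successive truncated SVDs; when the QTT ranks stay bounded, the storage drops from $2^d$ to $O(d)$ parameters:

```python
import numpy as np

def tt_svd(a, eps=1e-10):
    """TT-SVD sketch: factor a d-way array into a train of 3-way cores
    G[k] of shape (r_k, n_k, r_{k+1}) via successive truncated SVDs."""
    dims, d = a.shape, a.ndim
    cores, r = [], 1
    c = a.reshape(r * dims[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))      # truncation rank
        cores.append(u[:, :rk].reshape(r, dims[k], rk))
        c = (s[:rk, None] * vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(c.reshape(r, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full array."""
    t = cores[0]
    for g in cores[1:]:
        t = np.tensordot(t, g, axes=([-1], [0]))
    return t.reshape(t.shape[1:-1])

# QTT idea: the vector x_i = i, quantized to a 2 x ... x 2 tensor,
# has all TT ranks equal to 2, independently of the grid size.
x = np.arange(2 ** 10, dtype=float)
cores = tt_svd(x.reshape([2] * 10))
```

For this linear-in-index vector the maximal core rank is 2, so ten cores of shape at most $2\times 2\times 2$ reproduce all $1024$ entries exactly.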
This special issue presents seven papers on tensor numerical methods, addressing both theoretical aspects of the rank-structured representation (approximation) of multidimensional or multiparametric data and efficient numerical algorithms for some real-life applications. Oseledets et al.1 study the local convergence of alternating low-rank optimization methods with over-relaxation for low-rank matrix and tensor problems. The analysis is based on a linearization of the method, which takes the form of an SOR iteration for a positive semidefinite Hessian and is studied in the corresponding quotient geometry of equivalent low-rank representations. A number of interesting numerical experiments are presented to demonstrate the benefit of over-relaxation in low-rank optimization. In particular, the problems of matrix completion, low-rank solution of the Lyapunov matrix equation, and solution of linear systems in the QTT tensor format are considered. A novel method to approximate optimal feedback laws for the nonlinear optimal control of dynamical systems by using the low-rank TT tensor decomposition is proposed by Sallandt et al.2 Feedback control is ubiquitous in real dynamical systems, since the controlled system cannot in general be expected to follow model predictions exactly. However, computing an optimal feedback control law for nonlinear systems is inherently difficult, since it requires the solution of the so-called Hamilton–Jacobi–Bellman equation, a high-dimensional nonlinear parabolic partial differential equation. Hence, traditional Galerkin schemes suffer from the curse of dimensionality. The employed approach is based on a modified Dirac–Frenkel variational principle in order to evolve the dynamical system on a low-dimensional TT manifold. A rigorous description of the numerical scheme and demonstrations of its performance are provided.
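For readers unfamiliar with SOR, the effect of over-relaxation can be sketched in the simplest possible setting, an ordinary symmetric positive definite linear system (a toy numpy illustration assuming a standard 1D Laplacian test matrix; the low-rank and tensor-manifold machinery of the paper is not reproduced here):

```python
import numpy as np

def sor(A, b, omega, iters):
    """Successive over-relaxation for A x = b with A symmetric positive
    definite; omega = 1 is Gauss-Seidel, 1 < omega < 2 over-relaxes."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            sigma = A[i] @ x - A[i, i] * x[i]   # off-diagonal part of row i
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

# 1D Dirichlet Laplacian: tridiagonal SPD test matrix
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_true = np.linalg.solve(A, b)
err_gs = np.linalg.norm(sor(A, b, 1.0, 50) - x_true)    # Gauss-Seidel
err_sor = np.linalg.norm(sor(A, b, 1.74, 50) - x_true)  # over-relaxed
```

At a fixed iteration count, the over-relaxed sweep (here with $\omega$ near the classical optimum $2/(1+\sin(\pi/(n+1)))$) reduces the error by many orders of magnitude more than plain Gauss–Seidel.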
For problems of data analysis or uncertainty quantification one has to deal with high-dimensional random variables and the corresponding probability density function. Litvinenko et al.3 propose to represent the discretized probability density in a low-rank tensor format. This, however, makes certain tasks computationally difficult: evaluating some function of the probability density requires point-wise representations of the tensor. The arising computational problem becomes tractable when the compressed data are considered as an element of an associative commutative algebra with an inner product, so that the corresponding matrix algorithms can be used. The authors apply this method to compute divergences or distances between different probability densities. Numerical results present computations of a number of f-divergences for high-dimensional probability densities in the TT tensor format. The low Tucker rank tensor completion problem is considered by Quan Yu et al.,4 where it is reduced to a low-rank approximation of the Tucker unfolding matrices. The low-rank tensor completion problem, that is, the reconstruction of a tensor from observed incomplete data, has a wide range of real-life applications, such as seismic data reconstruction, color image and video recovery, or medical image processing. The relation between the Tucker ranks and the ranks of the factor matrices in the Tucker decomposition is derived, and the Tucker completion problem is then reformulated as a multilinear low-rank matrix completion problem. The latter is solved by using the symmetric block coordinate descent method and truncated norm minimization. Numerous numerical examples dealing with image, video, MRI, and randomly generated Tucker tensor data are presented, confirming the efficiency of the proposed techniques.
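The matricization that such completion approaches rely on can be stated in a few lines of numpy (an illustrative sketch, not the authors' algorithm; the helper `unfold` is ours): the multilinear (Tucker) ranks of a tensor are exactly the matrix ranks of its mode-wise unfoldings.

```python
import numpy as np

def unfold(t, mode):
    """Mode-k matricization: rows indexed by mode k, columns by the rest."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

# build a 10 x 10 x 10 tensor with exact Tucker ranks (2, 3, 4)
rng = np.random.default_rng(0)
core = rng.standard_normal((2, 3, 4))                   # Tucker core
U = [rng.standard_normal((10, r)) for r in (2, 3, 4)]   # factor matrices
t = np.einsum('abc,ia,jb,kc->ijk', core, *U)

# the matrix ranks of the unfoldings recover the multilinear ranks
ranks = [int(np.linalg.matrix_rank(unfold(t, m))) for m in range(3)]
```

This identity is what allows the tensor completion problem to be recast as low-rank matrix completion of the unfolding matrices.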
Antil et al.5 present a new algorithm for the solution of high-dimensional risk-averse optimization problems governed by partial differential equations (PDEs) or ordinary differential equations. The algorithm is based on low-rank tensor approximations of discretized random fields and an efficient preconditioner for the optimized system in the full-space formulation. The first numerical example considers the case of PDEs with random variables in the constraint, while the second numerical test addresses a realistic application: devising a lockdown plan for the United Kingdom under COVID-19. The numerical results indicate that the proposed method is feasible for risk-averse optimization problems with tens of random variables. Rakhuba and Vysotsky6 consider the inversion of circulant matrices and derive the corresponding rank bounds for their QTT structure. Theoretical QTT rank bounds in terms of the number of nonzero elements in the first column of the matrix are proven for the general case. Under certain conditions, the explicit form of the matrix inverse is derived. This applies to the inversion of one-dimensional stiffness and mass matrices for periodic boundary value problems on uniform grids. In the case of moderate QTT ranks, this approach allows an $O(\log n)$-complexity representation and matrix algebra for both an $n\times n$ circulant matrix and its inverse. Advantages of the proposed method are demonstrated on the example of a one-dimensional convection-reaction-diffusion boundary value problem in a periodic setting. Khoromskij and Khoromskaia7 propose and analyze a numerical algorithm for the fast iterative solution of three-dimensional (3D) and two-dimensional periodic elliptic problems in random media.
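The circulant algebra being compressed here can be recalled in plain numpy (classical background, not the QTT construction of the paper): a circulant matrix is diagonalized by the discrete Fourier transform, so both its action and its inverse are determined by the FFT of its first column.

```python
import numpy as np

def circ_matvec(c, x):
    """y = C x for the circulant matrix C with first column c,
    via C = F^{-1} diag(fft(c)) F, i.e. circular convolution."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circ_inv_column(c):
    """First column of C^{-1}, assuming no eigenvalue fft(c) vanishes;
    the inverse of a circulant matrix is again circulant."""
    return np.real(np.fft.ifft(1.0 / np.fft.fft(c)))

# periodic 1D convection-reaction-diffusion stencil as a circulant column
# (illustrative coefficients chosen so that all eigenvalues are nonzero)
n = 64
c = np.zeros(n)
c[0], c[1], c[-1] = 2.5, -1.2, -0.8
ci = circ_inv_column(c)   # first column of the inverse operator
```

The contribution of the paper is to bound the QTT ranks of such columns, so that for moderate ranks both the circulant matrix and its inverse admit an $O(\log n)$-parameter representation instead of the $O(n)$ storage used above.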
The highly oscillating coefficients are built as a checkerboard-type configuration of bumps randomly distributed on a large $L\times L\times L$ lattice embedded into a fine $n\times n\times n$ computational grid. Evaluation of averaged quantities of elliptic operators with random coefficients requires multiple (say, many thousands of) solutions of the discretized equations with newly generated stiffness matrices in the course of stochastic sampling. The elliptic problem solver is based on fast generation of the stiffness matrix in Kronecker product form and on a low Kronecker rank preconditioner for the pseudo-inverse of the discrete 3D periodic Laplacian. Numerical examples illustrate the performance of the presented solver for equations with randomly generated jumping coefficients and in application to the stochastic homogenization of 3D elliptic operators. The authors declare that they have no conflict of interest. The data that support the findings of this study are available from the corresponding author upon reasonable request.