Roth’s orthogonal function method in discrepancy theory and some new connections

Abstract

In this survey we give a comprehensive, but gentle introduction to the circle of questions surrounding the classical problems of discrepancy theory, unified by the same approach, which originated in the work of Klaus Roth (Mathematika 1:73–79, 1954) and is based on multiparameter Haar (or other orthogonal) function expansions. Traditionally, the most important estimates of the discrepancy function were obtained using variations of this method. However, despite a large amount of work in this direction, the most important questions in the subject remain wide open, even at the level of conjectures. The area, as well as the method, has enjoyed an outburst of activity due to the recent breakthrough improvement of the higher-dimensional discrepancy bounds and the newly revealed important connections between this subject and harmonic analysis, probability (small deviation of the Brownian motion), and approximation theory (metric entropy of spaces with mixed smoothness). Without assuming any prior knowledge of the subject, we present the history and different manifestations of the method, its applications to related problems in various fields, and a detailed and intuitive outline of the latest higher-dimensional discrepancy estimate.
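The central object behind these questions is the discrepancy function of an N-point set, which the abstract alludes to but does not define. As a hedged illustration (the point set, the box-counting scheme, and all names below are my own choices for this example, not taken from the survey), here is a minimal computation of the local discrepancy D_N(x, y) for the two-dimensional van der Corput (Hammersley) point set:

```python
def van_der_corput(n: int) -> float:
    """Base-2 radical inverse of n: mirror the binary digits about the point."""
    x, base = 0.0, 0.5
    while n:
        x += (n & 1) * base
        n >>= 1
        base /= 2
    return x

def local_discrepancy(points, x, y):
    """D_N(x, y) = #{p in P : p in [0,x) x [0,y)} - N * x * y."""
    count = sum(1 for (px, py) in points if px < x and py < y)
    return count - len(points) * x * y

N = 16
pts = [(i / N, van_der_corput(i)) for i in range(N)]
# For a well-distributed set like this one, |D_N| stays of order log N
# rather than N; here we only sample D_N at a grid of box corners.
worst = max(abs(local_discrepancy(pts, (i + 1) / N, (j + 1) / N))
            for i in range(N) for j in range(N))
```

The survey's estimates concern exactly how small such a worst-case (or L^p-average) value of D_N can be made in higher dimensions.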

Similar Papers
  • Book Chapter
  • Citations: 28
  • 10.1007/978-3-319-04696-9_2
Roth’s Orthogonal Function Method in Discrepancy Theory and Some New Connections
  • Jan 1, 2014
  • Dmitriy Bilyk


  • Conference Article
  • Citations: 131
  • 10.5555/644108.644149
Approximation of functions over redundant dictionaries using coherence
  • Jan 12, 2003
  • Anna C Gilbert + 2 more

One of the central problems of modern mathematical approximation theory is to approximate functions, or signals, concisely, with elements from a large candidate set called a dictionary. Formally, we are given a signal A ∈ ℝ^N and a dictionary D = {φ_i}_{i∈I} of unit vectors that span ℝ^N. A representation R of B terms for input A ∈ ℝ^N is a linear combination of dictionary elements, R = Σ_{i∈Λ} α_i φ_i, for φ_i ∈ D and some Λ with |Λ| ≤ B. Typically, B ≪ N, so that R is a concise approximation to signal A. The error of the representation indicates how well it approximates A, and is given by ‖A − R‖₂ = √(Σ_t |A[t] − R[t]|²). The problem is to find the best B-term representation, i.e., to find an R that minimizes ‖A − R‖₂. A dictionary may be redundant in the sense that there is more than one possible exact representation for A, i.e., |D| > N = dim(ℝ^N). Redundant dictionaries are used because, both theoretically and in practice, for important classes of signals, as the size of a dictionary increases, the error and the conciseness of the approximations improve. We present the first known efficient algorithm for finding a provably approximate representation for an input signal over redundant dictionaries. We identify and focus on redundant dictionaries with small coherence (i.e., whose vectors are nearly orthogonal). We present an algorithm that preprocesses any such dictionary in time and space polynomial in |D|, and obtains a (1 + ε)-approximate representation of the given signal in time nearly linear in the signal size N and polylogarithmic in |D|; by contrast, most algorithms in the literature require Ω(|D|) time and yet provide no provable bounds. The technical crux of our result is our proof that two commonly used local search techniques, when combined appropriately, give a provably near-optimal signal representation over redundant dictionaries with small coherence. Our result immediately applies to several specific redundant dictionaries considered by the domain experts thus far.
In addition, we present new redundant dictionaries which have small coherence (and are therefore amenable to our algorithms) and yet have significantly large sizes, thereby adding to the redundant dictionary construction literature. Work with redundant dictionaries forms the emerging field of highly nonlinear approximation theory. We have presented algorithmic results for some of the most basic problems in this area, but other mathematical and algorithmic questions remain to be explored.
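The paper's algorithm combines preprocessing with two local search techniques; as a much simpler illustration of the underlying greedy idea (this is not the authors' algorithm, and the tiny dictionary and signal below are invented for the example), here is a basic matching-pursuit sketch over a redundant dictionary of unit vectors:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, dictionary, n_terms):
    """Greedily pick, n_terms times, the unit-norm atom most correlated
    with the current residual, and subtract its contribution."""
    residual = list(signal)
    coeffs = {}  # atom index -> accumulated coefficient alpha_i
    for _ in range(n_terms):
        # argmax_i |<residual, phi_i>|
        best = max(range(len(dictionary)),
                   key=lambda i: abs(dot(residual, dictionary[i])))
        c = dot(residual, dictionary[best])
        coeffs[best] = coeffs.get(best, 0.0) + c
        residual = [r - c * p for r, p in zip(residual, dictionary[best])]
    err = math.sqrt(dot(residual, residual))  # ||A - R||_2
    return coeffs, err

# Redundant dictionary in R^2: the standard basis plus a diagonal atom.
s = math.sqrt(0.5)
D = [[1.0, 0.0], [0.0, 1.0], [s, s]]
coeffs, err = matching_pursuit([3.0, 3.0], D, n_terms=1)
```

Because the signal here is parallel to the diagonal atom, a single greedy term already gives an exact representation; the paper's contribution is proving near-optimality guarantees for such greedy schemes when the dictionary coherence is small.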

  • Research Article
  • Citations: 3
  • 10.1090/s0002-9947-1954-0061280-4
On representations and extensions of bounded linear functionals defined on classes of analytic functions
  • Jan 1, 1954
  • Transactions of the American Mathematical Society
  • Philip Davis + 1 more

1. Introduction. In treatments of the subject of interpolation and approximation for complex analytic functions, it is usual to deal with the theory of interpolation series and the theory of complex analytic Fourier (i.e., orthogonal) series separately. Interpolation series are generally associated with a sequence of point functionals such as f(a_n) or f^(n)(a_n), and many interpolation series possess Taylor-like convergence properties. On the other hand, complex analytic Fourier series are associated with a sequence of integral inner products, and the theory carries with it the usual least squares best approximation properties of expansions in orthogonal functions. From a purely formal point of view, however, and even in the real case, these two types of expansions have many properties in common. The principal structural feature of both types of expansions is their use of biorthogonal sets. That is to say, in both theories we are confronted with a set of functions

  • Research Article
  • Citations: 3
  • 10.13287/j.1001-9332.202103.016
Temporal-spatial variation and the affecting factors of protected areas in Guizhou, China.
  • Mar 1, 2021
  • Ying yong sheng tai xue bao = The journal of applied ecology
  • Han Fan + 4 more

The establishment of protected areas is the ecological-security bottom line for promoting the construction of ecological civilization and supporting economic and social development, and it is an important strategy for realizing sustainable development and maintaining ecological security. To reveal the large-scale spatial evolution of protected areas and its influencing factors, we used the nearest neighbor index, kernel density, and standard deviational ellipse methods to analyze the temporal-spatial variation characteristics of protected areas in Guizhou Province from 2002 to 2017, and examined the influencing factors with the geographical detector (Geodetector) method. The results showed that, during the study period, the number, area, and types of protected areas in Guizhou Province developed rapidly and diversified, forming a protected area system with nature reserves, forest parks and scenic spots as the main body and wetland parks, geoparks and natural heritage sites as the supplement. The spatial cohesion of protected areas strengthened, the scope of their spatial distribution expanded, and the speed of their spatial movement declined, forming a spatial pattern dominated by the northeast-southwest direction and gradually stabilizing. The coalescence process of protected areas was strongly influenced by topography and vegetation distribution. Protected areas tended to cluster in gentle terrain around rivers and mountains and in areas of concentrated vegetation. The spatial differentiation of protected areas was jointly affected by multiple factors at different levels, and the explanatory power of these factors varied.
Among them, the normalized difference vegetation index, forest area and highway mileage were the common main factors affecting the spatial differentiation of both the number and the area of protected areas, and the explanatory power of the factors was significantly strengthened after interaction, characterized as nonlinear or bi-factor enhancement.

  • Research Article
  • 10.11108/kagis.2010.13.2.094
Comparison of Two Methods for Estimating the Appearance Probability of Seawater Temperature Difference for the Development of Ocean Thermal Energy
  • Jan 1, 2010
  • Dong-Young Yoon + 4 more

Understanding the amount of energy resources and selecting a site are required prior to developing Ocean Thermal Energy (OTE). It is necessary to calculate the appearance probability of the difference of seawater temperature (ΔT) between the sea surface layer and underwater layers. This research mainly aimed to calculate the appearance probability of ΔT using frequency analysis (FA) and harmonic analysis (HA), and to compare the advantages and weaknesses of these methods as applied in the South Sea of Korea. The spatial scale for the comparison of the two methods was divided into local and global scales, related to the estimation of the energy resource amount and to site selection. On the global scale, the Probability Differences (PD) between the ΔT values calculated by the two methods were mapped as spatial distributions, and the areas of PD were compared. On the local scale, the two methods were compared not only on the PD in the region of highest probability but also on the bimonthly probabilities in the regions of highest and lowest PD. The strong relationship (Pearson r = 0.96, α = 0.05) between the probabilities from the two methods showed the usefulness of both. On the global scale, the area with PD over 10% was less than 5% of the whole area, which means both methods can be applied to estimate the amount of OTE resources. In practice, however, HA was considered the more pragmatic method due to its capability of calculating under various ΔT conditions. On the local scale, there was no significant difference between the high-probability areas identified by the two methods, with differences under 5%. However, while FA could detect the whole range of probability, HA had the disadvantage of being unable to detect probabilities below 10%. Therefore, HA was judged more suitable for estimating the amount of energy resources, and FA more suitable for selecting the site for OTE development.
Keywords: OTE (Ocean Thermal Energy), Frequency Analysis, Harmonic Analysis, Seawater Temperature Difference, Appearance Probability
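Of the two methods compared, frequency analysis is the simpler to illustrate: the appearance probability of ΔT is just the empirical fraction of observations meeting a threshold. A minimal sketch (the temperature series and the 20 °C threshold below are invented for illustration, not data from the study):

```python
def appearance_probability(delta_t_series, threshold):
    """Frequency-analysis estimate: the fraction of observations whose
    surface-to-depth temperature difference reaches the threshold."""
    hits = sum(1 for dt in delta_t_series if dt >= threshold)
    return hits / len(delta_t_series)

# Hypothetical bimonthly ΔT observations (°C) at one station.
series = [18.2, 21.5, 19.9, 24.1, 16.8, 22.3, 20.7, 23.0]
p = appearance_probability(series, threshold=20.0)
```

Harmonic analysis would instead fit periodic components to the ΔT time series and derive the probability from the fitted curve, which is what lets it evaluate arbitrary ΔT conditions at the cost of resolving low probabilities.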

  • Book Chapter
  • 10.1007/978-3-0348-0625-1_8
Geometry of the Gauss Map and Lattice Points in Convex Domains
  • Jan 1, 2014
  • Alex Iosevich + 1 more

In the previous two chapters, we have gained a significant amount of understanding about the L^p-average decay for the Fourier transform of characteristic functions of convex sets and considered some applications to problems in lattice point counting and discrepancy theory. In this chapter we consider more elaborate applications of average decay in number theory, where the discrepancy function needs to be estimated for almost every rotation instead of averaging over rotations in some L^p-norm. This naturally leads us to the examination of certain maximal functions and, as a result, brings in some classical harmonic analysis that arises so often in the first part of this book.
Keywords: Lattice Point, Maximal Function, Convex Domain, Poisson Summation Formula, Lacunary Sequence.

  • Research Article
  • Citations: 15
  • 10.1112/s0025579300014376
Geometry of the gauss map and lattice points in convex domains
  • Dec 1, 2001
  • Mathematika
  • L Brandolini + 4 more


  • Book Chapter
  • 10.1007/978-94-010-2697-0_15
Computer Processing of the Visual Evoked Response
  • Jan 1, 1973
  • Karl J Fritz + 5 more

One of the chief problems in making the visual evoked response (VER) a usable tool in the clinic and laboratory is the complex nature of the response itself. Signal averaging removes much noise, but the remaining complex waveform requires further processing. We have studied three processing techniques which analyze the shape of the waveform as a whole. These are: projection of the waveform onto a vector space spanned by orthogonal functions; least squares fitting with non-linear functions of the parameters; and integration of the product of the waveform and a template function. These studies were motivated by our need for computer methods to characterize waveforms during the course of experiments.

Projection of waveforms onto a vector space spanned by orthogonal functions has been valuable for both smoothing and characterization of the responses. Once an expansion in orthogonal functions has been obtained for a group of similar waveforms, additional computation is done to find another set of basis vectors which provides a more rapidly convergent expansion. It is possible to characterize a waveform with as few as six coefficients using this method.

Least squares fitting with non-linear functions of the parameters allows selection of descriptive parameters with a simple physical interpretation. It has proved difficult to find a simple non-linear function which faithfully represents a VER over the entire time range; however, portions of a VER are well fit.

Integration of the product of a waveform and a template function can be done very rapidly. A set of such integrals provides a convenient method of characterizing waveforms whenever a representation of the waveform itself is not necessary.

Our experience indicates that either projection of waveforms onto a vector space of orthogonal functions or integration of products of waveforms and template functions will provide sufficient speed and accuracy to characterize a waveform during an experiment.
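The first of the three techniques, projection onto a span of orthogonal functions, can be sketched concisely. This is a hedged illustration only: the paper does not say which orthogonal family was used, so the cosine basis and the synthetic waveform below are my own choices.

```python
import math

def cosine_basis(n_samples, n_funcs):
    """Orthonormal DCT-II-style cosine vectors sampled at n_samples points."""
    basis = []
    for k in range(n_funcs):
        scale = math.sqrt((1.0 if k == 0 else 2.0) / n_samples)
        basis.append([scale * math.cos(math.pi * k * (t + 0.5) / n_samples)
                      for t in range(n_samples)])
    return basis

def project(waveform, basis):
    """Inner products <waveform, b_k>: the expansion coefficients."""
    return [sum(w * b for w, b in zip(waveform, vec)) for vec in basis]

def reconstruct(coeffs, basis):
    """Smoothed waveform rebuilt from the few kept coefficients."""
    n = len(basis[0])
    out = [0.0] * n
    for c, vec in zip(coeffs, basis):
        for t in range(n):
            out[t] += c * vec[t]
    return out

# Synthetic "response" lying in the span of 6 basis functions, so it is
# characterized exactly by 6 coefficients, as in the abstract's claim.
n = 64
basis = cosine_basis(n, 6)
wave = [2.0 * b1 + 0.5 * b3 for b1, b3 in zip(basis[1], basis[3])]
coeffs = project(wave, basis)
approx = reconstruct(coeffs, basis)
```

For real responses the projection discards the components outside the span, which is what provides the smoothing the authors describe.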

  • Research Article
  • 10.1007/s12209-009-0015-4
Greedy algorithm in m-term approximation for periodic Besov class with mixed smoothness
  • Feb 1, 2009
  • Transactions of Tianjin University
  • Zhanjie Song + 1 more

Nonlinear m-term approximation plays an important role in machine learning, signal processing and statistical estimation. In this paper, by means of a nondecreasing dominated function, a greedy adaptive compression numerical algorithm for the best m-term approximation with respect to a tensor product wavelet-type basis is proposed. The algorithm provides the asymptotically optimal approximation for the class of periodic functions with mixed Besov smoothness in the Lq norm. Moreover, it depends only on the expansion of the function f in the tensor product wavelet-type basis, but neither on q nor on any special features of f.
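For an orthonormal basis (such as a tensor product wavelet basis), the core of greedy m-term approximation is simply keeping the m expansion coefficients of largest magnitude; the squared L2 error is then the sum of the discarded squared coefficients. A minimal sketch of this selection step (the coefficient values are invented for illustration, and this is only the generic greedy idea, not the paper's full algorithm):

```python
def greedy_m_term(coeffs, m):
    """Keep the m coefficients of largest magnitude (greedy m-term
    approximation in an orthonormal basis); return the kept index set
    and the l2 error contributed by the discarded coefficients."""
    order = sorted(range(len(coeffs)),
                   key=lambda i: abs(coeffs[i]), reverse=True)
    kept = set(order[:m])
    err2 = sum(coeffs[i] ** 2 for i in range(len(coeffs)) if i not in kept)
    return kept, err2 ** 0.5

# Hypothetical expansion coefficients of some periodic f.
c = [0.05, -3.0, 0.4, 1.2, -0.01, 0.4]
kept, err = greedy_m_term(c, 3)
```

The paper's contribution is showing that, with a suitable dominated function steering the compression, this kind of greedy scheme is asymptotically optimal over the whole mixed-smoothness Besov class.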

  • Research Article
  • Citations: 4
  • 10.1016/j.jco.2019.05.003
Extremal distributions of discrepancy functions
  • May 23, 2019
  • Journal of Complexity
  • Ralph Kritzinger + 1 more


  • Single Book
  • Citations: 203
  • 10.1007/978-3-319-92240-9
Hyperbolic Cross Approximation
  • Apr 21, 2017
  • Dinh Dũng + 2 more

Hyperbolic cross approximation is a special type of multivariate approximation. Recently, driven by applications in engineering, biology, medicine and other areas of science, new challenging problems have appeared. The common feature of these problems is high dimensionality. We present here a survey of classical methods developed in multivariate approximation theory, which are known to work very well for moderate dimensions and which have potential for applications in really high dimensions. The theory of hyperbolic cross approximation and the related theory of functions with mixed smoothness have been under detailed study for more than 50 years. It is now well understood that this theory is important both for theoretical study and for practical applications. It is also understood that both theoretical analysis and the construction of practical algorithms are very difficult problems. This explains why many fundamental problems in this area are still unsolved. Only a few survey papers and monographs on the topic have been published. This, and the recently discovered deep connections between hyperbolic cross approximation (and related sparse grids) and other areas of mathematics such as probability, discrepancy, and numerical integration, motivated us to write this survey. We try to put the emphasis on the development of ideas and methods rather than list all the known results in the area. We formulate many problems which, to our knowledge, are open. We also include some very recent results on the topic, which sometimes highlight new interesting directions of research. We hope that this survey will stimulate further active research in this fascinating and challenging area of approximation theory and numerical analysis.

  • Book Chapter
  • 10.1017/9781108689687.005
Hyperbolic Cross Approximation
  • Jul 6, 2018
  • Đinh Dũng + 2 more


  • Research Article
  • Citations: 9
  • 10.13157/arla.67.2.2020.ra2
Analysis of Spatio-Temporal Patterns of Red Kite Milvus milvus Electrocution
  • Feb 28, 2020
  • Ardeola
  • Gabriela Crespo-Luengo + 3 more

The territory of Castilla y León is one of the most important wintering areas in Europe for the red kite Milvus milvus and, at the same time, a region where a sharp decline of its population is being recorded, with electrocution on power lines being one of the main causes of mortality. Therefore, to propose effective conservation measures for this raptor, better knowledge of its annual spatio-temporal ecology and of its relationship with the electrocution events recorded to date is needed. In this work, distribution models are built for the breeding and wintering populations of the red kite in Castilla y León, considering climatic, topographic and habitat factors, in order to analyze whether the environmental space varies between seasons. A third and a fourth electrocution-risk model are built for each population, which also consider the technical characteristics of the electricity pylons in order to test their influence on the electrocution events of both populations. Our results show a notable seasonal change in the geographic distribution of the kite, distinguishing an important wintering area in the center of the region, characterized by farmland, moderate temperatures, landfills and human presence, and a breeding area towards the south of the region, where proximity to carcass dumps (muladares), low precipitation, distance from landfills, gentle slopes and agroforestry areas are the most influential variables. The risk model identifies the areas most suitable for red kite breeding as those of highest risk, as well as the most dangerous cross-arm types: straight, vaulted and crossbow. Finally, the theoretical approach presented here provides a framework for designing management and control measures aimed at minimizing red kite electrocutions in the northern half of the Iberian Peninsula.

  • Research Article
  • Citations: 1
  • 10.11648/j.mcs.20180301.12
A Research Approximation to Generalized Riemann Derivatives by Integral Operator Families
  • Jan 1, 2018
  • Mathematics and Computer Science
  • Lutfi Akin

Approximation theory has important applications in various areas of functional analysis, harmonic analysis, Fourier analysis, applied mathematics, and operator theory, in particular to generalized derivatives and to the numerical solution of differential and integral equations. Integral operators are very important in harmonic and Fourier analysis. The study of approximation theory is a well-established area of research which deals with the problem of approximating a function f by means of a sequence L_n of positive linear operators. Generalized derivatives (the Riemann, Peano and Taylor derivatives) are more general than the ordinary derivative. Approximation theory is very important for the mathematical world, and many mathematicians work in this field today.
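The abstract's sequence L_n of positive linear operators has a classical first example that it does not name: the Bernstein operator, which converges uniformly to any continuous f on [0, 1]. A minimal sketch (the choice of Bernstein operators and the test function are mine, offered as the standard illustration):

```python
from math import comb

def bernstein(f, n):
    """Bernstein operator B_n: a positive linear operator with
    (B_n f)(x) = sum_k f(k/n) * C(n,k) * x^k * (1-x)^(n-k),
    converging uniformly to f on [0, 1] for continuous f."""
    def Bnf(x):
        return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
                   for k in range(n + 1))
    return Bnf

# Known identity: B_n(t^2)(x) = x^2 + x(1 - x)/n, so the error at x
# is exactly x(1 - x)/n and shrinks as n grows.
approx = bernstein(lambda t: t * t, 50)
```

Positivity and linearity are what make the Korovkin-style convergence analysis behind such operators work, which is the setting the abstract gestures at.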

  • Research Article
  • Citations: 26
  • 10.1016/j.jco.2011.07.001
Fibonacci sets and symmetrization in discrepancy theory
  • Aug 19, 2011
  • Journal of Complexity
  • Dmitriy Bilyk + 2 more

