ASYMPTOTIC PROPERTIES OF THE GAUGE AND POWER OF STEP-INDICATOR SATURATION

Abstract

Detecting multiple structural breaks at unknown dates is a central challenge in time-series econometrics. Step-indicator saturation (SIS) addresses this challenge during model selection, and we develop its asymptotic theory to guide the choice of tuning parameter. We study its frequency gauge, the false detection rate, and show that it is consistent and asymptotically normal. Simulations suggest that a smaller gauge minimizes bias in post-selection regression estimates. For the small-gauge case, we develop a complementary Poisson limit theory. We compare the local power of SIS to detect shifts with that of Andrews’ break test and find that SIS excels when breaks are near the sample end or closely spaced. An application to U.K. labor productivity reveals a growth slowdown after the 2008 financial crisis.
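
The gauge lends itself to direct simulation. Below is a minimal Monte Carlo sketch of split-half SIS under the null of no breaks; the helper names are mine, and the general-to-specific selection algorithm analyzed in the paper iterates beyond this single pass. Under the null, the fraction of step indicators falsely retained should land near the nominal significance level, which is the consistency result the paper formalizes.

```python
# Minimal sketch: split-half step-indicator saturation under the null.
# Helper names are illustrative; the paper's selection algorithm is richer.
import numpy as np
from scipy import stats

def select_steps(y, cols, alpha):
    """Retain step indicators whose |t|-ratio exceeds the alpha critical value."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
    t = beta / np.sqrt(np.diag(cov))
    crit = stats.t.ppf(1 - alpha / 2, dof)
    return [cols[j] for j in range(len(cols)) if abs(t[j + 1]) > crit]

def sis_gauge(T=100, alpha=0.01, reps=500, seed=0):
    rng = np.random.default_rng(seed)
    steps = [(np.arange(T) >= s).astype(float) for s in range(1, T)]
    half = len(steps) // 2
    rates = []
    for _ in range(reps):
        y = rng.standard_normal(T)  # null DGP: white noise, no breaks
        kept = (select_steps(y, steps[:half], alpha)
                + select_steps(y, steps[half:], alpha))
        kept = select_steps(y, kept, alpha) if kept else []
        rates.append(len(kept) / len(steps))
    return float(np.mean(rates))

print(f"empirical gauge ≈ {sis_gauge():.4f} (nominal alpha = 0.01)")
```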

Similar Papers
  • Research Article
  • Citations: 13
  • 10.1002/eap.2304
Improving the ability of a BACI design to detect impacts within a kelp-forest community.
  • Mar 17, 2021
  • Ecological Applications
  • Andrew Rassweiler + 5 more

Distinguishing between human impacts and natural variation in abundance remains difficult because most species exhibit complex patterns of variation in space and time. When ecological monitoring data are available, a before-after-control-impact (BACI) analysis can control for natural spatial and temporal variation to better identify an impact and estimate its magnitude. However, populations with limited distributions and confounding spatial-temporal dynamics can violate core assumptions of BACI-type designs. In this study, we assessed how such properties affect the potential to identify impacts. Specifically, we quantified the conditions under which BACI analyses correctly (or incorrectly) identified simulated anthropogenic impacts in a spatially and temporally replicated data set of fish, macroalgal, and invertebrate species found on nearshore subtidal reefs in southern California, USA. We found BACI failed to assess very localized impacts, and had low power but high precision when assessing region-wide impacts. Power was highest for severe impacts of moderate spatial scale, and impacts were most easily detected in species with stable, widely distributed populations. Serial autocorrelation in the data greatly inflated false impact detection rates, and could be partly controlled for statistically, while spatial synchrony in dynamics had no consistent effect on power or false detection rates. Unfortunately, species that offer high power to detect real impacts were also more likely to detect impacts where none had occurred. However, considering power and false detection rates together can identify promising indicator species, and collectively analyzing data for similar species improved the net ability to assess impacts. These insights set expectations for the sizes and severities of impacts that BACI analyses can detect in real systems, point to the importance of serial autocorrelation (but not of spatial synchrony), and indicate how to choose the species, and groups of species, that can best identify impacts.
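
For readers unfamiliar with the design, the core BACI estimate reduces to a period-by-site interaction in a regression. The sketch below uses simulated data (not the paper's kelp-forest data) and statsmodels; with autocorrelated monitoring data one would additionally need autocorrelation-robust or mixed-model standard errors, which is exactly the inflation problem the authors flag.

```python
# Minimal BACI sketch on hypothetical simulated survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 40  # surveys per site-period cell
df = pd.DataFrame({
    "period": np.repeat(["before", "after"], 2 * n),
    "site": np.tile(np.repeat(["control", "impact"], n), 2),
})
# True impact: abundance drops by 2 at the impact site after the event.
effect = np.where((df["period"] == "after") & (df["site"] == "impact"), -2.0, 0.0)
df["abundance"] = 10 + effect + rng.normal(0, 1.5, size=len(df))

fit = smf.ols("abundance ~ period * site", data=df).fit()
# The period:site interaction coefficient estimates the impact
# (its sign depends on which levels patsy takes as the reference).
print(fit.summary().tables[1])
```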

  • Research Article
  • Citations: 8
  • 10.3390/econometrics5020022
Unit Roots and Structural Breaks
  • May 30, 2017
  • Econometrics
  • Pierre Perron

This special issue deals with problems related to unit roots and structural change, and the interplay between the two. [...]

  • Conference Article
  • 10.1117/12.2178146
Model of Large-format EO-IR sensor for calculating the probability of true and false detection and tracking for moving and fixed objects
  • May 12, 2015
  • Andrew R Korb + 1 more

A model was developed to understand the effects of spatial resolution and signal-to-noise ratio (SNR) on the detection and tracking performance of wide-field, diffraction-limited electro-optic and infrared motion imagery systems. False positive detection probability and false positive rate per frame were calculated as a function of target-to-background contrast and object size. Results showed that moving objects are fundamentally more difficult to detect than stationary objects because SNR for fixed objects increases and false positive detection rates diminish rapidly with successive frames, whereas for moving objects the false detection rate remains constant or increases with successive frames. The model specifies that the desired performance of a detection system, measured by the false positive detection rate, can be achieved by image system designs with different combinations of SNR and spatial resolution, usually requiring several pixels resolving the object; this capability to trade off resolution and SNR enables system design trades and cost optimization. For operational use, detection thresholds required to achieve a particular false detection rate can be calculated. Interestingly, for moderate-size images the model converges to the Johnson Criteria. Johnson found that an imaging system with an SNR >3.5 has a probability of detection >50% when the resolution on the object is 4 pixels or more. Under these conditions our model finds the false positive rate is less than one per hundred image frames, and the ratio of the probability of object detection to false positive detection is much greater than one. The model was programmed into Matlab to generate simulated image frames for visualization.
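
The threshold/SNR trade-off described here can be illustrated with a back-of-envelope Gaussian-noise calculation (my simplification, not the paper's full sensor model): raising the detection threshold suppresses false positives per frame at the cost of single-frame detection probability. Note that norm.sf(0) = 0.5 recovers the Johnson-style 50% detection when the threshold equals the SNR.

```python
# Gaussian-noise sketch of per-frame false positives vs. detection probability.
from scipy.stats import norm

def rates(snr, threshold_sigma, pixels_per_frame=1_000_000):
    p_fa = norm.sf(threshold_sigma)       # false-alarm prob per noise pixel
    p_d = norm.sf(threshold_sigma - snr)  # single-frame detection prob
    return p_d, p_fa * pixels_per_frame   # expected false positives per frame

for thr in (3.5, 4.5, 5.5):
    p_d, fp = rates(snr=3.5, threshold_sigma=thr)
    print(f"threshold {thr:.1f} sigma: P_detect = {p_d:.2f}, "
          f"false positives/frame ≈ {fp:.1f}")
```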

  • Research Article
  • Citations: 7
  • 10.1017/s0016672306008391
Gene–environment interactions in complex diseases: genetic models and methods for QTL mapping in multiple half-sib populations
  • Sep 15, 2006
  • Genetical Research
  • Haja N Kadarmideen + 2 more

An interval quantitative trait locus (QTL) mapping method for complex polygenic diseases (as binary traits) showing QTL by environment interactions (QEI) was developed for outbred populations on a within-family basis. The main objectives, within the above context, were to investigate selection of genetic models and to compare liability or generalized interval mapping (GIM) and linear regression interval mapping (RIM) methods. Two different genetic models were used: one with main QTL and QEI effects (QEI model) and the other with only a main QTL effect (QTL model). Over 30 types of binary disease data as well as six types of continuous data were simulated and analysed by RIM and GIM. Using table values for significance testing, results show that RIM had an increased false detection rate (FDR) for testing interactions which was attributable to scale effects on the binary scale. GIM did not suffer from a high FDR for testing interactions. The use of empirical thresholds, which effectively means higher thresholds for RIM for testing interactions, could repair this increased FDR for RIM, but such empirical thresholds would have to be derived for each case because the amount of FDR depends on the incidence on the binary scale. RIM still suffered from higher biases (15-100% over- or under-estimation of true values) and high standard errors in QTL variance and location estimates than GIM for QEI models. Hence GIM is recommended for disease QTL mapping with QEI. In the presence of QEI, the model including QEI has more power (20-80% increase) to detect the QTL when the average QTL effect is small (in a situation where the model with a main QTL only is not too powerful). Top-down model selection is proposed in which a full test for QEI is conducted first and then the model is subsequently simplified. Methods and results will be applicable to human, plant and animal QTL mapping experiments.
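
The RIM-versus-GIM contrast is essentially linear regression on a 0/1 trait versus a probit (liability-scale) GLM. Below is a toy single-marker illustration on simulated data (hypothetical effect sizes, not the paper's half-sib interval-mapping design) using statsmodels:

```python
# Toy contrast: OLS on a binary trait (RIM-style) vs. probit GLM (GIM-style).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
qtl = rng.integers(0, 2, n)            # marker genotype 0/1
env = rng.integers(0, 2, n)            # environment class 0/1
liab = 0.4 * qtl + 0.4 * qtl * env + rng.standard_normal(n)
y = (liab > 0.8).astype(float)         # observed binary disease status

X = sm.add_constant(np.column_stack([qtl, env, qtl * env]))
rim = sm.OLS(y, X).fit()               # linear regression on the 0/1 scale
gim = sm.GLM(y, X, family=sm.families.Binomial(
    link=sm.families.links.Probit())).fit()  # liability (probit) scale
print("RIM interaction t:", rim.tvalues[3].round(2))
print("GIM interaction z:", gim.tvalues[3].round(2))
```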

  • Research Article
  • Citations: 14
  • 10.1080/17520843.2016.1210189
The random-walk hypothesis revisited: new evidence on multiple structural breaks in emerging markets
  • Aug 8, 2016
  • Macroeconomics and Finance in Emerging Market Economies
  • Geoffrey Ngene + 2 more

We examine whether stock prices in 18 emerging markets follow random-walk or mean-reversion processes in the presence of sudden and gradual multiple structural breaks. Our tests endogenously determine the structural shifts and are more powerful than either the traditional random-walk (unit root) tests or the single structural break tests. In all emerging markets, we find strong evidence for multiple structural breaks. When we use single break tests, the random-walk hypothesis is rejected. However, when we use tests of double level shifts in the mean and make due allowance for multiple structural breaks, the results are consistent with the random-walk hypothesis in the vast majority of the sampled markets. The evidence proves robust to using price indexes whether denominated in U.S. dollars, in local currencies or in real terms, and also to using fractional integration tests. Our results contradict some previous studies for emerging markets that restrict structural breaks to a one-time shift.
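
As context, the baseline no-break random-walk check that these authors improve upon is an augmented Dickey-Fuller unit-root test. A minimal statsmodels sketch on a simulated series follows; the paper's break-robust procedures go well beyond this:

```python
# Baseline (no-break) unit-root test on a simulated price series.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100  # simulated random walk

stat, pvalue, *_ = adfuller(prices)
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# A high p-value means we cannot reject a unit root (random walk).
```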

  • Research Article
  • 10.2139/ssrn.3559941
Estimating and Testing for Multiple Distributional Structural Breaks via a Characteristic Function Approach
  • Jan 1, 2020
  • SSRN Electronic Journal
  • Zhonghao Fu + 2 more

We estimate and test for multiple structural breaks in distribution with unknown break dates via a characteristic function approach. By minimizing the sum of squared generalized residuals, we can consistently estimate the break fractions. We propose a sup-F type test for structural breaks in distribution and suggest an information criterion and a sequential testing procedure to determine the number of breaks. We further construct a class of derivative tests to gauge the possible source of structural breaks, which is asymptotically more powerful than the nonparametric-based tests for structural breaks. Simulation studies show that our method performs well in determining the appropriate number of breaks and estimating the unknown breaks. Furthermore, the proposed tests have reasonable size and excellent power. In an application to exchange rate returns, our tests are able to detect structural breaks in distribution and locate the break dates. Our derivative tests indicate that the documented breaks appear to occur in variance and higher order moments, but not so often in mean.
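
The basic ingredient, comparing empirical characteristic functions on either side of a candidate break point, can be sketched in a few lines (my simplification; the paper's sup-F statistic and generalized residuals are more involved):

```python
# Rough sketch: ECF discrepancy across a candidate distributional break.
import numpy as np

def ecf(x, t_grid):
    """Empirical characteristic function of sample x on a grid of t."""
    return np.exp(1j * np.outer(t_grid, x)).mean(axis=1)

rng = np.random.default_rng(4)
# Variance break half-way through: mean is unchanged, distribution is not.
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 2, 300)])
t_grid = np.linspace(0.1, 3, 30)

k = 300  # candidate break point
gap = np.abs(ecf(x[:k], t_grid) - ecf(x[k:], t_grid))
print(f"max ECF discrepancy at break candidate {k}: {gap.max():.3f}")
```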

  • Conference Article
  • Citations: 2
  • 10.1109/iolts.2008.29
False Error Study of On-line Soft Error Detection Mechanisms
  • Jul 1, 2008
  • M Kiran Kumar Reddy + 2 more

With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element for an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and most area- and power-efficient, but suffers from very high false detection rates of 1.15 times the actual error rates. We also find that the alternate approaches of triple sampling and the integrate and sample (I&S) method can be designed to have zero false detection rates, but at increased area, power and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power compared to the double sampling method and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints as it requires detection of small voltage swings.
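
Why double sampling produces false errors can be seen with a toy Monte Carlo (my abstraction, not the paper's circuit-level analysis): two noisy samples of the same settled logic value occasionally straddle the decision threshold and disagree even when no soft error occurred.

```python
# Toy model of double-sampling false detections on an error-free node.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
settled = rng.integers(0, 2, n).astype(float)  # true logic values, no errors
noise_sigma = 0.12                             # assumed sampling/coupling noise
s1 = settled + rng.normal(0, noise_sigma, n)   # main-edge sample
s2 = settled + rng.normal(0, noise_sigma, n)   # delayed-edge sample

flagged = (s1 > 0.5) != (s2 > 0.5)             # checker fires when samples disagree
print(f"false detection rate: {flagged.mean():.2e} per cycle (no real errors)")
```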

  • Research Article
  • Citations: 85
  • 10.1016/j.hrthm.2016.05.010
Real-world performance of an enhanced atrial fibrillation detection algorithm in an insertable cardiac monitor
  • May 7, 2016
  • Heart Rhythm
  • Suneet Mittal + 6 more


  • Research Article
  • Citations: 1
  • 10.1007/s10836-010-5153-z
False Error Vulnerability Study of On-line Soft Error Detection Mechanisms
  • Apr 15, 2010
  • Journal of Electronic Testing
  • Kiran Kumar Reddy + 2 more

With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element for an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power and false error detection rates. We find that the Double Sampling checker (used in Razor) is the simplest and most area- and power-efficient, but suffers from very high false detection rates of 1.15 times the actual error rates. We also find that the alternate approaches of Triple Sampling and the Integrate & Sample method can be designed to have zero false detection rates, but at increased area, power and implementation complexity. The Triple Sampling method has about 1.74 times the area and 1.83 times the power compared to the Double Sampling method and also needs a complex clock generation scheme. The Integrate & Sample method needs about 6% more power and is 0.58 times the area of Double Sampling. It comes with more stringent implementation constraints as it requires detection of small voltage swings. We also analyse double transient faults (DTFs) and show that all the methods are prone to DTFs, with the Integrate & Sample method being more vulnerable.

  • Research Article
  • Citations: 16
  • 10.1007/s10776-019-00469-0
Using Time-Location Tags and Watchdog Nodes to Defend Against Node Replication Attack in Mobile Wireless Sensor Networks
  • Oct 26, 2019
  • International Journal of Wireless Information Networks
  • Mojtaba Jamshidi + 3 more

Node replication attack is one of the well-known and dangerous attacks against Wireless Sensor Networks (WSNs), in which an adversary enters the network, randomly searches for and captures one or more normal nodes, extracts the data and keying materials of a captured node, generates several copies of it, and deploys them in the network. In this paper, a novel algorithm using watchdog nodes is proposed to detect replica nodes in mobile WSNs. The main idea of the proposed algorithm is to exploit a predefined maximum speed for sensor nodes together with time-location tags recorded by the watchdog nodes. Watchdog nodes collaborate to measure sensor nodes' speed in the environment, and if they find that a node moves faster than the predefined threshold, they mark it as malicious, because replicas of the same node deployed in different regions of the network imply movement faster than is physically possible. The proposed algorithm is implemented in the J-SIM simulator and its performance is evaluated in terms of false detection and true detection rates through a series of experiments. The results show that the proposed algorithm detects 100% of replica nodes, while the false detection rate remains below 0.5%.
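
The core check is simple: two time-location tags imply a speed, and a speed above the predefined maximum marks a replica suspect. A minimal sketch (hypothetical tag format; the paper's watchdog protocol adds collaborative measurement and reporting):

```python
# Speed check between consecutive (t, x, y) time-location tags.
import math

MAX_SPEED = 10.0  # metres per second; the predefined node speed limit

def implied_speed(tag_a, tag_b):
    """Speed implied by two (t, x, y) time-location tags."""
    (t1, x1, y1), (t2, x2, y2) = tag_a, tag_b
    return math.hypot(x2 - x1, y2 - y1) / abs(t2 - t1)

tags = [(0.0, 0.0, 0.0), (5.0, 20.0, 10.0), (6.0, 400.0, 300.0)]
for a, b in zip(tags, tags[1:]):
    v = implied_speed(a, b)
    print(f"{a} -> {b}: {v:.1f} m/s",
          "REPLICA SUSPECT" if v > MAX_SPEED else "ok")
```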

  • Research Article
  • 10.1007/s10614-024-10651-z
Comparison of the Performance of Structural Break Tests in Stationary and Nonstationary Series: A New Bootstrap Algorithm
  • Jul 8, 2024
  • Computational Economics
  • Özge Çamalan + 3 more

Structural breaks are considered permanent changes in a series, caused mainly by shocks, policy changes, and global crises. Hence, making estimations while ignoring the presence of structural breaks may bias parameter estimates. In this context, it is vital to identify the presence of structural breaks and the break dates in a series to prevent misleading results. Accordingly, the first aim of this study is to compare the performance of unit root tests with structural breaks, allowing for a single break and for multiple structural breaks. For this purpose, a Monte Carlo simulation study was conducted using generated homoscedastic and stationary series of different sample sizes to evaluate the performance of these tests. The simulation study shows that the test of Zivot and Andrews (J Bus Econ Stat 20(1):25–44, 1992) performs best in capturing a single break. The most powerful tests in the multiple-break setting are those developed by Kapetanios (J Time Ser Anal 26(1):123–133, 2005) and Perron (Palgrave Handb Econom 1:278–352, 2006). Alongside the study's primary aim, a new bootstrap algorithm is proposed. This algorithm calculates the optimal number of statistically significant structural breaks under more general assumptions, and therefore guarantees finding an accurate number of optimal breaks in real-world data. In the empirical part, structural breaks in US and Australian real interest rate data resulting from policy changes are examined. The results indicate that the bootstrap sequential break test is the best-performing approach, owing to the general assumptions made to cover real-world data.
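
The single-break benchmark named here, the Zivot-Andrews unit-root test with an endogenously chosen break date, is available in statsmodels. A minimal sketch on a simulated series (not the paper's interest-rate data):

```python
# Zivot-Andrews test on a simulated random walk with one level shift.
import numpy as np
from statsmodels.tsa.stattools import zivot_andrews

rng = np.random.default_rng(6)
y = np.cumsum(rng.normal(0, 1, 300))  # random walk
y[150:] += 8.0                        # one level shift half-way through

stat, pvalue, crit, _, bpidx = zivot_andrews(y, regression="c")
print(f"ZA stat = {stat:.2f}, p = {pvalue:.3f}, estimated break index ≈ {bpidx}")
```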

  • Conference Article
  • Citations: 12
  • 10.1109/iembs.2008.4649302
A novel dual-stage classifier for automatic detection of epileptic seizures
  • Aug 1, 2008
  • Rajeev Yadav + 2 more

In long-term monitoring of electroencephalogram (EEG) for epilepsy, it is crucial for seizure detection systems to have high sensitivity and low false detections to reduce uninteresting and redundant data that may be stored for review by the medical experts. However, a large number of features and the complex decision boundaries for classification of seizures eventually lead to a trade-off between sensitivity and false detection rate (FDR). Thus, no single classifier can fulfill the requirements of high sensitivity with a low FDR and at the same time be a computationally efficient system suitable for real-time application. We present a novel, simple, computationally efficient seizure detection system that enhances sensitivity with a low FDR by means of a dual-stage classifier. The overall system consists of a pre-processing unit, a feature extraction unit and a novel dual-stage classifier. The first stage of the proposed classifier detects all true seizures, but also many false patterns, whereas the second stage minimizes false detections by rejecting patterns that may be artifacts. The performance of the novel seizure detection system has been evaluated on 300 hours of single-channel depth electroencephalogram (SEEG) recordings obtained from fifteen patients. An overall improvement has been observed in terms of sensitivity, specificity and FDR.
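
The dual-stage logic, a permissive first stage followed by a second stage that re-screens only the positives, can be sketched generically with scikit-learn on synthetic features (illustrative models and thresholds, not the paper's SEEG pipeline):

```python
# Generic two-stage cascade: high-sensitivity screen, then re-screening.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=12,
                           weights=[0.95], random_state=0)  # rare positives
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

stage1 = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
stage2 = RandomForestClassifier(random_state=0).fit(Xtr, ytr)

flag1 = stage1.predict_proba(Xte)[:, 1] > 0.1                  # low threshold
flag2 = flag1 & (stage2.predict_proba(Xte)[:, 1] > 0.5)        # re-screen positives

for name, flag in [("stage 1 only", flag1), ("dual stage", flag2)]:
    sens = flag[yte == 1].mean()
    fdr = (flag & (yte == 0)).sum() / max(flag.sum(), 1)
    print(f"{name}: sensitivity = {sens:.2f}, false detection rate = {fdr:.2f}")
```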

  • Conference Article
  • Citations: 4
  • 10.1109/icdiime56946.2022.00010
Image Target Detection Algorithm Based on Computer Vision Technology
  • Jun 1, 2022
  • Haidi Yuan

With the rapid development of intelligent systems and the advent of the era of big data, computer technology continues to advance. Detecting and tracking moving targets in video images is one of the most important research topics in computer vision. It combines many advanced techniques from computing, such as image processing, pattern recognition, automatic control and artificial intelligence, and is widely used in intelligent surveillance and in fields such as traffic control, machine intelligence and medical diagnosis. In intelligent tracking, as applications in complex environments continue to grow, improving the robustness and accuracy of moving-target detection and tracking algorithms has become the focus of ongoing research. This paper studies an image target detection algorithm based on computer vision technology. First, a literature review summarizes the existing problems and algorithms for image target detection based on computer vision; experiments are then used to verify the proposed algorithm and compare its false detection rate against alternatives. According to the experimental results (Figure 1), in experiment 1 the GMM-STMRF algorithm detects targets more accurately than the other methods as measured by the false detection rate: its maximum false detection rate is only 2.3%, while the other algorithms range from 5.4% to 11.1%. The GMM-STMRF algorithm adds multi-frame computation in the time dimension, so its computation time increases; algorithms such as GMM and MeanShift must estimate multi-frame parameters, so their time complexity is also high. In experiment 2, the GMM-STMRF algorithm is again more accurate: its highest false detection rate is only 2.2%, while the other algorithms range from 6.1% to 11.8%; among them, MeanShift is the highest, FCM takes second place, and the Gaussian mixture model follows. According to Tables I, II and III, the false detection rate for images from the video library differs considerably from that for images captured in the real world, which relates to the complexity of the shooting background environment.
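
The Gaussian-mixture ingredient underlying GMM-based detectors is available off the shelf in OpenCV; a minimal background-subtraction sketch follows (hypothetical input file; the paper's GMM-STMRF adds a spatio-temporal Markov random field on top of the per-pixel mixture model):

```python
# Minimal moving-target detection via OpenCV's Gaussian-mixture model.
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input video
mog2 = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = mog2.apply(frame)           # per-pixel foreground mask
    # Morphological opening suppresses isolated false detections.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    cv2.imshow("foreground", mask)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```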

  • Research Article
  • Citations: 48
  • 10.1071/as11027
Detection Thresholds and Bias Correction in Polarized Intensity
  • Jan 1, 2012
  • Publications of the Astronomical Society of Australia
  • Samuel J George + 2 more

Detection thresholds in polarized intensity and polarization bias correction are investigated for surveys where the polarization information is obtained from rotation measure (RM) synthesis. Considering unresolved sources with a single RM, a detection threshold of 8σ_QU applied to the Faraday spectrum will retrieve the RM with a false detection rate less than 10⁻⁴, but polarized intensity is more strongly biased than Ricean statistics suggest. For a detection threshold of 5σ_QU, the false detection rate increases to ∼4%, depending also on λ² coverage and the extent of the Faraday spectrum. Non-Gaussian noise in Stokes Q and U due to imperfect imaging and calibration can be represented by a distribution that is the sum of a Gaussian and an exponential. The non-Gaussian wings of the noise distribution increase the false detection rate in polarized intensity by orders of magnitude. Monte Carlo simulations assuming non-Gaussian noise in Q and U give false detection rates at 8σ_QU similar to Ricean false detection rates at 4.9σ_QU.
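
The Ricean baseline is easy to reproduce: with Gaussian noise in Stokes Q and U and no signal, the polarized intensity P = sqrt(Q² + U²) is Rayleigh distributed, so the single-measurement false detection rate above kσ_QU is exp(-k²/2). A quick Monte Carlo check (Gaussian noise only; the paper's point is that non-Gaussian wings and the many trials across a Faraday spectrum push real rates higher):

```python
# Monte Carlo vs. analytic Rayleigh tail for noise-only polarized intensity.
import numpy as np

rng = np.random.default_rng(7)
n = 5_000_000
q = rng.standard_normal(n)      # noise-only Stokes Q, sigma_QU = 1
u = rng.standard_normal(n)      # noise-only Stokes U
p = np.hypot(q, u)              # polarized intensity P = sqrt(Q^2 + U^2)

for k in (3.0, 4.0, 5.0):
    mc = np.mean(p > k)
    analytic = np.exp(-k**2 / 2)  # Rayleigh tail probability
    print(f"P > {k} sigma_QU: Monte Carlo {mc:.2e}, analytic {analytic:.2e}")
```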

  • Research Article
  • Citations: 32
  • 10.1016/j.yebeh.2014.06.014
Critical evaluation of four different seizure detection systems tested on one patient with focal and generalized tonic and clonic seizures
  • Jul 7, 2014
  • Epilepsy & Behavior
  • Anouk Van De Vel + 2 more

