- Research Article
- 10.1093/ectj/utaf015
- May 23, 2025
- The Econometrics Journal
- Margherita Borella + 4 more
Summary Although health affects many economic outcomes, its dynamics are still poorly understood. We use k-means clustering, a machine learning technique, and data from the Health and Retirement Study to identify health types during middle and old age. We identify five health types: the vigorous resilient, the fair-health resilient, the fair-health vulnerable, the frail resilient, and the frail vulnerable. They are characterized by different initial health and by different health and mortality trajectories. Our five health types account for 84% of the variation in health trajectories and are not explained by observable characteristics, such as age, marital status, education, gender, race, health-related behaviours, and health insurance status, but rather by one’s past health dynamics. We also show that health types are important drivers of health and mortality heterogeneity and dynamics. Our results underscore the importance of better understanding health type formation and of modelling it appropriately to properly evaluate the effects of health on people’s decisions and the implications of policy reforms.
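The clustering step the summary describes can be sketched in a few lines. Everything below is a hypothetical simulation, not the HRS data or the authors' code: stylised trajectory shapes stand in for observed health histories, and the variance-share calculation mirrors the kind of "variation explained" figure the summary reports.

```python
# Minimal sketch of k-means on simulated health trajectories (hypothetical
# data; the paper uses the Health and Retirement Study).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, T = 300, 10                                   # individuals, survey waves

# Three stylised trajectory shapes: stable-high, declining, stable-low.
shapes = np.stack([np.full(T, 0.9),
                   np.linspace(0.8, 0.2, T),
                   np.full(T, 0.3)])
truth = rng.integers(0, 3, n)
X = shapes[truth] + rng.normal(0, 0.05, (n, T))  # noisy health index per wave

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
# Share of trajectory variation explained by the clusters.
share = 1 - km.inertia_ / ((X - X.mean(axis=0)) ** 2).sum()
print(f"variation explained: {share:.2f}")
```

Each individual is clustered on their entire trajectory vector, so types are defined by dynamics, not by health at a single point in time.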
- Research Article
- 10.1093/ectj/utaf014
- May 13, 2025
- The Econometrics Journal
- Martin Biewen + 1 more
Summary We evaluate the distributional effects of a minimum wage introduction based on a dataset with a moderate sample size, but a large number of potential covariates. In this context, the selection of relevant control variables at each distributional threshold is crucial to test hypotheses about the impact of the continuous treatment variable. To this end, we use a post-double-selection logistic distribution regression approach, which allows for uniformly valid inference about the target coefficients of our low-dimensional treatment variables across the entire outcome distribution. Our empirical results show that the minimum wage replaced hourly wages below the minimum threshold, increased monthly earnings in the lower-middle segment, but not at the very bottom of the distribution, and did not significantly affect the distribution of working hours.
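The post-double-selection logic at a single distributional threshold can be sketched as follows. The data, model, and penalty levels are all hypothetical and hand-picked; the paper's procedure uses data-driven penalties and delivers uniformly valid inference across the whole outcome distribution, which this toy version does not attempt.

```python
# Minimal sketch of post-double selection at a single threshold tau
# (hypothetical simulated data; penalties are hand-picked here).
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(1)
n, p = 500, 50
X = rng.normal(size=(n, p))                        # many potential controls
d = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # continuous treatment
wage = d + X[:, 0] - X[:, 2] + rng.normal(size=n)
y = (wage <= 0.0).astype(int)                      # outcome: 1{wage <= tau}

# Selection step 1: lasso of the treatment on the controls.
sel_d = np.flatnonzero(Lasso(alpha=0.1).fit(X, d).coef_)
# Selection step 2: l1-penalised logit of the outcome on the controls.
logit_l1 = LogisticRegression(penalty="l1", C=0.2, solver="liblinear")
sel_y = np.flatnonzero(logit_l1.fit(X, y).coef_[0])
union = sorted(set(sel_d) | set(sel_y))

# Final step: near-unpenalised logit of y on the treatment plus the union.
Z = np.column_stack([d, X[:, union]])
fit = LogisticRegression(penalty="l2", C=1e6).fit(Z, y)
print("controls kept:", union, "| treatment coef:", round(fit.coef_[0][0], 2))
```

Taking the union of the two selected sets is what guards the treatment coefficient against omitted-variable bias from either selection step alone.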
- Research Article
- 10.1093/ectj/utaf005
- Mar 27, 2025
- The Econometrics Journal
- Andrew Wang
Summary This paper proposes a diagnostic method for evaluating the quality of the normal approximation in generalized method of moments (GMM) models through sampling from a quasi-Bayesian parameter distribution. GMM estimates are consistent and asymptotically normal under certain regularity conditions, and researchers often assume normality when conducting inference. However, the literature has identified several violations of the normal approximation, such as in cases with weak instruments or parameters on boundaries. We apply our diagnostic and find meaningful deviations from normality in three well-cited papers, which include examples of well-known violations. We also illustrate one example where the normal approximation works well. Our method is convenient to implement using Markov chain Monte Carlo algorithms and serves as a sanity check for researchers before reporting GMM estimates and standard errors. It enables visualization of the quasi-Bayesian distribution, quantification of deviations from normality, and reporting of alternative estimates and credible sets.
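The mechanics of the diagnostic can be illustrated on the simplest possible GMM model, the scalar mean with moment g(x, θ) = x − θ. This is a simplified stand-in, not the paper's implementation: a random-walk Metropolis sampler draws from the quasi-Bayesian distribution proportional to exp(−n·Q(θ)), and the draws are compared with the asymptotic normal approximation.

```python
# Minimal sketch: sample the quasi-Bayesian distribution exp(-n * Q(theta))
# for moment g(x, theta) = x - theta and compare with the normal approx.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(1.0, 2.0, size=400)
n = x.size

def Q(theta):
    g = np.mean(x - theta)            # sample moment
    return 0.5 * g * g / np.var(x)    # GMM objective with W = 1/var(x)

theta, draws = x.mean(), []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.3)
    if np.log(rng.uniform()) < n * (Q(theta) - Q(prop)):   # MH acceptance
        theta = prop
    draws.append(theta)
draws = np.array(draws[5_000:])       # discard burn-in

# In a well-behaved model the two lines agree closely; a large gap would
# flag a poor normal approximation.
print(f"quasi-posterior: mean {draws.mean():.2f}, sd {draws.std():.3f}")
print(f"normal approx.:  mean {x.mean():.2f}, sd {x.std() / np.sqrt(n):.3f}")
```

In this regular model the quasi-posterior is essentially Gaussian, so the comparison passes; the deviations the paper documents arise in irregular cases such as weak instruments or boundary parameters.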
- Research Article
- 10.1093/ectj/utaf006
- Mar 8, 2025
- The Econometrics Journal
- Jaap H Abbring + 2 more
- Research Article
- 10.1093/ectj/utaf008
- Feb 27, 2025
- The Econometrics Journal
- Sylvain Barde + 2 more
Abstract This paper proposes a Lasso-based estimator that uses information embedded in the Moran statistic to develop a selection procedure called Moran’s I Lasso (Mi-Lasso) to solve the Eigenvector Spatial Filtering (ESF) eigenvector selection problem. ESF uses a subset of eigenvectors from a spatial weights matrix to efficiently account for any omitted spatially correlated terms in a classical linear regression framework, thus eliminating the need for the researcher to explicitly specify the spatially correlated parts of the model. We propose the first ESF procedure that accounts for post-selection inference. We derive performance bounds and show the necessary conditions for consistent eigenvector selection. The key advantages of the proposed estimator are that it is intuitive, theoretically grounded, able to provide robust inference, and substantially faster than Lasso based on cross-validation or any proposed forward stepwise procedure. Our simulation results and an application on house prices demonstrate that Mi-Lasso performs well compared to existing procedures in finite samples.
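The ESF selection problem the abstract describes can be sketched with a generic lasso standing in for the Mi-Lasso step. The penalty below is hand-picked, whereas the paper derives it from the Moran statistic; the ring-shaped weights matrix and omitted spatial term are illustrative inventions.

```python
# Minimal ESF sketch: eigenvectors of the doubly centred spatial weights
# matrix are candidate filters; a lasso picks which enter the regression.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n = 100
W = np.zeros((n, n))                         # ring spatial weights matrix
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

M = np.eye(n) - np.ones((n, n)) / n          # centering matrix
_, evecs = np.linalg.eigh(M @ W @ M)         # candidate spatial filters

x = rng.normal(size=n)
omitted = evecs[:, -1] * 20                  # omitted spatially correlated term
y = 2.0 * x + omitted + rng.normal(0, 0.5, n)

Z = np.column_stack([x, evecs])
fit = Lasso(alpha=0.05).fit(Z, y)
selected = np.flatnonzero(fit.coef_[1:])     # eigenvectors kept by the lasso
print("filters selected:", selected.size, "| slope on x:", round(fit.coef_[0], 2))
```

The selected eigenvectors absorb the omitted spatial term, so the slope on x is recovered without ever specifying the spatial part of the model explicitly.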
- Research Article
- 10.1093/ectj/utaf009
- Feb 21, 2025
- The Econometrics Journal
- Ulrich Hounyo + 2 more
Summary We propose a novel bootstrap-based test for cross-sectional dependence (CD) in panel models, maintaining robustness against serial dependence. While serial dependence is common in panel data, existing tests often assume serial independence. Our cluster wild bootstrap CD test procedure mirrors Pesaran’s original CD test and is very simple to implement. This procedure preserves serial dependence while testing for cross-sectional independence. Theoretical validity is established for our bootstrap-based test, with simulations highlighting its performance in finite samples. Using R&D investment panel data, we illustrate the utility of our bootstrap methods.
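The bootstrap scheme's key property, preserving serial dependence within units while breaking dependence across them, can be sketched on a simulated panel (not the R&D data). Each bootstrap draw multiplies a unit's entire time series by a single Rademacher sign, which is the cluster wild idea in its simplest form.

```python
# Minimal sketch: Pesaran's CD statistic with a cluster wild bootstrap on
# a simulated panel whose units load on a serially correlated common factor.
import numpy as np

rng = np.random.default_rng(4)
N, T = 20, 50

# Common factor => cross-sectional dependence (the null is false here).
f, e = np.zeros(T), np.zeros((N, T))
for t in range(1, T):
    f[t] = 0.7 * f[t - 1] + rng.normal()
    e[:, t] = 0.7 * e[:, t - 1] + rng.normal(size=N)
u = 0.8 * f + e                              # factor loads on every unit

def cd_stat(panel):
    r = np.corrcoef(panel)                   # N x N pairwise correlations
    upper = r[np.triu_indices(N, k=1)]
    return np.sqrt(2 * T / (N * (N - 1))) * upper.sum()

cd = cd_stat(u)
# One Rademacher draw per unit flips that unit's whole series, keeping its
# serial dependence intact while destroying cross-unit correlation.
boot = np.array([cd_stat(u * rng.choice([-1.0, 1.0], size=(N, 1)))
                 for _ in range(499)])
pval = np.mean(np.abs(boot) >= np.abs(cd))
print(f"CD = {cd:.1f}, bootstrap p-value = {pval:.3f}")
```

With a strong common factor the observed CD statistic lies far outside the bootstrap distribution, so the test rejects cross-sectional independence.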
- Research Article
- 10.1093/ectj/utaf004
- Feb 3, 2025
- The Econometrics Journal
- Atsushi Inoue + 2 more
Summary Inference for impulse responses estimated with local projections presents interesting challenges and opportunities. Analysts typically want to assess the precision of individual estimates, explore the dynamic evolution of the response over particular regions, and generally determine whether the impulse generates a response that is any different from the null of no effect. Each of these goals requires a different approach to inference. In this article, we provide an overview of results that have appeared in the literature in the past twenty years along with some new procedures that we introduce here.
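The basic local-projection estimator underlying the article's survey can be sketched on simulated data: the horizon-h response is the slope in a regression of y at t+h on the shock at t. The standard errors below are plain OLS purely for illustration; in practice one of the inference procedures the article surveys (e.g. HAC-robust bands) is needed, because LP residuals are serially correlated.

```python
# Minimal sketch of local projections on simulated data with a true
# impulse response of 0.9**h.
import numpy as np

rng = np.random.default_rng(5)
T, H = 500, 8
shock = rng.normal(size=T)
y = np.array([sum(0.9 ** h * shock[t - h] for h in range(min(t + 1, 40)))
              for t in range(T)])
y += rng.normal(0, 0.1, T)

irf, se = [], []
for h in range(H):
    x, yh = shock[: T - h], y[h:]            # align y_{t+h} with shock_t
    b = x @ yh / (x @ x)                     # OLS slope, no constant
    resid = yh - b * x
    irf.append(b)
    se.append(np.sqrt(resid @ resid / (len(x) - 1) / (x @ x)))
print("IRF:", np.round(irf, 2))
print("SE: ", np.round(se, 2))
```

Each horizon is a separate regression, which is exactly why pointwise, region-wise, and joint-null inference call for different procedures, as the summary notes.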
- Research Article
- 10.1093/ectj/utaf003
- Jan 10, 2025
- The Econometrics Journal
- Zeyang Yu
Summary In an empirical study of persuasion, researchers often use a binary instrument to encourage individuals to consume information and take some action. We show that, with a binary Imbens–Angrist instrumental variable model and the monotone treatment response assumption, it is possible to identify the joint distribution of potential outcomes among compliers. This joint distribution is necessary to identify the percentage of mobilized voters and their statistical characteristics, defined by the moments of the joint distribution of treatment and covariates. Specifically, we develop a method that enables researchers to identify the statistical characteristics of the persuasion types: always-voters, never-voters, and mobilized voters among compliers. These findings extend Abadie’s kappa theorem. We also provide a sharp test for the two sets of identification assumptions. The test boils down to testing whether there exists a non-negative solution to a possibly underdetermined system of linear equations with known coefficients. We apply these results to a voter mobilization experiment.
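The final step of the sharp test, checking whether Ax = b admits a non-negative solution, can be run as a linear programme with a zero objective. The system below is hypothetical; in the paper, A and b are the known coefficients implied by the identification assumptions.

```python
# Minimal sketch: feasibility of A x = b with x >= 0 via a zero-objective LP.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])               # 2 equations, 3 unknowns
b_ok = np.array([1.0, 0.5])                   # e.g. x = (0.5, 0.5, 0) works
b_bad = np.array([-1.0, 0.5])                 # impossible with x >= 0

feasible = linprog(c=np.zeros(3), A_eq=A, b_eq=b_ok, bounds=[(0, None)] * 3)
infeasible = linprog(c=np.zeros(3), A_eq=A, b_eq=b_bad, bounds=[(0, None)] * 3)
print("b_ok admits x >= 0:", feasible.success)
print("b_bad admits x >= 0:", infeasible.success)
```

Because the objective is zero, the solver only has to certify feasibility, which is exactly what the test requires even when the system is underdetermined.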
- Research Article
- 10.1093/ectj/utaf002
- Jan 9, 2025
- The Econometrics Journal
- Martin O’Connell + 2 more
Summary In generalized method of moments (GMM) estimators, moment conditions with additive error terms involve an observed component and a predicted component. If the predicted component is computationally costly to evaluate, it may not be feasible to estimate the model with all the available data. We propose a simple two-sample ‘large–small’ size estimator that uses the full dataset for the computationally cheap observed component, but a reduced sample size for the predicted component. We derive a practical criterion for when the large-small estimator has a lower variance than standard GMM with the reduced sample size. As an alternative, we show how a previously described asymptotically efficient conditional expectation projection based GMM estimator can also be used to reduce computational cost in our setting. We compare the performance of the estimators in a Monte Carlo study of a panel-data random coefficients logit model, and illustrate the use of our estimator in an empirical application to alcohol demand.
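The 'large–small' idea can be sketched on a one-parameter method-of-moments problem: the cheap observed component of the moment uses all N observations, while the predicted component is averaged over only m ≪ N draws. The model and numbers are hypothetical, and np.exp here is a cheap stand-in for a genuinely costly structural prediction.

```python
# Minimal 'large-small' sketch: full sample for the observed component,
# reduced sample for the (stand-in) costly predicted component.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)
N, m = 200_000, 5_000
x = rng.normal(size=N)
y = np.exp(0.5 * x) + rng.normal(0, 0.1, N)   # true parameter: 0.5

x_small = x[:m]                               # reduced sample for prediction

def moment(theta):
    # Observed component over all N; predicted component over m only.
    return y.mean() - np.exp(theta * x_small).mean()

theta_hat = brentq(moment, 0.0, 1.0)          # solve the moment condition
print(f"large-small estimate: {theta_hat:.3f}")
```

The variance trade-off the paper's criterion formalises is visible here: the observed component is averaged over all N observations at no extra cost, while only the predicted component pays the price of the reduced sample.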
- Research Article
- 10.1093/ectj/utaf001
- Jan 6, 2025
- The Econometrics Journal
- Yuta Okamoto
Summary In the presence of sample selection, Lee’s (2009, Review of Economic Studies 76, 1071–102) non-parametric bounds are a popular tool for estimating a treatment effect. However, the Lee bounds rely on the monotonicity assumption, the empirical validity of which is sometimes unclear. Furthermore, the bounds are often regarded as wide and less informative even under monotonicity. To address these issues, this study introduces a stochastic version of the monotonicity assumption alongside a non-parametric distributional shape constraint. The former enhances the robustness of the Lee bounds with respect to monotonicity, while the latter helps tighten these bounds. The obtained bounds do not rely on the exclusion restriction and are root-n consistently estimable, making them practically viable. The potential usefulness of the proposed methods is illustrated by their application to experimental data from an after-school instruction programme.
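The standard Lee (2009) bounds that this paper seeks to robustify and tighten can be sketched on simulated data with monotone selection: the excess selection share among the treated determines how much of the treated outcome distribution is trimmed from each tail.

```python
# Minimal sketch of the standard Lee bounds (simulated data; the true
# effect among the always-observed is 0.5 and should lie inside the bounds).
import numpy as np

rng = np.random.default_rng(6)
n = 4000
d = rng.integers(0, 2, n)                     # random treatment assignment
y = 1.0 + 0.5 * d + rng.normal(size=n)        # outcome
s = rng.uniform(size=n) < 0.6 + 0.2 * d       # monotone selection into sample

y1, y0 = y[(d == 1) & s], y[(d == 0) & s]
s1, s0 = s[d == 1].mean(), s[d == 0].mean()
p = (s1 - s0) / s1                            # excess-selection trimming share

y1_sorted = np.sort(y1)
k = int(len(y1) * p)
lower = y1_sorted[: len(y1) - k].mean() - y0.mean()   # trim top p of treated
upper = y1_sorted[k:].mean() - y0.mean()              # trim bottom p of treated
print(f"Lee bounds: [{lower:.2f}, {upper:.2f}]")
```

The width of the interval grows with the trimming share p, which is why the paper's shape constraint, designed to tighten the bounds, is valuable in practice.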