Two-sample Age-period-cohort Models

  • Abstract
  • Similar Papers
Abstract

Age-period-cohort analysis is often done in the context of two samples, for example samples for women and men, or for two countries. It is of interest to ask whether some time effects could be common across samples. We clarify how the well-known age-period-cohort identification problem for one sample carries over to the two-sample situation. This is done through a reparametrization in terms of parameters that are invariant to the identification issues. The new parametrization shows which hypotheses can be tested and their degrees of freedom. Testable hypotheses can be formulated for the non-linear effects, but not for the linear parts of the individual time effects. This conclusion remains when imposing cross-sample restrictions. The analysis is extended to the mixed-frequency situation, where age and period are measured at different scales. As an empirical illustration, a study of Swiss suicide rates is revisited.
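The identification problem the abstract refers to can be seen directly in a toy design: because cohort is determined by period minus age, the linear parts of the three time effects are exactly collinear. A minimal stdlib sketch (the 3×3 grid and the scale are made-up assumptions, not taken from the paper):

```python
# Toy age-period grid (hypothetical sizes); cohort is determined
# by age and period when both are on the same scale.
ages, periods = range(3), range(3)
cells = [(a, p, p - a) for a in ages for p in periods]  # (age, period, cohort)

# The linear cohort "column" is an exact linear combination of the
# age and period columns: cohort = period - age in every cell. A
# design matrix containing all three linear trends is therefore
# rank deficient, so the three linear slopes are not separately
# identified -- only invariant (non-linear) combinations are.
assert all(c == p - a for a, p, c in cells)
```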

Similar Papers
  • Research Article
  • 10.1016/j.ajodo.2015.03.015
Inference from a sample mean--Part 1.
  • Jun 1, 2015
  • American Journal of Orthodontics and Dentofacial Orthopedics
  • Nikolaos Pandis

  • Research Article
  • Cited by 235
  • 10.1080/00949659608811740
Approximate F-tests of multiple degree of freedom hypotheses in generalized least squares analyses of unbalanced split-plot experiments
  • Jun 1, 1996
  • Journal of Statistical Computation and Simulation
  • Alex Hrong-Tai Fai + 1 more

Approximate t-tests of single degree of freedom hypotheses in generalized least squares (GLS) analyses of mixed linear models using restricted maximum likelihood (REML) estimates of variance components have been previously developed by Giesbrecht and Burns (GB) and by Jeske and Harville (JH), using method-of-moments approximations for the degrees of freedom (df) of the t statistics. This paper proposes approximate F statistics for tests of multiple df hypotheses using one-moment and two-moment approximations, which may be viewed as extensions of the GB and JH methods. The paper focuses specifically on tests of hypotheses concerning the main-plot treatment factor in split-plot experiments with missing data. Simulation results indicate usually satisfactory control of Type I error rates.

  • Research Article
  • 10.33087/jiubj.v15i1.195
The Effect of Discipline and Work Motivation on Performance at the Office of Education and Culture of Batanghari Regency
  • Feb 22, 2017
  • Etty Siswati

This study explains the influence of discipline and work motivation on employee performance at the Office of Education and Culture, Batanghari Regency. The analysis method is quantitative, using path analysis. The significance of the influence of employee discipline (X1) and work motivation (X2), the independent variables, on employee performance, the dependent variable (Y), was tested with F and t tests. Hypothesis testing for the effect of discipline on employee performance shows a t value of 2.503 at 78 degrees of freedom and α = 5%, greater than the t-table value of 1.665, so discipline has a significant individual effect on employee performance. The magnitude of the effect of discipline on employee performance is 8.80% (0.088), larger than the indirect effect of employee education through training on job performance of 4.10% (0.041). Hypothesis testing for the effect of motivation on employee performance shows a t value of 2.163 at 78 degrees of freedom and α = 5%, greater than the t-table value of 1.665. The third hypothesis test, on the joint influence of discipline and motivation on performance, shows an F value of 11.800 at 77 degrees of freedom and α = 5%, greater than the F-table value of 3.960. This means that, taken together, discipline and motivation have a significant influence on performance. Their joint contribution is 0.235, i.e. the independent variables explain 23.5 percent of the changes in employee performance, with the remaining 76.5 percent influenced by factors not included in the model.
Based on these results, it can be concluded that discipline and motivation together significantly affect employee performance. Keywords: Discipline and Motivation, Office of Education and Culture department, Batanghari.

  • Research Article
  • 10.1007/s10463-022-00840-8
Nonparametric inference for additive models estimated via simplified smooth backfitting
  • Jul 15, 2022
  • Annals of the Institute of Statistical Mathematics
  • Suneel Babu Chatla

We investigate hypothesis testing in nonparametric additive models estimated using simplified smooth backfitting (Huang and Yu, Journal of Computational and Graphical Statistics, 28(2), 386–400, 2019). Simplified smooth backfitting achieves oracle properties under regularity conditions and provides closed-form expressions of the estimators that are useful for deriving asymptotic properties. We develop a generalized likelihood ratio (GLR) (Fan, Zhang and Zhang, Annals of Statistics, 29(1), 153–193, 2001) and a loss function (LF) (Hong and Lee, Annals of Statistics, 41(3), 1166–1203, 2013) based testing framework for inference. Under the null hypothesis, both the GLR and LF tests have asymptotically rescaled chi-squared distributions, and both exhibit the Wilks phenomenon, meaning that the scaling constants and degrees of freedom are independent of nuisance parameters. These tests are asymptotically optimal in terms of rates of convergence for nonparametric hypothesis testing. Additionally, the bandwidths that are well suited for model estimation may be useful for testing. We show that in additive models, the LF test is asymptotically more powerful than the GLR test. We use simulations to demonstrate the Wilks phenomenon and the power of the proposed GLR and LF tests, and a real example to illustrate their usefulness.

  • Research Article
  • Cited by 26
  • 10.1161/circulationaha.105.586461
Hypothesis Testing
  • Aug 21, 2006
  • Circulation
  • Roger B Davis + 1 more

In most biomedical research, investigators hypothesize about the relationships of various factors, collect data to test those relationships, and try to draw conclusions about those relationships from the data collected. In many cases, investigators test relationships by comparing the average level of a factor between 2 groups or between 1 group and a standard reference. This framework is as true for understanding the basic role of cardiac myosin binding protein-C phosphorylation in cardiac physiology1 as it is for evaluating non–high-density lipoprotein cholesterol (HDL-C) as a predictor of myocardial infarction in large groups of individuals.2 In this article we describe hypothesis testing, which is the process of drawing conclusions on the basis of statistical testing of collected data, and the specific approach used to test means (or average levels of a collected data element). These concepts are covered in detail in many statistical textbooks at various levels, including Pagano and Gauvreau,3 Zar,4 and Kleinbaum et al.5 The purpose of statistical inference is to draw conclusions about a population on the basis of data obtained from a sample of that population. Hypothesis testing is the process used to evaluate the strength of evidence from the sample and provides a framework for making determinations related to the population, i.e., it provides a method for understanding how reliably one can extrapolate observed findings in a sample under study to the larger population from which the sample was drawn. The investigator formulates a specific hypothesis, evaluates data from the sample, and uses these data to decide whether they support the specific hypothesis. The first step in testing hypotheses is the transformation of the research question into a null hypothesis, H0, and an alternative hypothesis, HA.6 The null and alternative hypotheses are concise statements, usually in …
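The mean-comparison framework described above is typically implemented as a two-sample t test. A minimal stdlib sketch of the Welch (unequal-variance) version, with made-up data (the function name and the sample values are illustrative assumptions):

```python
import math
import statistics

def welch_t(x, y):
    """Welch two-sample t statistic and Satterthwaite degrees of freedom."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)  # sample variances
    se2 = vx / nx + vy / ny                                  # squared standard error
    t = (statistics.mean(x) - statistics.mean(y)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# Hypothetical measurements for two groups
t, df = welch_t([1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0])
# t is negative here (the first group mean is lower); compare |t|
# with the t distribution on df degrees of freedom for a p value.
```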

  • Research Article
  • Cited by 6
  • 10.3970/cmes.2008.027.079
Time Variant Reliability Analysis of Nonlinear Structural Dynamical Systems using combined Monte Carlo Simulations and Asymptotic Extreme Value Theory
  • Apr 1, 2008
  • Cmes-computer Modeling in Engineering & Sciences
  • B Radhika + 2 more

Reliability of nonlinear vibrating systems under stochastic excitations is investigated using a two-stage Monte Carlo simulation strategy. For systems with white noise excitation, the governing equations of motion are interpreted as a set of Ito stochastic differential equations. It is assumed that the probability distribution of the maximum in the steady state response belongs to the basin of attraction of one of the classical asymptotic extreme value distributions. The first stage of the solution strategy consists of selection of the form of the extreme value distribution based on hypothesis tests, and the next stage involves the estimation of parameters of the relevant extreme value distribution. Both these stages are implemented using data from limited Monte Carlo simulations of the system response. The proposed procedure is illustrated with examples of linear/nonlinear systems with single/multiple degrees of freedom, driven by random excitations. The predictions from the proposed method are compared with the results from large-scale Monte Carlo simulations, and also with the classical analytical results, when available, from the theory of out-crossing statistics. Applications of the proposed method for large-scale problems and for vibration data obtained from field/laboratory conditions, are also discussed.

  • Research Article
  • Cited by 1
  • 10.3760/cma.j.issn.0254-6450.2012.06.018
Gene-based principal component logistic regression model and its application on genome-wide association study
  • Jun 1, 2012
  • Chinese journal of epidemiology
  • Hong-Gang Yi + 6 more

To explore the gene-based principal component logistic regression model and its application in genome-wide association studies. Using simulated genome-wide single nucleotide polymorphism (SNP) genotype data, we proposed a practical statistical analysis strategy, the 'principal component logistic regression model', which assesses the association between genetic variation and complex diseases at the gene level. The simulation results showed that the P value of the genes related to the disease was the smallest among all genes. The simulations also indicated that the approach not only reduces the degrees of freedom in hypothesis testing but also better captures the correlations between SNPs. The gene-based principal component logistic regression model appears to have adequate statistical power for testing the association between genes and diseases in genome-wide association studies.

  • Research Article
  • 10.3760/cma.j.issn.0254-6450.2013.06.023
Application of gene-based logistic kernel-machine regression model on studies related to the genome-wide association
  • Jun 1, 2013
  • Chinese journal of epidemiology
  • Hong-Gang Yi + 5 more

To explore the gene-based logistic kernel-machine regression model and its application in genome-wide association studies (GWAS). Using simulated genome-wide single-nucleotide polymorphism (SNP) genotype data, we proposed a practical statistical analysis strategy, the 'logistic kernel-machine regression model', which assesses the association between genetic variation and complex diseases at the gene level. The simulation results showed that the P value of the genes related to the disease was the smallest among all genes. The simulations also indicated that the approach not only borrows information across SNPs grouped within genes, reducing the degrees of freedom in hypothesis testing, but also accommodates covariate effects and complex SNP interactions. The gene-based logistic kernel-machine regression model appears to have adequate statistical power for testing the association between genes and diseases in GWAS.

  • Book Chapter
  • 10.1016/b978-0-12-811555-8.00008-8
Chapter 8 - Hypothesis Testing
  • Jan 1, 2017
  • Statistical Techniques for Transportation Engineering
  • Kumar Molugaram + 1 more

  • Research Article
  • Cited by 4
  • 10.12989/sss.2020.25.1.001
A modified index for damage detection of structures using improved reduction system method
  • Jan 1, 2020
  • Smart Structures and Systems
  • Shahin Lale Arefi + 2 more

The modal strain energy method is one of the most efficient methods for detecting damage in structures. Due to limitations in real-world structures, sensors can only be located at a limited number of degrees of freedom (DOFs), so mode shape values cannot be measured at all DOFs. In this paper, a modified modal strain energy based index (MMSEBI) is introduced to locate damaged elements when a limited number of sensors are used. The proposed MMSEBI is based on the reconstruction of mode shapes using the Improved Reduction System (IRS) method. In the first step, the IRS method estimates the mode shapes at slave degrees of freedom from those at master degrees of freedom. In the second step, the proposed MMSEBI is used to locate damaged elements. To evaluate the efficiency of the proposed method, two numerical examples are considered under different damage patterns, accounting for measurement noise. Moreover, a universal threshold based on statistical hypothesis testing principles is applied to the damage index values. The results show the effectiveness of the proposed MMSEBI for structural damage localization compared with the available damage index MESBI, and demonstrate that the method can serve as a practical strategy for structural damage identification, especially when a limited number of sensors are installed on the structure. Finally, the combination of the MMSEBI and the IRS method provides a reliable tool for accurately identifying the location of damage.

  • Research Article
  • 10.14710/jsmo.v15i1.21241
Supply Chain and the Performance of Event Organizer Companies in Indonesia
  • Nov 30, 2018
  • Edmund Kussumawara + 1 more

The tourism industry in Indonesia is growing, including the number of event organizers across the country. Event organizers, however, face problems such as market competition, which reduces income, leads to missed sales targets, and erodes the value of company assets over time. At the same time, event organizers are required to respond to changes in the market and to meet consumer demand. This study examined the effect of supply chain agility and flexibility on firm performance, with supply chain performance as a mediator: supply chain agility and flexibility are the independent variables, supply chain performance the intervening variable, and firm performance the dependent variable. The population comprises the 632 event organizers that are members of the Association of Indonesian Exhibition Companies (ASPERAPI); a sample of 150 respondents, all event organizers working in the government and BUMN sectors, was surveyed by questionnaire. The hypotheses were tested with structural equation modeling (SEM), developed into a theoretical model and processed with AMOS 24.0. The theoretical model achieved the goodness-of-fit cut-off values: Chi-square = 67.509; probability = 0.236; CMIN/DF = 1.125; GFI = 0.931; AGFI = 0.896; TLI = 0.984; CFI = 0.988; RMSEA = 0.031; degrees of freedom (DF) = 60. Based on these results, the model was declared feasible to use.
The results of hypothesis testing show that supply chain performance has a positive effect on firm performance (0.54), supply chain agility does not affect firm performance (0.18), supply chain flexibility has a positive effect on firm performance (0.31), supply chain agility has a positive effect on supply chain performance (0.30), and supply chain flexibility has a positive effect on supply chain performance (0.54).

  • Research Article
  • Cited by 15467
  • 10.1093/biomet/52.3-4.591
An analysis of variance test for normality (complete samples)
  • Dec 1, 1965
  • Biometrika
  • S S Shapiro + 1 more

The main intent of this paper is to introduce a new statistical procedure for testing a complete sample for normality. The test statistic is obtained by dividing the square of an appropriate linear combination of the sample order statistics by the usual symmetric estimate of variance. This ratio is both scale and origin invariant, and hence the statistic is appropriate for a test of the composite hypothesis of normality. Testing for distributional assumptions in general, and for normality in particular, has been a major area of continuing statistical research, both theoretically and practically. A possible cause of such sustained interest is that many statistical procedures have been derived based on particular distributional assumptions, especially that of normality. Although in many cases the techniques are more robust than the assumptions underlying them, a knowledge that the underlying assumption is incorrect may still temper the use and application of the methods. Moreover, the study of a body of data with the stimulus of a distributional test may encourage consideration of, for example, normalizing transformations and the use of alternate methods such as distribution-free techniques, as well as detection of gross peculiarities such as outliers or errors. The test procedure developed in this paper is defined and some of its analytical properties described in §2. Operational information and tables useful in employing the test are detailed in §3 (which may be read independently of the rest of the paper). Some examples are given in §4. Section 5 consists of an extract from an empirical sampling study of the comparison of the effectiveness of various alternative tests. Discussion and concluding remarks are given in §6. This study was initiated, in part, in an attempt to summarize formally certain indications of probability plots.
In particular, could one condense departures from statistical linearity of probability plots into one or a few 'degrees of freedom' in the manner of the application of analysis of variance in regression analysis? In a probability plot, one can consider the regression of the ordered observations on the expected values of the order statistics from a standardized version of the hypothesized distribution; the plot tends to be linear if the hypothesis is true. Hence a possible method of testing the distributional assumption is by means of an analysis of variance type procedure. Using generalized least squares (the ordered variates are correlated), linear and higher-order …
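The statistic described above, the ratio of the squared weighted combination of order statistics to the usual sum of squares, is the Shapiro-Wilk W. Assuming the standard notation (order statistics x_(i), weights a_i derived from the expected values and covariances of standard normal order statistics), it can be written as:

```latex
W = \frac{\left(\sum_{i=1}^{n} a_i\, x_{(i)}\right)^{2}}{\sum_{i=1}^{n} \left(x_i - \bar{x}\right)^{2}}
```

Values of W near 1 are consistent with an approximately linear probability plot, and hence with normality; small values of W indicate departure from linearity.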

  • Research Article
  • Cited by 174
  • 10.1161/circulationaha.107.714618
A Primer in Longitudinal Data Analysis
  • Nov 4, 2008
  • Circulation
  • Garrett M Fitzmaurice + 1 more

Longitudinal data, comprising repeated measurements of the same individuals over time, arise frequently in cardiology and the biomedical sciences in general. For example, Frison and Pocock1 used repeated measurements of the liver enzyme creatine kinase in serum of cardiac patients to study changes in liver function over a 12-month study period. The main goal, indeed the raison d'être, of a longitudinal study is characterization of changes in the response of interest over time. Ordinarily, changes in the response are also related to selected covariates. For example, Frison and Pocock1 compared changes in creatine kinase between patients randomized to active drug and placebo. The past 25 years have witnessed remarkable developments in statistical methods for the analysis of longitudinal data. Despite these important advances, researchers in the biomedical sciences have been somewhat slow to adopt these methods and often rely on statistical techniques that fail to adequately account for longitudinal study designs. The goal of the present report is to provide an overview of some recently developed methods for longitudinal analyses that are more appropriate, with a focus on 2 methods for continuous responses: the analysis of response profiles and linear mixed-effects models. The analysis of response profiles is better suited to settings with a relatively small number of repeated measurements, obtained on a common set of occasions, whereas linear mixed-effects models are suitable in more general settings. Before describing these methods, we review some of the defining features of longitudinal studies and highlight the main aspects of longitudinal data that complicate their analysis. Covariance Structure: A common feature of repeated measurements on an individual is correlation; that is, knowledge of the value of the response on one occasion provides information about the likely value of the response on a future occasion.
Another common feature of longitudinal data is heterogeneous …

  • Research Article
  • 10.1080/00031305.1962.10479556
The Teacher's Corner
  • Apr 1, 1962
  • The American Statistician
  • Nura D Turner

As an experiment, toss 4 coins 50 times and construct the frequency distribution for the number of heads per toss. That is a typical problem in the usual text in elementary statistics, and it occurs in what is actually the first chapter of the text that I am using. When I have assigned such a problem in the past, after combining the results of all tosses, I have asked my students to keep the information because we can make use of it later on. Later on, however, no one but myself can do the producing, and, besides, by that time the information is somewhat cold. This semester I decided that I would do something about the matter right away, even though the chapters on elementary probability, the binomial distribution, the testing of hypotheses, and the Chi Square distribution were chapters away. At the next class meeting, I presented several things: the results of the 1850 tosses, how to determine the frequencies of heads that we could expect from the 1850 tosses, the definition of the Chi Square function, the approximation to that function, the table showing the probability of obtaining a larger value than a given Chi Square value, the expression 'degrees of freedom', and a brief outline of the steps in the testing of a hypothesis; I then assigned the class the job of determining whether we would have to reject the hypothesis that there was no difference between our set of actually obtained frequencies and the set of expected frequencies.
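The classroom exercise above can be reproduced in a few lines. A stdlib sketch assuming 1850 tosses of 4 fair coins; the observed counts are made up for illustration, not the article's data:

```python
import math

tosses, n_coins = 1850, 4
# Expected number of tosses showing k heads: Binomial(4, 1/2)
expected = [tosses * math.comb(n_coins, k) / 2 ** n_coins
            for k in range(n_coins + 1)]
observed = [110, 432, 720, 470, 118]  # hypothetical classroom counts, summing to 1850

# Pearson chi-square statistic; 5 categories give 5 - 1 = 4 degrees
# of freedom, whose 5% critical value is about 9.488.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
reject = chi2 > 9.488  # reject the fair-coin hypothesis if True
```

Comparing the statistic with the tabulated critical value is exactly the hypothesis-testing outline the class was asked to carry out.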

  • Research Article
  • 10.2308/1558-7991-41.3.204
Editorial Policy
  • Aug 1, 2022
  • AUDITING: A Journal of Practice & Theory
