Abstract
Weighting and variance estimation are two statistical issues involved in survey data analysis for large‐scale assessment programs such as the Higher Education Information and Communication Technology (ICT) Literacy Assessment. Because survey data are acquired by probability sampling, weights are required to draw unbiased or nearly unbiased inferences about the populations when using estimators such as the Horvitz‐Thompson type. Variance estimation provides the basis for reporting errors. The weighting procedure generates weights according to statistical principles consistent with the sampling design. Variance estimation from survey data uses the delete‐k jackknife repeated replication (JRR) approach, which can be adapted to different institutional sampling designs and to dissimilar institutional conditions. To form clusters of k cases, a merge‐dilute algorithm is proposed: it merges the cases of different groups into a queue and then allocates the cases of the queue to form homogeneous clusters of the required sizes. The new algorithm is applied to the ICT sample from an institution that took the fall 2004 trial assessment.
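The abstract's three ingredients can be sketched together: a Horvitz‐Thompson type weighted estimate, a merge‐dilute style allocation of cases into clusters of k, and a delete‐k jackknife variance computed over those clusters. This is a minimal illustration, not the paper's actual procedure: the function names, the random shuffle used as the "merge" step, and the round‐robin dealing used as the "dilute" step are all assumptions made for the sketch.

```python
import random


def ht_total(values, weights):
    # Horvitz-Thompson type estimator: weighted sum of sampled values,
    # where each weight is the inverse of the case's inclusion probability.
    return sum(w * y for w, y in zip(weights, values))


def merge_dilute_clusters(groups, k, seed=0):
    # Hypothetical sketch of the merge-dilute idea: merge the cases of the
    # different groups into one queue, then deal the queue out round-robin
    # so each cluster of roughly k cases mixes cases from the groups.
    queue = [case for group in groups for case in group]
    random.Random(seed).shuffle(queue)                 # "merge" step
    n_clusters = max(1, len(queue) // k)
    return [queue[i::n_clusters] for i in range(n_clusters)]  # "dilute" step


def jrr_variance(clusters, stat):
    # Delete-k jackknife (JRR): recompute the statistic with one cluster of
    # k cases deleted at a time, then combine the replicate deviations with
    # the usual (g - 1) / g factor for g clusters.
    full = stat([c for cl in clusters for c in cl])
    g = len(clusters)
    reps = [stat([c for j, cl in enumerate(clusters) if j != i for c in cl])
            for i in range(g)]
    return (g - 1) / g * sum((r - full) ** 2 for r in reps)
```

For example, with cases stored as (weight, value) pairs, `stat` can wrap `ht_total`, so each jackknife replicate is itself a weighted Horvitz‐Thompson type estimate computed on the retained clusters.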