Classification in high dimensions has drawn attention for the past two decades because Fisher's linear discriminant analysis (LDA) breaks down when the sample size n is smaller than the number of covariates p, i.e. p > n, mostly owing to the singularity of the sample covariance matrix. Rather than modifying the estimators of the sample covariance matrix and sample mean vector used in constructing a classifier, we build two types of high-dimensional classifiers based on data splitting: single data splitting (SDS) and multiple data splitting (MDS). We also introduce a weighted version of the MDS classifier that further improves classification performance, as illustrated in our numerical studies. Each split data set contains fewer covariates than the sample size, so that LDA is applicable, and the resulting classification results are combined so as to minimize the misclassification rate. We provide theoretical justification for the proposed methods by comparing their misclassification rates with that of LDA in high dimensions. We also conduct a wide range of simulations and analyse four microarray data sets, demonstrating that the proposed methods outperform several existing methods or at least yield comparable performance.
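To make the splitting idea concrete, the following is a minimal sketch, not the authors' actual estimator: covariates are partitioned into random blocks smaller than n, Fisher's LDA is fitted on each block, and block-wise predictions are combined by a plain majority vote (a simple stand-in for the paper's combination rule; the function names, block size, and number of splits are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def lda_fit(X, y):
    """Fisher's LDA for two classes (labels 0/1).

    Returns (w, c) for the rule: predict class 1 when w @ x > c.
    The pooled within-class covariance is invertible because each
    covariate block is chosen smaller than the sample size.
    """
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    n0, n1 = int((y == 0).sum()), int((y == 1).sum())
    S = (np.cov(X[y == 0], rowvar=False) * (n0 - 1)
         + np.cov(X[y == 1], rowvar=False) * (n1 - 1)) / (n0 + n1 - 2)
    w = np.linalg.solve(S, m1 - m0)
    c = w @ (m0 + m1) / 2
    return w, c

def mds_classify(X, y, X_new, n_splits=11, block=10):
    """Illustrative MDS-style classifier: LDA on random covariate
    blocks of size `block` < n, combined by majority vote."""
    votes = np.zeros(len(X_new))
    for _ in range(n_splits):
        idx = rng.choice(X.shape[1], size=block, replace=False)
        w, c = lda_fit(X[:, idx], y)
        votes += (X_new[:, idx] @ w > c)
    return (votes > n_splits / 2).astype(int)

# Toy p > n example: n = 60, p = 100, mean shift 0.5 in every covariate.
n, p = 60, 100
y_tr = np.repeat([0, 1], n // 2)
X_tr = rng.standard_normal((n, p)) + 0.5 * y_tr[:, None]
y_te = np.repeat([0, 1], 20)
X_te = rng.standard_normal((40, p)) + 0.5 * y_te[:, None]

pred = mds_classify(X_tr, y_tr, X_te)
accuracy = (pred == y_te).mean()
```

Full-dimensional LDA fails here because the 100-by-100 sample covariance estimated from 60 observations is singular, whereas each 10-covariate block yields a well-posed LDA problem.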