The random vector functional link (RVFL) network has been successfully employed in diverse domains such as computer vision and machine learning, owing to its universal approximation capability. Recently, the shallow RVFL architecture has been extended to deep architectures, in which multiple hidden layers are stacked to extract informative features from the original feature space. These richer features generally make deep models more successful than their shallow counterparts. In this article, we propose an extended feature RVFL (efRVFL) model that is trained over an extended feature space generated analytically from the original feature space. The feature matrix of the proposed efRVFL model comprises three types of features: original features, supervised randomized (newly generated) features, and unsupervised randomized features. With these additional features, the efRVFL model is able to capture nonlinear hidden relationships within the dataset. The efRVFL model is an unstable classifier, and thus its performance can be further improved via ensemble learning. Ensemble models are more stable and accurate and have better generalization performance than single models. Therefore, we also propose an ensemble of extended feature RVFL (en-efRVFL) models. Each base model of en-efRVFL is trained over a different feature space, so that more accurate and diverse base models are generated. The outputs of the base models are integrated via an average voting scheme. Empirical evaluation over <inline-formula> <tex-math notation="LaTeX">$46$</tex-math> </inline-formula> UCI classification datasets demonstrates that the proposed efRVFL and en-efRVFL models outperform the standard RVFL and other compared deep models.
Furthermore, experimental results over <inline-formula> <tex-math notation="LaTeX">$12$</tex-math> </inline-formula> sparse datasets show that the proposed en-efRVFL model outperforms several deep feedforward neural networks (FNNs).
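The core idea described above — augmenting the original features with randomized features, training output weights in closed form, and ensembling base models with different feature spaces via average voting — can be illustrated with a minimal sketch. This is not the authors' exact efRVFL formulation: the hidden-layer sizes, the tanh activation, the ridge parameter, and the use of different random seeds to diversify base models are illustrative assumptions.

```python
import numpy as np

def random_features(X, n_hidden, rng):
    """Randomized hidden features: fixed random weights and biases plus a
    nonlinearity, as in RVFL-style networks (assumed tanh activation)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return np.tanh(X @ W + b)

def fit_rvfl_like(X, Y, n_hidden=50, lam=1e-2, seed=0):
    """Fit output weights over [original | randomized] features in closed
    form via ridge regression (a common RVFL training scheme)."""
    rng = np.random.default_rng(seed)
    H = np.hstack([X, random_features(X, n_hidden, rng)])  # extended feature matrix
    beta = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ Y)
    return beta

def predict_scores(X, beta, n_hidden=50, seed=0):
    """Rebuild the same randomized features (same seed) and score classes."""
    rng = np.random.default_rng(seed)
    H = np.hstack([X, random_features(X, n_hidden, rng)])
    return H @ beta

# Toy usage: a linearly separable two-class problem.
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]  # one-hot targets

# Ensemble: base models built over different randomized feature spaces,
# combined by averaging their class scores (average voting on soft outputs).
scores = np.zeros_like(Y)
for seed in range(5):
    beta = fit_rvfl_like(X, Y, seed=seed)
    scores += predict_scores(X, beta, seed=seed)
pred = scores.argmax(axis=1)
acc = (pred == y).mean()
```

Because the randomized weights are never trained, each base model costs only one regularized least-squares solve, which is what makes averaging many diverse base models cheap.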