We present a novel approach to automatic indoor scene reconstruction from RGB-D images acquired from a single viewpoint by an actively controlled camera. The proposed method selects the next view so that it provides sufficient information for reliable registration. Each candidate view is scored by the percentage of unexplored scene regions falling inside its field of view and by the information content of the region in which the image acquired from that view overlaps one of the previously acquired images. This overlapping region is required to contain surfaces with different orientations, whose alignment yields a reliable estimate of the relative camera orientation. Point correspondences between views are established using the fixed-viewpoint assumption, imprecise information about the relative view orientation, and local surface normals, without relying on features based on texture or distinctive local shape. After the scan is completed, the 3D scene model is constructed by registering the acquired depth images. Two registration algorithms are considered for this purpose: a point-to-plane ICP with point weighting based on the properties of the measurement noise, and TEASER++. The proposed method is evaluated on the synthetic Replica dataset.
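The next-view selection idea described above can be illustrated with a minimal sketch: candidate views are ranked by combining the fraction of unexplored scene inside the field of view with a measure of how varied the surface normal orientations are in the overlap region. The weighting scheme and the diversity measure below are illustrative assumptions, not the paper's actual formulation.

```python
import math

def normal_diversity(normals):
    """Spread of unit surface normals in the overlap region, in [0, 1].

    Returns 0 when all normals are parallel (alignment of the views is
    poorly constrained) and values near 1 when many different surface
    orientations are present. This resultant-length measure is an
    assumed stand-in for the paper's information-content criterion.
    """
    n = len(normals)
    if n == 0:
        return 0.0
    # Mean resultant length: close to 1 when normals agree,
    # close to 0 when orientations are spread out.
    mx = sum(v[0] for v in normals) / n
    my = sum(v[1] for v in normals) / n
    mz = sum(v[2] for v in normals) / n
    return 1.0 - math.sqrt(mx * mx + my * my + mz * mz)

def view_score(unexplored_fraction, overlap_normals, w_explore=0.5):
    """Combine exploration gain with registration reliability.

    `unexplored_fraction` is the share of the candidate view's field of
    view covering unexplored scene regions; `w_explore` is a hypothetical
    trade-off weight between exploration and reliable registration.
    """
    return (w_explore * unexplored_fraction
            + (1.0 - w_explore) * normal_diversity(overlap_normals))
```

Under this sketch, a candidate whose overlap region contains walls, floor, and furniture facing different directions outranks one that overlaps only a single flat wall, matching the abstract's requirement that the overlap contain surfaces with different orientations.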