Omnidirectional images and videos are commonly generated by stitching multiple images or videos, and the quality of omnidirectional stitching strongly influences the quality of experience (QoE) of the generated scenes. Although many studies have investigated omnidirectional image quality assessment (IQA), the evaluation of omnidirectional stitching quality has not been sufficiently explored. In this paper, we focus on IQA for the omnidirectional stitching of dual fisheye images. We first establish an omnidirectional stitching image quality assessment (OSIQA) database, which includes 300 distorted images and 300 corresponding reference images generated from 12 raw scenes. The database covers a variety of distortion types caused by omnidirectional stitching, including color distortion, geometric distortion, blur distortion, and ghosting distortion. A subjective quality assessment study is conducted on the database, and human opinion scores are collected for the distorted omnidirectional images. We then devise a deep-learning-based objective IQA metric termed Attentive Multi-channel IQA Net. In particular, we extend hyper-ResNet with a subnetwork for spatial attention and propose a spatial regularization term. Experimental results show that our proposed full-reference (FR) and no-reference (NR) models achieve the best performance compared with state-of-the-art FR and NR IQA metrics on the OSIQA database. The OSIQA database and the proposed Attentive Multi-channel IQA Net will be released to facilitate future research.
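The abstract names the architecture but does not detail it. As an illustrative aid only, below is a minimal sketch of what a spatial-attention subnetwork on a ResNet backbone, paired with a spatial regularization term, could look like in PyTorch. The module names (SpatialAttention, AttentiveIQANet, tv_regularizer), the ResNet-50 backbone choice, and the total-variation form of the regularizer are all assumptions for illustration, not the authors' published design.

# Hypothetical sketch: spatial-attention subnetwork grafted onto a ResNet
# backbone, with a total-variation penalty standing in for the paper's
# spatial regularization term. Names and the penalty form are assumptions,
# not the authors' published Attentive Multi-channel IQA Net.
import torch
import torch.nn as nn
import torchvision.models as models

class SpatialAttention(nn.Module):
    """Predicts a per-pixel attention map from backbone feature maps."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, in_channels // 4, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.conv(feats)  # shape: (N, 1, H, W)

def tv_regularizer(attn: torch.Tensor) -> torch.Tensor:
    """Total-variation penalty encouraging spatially smooth attention maps
    (one plausible instantiation of a spatial regularization term)."""
    dh = (attn[..., 1:, :] - attn[..., :-1, :]).abs().mean()
    dw = (attn[..., :, 1:] - attn[..., :, :-1]).abs().mean()
    return dh + dw

class AttentiveIQANet(nn.Module):
    """Toy NR-IQA model: ResNet-50 features, attention-weighted pooling,
    then a scalar quality score."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Drop the average-pool and fc layers; keep the convolutional trunk.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.attention = SpatialAttention(2048)
        self.head = nn.Linear(2048, 1)

    def forward(self, x: torch.Tensor):
        feats = self.features(x)                 # (N, 2048, H/32, W/32)
        attn = self.attention(feats)             # (N, 1, H/32, W/32)
        # Attention-weighted global pooling of the feature maps.
        pooled = (feats * attn).sum(dim=(2, 3)) / attn.sum(dim=(2, 3)).clamp_min(1e-6)
        score = self.head(pooled).squeeze(-1)    # (N,) predicted quality
        return score, attn

In such a setup, training would minimize a quality-regression loss plus the smoothness penalty, e.g. loss = mse(score, mos) + lam * tv_regularizer(attn), where mos denotes the collected subjective opinion scores and lam is a hypothetical weighting hyperparameter.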