Manipulating robots have received much attention for the services they can offer, yet object grasping remains challenging, especially under background interference. In this article, a novel two-stream grasping convolutional neural network (CNN) with simultaneous detection and segmentation is proposed. The proposed method cascades an improved simultaneous detection and segmentation network, BlitzNet, with a two-stream grasping CNN, TsGNet. The improved BlitzNet introduces a channel-based attention mechanism and improves both detection accuracy and segmentation accuracy by combining learned multitask loss weightings with background suppression. Based on the obtained bounding box and segmentation mask, the target object is separated from the background, and the corresponding depth map and grayscale map are fed to TsGNet. By adopting depthwise separable convolutions and a purpose-designed global deconvolution network, TsGNet detects the best grasp with only a small number of network parameters. This best grasp, expressed in the pixel coordinate system, is converted to a desired 6-D pose that drives the manipulator to execute the grasp. By combining a grasping CNN with simultaneous detection and segmentation, the proposed method finds the best grasp while adapting well to background clutter. On the Cornell grasping dataset, the proposed TsGNet achieves an image-wise accuracy of 93.13% and an object-wise accuracy of 92.99%. The effectiveness of the proposed method is verified by experiments.
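The parameter savings from the depthwise separable convolutions mentioned above can be illustrated with a simple count: a standard convolution learns a full k x k x C_in kernel per output channel, whereas the depthwise separable variant factors this into a per-channel spatial step plus a 1 x 1 pointwise step. This is a generic sketch of that arithmetic; the layer sizes below are illustrative assumptions, not TsGNet's actual configuration.

```python
def standard_conv_params(k, c_in, c_out):
    # A standard k x k convolution learns one k x k x c_in kernel
    # for each of the c_out output channels.
    return k * k * c_in * c_out


def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k kernel per input channel,
    # followed by a 1 x 1 pointwise convolution that mixes channels.
    return k * k * c_in + c_in * c_out


k, c_in, c_out = 3, 64, 128  # hypothetical layer sizes
print(standard_conv_params(k, c_in, c_out))        # 73728
print(depthwise_separable_params(k, c_in, c_out))  # 8768
```

For this (assumed) 3x3 layer the factorized form needs roughly 8x fewer parameters, which is how such networks stay lightweight.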
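The conversion of a grasp detected in pixel coordinates to a 6-D robot pose can be sketched with the standard pinhole back-projection; the intrinsics and grasp values below are hypothetical, and the fixed roll/pitch assumes a top-down grasp, which the article does not necessarily prescribe.

```python
import math


def pixel_to_camera_point(u, v, depth, fx, fy, cx, cy):
    # Back-project pixel (u, v) with known depth through the pinhole
    # camera model into a 3-D point in the camera frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)


def grasp_to_pose(u, v, depth, angle, fx, fy, cx, cy):
    # A 6-D pose here is (x, y, z, roll, pitch, yaw); for a top-down
    # grasp only the yaw (rotation about the approach axis) follows
    # the predicted grasp angle.
    x, y, z = pixel_to_camera_point(u, v, depth, fx, fy, cx, cy)
    return (x, y, z, 0.0, 0.0, angle)


# Hypothetical camera intrinsics and grasp detection.
pose = grasp_to_pose(u=400, v=300, depth=0.6, angle=math.pi / 4,
                     fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(pose)
```

A real system would additionally transform this camera-frame pose into the robot base frame via a hand-eye calibration matrix before commanding the manipulator.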