Object detection gives a computer the ability to locate and classify objects in an image or video. However, devices with high specifications are needed to achieve good performance. One way to enable low-specification devices to perform better is to offload the computation from the low-specification device to another device with better specifications. This paper investigates the performance of object detection on an all-in-one Android mobile phone versus an Android mobile phone with computation offloaded to an Nvidia Jetson Nano. The experiment performs video surveillance from an Android mobile phone under two scenarios: all-in-one object detection computation on a single Android device, and decoupled object detection computation between an Android device and an Nvidia Jetson Nano. In the decoupled scenario, the Android application sends the video input for object detection over the RTSP/RTMP streaming protocol to the Nvidia Jetson Nano, which acts as the RTSP/RTMP server. The object detection output is then sent back to the Android device to be displayed to the user. The results show that the Android device (Huawei Y7 Pro), which achieves an average of 1.82 FPS and an average computation time of 552 ms on its own, improves significantly when working with the Nvidia Jetson Nano: the average frame rate rises to 10 FPS and the average computation time drops to 95 ms. This means that decoupling object detection computation between an Android device and an Nvidia Jetson Nano using the system presented in this paper successfully improves detection speed.
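
To make the decoupled scenario concrete, the sketch below illustrates the receiving side of such a pipeline on the Jetson Nano; it is a minimal illustration, not the authors' implementation. It assumes an RTSP server (e.g. rtsp-simple-server) running on the Nano to which the Android phone publishes its camera feed at rtsp://127.0.0.1:8554/cam, and a MobileNet-SSD Caffe model stored locally; in the paper's setup the annotated frames are streamed back to the phone, which the sketch replaces with a local preview.

```python
# Minimal sketch of the Jetson Nano side of the decoupled pipeline (not the
# authors' implementation). Assumptions: an RTSP server (e.g. rtsp-simple-server)
# runs on the Nano and the Android phone publishes its camera feed to
# rtsp://127.0.0.1:8554/cam; a MobileNet-SSD Caffe model is available locally.
import cv2

STREAM_URL = "rtsp://127.0.0.1:8554/cam"  # stream published by the Android client (assumed path)
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "mobilenet_ssd.caffemodel")  # assumed model files

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Resize and normalize the frame into the 300x300 blob MobileNet-SSD expects.
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence > 0.5:
            # Scale the normalized box back to frame coordinates and draw it.
            x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    # In the paper's setup the annotated frames are streamed back to the phone;
    # a local preview stands in for that return path here.
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```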