Advances in wearable and machine learning technologies have positioned smartwatches as promising input devices across disciplines. In this study, we propose a user interface for recognizing input commands in an automotive environment. Because additional physical elements that extend the scope of interaction may visually distract drivers, we focus on using body parts as the interaction space. Specifically, we use the lap as an interaction plane: its near-flat surface affords interaction analogous to a touchpad. To substantiate the proposed approach, we collected motion signals with an off-the-shelf smartwatch and trained classifiers on them in a supervised learning setting. As target gestures, we defined ten hand gestures, such as tapping on the lap and folding/spreading all fingers. The experiments yielded a test accuracy of 94%, validating the feasibility of the proposed approach.
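
The pipeline summarized above (motion windows from a wrist-worn sensor, statistical features, a supervised classifier over gesture labels) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic data, the two gesture labels, the feature set, and the nearest-centroid classifier are all assumptions made for the sake of a self-contained example.

```python
# Hypothetical sketch of supervised gesture classification from smartwatch
# motion windows. Data, labels, and classifier are illustrative assumptions,
# not the method or results reported in the paper.
import random
import statistics

random.seed(0)

def make_window(bias):
    # A 50-sample single-axis accelerometer window; `bias` stands in for
    # gesture-specific motion (synthetic data, not real recordings).
    return [random.gauss(bias, 0.3) for _ in range(50)]

def features(window):
    # Simple statistical features commonly used for inertial signals.
    return (statistics.mean(window), statistics.stdev(window))

# Two of the ten gestures as an example; the labels are assumed names.
train = [(features(make_window(0.0)), "tap") for _ in range(20)] + \
        [(features(make_window(1.0)), "spread") for _ in range(20)]

def centroid(label):
    # Mean feature vector of all training windows with this label.
    pts = [f for f, lab in train if lab == label]
    return tuple(statistics.mean(p[i] for p in pts) for i in range(2))

centroids = {lab: centroid(lab) for lab in ("tap", "spread")}

def classify(window):
    # Nearest-centroid rule: assign the label whose centroid is closest
    # in feature space (squared Euclidean distance).
    f = features(window)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(f, centroids[lab])))

# Held-out synthetic test windows and overall accuracy.
test = [(make_window(0.0), "tap") for _ in range(10)] + \
       [(make_window(1.0), "spread") for _ in range(10)]
acc = sum(classify(w) == y for w, y in test) / len(test)
print(f"test accuracy: {acc:.2f}")
```

In practice one would replace the synthetic windows with labeled smartwatch recordings and the nearest-centroid rule with a stronger supervised model, but the train/test structure stays the same.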