A virtual fixture (VF) is a software-implemented constraint that assists a human operator in moving a remote tool along a preferred path via an augmented guidance force, thereby improving teleoperation performance. Teleoperation, however, is typically required in unknown or dynamic environments, which are challenging for VF use. Most researchers have assumed that VFs are pre-defined or generated automatically, but these processes are complicated and unreliable in the unknown environments where teleoperation is in high demand. Recently, a few researchers have addressed this issue by introducing user-interactive methods of generating VFs in unknown environments. These methods, however, are limited to generating a single type of primitive for a single robot tool, and the accuracy of the generated VF depends on the accuracy of the human input, which restricts their applicability. To overcome these limitations, this work introduces a novel interactive VF generation method built on a new representation of VFs as a composition of components. A feature-based user interface allows the human operator to intuitively specify the VF components, and the new representation accommodates a variety of robot tools and actions, making the VF generation process more intuitive and accurate. The proposed method is evaluated with human subjects in three teleoperation experiments: peg-in-hole, pipe-sawing, and pipe-welding. The experimental results show that the VFs generated by the proposed approach yield higher manipulation quality and the lowest total workload in all experiments. In the peg-in-hole task, teleoperation was the safest in terms of failure rate and the force exerted by the robot tool. In the pipe-sawing task, the positioning of the robot tool was the most accurate. In the pipe-welding task, the weld quality was the best in terms of measured tool-trajectory smoothness and visual weld inspection.