This thesis introduces a new framework for data-driven grasping. We assume, with neuropsychological justification, that human grasping is fundamentally example-based rather than rule-based. This means that planning grasps for a novel object will usually reduce to identifying it as similar to known objects with known grasp affordances. Our framework is intended to allow robots to mimic this approach. For robots to succeed in the real world, it is essential that they be able to grasp objects based on realistically available sensor data. However, most existing grasp planners either assume that the robot has access to the full 3D geometry of every object to be grasped, which does not scale, or abandon 3D geometry entirely and plan grasps based on appearance, which is difficult to extend to dexterous hands. The core advantage of our data-driven framework is that it naturally allows grasps to be planned for partially sensed objects. We accomplish this by using the partial sensor data to find similar 3D models, which serve as proxy geometries for grasp planning.

Along with the framework, we present a new set of shape descriptors suitable for matching partial sensor data to similar, but not identical, 3D models. This is in contrast to most previous descriptors for partial matching, which tend to rely on local feature correspondences that often do not exist in our problem setting. In a similar vein, we also present new algorithms for aligning the pose and scale of partial sensor data to the best-matching models, again without assuming that any local correspondences exist.

Our grasp planner makes use of a grasp database, consisting of example grasps for a large number of 3D models. As no such database previously existed, this thesis introduces the Columbia Grasp Database, a freely available collection of hundreds of thousands of grasps for thousands of 3D models using a variety of robotic hands. To construct this database, we modified the Eigengrasp planner, which uses a low-dimensional control space to simplify the grasp search space. We also discuss some limitations of this planner and show how they can be addressed by model decomposition.

Our use of a database of 3D models annotated with precomputed grasps suggests the possibility of annotating the models with other forms of information as well. With this in mind, we show how to leverage noisy user data downloaded from the internet to suggest likely text tags for previously unlabeled 3D models. Although this work has not yet been applied to the problem of grasp planning, we demonstrate a content-based 3D model search engine that implements our automatic labeling algorithm.

The title of this thesis, “Data-Driven Grasping,” represents a vision of robotic grasping that is larger than our particular implementation; the major contribution of this thesis is bridging the worlds of robotics and 3D model similarity search. Content-based search is an important area of research in its own right, but the combination of content-based search and robotics is particularly exciting because it opens up the possibility of using the entire internet as a knowledge base for intelligent robots.