Evaluation of surgical competency has important implications for training new surgeons, accreditation, and improving patient outcomes, yet no method exists to specifically evaluate dissection performance. This project aimed to design a tool to assess the quality of surgical dissection. The Delphi method was used to validate the structure and content of the dissection evaluation: a multi-institutional, multidisciplinary panel of 14 expert surgeons systematically evaluated each element of the dissection tool. Ten blinded reviewers then evaluated 46 de-identified videos of pelvic lymph node and seminal vesicle dissections performed during robot-assisted radical prostatectomy. Inter-rater reliability was calculated using prevalence-adjusted, bias-adjusted kappa (PABAK). The area under the receiver operating characteristic curve (AUC) was used to assess how well overall Dissection Assessment for Robotic Technique (DART) scores, as well as individual domain scores, discriminated trainees (≤100 robotic cases) from experts (>100 cases). Four rounds of the Delphi process achieved language and content validity for 27 of 28 elements. Whether to use a 3- or 5-point scale remained contested, so both scales were evaluated during validation. The 3-point scale yielded higher kappa values for each domain. Experts achieved significantly higher total scores on both scales (3-point, p < 0.001; 5-point, p < 0.001), and the ability to distinguish experience level was equivalent for the total score on either scale (3-point AUC = 0.92, CI 0.82-1.00; 5-point AUC = 0.92, CI 0.83-1.00). We present the development and validation of DART, an objective and reproducible 3-point assessment of surgical tissue dissection. DART can effectively differentiate levels of surgeon experience and can be applied across multiple surgical steps.
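For readers unfamiliar with the two statistics named above, the minimal sketch below illustrates how a prevalence-adjusted, bias-adjusted kappa and a discrimination AUC can be computed for rater scores. The function names and the score values are illustrative assumptions for this sketch only; they are not the study's analysis code or data.

```python
import numpy as np

def pabak(ratings_a, ratings_b, n_categories):
    """Prevalence-adjusted, bias-adjusted kappa (PABAK) for two raters.

    PABAK = (k * Po - 1) / (k - 1), where Po is the observed proportion
    of exact agreement and k is the number of rating categories.
    """
    ratings_a = np.asarray(ratings_a)
    ratings_b = np.asarray(ratings_b)
    p_obs = np.mean(ratings_a == ratings_b)  # observed agreement
    return (n_categories * p_obs - 1) / (n_categories - 1)

def discrimination_auc(expert_scores, trainee_scores):
    """AUC as the probability that a randomly chosen expert video scores
    higher than a randomly chosen trainee video (ties count as 0.5);
    equivalent to the Mann-Whitney U statistic divided by n_e * n_t.
    """
    e = np.asarray(expert_scores, dtype=float)[:, None]
    t = np.asarray(trainee_scores, dtype=float)[None, :]
    wins = (e > t).sum() + 0.5 * (e == t).sum()
    return wins / (e.size * t.size)

# Hypothetical item ratings on a 3-point scale from two raters
rater1 = [3, 2, 3, 1, 2, 3]
rater2 = [3, 2, 2, 1, 2, 3]
print(pabak(rater1, rater2, n_categories=3))  # 0.75

# Hypothetical total scores for expert and trainee videos (not study data)
expert_totals = [25, 27, 24, 26, 28]
trainee_totals = [18, 20, 17, 22, 19]
print(discrimination_auc(expert_totals, trainee_totals))  # 1.0 for this toy data
```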