Current methods for ergonomic assessment often rely on video analysis to estimate wrist postures during occupational tasks. Wearable sensing and machine learning have the potential to automate this tedious process and, in doing so, greatly extend the amount of data available to clinicians and researchers. A method for predicting wrist posture from inertial measurement units placed on the wrist and hand, via a deep convolutional neural network, has been developed. This study quantified the accuracy and reliability of the postures predicted by this system relative to the gold standard of optoelectronic motion capture. Ten participants performed 3 different simulated occupational tasks on 2 occasions while wearing inertial measurement units on the hand and wrist. Data from the occupational task recordings were used to train a convolutional neural network classifier to estimate wrist posture in flexion/extension and radial/ulnar deviation. The model was trained and tested in a leave-one-out cross-validation format. Agreement between the proposed system and optoelectronic motion capture was 65% with κ = 0.41 in flexion/extension and 60% with κ = 0.48 in radial/ulnar deviation. The proposed system can predict wrist posture in flexion/extension and radial/ulnar deviation with accuracy and reliability congruent with published values for human raters, and it can do so in a small fraction of the time required for manual assessment. This offers the opportunity to expand the capabilities of practitioners by eliminating the tedium of manual postural assessment.
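For readers unfamiliar with the two agreement statistics reported above, the following is a minimal illustrative sketch of how percent agreement and Cohen's κ are conventionally computed between two raters (here, the classifier and motion capture). The posture labels and example sequences are hypothetical, not data from this study.

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of frames on which the two raters assign the same posture bin."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is the agreement expected
    by chance from each rater's marginal label frequencies."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-frame posture bins (e.g. extension / neutral / flexion)
mocap = ["neutral", "neutral", "flexion", "flexion"]
imu   = ["neutral", "neutral", "flexion", "neutral"]
print(percent_agreement(mocap, imu))  # 0.75
print(cohens_kappa(mocap, imu))       # 0.5
```

Because κ discounts agreement expected by chance, it can be low even when raw percent agreement is high, which is why both statistics are reported.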