Today, most music performers and vocalists cannot read mensural notation fluently, so conductors and ensembles require modern editions to perform historical music. However, converting printed white mensural sheet music into performable modern notation currently requires elaborate manual editing by specialized music scholars. To close this gap, this article proposes an algorithm that automatically converts scanned score sheets from the sixteenth and seventeenth centuries into a file format readable by current notation software. This comprises the optical recognition of musical symbols and their semantic interpretation, for example note pitch determination. Based on works by the composers Sebastian Ertel and Paul Peuerl, the article presents a case study that combines convolutional neural networks with further computational processing steps into an integrated four-step optical music recognition (OMR) approach. As a result, the musical material used could be correctly converted into the MusicXML format with a recognition rate of 99 per cent. Building on these promising results, we propose extending this work in future research to other materials, epochs, note symbols and advanced semantic analyses.
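To make the semantic-interpretation step named above concrete, the sketch below shows one common way to determine a note's pitch from the vertical position of its detected notehead relative to the staff. This is an illustrative assumption, not the article's implementation: the function name, parameters, and the treble-clef convention (bottom staff line carries E4) are all hypothetical.

```python
def note_pitch(note_y, bottom_line_y, line_spacing):
    """Map a notehead's vertical pixel position to a diatonic pitch name.

    Illustrative sketch only (not the article's actual algorithm).
    Assumes a treble clef, so the bottom staff line carries E4, and
    image coordinates that grow downward. Each half line-spacing step
    upward raises the pitch by one diatonic degree.
    """
    letters = "CDEFGAB"
    # Diatonic steps above the bottom line (negative = below the staff).
    steps = round((bottom_line_y - note_y) / (line_spacing / 2))
    idx = letters.index("E") + steps  # E4 sits on the bottom line
    octave = 4 + idx // 7             # octave increments when passing B -> C
    return f"{letters[idx % 7]}{octave}"
```

For example, with a bottom line at y = 100 and a line spacing of 10 pixels, a notehead at y = 100 maps to E4, one at y = 75 to C5, and one two half-steps below the staff (y = 110) to middle C (C4). In a full pipeline this pitch would then be written into a MusicXML `<pitch>` element.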