Zooplankton play a crucial role in ocean ecology: they form a foundational component of the food chain by consuming phytoplankton or other zooplankton, support numerous marine species, and influence nutrient cycling. The vertical distribution of zooplankton in the ocean is patchy, and its relation to hydrographic conditions cannot be fully deciphered using traditional net casts because of the large depth intervals sampled. The Lightframe On-sight Keyspecies Investigation (LOKI) concentrates zooplankton with a net that leads into a flow-through chamber, where a camera images them. These high-resolution images allow zooplankton taxa to be determined, often to genus or species level, and, in the case of copepods, developmental stages. Each cruise produces a substantial volume of images that would ideally be analyzed on board, a step that currently consumes considerable time and requires internet connectivity to access the EcoTaxa web service. To speed up these analyses, we developed DeepLOKI, an AI-based software framework that uses deep transfer learning with a convolutional neural network backbone and can be applied directly on board. We trained and validated the model on pre-labeled images from four cruises, while images from a fifth cruise were used for testing. The best-performing model, built on a self-supervised pre-trained ResNet18 backbone, achieved an average classification accuracy of 83.9%, surpassing the commonly used EcoTaxa default classification by a factor of two. In summary, we developed a tool that pre-sorts high-resolution black-and-white zooplankton images with high accuracy, which will simplify and speed up the final annotation process. In addition, we provide a user-friendly graphical interface for the DeepLOKI framework that streamlines the steps leading up to classification. Moreover, latent-space analysis of the self-supervised pre-trained ResNet18 backbone can help identify anomalies such as deviations in image parameter settings, which in turn improves quality control of the data. Our methodology is agnostic to the specific imaging system used, such as LOKI, UVP, or ZooScan, as long as a sufficient amount of appropriately labeled data is available for our algorithms to perform the task effectively.
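The abstract describes a transfer-learning setup in which a (self-supervised) pre-trained ResNet18 backbone is fine-tuned for zooplankton classification, with the backbone's latent features reused for anomaly detection. The following is a minimal PyTorch sketch of that general approach, not the authors' DeepLOKI implementation; the class count, image size, channel replication, and use of ImageNet weights in place of self-supervised weights are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumption: number of zooplankton classes (taxa/stages) is not stated in the abstract.
NUM_CLASSES = 30

# Pre-trained ResNet18 backbone. DeepLOKI uses a self-supervised pre-trained
# ResNet18; ImageNet weights stand in here because torchvision ships them directly.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Transfer learning: replace the final fully connected layer with a new
# classification head sized for the zooplankton classes, then fine-tune.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

# For latent-space analysis (e.g. spotting deviations in image parameter
# settings), the 512-dimensional features before the classification head can
# be obtained by swapping the head for an identity mapping.
feature_extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor.fc = nn.Identity()

# Dummy forward pass: grayscale LOKI crops are assumed to be replicated to
# three channels and resized to the standard 224x224 ResNet input.
images = torch.randn(8, 3, 224, 224)
logits = backbone(images)               # shape: (8, NUM_CLASSES)
embeddings = feature_extractor(images)  # shape: (8, 512), latent features
```

In such a setup, the fine-tuned head produces the taxon predictions used for pre-sorting, while the frozen-backbone embeddings can be clustered or inspected to flag images whose acquisition settings deviate from the training data.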