The Liver Imaging Reporting and Data System (LI-RADS) uses multiphasic contrast-enhanced imaging for hepatocellular carcinoma (HCC) diagnosis. The goal of this feasibility study was to establish proof of principle for automating the application of LI-RADS, using a deep learning algorithm trained to automatically segment the liver and delineate HCCs on MRI. In this retrospective single-center study, multiphasic contrast-enhanced MRI examinations with T1-weighted breath-hold sequences, acquired from 2010 to 2018, were used to train a deep convolutional neural network (DCNN) with a U-Net architecture. The U-Net was trained (using 70% of all data), validated (15%), and tested (15%) on 174 patients with 231 lesions. Manual 3D segmentations of the liver and HCCs served as ground truth. The Dice similarity coefficient (DSC) was measured between manual and DCNN segmentations. Postprocessing with a random forest (RF) classifier based on radiomic features and with thresholding (TR) of the mean neural activation was used to reduce the average false positive rate (AFPR). Using a DSC criterion of > 0.2 between individual lesions and their corresponding segmentations, 73% and 75% of HCCs were detected in the validation and test sets, respectively. Validation set AFPRs were 2.81, 0.77, and 0.85 for U-Net, U-Net + RF, and U-Net + TR, respectively. Combining both RF and TR with the U-Net improved the AFPR to 0.62 and 0.75 for the validation and test sets, respectively. Mean DSC between automatically detected lesions (DCNN + RF + TR) and their corresponding manual segmentations was 0.64/0.68 (validation/test), and 0.91/0.91 for liver segmentations. Our DCNN approach can automatically segment the liver and HCCs. This could enable a more workflow-efficient and clinically realistic implementation of LI-RADS.
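To make the evaluation metrics concrete, the following is a minimal illustrative sketch (not the authors' code) of the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|), and of a lesion-wise detection rule in which a ground-truth lesion counts as detected when its DSC against the overlapping predicted components exceeds 0.2. The function names and the use of connected components to match lesions are assumptions for illustration; the paper's exact matching procedure may differ.

```python
# Illustrative sketch only: per-lesion DSC and a > 0.2 detection criterion.
# Assumes binary 3D masks as NumPy arrays; lesion instances via connected components.
import numpy as np
from scipy import ndimage


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); returns 0.0 if both masks are empty."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 0.0


def detected_lesions(gt_mask: np.ndarray, pred_mask: np.ndarray,
                     dsc_threshold: float = 0.2):
    """Hypothetical helper: count ground-truth lesions whose per-lesion DSC
    against the overlapping predicted components exceeds the threshold."""
    gt_labels, n_gt = ndimage.label(gt_mask.astype(bool))
    pred_labels, _ = ndimage.label(pred_mask.astype(bool))
    hits = 0
    for lesion_id in range(1, n_gt + 1):
        lesion = gt_labels == lesion_id
        # Union of predicted components touching this ground-truth lesion.
        overlapping_ids = np.unique(pred_labels[lesion])
        overlapping_ids = overlapping_ids[overlapping_ids > 0]
        pred_union = np.isin(pred_labels, overlapping_ids)
        if dice(lesion, pred_union) > dsc_threshold:
            hits += 1
    return hits, n_gt
```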