Despite recent advances in magnetic resonance imaging (MRI) techniques for musculoskeletal applications, computed tomography (CT) remains the reference modality for the assessment of bone structure. Generative deep learning models, such as U-Nets, have been shown to enable the synthesis of CT-like contrast from MRI images. However, the development and validation of such tools have been hindered by the need for large datasets with paired CT and MRI acquisitions. In this preliminary work, we propose to train a U-Net, a supervised deep learning technique, to generate synthetic CT (sCT) knee images from three-dimensional T1-weighted MRI scans by leveraging a large knee dataset with paired acquisitions. The synthetic CT images were then assessed quantitatively and qualitatively.

A cohort of 249 patients (39.7±16.0 years old, 133 females) underwent both a knee MR examination (3T MAGNETOM Prismafit, Siemens Healthcare, Erlangen, Germany) and a CT scan (Revolution, GE Healthcare). T1-weighted MR images (TR 700 ms, TE 11 ms, 0.5 mm isotropic) were spatially registered to the down-sampled CT data (originally 0.3 mm isotropic). To ensure good voxel-to-voxel correspondence, the 99 best-registered image pairs were selected and split into training (80%), validation (10%) and test (10%) sets. During training, the 100 central slices were extracted in each orientation (axial, coronal and sagittal) and fed to a 2.5D network as stacks of three consecutive MRI slices. At inference, sCT slices were generated for each orientation, and the voxel-wise median across orientations was computed.

Qualitatively, our method successfully generated images with a CT-like contrast exhibiting a satisfactory level of anatomical detail, including bone contours and the femoral and tibial physes.
However, the sCT images looked generally oversmoothed compared with the original CT data, hindering the visualization of some bone trabeculae, especially in the epiphyses. Some anatomical details, such as vascular canals, were not depicted accurately. Quantitatively, our model achieved a mean absolute error of 167±23.1 and 49.5±6.76 Hounsfield units (HU) in bone and soft tissue, respectively. In this preliminary work, we demonstrated the feasibility of generating sCT images from T1-weighted MR data with a good level of anatomical detail and accurate quantitative HU estimation. Future work will focus on reducing the impact of registration errors to further improve model accuracy.
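The 2.5D input construction and the inference-time median fusion described above can be sketched as follows. This is an illustrative NumPy sketch only: the function names, the axis-to-orientation mapping, and the handling of the slice-stack borders are our assumptions, not the study's implementation.

```python
import numpy as np

def make_25d_stacks(volume, axis, n_slices=100):
    """Extract the n central slices along one orientation and group them as
    stacks of three consecutive slices, the 2.5D input format described above.
    `volume` is a 3D array; the axis-to-orientation mapping (0/1/2 ~
    axial/coronal/sagittal) is an assumption for illustration."""
    vol = np.moveaxis(volume, axis, 0)
    start = max((vol.shape[0] - n_slices) // 2, 0)
    central = vol[start:start + n_slices]
    # Each sample pairs a target slice with its two neighbours: shape (3, H, W).
    # Border slices without both neighbours are skipped here (an assumption).
    stacks = [central[i - 1:i + 2] for i in range(1, central.shape[0] - 1)]
    return np.stack(stacks)

def fuse_orientations(sct_axial, sct_coronal, sct_sagittal):
    """Voxel-wise median across the three per-orientation sCT volumes,
    as done at inference time."""
    return np.median(np.stack([sct_axial, sct_coronal, sct_sagittal]), axis=0)
```

The median fusion is robust to an outlier prediction in any single orientation, which is one plausible motivation for preferring it over a voxel-wise mean.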
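The tissue-wise quantitative evaluation can likewise be sketched. The HU thresholds used below to define the bone and soft-tissue masks are illustrative assumptions; the abstract does not specify how the masks were obtained.

```python
import numpy as np

def masked_mae(ct, sct, mask):
    """Mean absolute error (in HU) between reference CT and sCT,
    restricted to a boolean tissue mask."""
    return float(np.abs(ct[mask] - sct[mask]).mean())

def tissue_maes(ct, sct):
    """Per-tissue MAE using simple HU thresholds on the reference CT.
    The cut-offs (bone: HU > 150; soft tissue: -100 < HU <= 150) are
    hypothetical, chosen only to illustrate the evaluation."""
    bone = ct > 150
    soft = (ct > -100) & (ct <= 150)
    return masked_mae(ct, sct, bone), masked_mae(ct, sct, soft)
```

Computing the error inside tissue masks, rather than over the whole volume, keeps the bone figure from being diluted by the much larger soft-tissue and background regions.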