Abstract
Malignant cancers are characterized by microenvironmental heterogeneity, which is a leading cause of genomic heterogeneity. Microenvironmental heterogeneity can be viewed radiographically, wherein non-uniform patterns of enhancement or attenuation can be associated with poor outcome. In order to systematically investigate this, the process of "Radiomics" extracts quantitative Texture, Shape, and Density image feature data that can be mined with patient outcomes data for prognostic, diagnostic and predictive models. The radiomics enterprise is divided into five processes with definable inputs and outputs: (i) image acquisition and reconstruction; (ii) image segmentation and rendering; (iii) feature extraction and qualification; (iv) databases and data sharing; and (v) informatic analyses. Each of these steps poses discrete challenges that have to be met. Even though this field is young, meaningful classifier models have been generated for detecting and diagnosing a number of cancer subtypes. To date, the radiomics effort has focused on agnostic (e.g. texture) and semantic (e.g. spiculated) image features, which quantify indescribable and describable features, respectively. These number in the hundreds and have been shown to have high prognostic value in non-small cell lung cancer (NSCLC), and are being used to classify indeterminate lung nodules in lung screening CTs. More recently, we have been combining orthogonal MR images (e.g. STIR, Diffusion and contrast enhanced T1) to develop data cubes for each voxel, which can then be clustered using fuzzy logic to identify specific sub-tumoral "habitats," each with its own unique combination of perfusion, lipid/water ratio and cellular density. Such habitats have been extracted from brain cancers, sarcomas and prostate cancers and are being shown to relate to underlying pathophysiology and genomics. The biggest current challenge in Radiomics is generating sufficient data with which to build classifier models.
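The habitat approach described above, clustering per-voxel feature vectors from co-registered MR sequences with fuzzy logic, can be sketched as fuzzy c-means over a voxels-by-features matrix. The three-sequence feature vector (STIR, diffusion, contrast-enhanced T1), the choice of three clusters, and all numeric values below are illustrative assumptions, not values from the abstract.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means on rows of X; returns (centers, membership U)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per voxel
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distance from every voxel to every cluster center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)                    # guard against divide-by-zero
        # membership update: u_ik proportional to d2_ik^(-1/(m-1))
        inv = d2 ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy "data cube": 300 voxels, each with a hypothetical 3-value feature
# vector standing in for co-registered STIR / diffusion / CE-T1 intensities,
# drawn from three synthetic habitats.
rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal([0.2, 0.8, 0.1], 0.05, (100, 3)),
    rng.normal([0.7, 0.3, 0.6], 0.05, (100, 3)),
    rng.normal([0.5, 0.5, 0.9], 0.05, (100, 3)),
])
centers, U = fuzzy_c_means(X, n_clusters=3)
habitats = U.argmax(axis=1)                        # hard habitat label per voxel
```

A voxel's row of U gives its graded membership in each habitat; the argmax collapses this to a hard label, but the soft memberships themselves can flag transitional regions between habitats.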
The largest data sets to date have contained only a few hundred patients, when thousands of patient data sets are needed. For example, the National Lung Screening Trial (NLST) has images from over 25,000 subjects that are being parsed with Radiomic features. Even this seemingly large "big data" data set is underpowered once cohorts are assembled with similar histories and covariates are accounted for. A possible solution to the problem of generating sufficiently powered data sets is to capture the radiomic images at the point of care, i.e. by the radiologists while they are performing their evaluations. Such a platform, the "Radiology Reading Room of the Future," allows the radiologist to identify and delineate volumes of interest, from which radiomic features are captured. As tens of millions of CT and MRI scans are performed every year in the U.S., this presents an unparalleled opportunity to capture large repositories of quantitative imaging data during standard-of-care treatment. Citation Format: Robert J. Gillies. The radiology reading room of the future. [abstract]. In: Proceedings of the AACR-NCI-EORTC International Conference: Molecular Targets and Cancer Therapeutics; 2015 Nov 5-9; Boston, MA. Philadelphia (PA): AACR; Mol Cancer Ther 2015;14(12 Suppl 2):Abstract nr CN01-01.