A novel approach for calculating downwelling surface longwave (DSLW) radiation under all-sky conditions is presented. The DSLW model (hereafter DSLW/UMD v2), similarly to its predecessor DSLW/UMD v1, is driven by a combination of Moderate Resolution Imaging Spectroradiometer (MODIS) level-3 cloud parameters and information from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim model. To compute the clear-sky component of DSLW, a two-layer feed-forward artificial neural network with sigmoid hidden neurons and linear output neurons is implemented; it is trained with simulations derived from runs of the Rapid Radiative Transfer Model (RRTM). When computing the cloud contribution to DSLW, the cloud base temperature is estimated using an independent artificial neural network of similar architecture, combined with parameterizations. The cloud base temperature network is trained on spatially and temporally collocated observations from MODIS, the CloudSat Cloud Profiling Radar (CPR), and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP). Daily average estimates of DSLW from 2003 to 2009 are compared against ground measurements from the Baseline Surface Radiation Network (BSRN), giving an overall correlation coefficient of 0.98, a root-mean-square error (RMSE) of 15.84 W m−2, and a bias of −0.39 W m−2. This is an improvement over DSLW/UMD v1, which for the same period has an overall correlation coefficient of 0.97, an RMSE of 17.27 W m−2, and a bias of 0.73 W m−2.
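The architecture named above (a two-layer feed-forward network with sigmoid hidden neurons and linear output neurons) and the reported validation statistics (correlation coefficient, RMSE, bias) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, weights, and input predictors are hypothetical placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TwoLayerANN:
    """Sketch of a two-layer feed-forward ANN: sigmoid hidden
    neurons, linear output neuron (a scalar regression, e.g. of
    clear-sky DSLW in W m^-2). Sizes/weights are illustrative."""

    def __init__(self, n_inputs, n_hidden, rng=None):
        rng = rng or np.random.default_rng(0)
        # Hidden layer weights and biases (sigmoid activation)
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
        self.b1 = np.zeros(n_hidden)
        # Output layer weights and bias (linear activation)
        self.W2 = rng.normal(scale=0.1, size=(1, n_hidden))
        self.b2 = np.zeros(1)

    def forward(self, x):
        h = sigmoid(self.W1 @ x + self.b1)   # sigmoid hidden layer
        return float((self.W2 @ h + self.b2)[0])  # linear output

def validation_stats(pred, obs):
    """Correlation coefficient, RMSE, and bias of predictions
    against ground observations, as reported for the BSRN comparison."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    bias = np.mean(pred - obs)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    r = np.corrcoef(pred, obs)[0, 1]
    return r, rmse, bias

# Example: 5 hypothetical atmospheric predictors -> scalar estimate
net = TwoLayerANN(n_inputs=5, n_hidden=10)
y = net.forward(np.array([0.3, 0.1, 0.7, 0.2, 0.5]))
print(np.isfinite(y))
```

The training details (e.g. the optimizer used on the RRTM-derived simulations) are not specified in the abstract and are omitted here.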