In recent years, a great deal of attention has been focused on the Lasso and the Dantzig selector in high-dimensional linear regression, where the number of variables can be much larger than the sample size. Under a sparsity scenario, several authors (see, e.g., Bickel et al., 2009, Bunea et al., 2007, Candès and Tao, 2007, Donoho et al., 2006, Koltchinskii, 2009, Meinshausen and Yu, 2009, Rosenbaum and Tsybakov, 2010, Tsybakov, 2006, van de Geer, 2008, and Zhang and Huang, 2008) have discussed the relations between the Lasso and the Dantzig selector and derived sparsity oracle inequalities for the prediction risk and bounds on the <svg style="vertical-align:-5.73167pt;width:17.512501px;" id="M1" height="18.3125" version="1.1" viewBox="0 0 17.512501 18.3125" width="17.512501" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg"> <g transform="matrix(.017,-0,0,-.017,.062,11.113)"><path id="x1D43F" d="M559 163q-23 -66 -68 -163h-474l6 26q62 4 79.5 19.5t28.5 75.5l78 409q7 35 8.5 49t-8 25t-24 13t-51.5 5l5 28h266l-6 -28q-65 -5 -79.5 -18t-25.5 -74l-76 -406q-10 -57 14 -75q12 -13 96 -13q93 0 126 29q41 40 76 109z" /></g> <g transform="matrix(.012,-0,0,-.012,9.763,15.188)"><path id="x1D45D" d="M570 304q0 -108 -87 -199q-40 -42 -94.5 -74t-105.5 -43q-41 0 -65 11l-29 -141q-9 -45 -1.5 -58t45.5 -16l26 -2l-5 -29l-241 -10l4 26q51 10 67.5 24t26.5 60l113 520q-54 -20 -89 -41l-7 26q38 28 105 53l11 49q20 25 77 58l8 -7l-17 -77q39 14 102 14q82 0 119 -36
t37 -108zM482 289q0 114 -113 114q-26 0 -66 -7l-70 -327q12 -14 32 -25t39 -11q59 0 118.5 81.5t59.5 174.5z" /></g> </svg> estimation loss. In this paper, we point out that some of these authors overemphasize the role of a particular sparsity condition, and that assumptions based on this condition can lead to unnecessarily weak results. We propose improved assumptions and methods that avoid this sparsity condition. In comparison with the results of Bickel et al. (2009), sharper oracle inequalities for the prediction risk and bounds on the <svg style="vertical-align:-5.73167pt;width:17.512501px;" id="M2" height="18.3125" version="1.1" viewBox="0 0 17.512501 18.3125" width="17.512501" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg"> <g transform="matrix(.017,-0,0,-.017,.062,11.113)"><use xlink:href="#x1D43F"/></g> <g transform="matrix(.012,-0,0,-.012,9.763,15.188)"><use xlink:href="#x1D45D"/></g> </svg> estimation loss are derived when the number of variables can be much larger than the sample size.