With the web now such a universally popular medium, accounting for ever more of people's information-seeking behaviour, and with every move a person makes on the web routinely monitored, web logs offer a treasure trove of data. This data is breathtaking in its sheer volume, detail and potential. Unlike previous computerised logs, such as those of OPACs, web logs can track literally millions of users worldwide; they are not confined to the actions of niche groups with specialised and largely academic needs. The data are of enormous strategic and widespread concern. Unfortunately, the logs turn out to be good on volume and (certain kinds of) detail but poor on precision and attribution. They raise many questions (what actually constitutes use being the biggest of them) but provide far fewer answers. There are also many ways of reading logs. The problems all stem from the fact that, in the case of the web, the virtual user is the computer: resolving use to an individual is extremely difficult. Nevertheless, much can be gleaned from web logs. Before this can be done, however, it is necessary to take precautions. First, do not rely on proprietary log analysis software. Second, employ statistical methods to fill the knowledge gap. Third, try to improve or enhance the data capture through other methods, such as linking subscriber details to the web log. Fourth, bring an understanding of what users do when online to the interpretation of the data. The benefits (and problems) of web log analysis are demonstrated in the light of the experience of evaluating The Times and Sunday Times web sites. These sites are subscribed to by nearly a million people around the globe, and it is the online actions of these people, the new international information consumers, that are the subject of this paper.
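To illustrate the third precaution, enhancing the raw log by linking subscriber details to it, the join can be sketched roughly as follows. This is a minimal illustration only: the Common Log Format, the use of the authuser field as the join key, and the subscriber records shown are all assumptions, not the actual data or method behind the sites discussed in this paper.

```python
import re
from collections import Counter

# Hypothetical sketch: assume a Common Log Format web log and a subscriber
# table keyed on the login name recorded in the log's authuser field.
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ (?P<authuser>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+)'
)

subscribers = {  # illustrative records only
    "jsmith": {"country": "UK", "occupation": "journalist"},
    "mlee": {"country": "US", "occupation": "analyst"},
}

def enrich(log_lines, subscribers):
    """Attach subscriber details to each parsed log entry, where possible."""
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if not m:
            continue  # skip malformed lines rather than guess
        entry = m.groupdict()
        # Many entries will not resolve to a subscriber at all, which is
        # exactly the attribution problem the abstract describes.
        entry["subscriber"] = subscribers.get(entry["authuser"])
        yield entry

sample = [
    '1.2.3.4 - jsmith [01/Jan/2001:10:00:00 +0000] "GET /news HTTP/1.0" 200 5120',
    '5.6.7.8 - - [01/Jan/2001:10:00:05 +0000] "GET /robots.txt HTTP/1.0" 200 68',
]

# Once linked, use can be broken down by subscriber attributes, e.g. country.
by_country = Counter(
    e["subscriber"]["country"]
    for e in enrich(sample, subscribers)
    if e["subscriber"]
)
```

Note that in the sample above only the first line resolves to a subscriber; the second, an anonymous request, falls through the join, which is why statistical methods are still needed to fill the knowledge gap.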