Monitoring and evaluating national and subnational responses in support of children orphaned or made vulnerable by AIDS (OVC) present a number of challenges. Most OVC programmes require multiple interventions across different sectors. In most countries, targeting parameters for identifying ‘vulnerable’ children are fluid and difficult to define or measure concretely. Together with weak national monitoring systems and a lack of development partner or government interest and funding, these factors make the monitoring and evaluation (M&E) of OVC programmes difficult. Of the 15 countries in East and Southern Africa (ESA) that have developed national plans of action (NPAs) in support of OVC, nine have M&E plans. In the early stages of implementation, most countries are piloting the development of data collection systems and tools at community, district and national levels. Alongside a desk review, semi-structured consultations were held with over 100 national OVC M&E stakeholders from 10 countries in ESA as part of a regional capacity-building exercise. This paper reviews their progress, constraints and lessons learnt before making recommendations. Most countries do not have reliable baseline information or a clear ‘denominator’. Consequently, reports on coverage frequently cite absolute numbers rather than percentages, which gives little indication of the scope or efficacy of interventions. Many countries also reported challenges in defining and monitoring the quality of care provided; the consistency and quality of the support children receive are therefore largely unknown. Another major gap is that OVC indicators are rarely included in the data collection systems of key sectors (e.g. health, education). To help overcome these challenges, the following recommendations are made. For national monitoring and evaluation purposes, measurable, discrete, evidence-informed targeting criteria should be employed, whereas for project or programming purposes, targeting criteria should be sufficiently flexible and responsive to community realities and the child's immediate needs. Every opportunity to use and disaggregate existing data should be exploited, and the establishment of parallel data collection mechanisms avoided. Quality assurance mechanisms should be established to monitor service coverage, and no pilot should be undertaken without the design and implementation of an accompanying evaluation. Such evaluations should include a baseline and comparison or control groups, isolating and assessing the outcomes the intervention has on children.