The plasma HIV-RNA level has been used as the primary efficacy measurement in clinical trials evaluating antiretroviral regimens in HIV-infected patients. It is measured by polymerase chain reaction (PCR) assays, which typically have limits of reliable quantification (LoQ). For example, the commercially available Amplicor Standard assay has a reliable range of 400–750,000 copies/mL, while the Ultrasensitive assay has a range of 50–75,000 copies/mL. Values below the lower LoQ are usually reported as categorical results such as “<400 copies/mL” for the Standard assay and “<50 copies/mL” for the Ultrasensitive assay. The Standard assay, with its higher ceiling of 750,000 copies/mL, is typically used first to measure HIV-RNA levels; if it reports “<400 copies/mL”, the plasma sample may be re-tested with the Ultrasensitive assay, whose lower LoQ of 50 copies/mL may allow the HIV-RNA level to be quantified. However, when calculating the change from baseline in log10 HIV-RNA, an important efficacy endpoint, the additional Ultrasensitive measurements are usually ignored for lack of simple and appropriate statistical methods. The conventional approach, which uses only the Standard assay data, may lose information; the naïve approach, which simply replaces “<400 copies/mL” results from the Standard assay with the corresponding Ultrasensitive results, may yield biased estimates because the two assays may differ in variability; and the likelihood-based approach, which can utilize all data from both assays, is computationally intensive and requires a large sample size, which may limit its use in practice. In this paper, we propose a simple imputation approach that, unlike the naïve method, accounts for the different variability of the two assays. A simulation study is used to compare these approaches.
An example from a clinical trial in HIV-infected patients is used to illustrate the proposed approach.
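The censoring and substitution mechanics described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's method: the log-normal error model, the error standard deviations, the impute-at-LoQ convention for the conventional approach, and all numeric values are assumptions chosen for the example.

```python
import math
import random

LOQ_STANDARD = 400   # lower LoQ, Standard assay (copies/mL)
LOQ_ULTRA = 50       # lower LoQ, Ultrasensitive assay (copies/mL)

def log10_change(baseline, followup):
    """Change from baseline in log10 HIV-RNA (copies/mL)."""
    return math.log10(followup) - math.log10(baseline)

# Simulate one patient's follow-up sample measured by both assays.
# Assumed (illustrative) error model: measurements are log-normal around
# the true level, with a larger log10-scale SD for the Standard assay.
random.seed(1)
true_log10 = 2.2                                    # true level ~158 copies/mL
std_meas = 10 ** random.gauss(true_log10, 0.30)     # Standard assay read
ultra_meas = 10 ** random.gauss(true_log10, 0.15)   # Ultrasensitive assay read

# Below its lower LoQ, the Standard assay reports only a category.
std_report = std_meas if std_meas >= LOQ_STANDARD else f"<{LOQ_STANDARD}"

# Conventional approach: use Standard data only, imputing a censored
# result at the LoQ itself (one common convention; others use LoQ/2).
conventional = LOQ_STANDARD if isinstance(std_report, str) else std_report

# Naïve approach: substitute the Ultrasensitive result for a censored
# Standard result, mixing values with different assay variability.
naive = ultra_meas if isinstance(std_report, str) else std_report

baseline = 100_000  # assumed baseline level (copies/mL)
print("conventional:", log10_change(baseline, conventional))
print("naive:       ", log10_change(baseline, naive))
```

Because the two substituted values carry different measurement-error variances, the naïve endpoint mixes observations of unequal precision, which is the source of the bias the proposed imputation approach is designed to address.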