Controlled Query Evaluation (CQE) is a framework for the protection of confidential data, in which a policy given in terms of logic formulae specifies which information must be kept private. Functions called censors filter query answering so that no answers are returned from which a user might infer data protected by the policy. The preferred censors, called optimal censors, are those that conceal only what is necessary, thus maximizing the returned answers. Typically, given a policy over a data or knowledge base, several optimal censors exist.

Our research on CQE is based on the following intuition: confidential data are those that violate the logical assertions specifying the policy, and thus censoring them during query answering is akin to processing queries over inconsistent data, as studied in Consistent Query Answering (CQA). In this paper, we investigate the relationship between CQE and CQA in the context of Description Logic ontologies. We borrow from CQA the idea that query answering is a form of skeptical reasoning, here taking into account all possible optimal censors. This approach leads to a revised notion of CQE that avoids the arbitrary choice of a single censor made by previous research on the topic.

We then study the data complexity of query answering in our CQE framework for conjunctive queries issued over ontologies specified in the popular Description Logics DL-LiteR and EL⊥. In our analysis, we consider several variants of the censor language, that is, the language used by the censor to enforce the policy. Whereas the problem is in general intractable for simple censor languages, we show that for DL-LiteR ontologies it is first-order rewritable, and thus in AC0 in data complexity, for the most expressive censor language we propose.
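To make the skeptical-reasoning intuition concrete, the following is a minimal, purely illustrative sketch in Python, not taken from the paper: it assumes a propositional setting in which the censor language consists of ground ABox atoms, the policy is a set of denial constraints over such atoms, and TBox reasoning is ignored. The atom names and the policy are hypothetical. Optimal censors are the maximal safe views of the data, and, as in the CQA treatment of repairs, an answer is returned only if it holds under every optimal censor.

```python
from itertools import combinations

# Illustrative data (hypothetical): an ABox of ground atoms and a policy
# given as denial constraints, each a set of atoms that must not all be
# disclosed together. The censor language here is the set of ground atoms.
abox = {"P(a)", "Q(a)", "R(a)"}
policy = [{"P(a)", "Q(a)"}]  # denial: P(a) and Q(a) must not both be revealed

def is_safe(view, policy):
    """A view is safe if no denial constraint is fully contained in it."""
    return all(not denial <= view for denial in policy)

def optimal_censors(abox, policy):
    """Maximal (w.r.t. set inclusion) safe subsets of the ABox."""
    safe = [set(s)
            for r in range(len(abox) + 1)
            for s in combinations(sorted(abox), r)
            if is_safe(set(s), policy)]
    return [v for v in safe if not any(v < w for w in safe)]

censors = optimal_censors(abox, policy)
print(censors)  # two optimal censors: {P(a), R(a)} and {Q(a), R(a)}

# Skeptical answers: atoms entailed under every optimal censor.
skeptical = set.intersection(*censors)
print(skeptical)  # {'R(a)'} -- R(a) is returned; P(a) and Q(a) are concealed
```

In this toy scenario neither P(a) nor Q(a) is answered, since each is missing from some optimal censor, while R(a) is returned because it survives in all of them; no single censor has to be chosen arbitrarily.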