Distributed systems built in open, competitive, and highly dynamic pervasive environments are composed of autonomous entities that act and interact in an intelligent and flexible manner so as to achieve their own goals. System entities may be classified into two main categories that are, in principle, in conflict: the service resource requestors (SRRs), which wish to use services and/or exploit resources offered by other system entities, and the service resource providers (SRPs), which offer the requested services/resources. Seeking to maximize their welfare, entities may misbehave, leading to a significant deterioration of the system's performance. This paper presents a computational model for trust establishment based on a reputation mechanism, which incorporates SRRs' direct experiences and information disseminated by witness SRRs on the basis of their past experiences with SRPs. The designed mechanism discriminates between unfair feedback ratings provided intentionally and those provided unintentionally, takes into consideration potential changes in providers' behavior, and weighs recent events more heavily in the evaluation of the overall reputation ratings. The proposed model has been extensively evaluated through simulation experiments. It exhibits good performance, as the reputation computation error introduced by false feedback provision decreases significantly.
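To make the above ingredients concrete, the following Python sketch illustrates one plausible shape of such a reputation computation: a recency-weighted average of direct ratings, witness reports filtered when they deviate strongly from direct experience (a stand-in for unfair-feedback discrimination), and a weighted combination of the two sources. All names and parameters (`half_life`, `tolerance`, `direct_weight`) are illustrative assumptions, not the formulas defined in the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Rating:
    value: float  # satisfaction score in [0, 1]
    age: float    # time elapsed since the interaction

def decayed_mean(ratings, half_life=10.0):
    """Recency-weighted average: older ratings contribute exponentially less."""
    if not ratings:
        return None
    weights = [math.pow(0.5, r.age / half_life) for r in ratings]
    return sum(w * r.value for w, r in zip(weights, ratings)) / sum(weights)

def filter_unfair(witness_means, direct_mean, tolerance=0.3):
    """Discard witness reports that deviate too far from direct experience
    (a simple proxy for detecting unfair feedback)."""
    if direct_mean is None:
        return witness_means
    return [m for m in witness_means if abs(m - direct_mean) <= tolerance]

def reputation(direct, witness_reports, direct_weight=0.7):
    """Combine direct experience with filtered witness information."""
    d = decayed_mean(direct)
    w_means = [m for m in (decayed_mean(r) for r in witness_reports) if m is not None]
    w_means = filter_unfair(w_means, d)
    w = sum(w_means) / len(w_means) if w_means else None
    if d is None:
        return w
    if w is None:
        return d
    return direct_weight * d + (1 - direct_weight) * w

# Example: an SRR evaluates an SRP from its own history plus two witnesses;
# the second witness report deviates strongly and is filtered out as unfair.
direct = [Rating(0.9, 1.0), Rating(0.8, 5.0), Rating(0.4, 20.0)]
witnesses = [[Rating(0.85, 2.0)], [Rating(0.1, 1.0)]]
print(round(reputation(direct, witnesses), 3))
```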