Abstract Dealing with preferences is a ubiquitous part of our daily lives, and thus preference forms an integral part of our everyday decision-making processes. In groups of agents, deliberation often leads an agent to change its preferences under the influence of the opinions of others, depending on how reliable those agents are according to the agent under consideration. Sometimes change also occurs in the opposite direction: the reliability assigned to agents gets updated depending on their preferences. There are various formal studies of preference change based on reliability and/or trust, but not the other way around. This work contributes to the formal study of the latter aspect, i.e. reliability change based on agent preferences. In the process, some policies of preference change based on agent reliabilities are also discussed. We introduce a novel formal study of the relevant changes in agent reliabilities in decision-making scenarios, an integral part of present-day research in artificial intelligence. A two-dimensional hybrid language is proposed to describe such processes, and its axiomatization and decidability are discussed.