Abstract

Deciding what argument to utter during a negotiation is a key part of the strategy for reaching a desired agreement. An agent arguing during a negotiation must decide which arguments are most likely to persuade its opponent. At each negotiation step, the agent selects an argument from a set of candidate arguments by applying some selection policy. In following this policy, the agent observes factors of the negotiation context (for instance, trust in the opponent and the expected utility of the negotiated agreement). Usually, argument selection policies are defined statically. However, since the negotiation context varies from one negotiation to another, a static selection policy is of limited use. The agent should therefore modify its selection policy, adapting it to different negotiation contexts as it gains experience. In this paper, we present a reinforcement learning approach that allows the agent to improve its argument selection effectiveness by updating the argument selection policy. To this end, the argument selection mechanism is represented as a reinforcement learning model. We tested this approach in a multiagent system, in a stationary as well as in a dynamic environment, and obtained promising results in both.
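To make the idea concrete, the kind of mechanism the abstract describes can be sketched as a tabular Q-learning selector. This is only an illustrative sketch under assumed details: the state representation (a coarse context label such as a trust level), the candidate argument names, and the epsilon-greedy policy are all hypothetical, as the abstract does not specify the paper's actual model.

```python
import random
from collections import defaultdict

class ArgumentSelector:
    """Illustrative Q-learning argument selector (hypothetical design).

    State:  a coarse negotiation-context label, e.g. an opponent-trust level.
    Action: which candidate argument to utter at this negotiation step.
    """

    def __init__(self, arguments, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.arguments = list(arguments)
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration rate
        self.q = defaultdict(float)  # (state, argument) -> estimated value

    def select(self, state):
        # Epsilon-greedy selection policy: explore occasionally,
        # otherwise pick the argument with the highest learned value.
        if random.random() < self.epsilon:
            return random.choice(self.arguments)
        return max(self.arguments, key=lambda a: self.q[(state, a)])

    def update(self, state, argument, reward, next_state):
        # Standard Q-learning update, driven by the negotiation outcome
        # (e.g. a reward when the uttered argument persuades the opponent).
        best_next = max(self.q[(next_state, a)] for a in self.arguments)
        key = (state, argument)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```

Because the table is keyed by context, the learned policy differs across negotiation contexts, and continued updates let the selector track a dynamic environment rather than committing to one static policy.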
