Q-learning, SARSA, on/off policy learning

Dong gave a very nice talk (slides here) on reinforcement learning (RL) earlier. I learned Q-learning from an online Berkeley lecture several years ago, but I never had a chance to look into SARSA or to grasp the concept of on-policy learning before. The talk sorted out some of my thoughts.

A background of RL

A typical RL problem involves the interaction between an agent and an environment. As the agent interacts with the environment, it decides on an action based on the current state. Based on this action, the environment transitions stochastically to a new state and returns some reward. The agent’s goal is to devise a policy (i.e., a rule determining what action to take in each state) that maximizes the total expected reward. We usually model this setup as a Markov decision process (MDP), where the probability of reaching the next state s' depends only on the current state s and the current choice of action a (and not on any earlier states); hence it is a Markov model.
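As a concrete sketch of this interaction loop, the snippet below runs one episode in a toy environment. The Env class, its reset/step interface, and the random placeholder policy are illustrative assumptions on my part (loosely in the style of common RL toolkits), not anything from the talk.

    import random

    class Env:
        """A toy environment with three states; state 2 is terminal."""
        def reset(self):
            return 0  # initial state

        def step(self, state, action):
            # The environment transitions stochastically; p(s'|s,a) is hidden
            # from the agent, which only observes the next state and reward.
            next_state = random.choice([0, 1, 2])
            reward = 1.0 if next_state == 2 else 0.0
            done = (next_state == 2)
            return next_state, reward, done

    env = Env()
    state, total_reward, done = env.reset(), 0.0, False
    while not done:
        action = random.choice([0, 1])   # placeholder policy: act randomly
        state, reward, done = env.step(state, action)
        total_reward += reward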

Policy and value functions

A policy \pi is simply a mapping from each state s to an action a. The value function is defined as the expected utility (total reward) of a state under a given policy. There are two common value functions: the state-value function, V_\pi(s), which is the expected utility given the current state and the policy, and the action-value function, Q_\pi(s,a), which is the expected utility given the current state, the current action, and the policy.
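Written out explicitly in the discounted setting, with r_t denoting the reward at time t and \gamma the discount factor, the two value functions are

    \[V_\pi(s) = \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r_t \,\Big|\, s_0 = s\right], \qquad Q_\pi(s,a) = \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r_t \,\Big|\, s_0 = s, a_0 = a\right].\]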

Q-Learning

The main difference between RL and an MDP is that the transition probability p(s'|s,a) is not known in general. If we knew it, along with the reward r(s,a,s') for each combination of current state, action, and next state, we could always compute the expected utility of any state and the appropriate action. In Q-learning, the goal is precisely to estimate the Q function of the optimal policy by repeatedly updating Q as

    \[Q(s,a) \leftarrow (1-\alpha) Q(s,a) + \alpha [r + \gamma (\max_{a'} Q(s',a'))],\]

where 0<\alpha\le 1 controls the degree of exponential smoothing in approximating Q(s,a). When \alpha = 1, no exponential smoothing is used at all.
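In code, the tabular version of this update is short. The sketch below assumes a dictionary-backed Q table and hypothetical names for the learning rate \alpha, the discount \gamma, and the action set; it only illustrates the update rule above, not any particular implementation.

    from collections import defaultdict

    Q = defaultdict(float)      # Q[(state, action)], defaults to 0
    alpha, gamma = 0.1, 0.9     # assumed learning rate and discount factor
    actions = [0, 1]            # assumed (finite) action set

    def q_learning_update(s, a, r, s_next):
        # The target uses the max over next actions, regardless of which
        # action is actually taken at s'.
        target = r + gamma * max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * target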

Note that the equation above only describes how we update Q(s,a); it does not prescribe what action should be taken. Of course, we can exploit the current estimate of Q and select the action that maximizes it. However, the early estimates of Q(s,a) are poor, so pure exploitation works very badly. Instead, we may simply select the action a at random initially and, as the estimate of Q improves, exploit that knowledge and take the action maximizing Q more often. This is the exploration vs. exploitation trade-off. A common scheme is the \epsilon-greedy policy: with probability \epsilon we explore by taking a random action, and otherwise we take the best action according to the current estimate of Q.
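An \epsilon-greedy action selection can be sketched as below, reusing the hypothetical Q table and action set from the previous snippet:

    import random

    def epsilon_greedy(s, Q, actions, epsilon=0.1):
        # Explore with probability epsilon; otherwise exploit the current Q.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])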

On policy/Off-policy

In Q-learning, the Q-value is not updated using the action that is actually taken next; the update takes the maximum over next actions instead. There are two pieces of terminology that sometimes confuse me.

  • Behavior policy: the policy that actually determines the next action
  • Target policy: the policy used to evaluate actions and that we are trying to learn

For Q-learning, the behavior policy and the target policy are apparently not the same, since the action that maximizes Q is not necessarily the action that was actually taken.

SARSA

Given an experience (s,a,r,s',a') (which is why it is called SARSA), we instead update the estimate of Q by

    \[Q(s,a) \leftarrow (1-\alpha) Q(s,a) + \alpha [r + \gamma Q(s',a')].\]

It is on-policy because the data used to update Q (the target policy) comes directly from the behavior policy that was used to generate it.
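The corresponding tabular update differs from the Q-learning sketch above in one line: the target uses the action a' actually selected by the behavior policy rather than the maximizing action. This again reuses the hypothetical Q, alpha, and gamma from the earlier sketch.

    def sarsa_update(s, a, r, s_next, a_next):
        # On-policy target: uses the action actually taken at s'.
        target = r + gamma * Q[(s_next, a_next)]
        Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * target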

Off-policy learning has the advantage of being more flexible and more sample efficient, but it can also be less stable (see [6], for example).

Reference: