Nonlinear Function: Notes tagged with "reinforcement-learning"

35 notes tagged with "reinforcement-learning"

DAgger

The problem of [ exposure bias ] (where an autoregressive sequence model goes off the rails of its training distribution) comes up as a…

Tagged with: #reinforcement-learning

LeCun's Cherry

Yann LeCun's famous cake analogy: "If intelligence is a cake, the bulk of the cake is unsupervised learning, the icing on the cake…

Tagged with: #machine-learning #reinforcement-learning

Q-learning

Following the pattern of [ state values, then action values ], the one-step [ temporal difference ] update for action values is called…

Tagged with: #reinforcement-learning
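A minimal tabular sketch of that one-step action-value update (the hyperparameters here are illustrative, not from the note):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One-step Q-learning: move a guess toward a bootstrapped target."""
    target = r + gamma * np.max(Q[s_next])   # greedy bootstrap over next actions
    Q[s, a] += alpha * (target - Q[s, a])    # move Q(s, a) toward the target
    return Q

# tiny example: 2 states, 2 actions, all values initialized to zero
Q = np.zeros((2, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)
# Q[0, 1] is now 0.1 * (1.0 + 0.99 * 0 - 0) = 0.1
```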

actor-critic

(references: https://julien-vitay.net/deeprl/ActorCritic.html ) Advantage actor-critic The advantage function is a 'centered' version of…

Tagged with: #reinforcement-learning

advantage

In reinforcement learning, the advantage of a state-action pair under a policy is the improvement in value from taking action a (and…

Tagged with: #reinforcement-learning

cooperative inverse reinforcement learning

References: Cooperative Inverse Reinforcement Learning The Off-Switch Game Incorrigibility in the CIRL Framework The CIRL setting models…

Tagged with: #machine-learning #reinforcement-learning #alignment

decision transformer

paper: Chen, Lu, et al. 2021, https://arxiv.org/abs/2106.01345 Trajectories are represented as sequences $(\hat{R}_1, s_1, a_1, \hat{R}_2, s_2, a_2, \ldots)$, where $\hat{R}_t$ is the return-to-go, i.e…

Tagged with: #ai #reinforcement-learning #papers
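The return-to-go conditioning is easy to compute with a backward pass over the rewards; `returns_to_go` is an illustrative helper, not the paper's code (the paper uses undiscounted returns, i.e. gamma = 1):

```python
def returns_to_go(rewards, gamma=1.0):
    """Return-to-go at each step: the (discounted) sum of rewards
    from t onward, used to condition the sequence model."""
    rtg = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

rtg = returns_to_go([1.0, 0.0, 2.0])  # [3.0, 2.0, 2.0]
```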

deep deterministic policy gradient

Deep deterministic policy gradient (DDPG) is an interesting RL algorithm with a somewhat misleading name. Although its name indicates that…

Tagged with: #reinforcement-learning

deep RL notes

Notes from John Schulman's Berkeley course on deep [ reinforcement learning ], Spring 2016. Value vs Policy-based learning Value-based…

Tagged with: #machine-learning #ai #reinforcement-learning

differentiable environments

Maybe a stupid idea, but I wonder if the idea behind differentiable physics simulators (like Brax) can be extended more broadly to rich…

Tagged with: #reinforcement-learning #ai

direct preference optimization

References: Direct Preference Optimization: Your Language Model is Secretly a Reward Model This seems like a compelling reframing of…

Tagged with: #ai #reinforcement-learning

eligibility trace

A few ways to think about eligibility traces: an explicit accounting of credit assignment a [ sufficient statistic ] for the history of the…

Tagged with: #reinforcement-learning
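The "explicit accounting of credit assignment" view can be sketched as one step of tabular TD(λ) with accumulating traces (hyperparameters illustrative):

```python
import numpy as np

def td_lambda_step(V, z, s, r, s_next, alpha=0.1, gamma=0.99, lam=0.9):
    """One step of tabular TD(lambda) with accumulating eligibility traces."""
    delta = r + gamma * V[s_next] - V[s]  # TD error at this step
    z[s] += 1.0                           # mark s as eligible for credit
    V += alpha * delta * z                # all eligible states share the update
    z *= gamma * lam                      # traces decay between steps
    return V, z

V = np.zeros(3)
z = np.zeros(3)
V, z = td_lambda_step(V, z, s=0, r=1.0, s_next=1)
# V[0] moved by alpha * delta; z[0] decayed to gamma * lam = 0.891
```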

experience replay

The state transitions we observe in [ reinforcement learning ] are typically correlated over time, both within a trajectory (obviously) and…

Tagged with: #reinforcement-learning
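A minimal replay buffer sketch; uniform sampling from a fixed-size buffer is what breaks the temporal correlation between consecutive transitions (capacity and batch size here are arbitrary):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size FIFO buffer of transitions with uniform random sampling."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # old transitions fall off the end

    def push(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def sample(self, batch_size):
        # uniform sampling decorrelates the minibatch in time
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(10):
    buf.push(t, 0, 1.0, t + 1, False)
batch = buf.sample(4)
```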

maximum-entropy reinforcement learning

For any reward function $r$ and policy $\pi$, consider the entropy-regularized reward $r(s_t, a_t) + \alpha \mathcal{H}(\pi(\cdot \mid s_t))$. Taking as our objective the (expected, discounted…

Tagged with: #reinforcement-learning
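A quick numerical sketch of the entropy bonus (here α is the temperature; the values are made up): a near-deterministic policy collects less regularized reward than a stochastic one at the same base reward.

```python
import numpy as np

def entropy_regularized_reward(r, policy_probs, alpha=0.1):
    """Base reward plus alpha times the entropy of pi(. | s)."""
    entropy = -np.sum(policy_probs * np.log(policy_probs))
    return r + alpha * entropy

uniform = np.array([0.5, 0.5])        # maximum entropy over 2 actions
greedy = np.array([0.999, 0.001])     # near-deterministic

r_soft_uniform = entropy_regularized_reward(1.0, uniform)  # 1.0 + 0.1 * ln 2
r_soft_greedy = entropy_regularized_reward(1.0, greedy)    # barely above 1.0
```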

mesa optimizer

References: Risks from Learned Optimization in Advanced Machine Learning Systems A [ reinforcement learning ] algorithm attempts to find the…

Tagged with: #ai #reinforcement-learning

normalized advantage function

References: Gu et al., Continuous Deep Q-Learning with Model-based Acceleration (2016). Instead of modeling $Q(s, a)$ directly, we build a network…

Tagged with: #reinforcement-learning
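The key structural trick in NAF is restricting the advantage to a quadratic in the action, so the maximizing action is available in closed form; a sketch with hypothetical network outputs:

```python
import numpy as np

def naf_q(a, mu, P, v):
    """NAF form: Q(s, a) = V(s) - 0.5 (a - mu)^T P (a - mu).
    With P positive definite, argmax_a Q(s, a) = mu(s) in closed form."""
    d = a - mu
    return v - 0.5 * d @ P @ d

# hypothetical outputs of the network at some state s
mu = np.array([0.5, -0.2])  # the greedy action
P = np.eye(2)               # curvature of the advantage
v = 3.0                     # state value V(s)

q_at_mu = naf_q(mu, mu, P, v)          # equals V(s) exactly
q_off = naf_q(mu + 0.1, mu, P, v)      # strictly lower
```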

objectives are big

A very incomplete and maybe nonsensical intuition I want to explore. Classically, people talk about very simple [ reward ] functions like…

Tagged with: #ai #reinforcement-learning #alignment

off-policy

A few (relatively uninformed) thoughts about on- vs off-policy [ reinforcement learning ]. Advantages of on-policy learning: On-policy…

Tagged with: #reinforcement-learning

policy gradient

(see also my [ deep RL notes ] from John Schulman's class several years ago, which cover much of the same material) We can approach…

Tagged with: #reinforcement-learning
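The basic score-function (REINFORCE) gradient estimate can be sketched for a linear-softmax policy over discrete actions; this is a toy illustration, not the note's derivation:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def reinforce_grad(theta, states, actions, returns):
    """Estimate sum_t grad log pi(a_t | s_t) * G_t for a policy
    pi(a | s) = softmax(theta @ s), theta of shape (n_actions, n_features)."""
    grad = np.zeros_like(theta)
    for s, a, G in zip(states, actions, returns):
        probs = softmax(theta @ s)
        # grad of log softmax: indicator(a) * s minus pi-weighted s
        dlog = -np.outer(probs, s)
        dlog[a] += s
        grad += G * dlog
    return grad

theta = np.zeros((2, 3))
s0 = np.array([1.0, 0.0, 0.0])
g = reinforce_grad(theta, [s0], [0], [1.0])
# pushes up the logit of the taken action, down the other
```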

proximal policy optimization

references: paper: https://arxiv.org/abs/1707.06347 great blog post on implementation details: https://iclr-blog-track.github.io/2022/0…

Tagged with: #reinforcement-learning

proof of the policy gradient theorem

The policy gradient theorem says that $\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[\sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R(\tau)\right]$. For simplicity we'll assume a fixed initial state and fixed-length finite trajectories, but the…

Tagged with: #reinforcement-learning
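As a quick orientation, the core step in most proofs is the score-function (log-derivative) trick; a sketch in the finite-horizon, fixed-initial-state setting:

$$
\nabla_\theta \, \mathbb{E}_{\tau \sim p_\theta}[R(\tau)]
= \sum_\tau R(\tau)\, \nabla_\theta p_\theta(\tau)
= \mathbb{E}_{\tau \sim p_\theta}\big[R(\tau)\, \nabla_\theta \log p_\theta(\tau)\big]
= \mathbb{E}_{\tau \sim p_\theta}\Big[R(\tau) \sum_{t} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\Big],
$$

since the transition terms in $\log p_\theta(\tau) = \sum_t \log \pi_\theta(a_t \mid s_t) + \log p(s_{t+1} \mid s_t, a_t)$ carry no $\theta$-dependence.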

reinforcement learning

Note : see [ reinforcement learning notation ] for a guide to the notation I'm attempting to use through my RL notes. Three paradigmatic…

Tagged with: #ai #machine-learning #reinforcement-learning

reinforcement learning notation

There tends to be a lot going on in RL algorithms, with a whole mess of different quantities defined across timesteps. It's useful to try to…

Tagged with: #reinforcement-learning

reward is enough

Silver, Singh, Precup, and Sutton argue that Reward is enough : maximizing a reward signal implies, on its own, a very broad range of…

Tagged with: #ai #reinforcement-learning

reward shaping

Suppose we have a [ Markov decision process ] in which we get reward only at the very end of a long trajectory. Until that point, we have no…

Tagged with: #reinforcement-learning
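The standard remedy for such sparse rewards is potential-based shaping, $F = \gamma\,\Phi(s') - \Phi(s)$, which densifies the signal without changing the optimal policy; a sketch with a hypothetical distance-to-goal potential:

```python
def shaped_reward(r, s, s_next, potential, gamma=0.99):
    """Potential-based shaping: add gamma * Phi(s') - Phi(s) to the reward.
    This form preserves the optimal policy of the underlying MDP."""
    return r + gamma * potential(s_next) - potential(s)

# hypothetical potential: negative distance to a goal at state 10
potential = lambda s: -abs(10 - s)

# even with zero environment reward, moving toward the goal earns a bonus
bonus_toward = shaped_reward(0.0, s=5, s_next=6, potential=potential)
bonus_away = shaped_reward(0.0, s=5, s_next=4, potential=potential)
```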

reward funnel

When thinking about the [ reward ] function for a real-world AI system, there is always some causal process that determines reward. For…

Tagged with: #alignment #reinforcement-learning

rl diagnostics

Things that might be useful to log in a [ reinforcement learning ] algorithm: Return of each trajectory. (summarize as mean/std/min/max…

Tagged with: #reinforcement-learning

reward uncertainty

See also: [ cooperative inverse reinforcement learning ], [ love is value alignment ]

Tagged with: #reinforcement-learning #alignment

rl with proxy objectives

Suppose we want to maximize reward, but we only get a couple of bits of reward data every few hundred or thousand actions, whereas we get…

Tagged with: #reinforcement-learning

state values, then action values

A common pattern in [ reinforcement learning ] pedagogy is to develop some idea first in the context of estimating state values , and then…

Tagged with: #reinforcement-learning

target network

A general issue with [ temporal difference ] learning methods, which 'update a guess towards a guess', is that they can end up 'chasing…

Tagged with: #reinforcement-learning
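One common fix is a slowly tracking target network updated by Polyak averaging, so the bootstrap target doesn't move on every gradient step; a minimal sketch over plain parameter lists (tau is illustrative):

```python
def soft_update(target_params, online_params, tau=0.005):
    """Polyak averaging: the target network slowly tracks the online
    network, stabilizing the 'guess' that TD targets bootstrap from."""
    return [(1 - tau) * t + tau * o for t, o in zip(target_params, online_params)]

target = [0.0, 0.0]
online = [1.0, 2.0]
target = soft_update(target, online)  # moves 0.5% of the way toward online
```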

temporal difference

From David Silver's slides : TD-learning 'updates a guess towards a guess'. Sutton and Barto define the temporal difference error as the…

Tagged with: #reinforcement-learning
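The Sutton-and-Barto TD error, and the resulting TD(0) value update, in a few lines (hyperparameters illustrative):

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """TD(0): the TD error is the gap between the bootstrapped one-step
    target and the current estimate; 'update a guess towards a guess'."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

V = {0: 0.0, 1: 0.5}
delta = td0_update(V, s=0, r=1.0, s_next=1)
# delta = 1.0 + 0.99 * 0.5 - 0.0 = 1.495; V[0] moves to 0.1495
```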

trust region policy optimization

(notes loosely based on the Berkeley deep RL course lecture ) Setup: RL with policy gradients The basic setup is that we want to optimize…

Tagged with: #reinforcement-learning

values all the way down

The standard [ Markov decision process ] formalism includes a reward function $r$; the total (discounted) reward across a trajectory is its…

Tagged with: #reinforcement-learning

weighted importance sampling

Reference: Mahmood et al., 2014. Weighted importance sampling for off-policy learning with linear function approximation Here's a situation…

Tagged with: #reinforcement-learning
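The ordinary vs. weighted estimators side by side, with made-up importance ratios; when all returns are equal, WIS recovers the true value exactly while the ordinary estimator is thrown off by a single large ratio:

```python
import numpy as np

def ois(returns, rhos):
    """Ordinary importance sampling: unbiased but high variance."""
    return np.mean(rhos * returns)

def wis(returns, rhos):
    """Weighted importance sampling: biased but lower variance;
    normalizes by the total importance weight."""
    return np.sum(rhos * returns) / np.sum(rhos)

returns = np.array([1.0, 1.0, 1.0])   # every trajectory returned 1.0
rhos = np.array([0.1, 0.1, 10.0])     # per-trajectory ratios pi/mu
# wis(...) gives exactly 1.0; ois(...) gives 3.4
```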
