On exact energy guided flow matching for offline reinforcement learning

Date:

Guided generative models are pivotal in advancing the applications of generative modeling. In this talk, I will explore energy guidance in flow matching models, a generalized formulation that extends beyond conventional diffusion models. Energy guidance encourages a generative model to produce samples from the target data distribution that carry higher energy. I will introduce energy-weighted flow matching, a method that yields a closed-form solution for continuous normalizing flows (CNFs), enabling efficient implementation and offering new theoretical insights. In the second half of the talk, I will discuss extending this approach to offline reinforcement learning through Q-weighted iterative policy optimization, which shows notable performance improvements across various offline RL tasks.
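As a rough sketch of the idea (notation is mine and not necessarily the speaker's): energy guidance can be read as sampling from an energy-tilted version of the data distribution, and an energy-weighted flow matching objective reweights the standard conditional flow matching loss by the exponentiated energy of each data sample. In the offline RL extension, the energy would plausibly be instantiated by the Q-function.

```latex
% Hedged sketch, not the speaker's exact formulation.
% p_0: data distribution, E: energy, \beta: inverse temperature,
% u_t: conditional target velocity, v_\theta: learned velocity field.
p_E(x) \;\propto\; p_0(x)\, e^{\beta E(x)}
\qquad
\mathcal{L}(\theta)
= \mathbb{E}_{t,\; x_1 \sim p_0,\; x_t}
\left[
  \frac{e^{\beta E(x_1)}}{\mathbb{E}_{x_1' \sim p_0}\, e^{\beta E(x_1')}}
  \,\bigl\| v_\theta(x_t, t) - u_t(x_t \mid x_1) \bigr\|^2
\right]
% Offline RL instantiation (assumed): E(x) = Q(s, a) for state-action pairs.
```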

This talk was also given in Dr. Yun Li and Dr. Di Wu’s group at UNC.