The paper "TrajDeleter: Enabling Trajectory Forgetting in Offline Reinforcement Learning Agents" was presented at the NDSS Symposium. Its authors are Chen Gong, Kecen Li, Jin Yao, and Tianhao Wang, from the University of Virginia and the Chinese Academy of Sciences.

Offline reinforcement learning trains an agent on a pre-collected dataset rather than through live interaction. When specific trajectories must later be removed from that dataset, their influence on the already-trained agent must be eliminated as well. The authors propose TRAJDELETER, a practical approach to trajectory unlearning for offline RL agents: it guides the agent to exhibit deteriorating performance when it encounters states associated with the unlearned trajectories, while ensuring the agent maintains its original performance on the remaining trajectories. They also introduce TRAJAUDITOR, a method for evaluating whether TRAJDELETER has successfully eliminated the influence of the targeted trajectories from the agent.

Experiments on six offline RL algorithms and three tasks demonstrate TRAJDELETER's effectiveness: it requires only about 1.5% of the time needed to retrain from scratch, unlearns an average of 94.8% of the targeted trajectories, and the resulting agents still perform well in actual environment interactions after unlearning.

The Network and Distributed System Security Symposium (NDSS) is a platform that fosters information exchange among researchers and practitioners of network and distributed system security, and aims to encourage the Internet community to apply, deploy, and advance available security technologies. A recording of the presentation is available on the NDSS YouTube channel.
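The two-sided objective described above, degrade behavior on forgotten trajectories while anchoring behavior elsewhere, can be illustrated with a toy sketch. This is not the paper's actual algorithm or loss; it is a minimal tabular analogue, with all names (`unlearn`, `forget_states`, `retain_states`, `low_value`) invented for illustration.

```python
# Illustrative sketch only (NOT TrajDeleter's actual objective): a tabular
# "unlearning" fine-tune. Values on states from trajectories to forget are
# pushed toward a low target, while values on remaining states are pulled
# back toward their original estimates, mimicking the forget/preserve split.

def unlearn(q, forget_states, retain_states, lr=0.5, steps=20, low_value=0.0):
    """q: dict mapping state -> value estimate. Returns a new value table."""
    q_orig = dict(q)  # snapshot used as the anchor for remaining states
    q = dict(q)
    for _ in range(steps):
        # Forgetting term: drive the value on forgotten states toward low_value.
        for s in forget_states:
            q[s] += lr * (low_value - q[s])
        # Preservation term: keep values on remaining states near the original.
        for s in retain_states:
            q[s] += lr * (q_orig[s] - q[s])
    return q

before = {"s_forget": 1.0, "s_retain": 2.0}
after = unlearn(before, forget_states=["s_forget"], retain_states=["s_retain"])
print(after["s_forget"])  # driven close to 0.0
print(after["s_retain"])  # stays at its original 2.0
```

In a real deep-RL setting the two terms would be losses over the agent's policy or value network rather than direct table updates, but the trade-off being balanced is the same.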
