The author developed Talos-XII, a project originally conceived to simulate Arknights: Endfield gacha pulls. It has since evolved into a heavily optimized system built around a custom deep-learning engine written entirely in Rust. The core objective is to apply reinforcement-learning algorithms, specifically PPO and DQN, to discover optimal gacha pulling strategies for budget-conscious players.
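The RL agent needs an environment that models pity-style gacha mechanics. A minimal sketch of such a pull simulator is below; the 2% base rate, the soft-pity threshold of 50 pulls, and the xorshift PRNG are all illustrative placeholders (the post does not state Endfield's actual rate table, and a real project would likely use the `rand` crate):

```rust
// Sketch of a pity-aware gacha pull simulator.
// All rates and thresholds here are assumed, not the game's real numbers.
struct Gacha {
    pulls_since_top: u32, // pulls since the last top-rarity result
    rng: u64,             // xorshift64 state (stand-in for a real PRNG)
}

impl Gacha {
    fn new(seed: u64) -> Self {
        Self { pulls_since_top: 0, rng: seed.max(1) }
    }

    fn next_f64(&mut self) -> f64 {
        // xorshift64 step, mapped to [0, 1)
        self.rng ^= self.rng << 13;
        self.rng ^= self.rng >> 7;
        self.rng ^= self.rng << 17;
        (self.rng >> 11) as f64 / (1u64 << 53) as f64
    }

    /// Returns true when the pull yields a top-rarity character.
    fn pull(&mut self) -> bool {
        // Assumed model: 2% base rate, +2% per pull beyond the 50th (soft pity),
        // which guarantees a hit by the 100th pull.
        let over = self.pulls_since_top.saturating_sub(50) as f64;
        let p = (0.02 + 0.02 * over).min(1.0);
        if self.next_f64() < p {
            self.pulls_since_top = 0;
            true
        } else {
            self.pulls_since_top += 1;
            false
        }
    }
}

fn main() {
    let mut g = Gacha::new(42);
    let mut pulls = 0u32;
    while !g.pull() {
        pulls += 1;
    }
    println!("first top-rarity hit after {} failed pulls", pulls);
}
```

An RL agent would consume `pulls_since_top` as part of its observation and decide whether to keep pulling or save resources.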
The engine avoids Python entirely: it is pure Rust with a custom reverse-mode autograd system. Performance comes from Rayon for parallel tensor operations and hand-written SIMD kernels on critical paths. The model combines a Deep Belief Network for simulating environment noise with a Transformer architecture for the agent.
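The post does not show the engine's internals, but the core idea of reverse-mode autograd can be sketched with a scalar tape: record each operation forward, then walk the tape backward accumulating gradients via the chain rule. This is a simplified illustration only; the actual engine operates on tensors:

```rust
// Minimal tape-based reverse-mode autograd over scalars (illustrative sketch).
#[derive(Clone, Copy)]
enum Op {
    Leaf,
    Add(usize, usize),
    Mul(usize, usize),
}

struct Tape {
    vals: Vec<f64>,
    ops: Vec<Op>,
}

impl Tape {
    fn new() -> Self {
        Self { vals: vec![], ops: vec![] }
    }
    fn leaf(&mut self, v: f64) -> usize {
        self.vals.push(v);
        self.ops.push(Op::Leaf);
        self.vals.len() - 1
    }
    fn add(&mut self, a: usize, b: usize) -> usize {
        self.vals.push(self.vals[a] + self.vals[b]);
        self.ops.push(Op::Add(a, b));
        self.vals.len() - 1
    }
    fn mul(&mut self, a: usize, b: usize) -> usize {
        self.vals.push(self.vals[a] * self.vals[b]);
        self.ops.push(Op::Mul(a, b));
        self.vals.len() - 1
    }
    /// Backward pass: visit nodes in reverse topological (tape) order.
    fn grad(&self, out: usize) -> Vec<f64> {
        let mut g = vec![0.0; self.vals.len()];
        g[out] = 1.0;
        for i in (0..=out).rev() {
            match self.ops[i] {
                Op::Leaf => {}
                Op::Add(a, b) => {
                    g[a] += g[i];
                    g[b] += g[i];
                }
                Op::Mul(a, b) => {
                    g[a] += g[i] * self.vals[b];
                    g[b] += g[i] * self.vals[a];
                }
            }
        }
        g
    }
}

fn main() {
    // f(x, y) = (x + y) * x, so df/dx = 2x + y and df/dy = x.
    let mut t = Tape::new();
    let x = t.leaf(3.0);
    let y = t.leaf(2.0);
    let s = t.add(x, y);
    let f = t.mul(s, x);
    let g = t.grad(f);
    println!("f = {}, df/dx = {}, df/dy = {}", t.vals[f], g[x], g[y]);
}
```

A tensor engine follows the same pattern, with each tape entry holding shape metadata and its backward rule dispatching to (possibly Rayon-parallel) kernels.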
Optimization draws inspiration from the DeepSeek mHC paper, which presented an interesting implementation challenge. Talos-XII simulates millions of pulls to estimate the probability of obtaining specific characters using only free resources. The project, essentially a "Neural Luck Optimiser," aims to guide players toward optimal resource saving.
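The "millions of pulls" estimate is a Monte Carlo computation: repeatedly simulate a free-pull budget and count the fraction of trials that hit the target. A self-contained sketch, assuming a flat 2% rate with no pity purely for illustration (the flat-rate case has a closed form, 1 − 0.98^50 ≈ 0.636, which makes the estimate easy to sanity-check):

```rust
// Monte Carlo sketch: estimate P(at least one top-rarity result within a
// fixed free-pull budget). The 2% flat rate is an assumed placeholder.
fn xorshift(state: &mut u64) -> f64 {
    // Simple PRNG stand-in; a real project would use the `rand` crate.
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    (*state >> 11) as f64 / (1u64 << 53) as f64
}

fn estimate(budget: u32, rate: f64, trials: u32, seed: u64) -> f64 {
    let mut state = seed.max(1);
    let mut hits = 0u32;
    for _ in 0..trials {
        // A trial succeeds if any pull in the budget lands under the rate.
        if (0..budget).any(|_| xorshift(&mut state) < rate) {
            hits += 1;
        }
    }
    hits as f64 / trials as f64
}

fn main() {
    // Exact answer for this flat-rate model: 1 - 0.98^50 ≈ 0.636.
    let p = estimate(50, 0.02, 100_000, 42);
    println!("estimated probability: {:.3}", p);
}
```

With pity rules added, no closed form exists in general, which is exactly where large-scale simulation earns its keep; the independent trials also parallelize trivially with Rayon.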
Currently, Talos-XII is a command-line tool only; no graphical user interface has been developed yet. The project repository and a relevant reference paper are provided for those interested. The author specifically credits the DeepSeek team's mHC paper as a significant influence on the optimizer's design.
