Large Language Models (LLMs) have transformed AI but often struggle with tasks that require domain-specific reasoning and logical alignment. Traditional fine-tuning methods do not leverage the vast amount of symbolic domain knowledge available to us via symbolic reasoning tools (e.g., provers), and are further limited by sparse rewards and unreliable reward models. We introduce Reinforcement Learning via Symbolic Feedback (RLSF), a novel fine-tuning paradigm where symbolic reasoning tools (e.g., solvers, provers, and algebra systems) provide fine-grained feedback to LLMs. RLSF uses poly-sized certificates (e.g., proofs) generated by symbolic tools to identify and correct errors in model outputs, offering token-level guidance without requiring differentiable reasoning systems. This paradigm bridges the gap between symbolic reasoning and LLM fine-tuning, enabling precise alignment with domain-specific constraints while addressing key limitations of traditional reward signals. Via extensive evaluations, we show that our RLSF-based fine-tuning of LLMs outperforms traditional approaches on five different applications (each with associated logical or domain constraints), namely, program synthesis from natural-language pseudo-code to a programming language, three chemistry tasks, and solving the Game of 24. A key takeaway is that fine-tuning via RLSF enables relatively small LLMs to significantly outperform closed-source models that are orders of magnitude larger.
Across all five of these applications, RLSF-based fine-tuning consistently outperforms traditional fine-tuning approaches, indicating that fine-grained symbolic feedback enables more effective learning than sparse, scalar reward signals.
TLDR: We introduce RLSF, a novel fine-tuning paradigm that leverages symbolic reasoning tools to provide fine-grained feedback to LLMs, enabling precise alignment with domain-specific constraints and significantly improving performance on tasks with logical or mathematical requirements.
RLSF addresses key limitations of traditional reward signals by using symbolic reasoning tools such as the following (a minimal end-to-end sketch appears after this list):
1. Solvers and Provers: Provide formal verification and generate poly-sized certificates (e.g., proofs) that identify errors in model outputs.
2. Domain-Specific Tools: Chemistry simulators, compilers, and mathematical verification systems offer token-level guidance for precise error correction.
3. Symbolic Validators: Enable non-differentiable feedback without requiring gradient computation, making the approach broadly applicable across domains.
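As a concrete, deliberately simplified illustration of how such a tool can act as a checker, the sketch below uses Python's built-in compile() as a stand-in for a domain compiler: it either accepts a candidate program or reports the offending line, a toy analogue of the certificates described above. The check_program helper and its return format are our own assumptions for this page, not part of the RLSF implementation.

```python
# Minimal sketch: a symbolic checker for generated code.
# Python's built-in compile() stands in for a domain compiler; the
# (ok, error_line) pair is a toy analogue of a poly-sized certificate.

def check_program(source: str) -> tuple[bool, int | None]:
    """Return (passes, error_line) for a candidate program."""
    try:
        compile(source, "<candidate>", "exec")
        return True, None
    except SyntaxError as err:
        return False, err.lineno

ok, bad_line = check_program("def f(x):\n    return x +\n")
print(ok, bad_line)  # False 2 -> the checker localizes the error to line 2
```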
Unlike traditional RLHF approaches that rely on sparse, scalar rewards from human preferences or simple reward models, RLSF leverages poly-sized certificates generated by symbolic tools to provide token-level feedback. This enables more precise learning by pinpointing specific areas that need improvement rather than providing only binary pass/fail signals.
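Continuing the sketch above, the error locations reported by the checker can be spread over the generated tokens to form a dense, token-level reward vector, whereas a binary checker would yield only a single pass/fail scalar. The token_rewards helper and its whitespace tokenization below are illustrative simplifications, not the paper's exact reward shaping.

```python
# Minimal sketch: turn a certificate's error locations into per-token rewards.

def token_rewards(lines: list[str], error_lines: set[int],
                  good: float = 1.0, bad: float = -1.0) -> list[float]:
    """Assign one reward per (whitespace-split) token, penalizing flagged lines."""
    rewards: list[float] = []
    for lineno, line in enumerate(lines, start=1):
        r = bad if lineno in error_lines else good
        rewards.extend(r for _ in line.split())
    return rewards

lines = ["def f(x):", "    return x +"]
print(token_rewards(lines, error_lines={2}))
# [1.0, 1.0, -1.0, -1.0, -1.0] -> only the erroneous line is penalized
```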
RLSF bridges the gap between symbolic reasoning and LLM fine-tuning by using external symbolic tools to generate rich feedback signals, tapping the vast amount of symbolic domain knowledge encoded in these reasoning tools rather than relying solely on learned reward models.
The key innovation is the use of poly-sized certificates (formal proofs or verification results) that provide detailed information about where and why errors occur in model outputs. This enables token-level corrections and more efficient learning compared to sparse reward signals.
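Because these certificate-derived rewards are computed outside the model's computation graph, the symbolic tool itself never needs to be differentiable. The toy, REINFORCE-style update below only shows where such token-level rewards enter training; the dummy tensor values are assumptions, and the paper's actual RL algorithm (e.g., a PPO-style method) may differ.

```python
# Minimal sketch: non-differentiable symbolic feedback entering a policy-gradient
# update. The rewards come from the symbolic tool (no gradient flows through it);
# gradients flow only through the LLM's token log-probabilities.

import torch

log_probs = torch.tensor([-0.2, -0.5, -1.3, -0.9, -0.7],  # log p(token | prefix), dummy values
                         requires_grad=True)
rewards = torch.tensor([1.0, 1.0, -1.0, -1.0, -1.0])      # certificate-derived, per token

loss = -(rewards * log_probs).mean()  # push down penalized tokens, push up rewarded ones
loss.backward()
print(log_probs.grad)                 # tensor([-0.2, -0.2, 0.2, 0.2, 0.2])
```

A gradient-descent step on this loss increases the log-probability of tokens the certificate marked as correct and decreases it for tokens on flagged lines, which mirrors the token-level guidance described above.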
Our extensive evaluations across five different applications demonstrate that RLSF enables smaller LLMs to outperform much larger closed-source models, highlighting the effectiveness of symbolic feedback in guiding the learning process toward domain-specific constraints and logical consistency.
@article{jha2024rlsf,
title={RLSF: Fine-tuning LLMs via Symbolic Feedback},
author={Jha, Piyush and Jana, Prithwish and Suresh, Pranavkrishna and Arora, Arnav and Ganesh, Vijay},
journal={28th European Conference on Artificial Intelligence; arXiv preprint arXiv:2405.16661},
year={2024}
}