RLSF: Fine-tuning LLMs via Symbolic Feedback

Georgia Tech
ECAI 2025

Abstract

Large Language Models (LLMs) have transformed AI but often struggle with tasks that require domain-specific reasoning and logical alignment. Traditional fine-tuning methods do not leverage the vast amount of symbolic domain knowledge available to us via symbolic reasoning tools (e.g., provers), and are further limited by sparse rewards and unreliable reward models. We introduce Reinforcement Learning via Symbolic Feedback (RLSF), a novel fine-tuning paradigm where symbolic reasoning tools (e.g., solvers, provers, and algebra systems) provide fine-grained feedback to LLMs. RLSF uses poly-sized certificates (e.g., proofs) generated by symbolic tools to identify and correct errors in model outputs, offering token-level guidance without requiring differentiable reasoning systems. This paradigm bridges the gap between symbolic reasoning and LLM fine-tuning, enabling precise alignment with domain-specific constraints while addressing key limitations of traditional reward signals. Via extensive evaluations, we show that our RLSF-based fine-tuning of LLMs outperforms traditional approaches on five different applications (each with associated logical or domain constraints), namely, program synthesis from natural-language pseudo-code to a programming language, three chemistry tasks, and solving the Game of 24. A key takeaway is that fine-tuning via RLSF enables relatively smaller LLMs to significantly outperform closed-source models that are orders of magnitude larger.

Key Results

Our experiments show that RLSF consistently outperforms traditional fine-tuning approaches across five different applications with domain-specific constraints, indicating that fine-grained symbolic feedback enables more effective learning than conventional reward signals.

  • Program Synthesis: RLSF achieves significant improvements in converting natural-language pseudo-code into C++ programs, demonstrating the effectiveness of symbolic feedback from compilers and static analysis tools.
  • Chemistry Tasks: RLSF shows superior performance across three chemistry applications: molecule generation, forward synthesis prediction, and retrosynthesis prediction, leveraging domain-specific chemical reasoning tools for feedback.
  • Game of 24: RLSF enables smaller LLMs to significantly outperform much larger closed-source models, demonstrating the power of symbolic feedback in mathematical reasoning tasks.
  • Model Efficiency: A key finding is that RLSF enables relatively small LLMs to achieve performance that significantly exceeds that of closed-source models orders of magnitude larger, highlighting the efficiency gains from symbolic feedback.

RLSF: Reinforcement Learning via Symbolic Feedback

TLDR: We introduce RLSF, a novel fine-tuning paradigm that leverages symbolic reasoning tools to provide fine-grained feedback to LLMs, enabling precise alignment with domain-specific constraints and significantly improving performance on tasks with logical or mathematical requirements.

RLSF addresses key limitations of traditional reward signals by using symbolic reasoning tools such as:

1. Solvers and Provers: Provide formal verification and generate poly-sized certificates (e.g., proofs) that identify errors in model outputs.

2. Domain-Specific Tools: Chemistry simulators, compilers, and mathematical verification systems offer token-level guidance for precise error correction.

3. Symbolic Validators: Check outputs against formal rules and provide feedback without requiring the reasoning system to be differentiable (no gradients flow through the tool), making the approach broadly applicable across domains.
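
For instance, a symbolic validator for the Game of 24 can be written in a few lines: it parses a model-generated arithmetic expression, checks that it uses exactly the four given numbers with the allowed operators, and verifies that the result is 24. The Python sketch below is purely illustrative; the function name, error messages, and exact checks are simplifications and are not taken from the RLSF implementation.

# Minimal sketch of a symbolic validator for the Game of 24 (illustrative only).
# It certifies that a generated expression uses exactly the four given numbers,
# uses only +, -, *, /, and evaluates to 24.
import ast
from collections import Counter

ALLOWED_OPS = (ast.Add, ast.Sub, ast.Mult, ast.Div)

def check_game_of_24(numbers, expression):
    try:
        tree = ast.parse(expression, mode="eval")
    except SyntaxError as err:
        return False, f"syntax error at column {err.offset}"

    literals = []
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp):
            if not isinstance(node.op, ALLOWED_OPS):
                return False, "operator not allowed"
        elif isinstance(node, ast.Constant):
            if not isinstance(node.value, (int, float)):
                return False, "non-numeric literal"
            literals.append(node.value)
        elif not isinstance(node, (ast.Expression, ast.operator)):
            return False, f"disallowed construct: {type(node).__name__}"

    if Counter(literals) != Counter(numbers):
        return False, f"numbers used {sorted(literals)} do not match {sorted(numbers)}"

    try:
        value = eval(compile(tree, "<expr>", "eval"), {"__builtins__": {}})
    except ZeroDivisionError:
        return False, "division by zero"
    if abs(value - 24) > 1e-6:
        return False, f"evaluates to {value}, not 24"
    return True, "ok"

# Example: the four numbers 4, 7, 8, 8 admit the solution (7 - 8/8) * 4.
print(check_game_of_24([4, 7, 8, 8], "(7 - 8/8) * 4"))   # (True, 'ok')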

Unlike traditional RLHF approaches that rely on sparse, scalar rewards from human preferences or simple reward models, RLSF leverages poly-sized certificates generated by symbolic tools to provide token-level feedback. This enables more precise learning by pinpointing specific areas that need improvement rather than providing only binary pass/fail signals.
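
As a simplified illustration of how such a certificate can be consumed, the sketch below (for the program-synthesis task) compiles a candidate C++ program with g++, parses the resulting diagnostics, and produces a per-line 0/1 reward vector; mapping line-level rewards onto the corresponding response tokens is elided. The use of g++ and this exact reward scheme are illustrative assumptions, not the paper's implementation.

# Illustrative sketch: turn compiler diagnostics (a poly-sized certificate)
# into fine-grained rewards for generated C++ code. Assumes g++ is on PATH.
import os
import re
import subprocess
import tempfile

def cpp_line_rewards(code):
    """Return one 0/1 reward per source line (0 where g++ reports an error),
    plus a final entry that is 1 only if the whole program compiles."""
    with tempfile.NamedTemporaryFile("w", suffix=".cpp", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(["g++", "-fsyntax-only", path],
                                capture_output=True, text=True)
    finally:
        os.remove(path)

    # g++ diagnostics look like "<file>:<line>:<col>: error: ...".
    bad_lines = {int(m.group(1)) for m in
                 re.finditer(rf"{re.escape(path)}:(\d+):\d+: error:", result.stderr)}

    rewards = [0 if (i + 1) in bad_lines else 1
               for i in range(len(code.splitlines()))]
    rewards.append(1 if result.returncode == 0 else 0)
    return rewards

# Example: the missing semicolon after "int x = 1" is flagged by g++;
# the flagged line's reward and the final entry are 0.
print(cpp_line_rewards("int main() {\n  int x = 1\n  return x;\n}\n"))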


Framework Overview

Contrasting RLHF with RLSF

Contrasting RLHF with RLSF: The image depicts two distinct fine-tuning paradigms. (Top) RLHF operates within an environment governed by a black-box reward model, typically offering scalar feedback. (Bottom) By contrast, the environment in RLSF leverages sound symbolic reasoning tools and also provides token-level feedback that is, in turn, based on poly-sized certificates produced by these symbolic tools.

RLSF for Chemistry Tasks - Molecule Generation

RLSF for one of the chemistry tasks - Molecule Generation: In this illustration, the symbolic environment uses RDKit to generate a token-level reward vector as feedback based on any syntactic errors in the generated output. For semantic errors, we again use RDKit to check for the functional groups required by the input natural-language description and penalize the entire generated molecule if any are missing. Each element in the reward vector corresponds to a token in the response: erroneous tokens are penalized with a value of 0 and correct ones are assigned 1. The last element of the reward vector (corresponding to the <EOS> token) is 1 only if the entire response is correct; otherwise, it is 0.
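
The Python sketch below mirrors this reward construction using RDKit. It is illustrative rather than exact: tokens are taken to be single characters of the generated SMILES string, the required functional groups are supplied as SMARTS patterns, and the localization of syntactic errors (which in practice comes from the symbolic tool's diagnostics) is abstracted into an error_token_ids argument.

# Illustrative sketch of the token-level reward vector for molecule generation.
# Assumptions: one token per SMILES character; required functional groups given
# as SMARTS patterns; syntactic-error localization supplied by the caller.
from rdkit import Chem

def molecule_reward_vector(smiles_tokens, required_smarts, error_token_ids=()):
    """Return one 0/1 reward per response token, plus a final <EOS> entry."""
    smiles = "".join(smiles_tokens)
    mol = Chem.MolFromSmiles(smiles)           # None => syntactic error
    valid_syntax = mol is not None

    # Semantic check: all required functional groups must be present.
    valid_semantics = valid_syntax and all(
        mol.HasSubstructMatch(Chem.MolFromSmarts(p)) for p in required_smarts)

    if not valid_syntax:
        # Penalize the tokens flagged by the symbolic tool (all, if unknown).
        bad = set(error_token_ids) or set(range(len(smiles_tokens)))
        rewards = [0 if i in bad else 1 for i in range(len(smiles_tokens))]
    elif not valid_semantics:
        # Missing functional group: penalize the entire generated molecule.
        rewards = [0] * len(smiles_tokens)
    else:
        rewards = [1] * len(smiles_tokens)

    # <EOS> entry is 1 only if the entire response is correct.
    rewards.append(1 if (valid_syntax and valid_semantics) else 0)
    return rewards

# Example: aspirin with an ester group required (SMARTS "C(=O)O[CX4,c]").
print(molecule_reward_vector(list("CC(=O)Oc1ccccc1C(=O)O"), ["C(=O)O[CX4,c]"]))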

RLSF bridges the gap between symbolic reasoning and LLM fine-tuning by using external symbolic tools to generate rich feedback signals. Unlike traditional RLHF approaches that rely on human preferences or simple reward models, RLSF leverages the vast amount of symbolic domain knowledge available through reasoning tools.

The key innovation is the use of poly-sized certificates—formal proofs or verification results—that provide detailed information about where and why errors occur in model outputs. This enables token-level corrections and more efficient learning compared to sparse reward signals.

Our extensive evaluations across five different applications demonstrate that RLSF enables smaller LLMs to outperform much larger closed-source models, highlighting the effectiveness of symbolic feedback in guiding the learning process toward domain-specific constraints and logical consistency.


Citation

@article{jha2024rlsf,
  title={RLSF: Fine-tuning LLMs via Symbolic Feedback},
  author={Jha, Piyush and Jana, Prithwish and Suresh, Pranavkrishna and Arora, Arnav and Ganesh, Vijay},
  journal={28th European Conference on Artificial Intelligence; arXiv preprint arXiv:2405.16661},
  year={2024}
}