When the Chain Breaks: Interactive Diagnosis of LLM Chain-of-Thought Reasoning Errors

Current Large Language Models (LLMs), especially Large Reasoning Models, can generate Chain-of-Thought (CoT) reasoning traces that illustrate how they arrive at their final outputs, partially mitigating the interpretability problems of LLMs. However, these traces are often long and verbose, and can contain various issues, such as logical and factual errors, which makes it difficult for users to interpret them efficiently and accurately. To address these challenges, we develop an error detection pipeline that combines external fact-checking with symbolic logical validation to identify errors at the step level. Building on this pipeline, we propose ReasonDiag, an interactive visualization system for diagnosing CoT reasoning traces. ReasonDiag provides 1) an integrated arc diagram that shows reasoning-step distributions and error-propagation patterns, and 2) a hierarchical node-link diagram that visualizes high-level reasoning flows and premise dependencies. We evaluate ReasonDiag through a technical evaluation of the error detection pipeline, two case studies, and user interviews with 16 participants. The results indicate that ReasonDiag helps users effectively understand CoT reasoning traces, identify erroneous steps, and determine their root causes.
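The step-level pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: `fact_check` and `logic_check` are hypothetical placeholders for the external fact-checker and the symbolic logic validator, and the stub checkers in the demo exist only to make the sketch runnable.

```python
from typing import Callable, List, Tuple

# Diagnosis record: (step index, factual error?, logical error?)
Diagnosis = Tuple[int, bool, bool]

def diagnose_trace(
    steps: List[str],
    fact_check: Callable[[str], bool],
    logic_check: Callable[[List[str], str], bool],
) -> List[Diagnosis]:
    """Check each reasoning step against external facts and against
    the premises established by earlier steps; collect failures."""
    errors: List[Diagnosis] = []
    premises: List[str] = []
    for i, step in enumerate(steps):
        factual_ok = fact_check(step)
        logical_ok = logic_check(premises, step)
        if not (factual_ok and logical_ok):
            errors.append((i, not factual_ok, not logical_ok))
        premises.append(step)  # later steps may build on this step
    return errors

# Toy demo with stub checkers: step 1 contradicts a known fact.
trace = ["2 + 2 = 4", "2 + 2 = 5", "therefore the claim holds"]
report = diagnose_trace(
    trace,
    fact_check=lambda s: "= 5" not in s,  # stub fact-checker
    logic_check=lambda prem, s: True,     # stub logic validator
)
print(report)  # → [(1, True, False)]
```

Accumulating earlier steps as premises mirrors the premise dependencies that ReasonDiag's node-link diagram visualizes: a logical check on step *i* is made relative to what steps 0..*i*−1 have established.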

Case Study 1: Diagnosing Error Causes and Reasoning Patterns

A mathematical reasoning sample from the DeltaBench dataset

In the subtraction shown, $K, L, M$, and $N$ are digits. What is the value of $K+L+M+N$? $$\begin{array}{r}6 K 0 L \\ -\quad M 9 N 4 \\ \hline 2011\end{array}$$

Case Study 2: Diagnosing Illusory Truth and Logical Gaps

A multi-hop query from the GAIA benchmark, with a reasoning trace generated by DeepSeek

How many at bats did the Yankee with the most walks in the 1977 regular season have that same season?