
CLEAR-3K: ASSESSING CAUSAL EXPLANATORY CAPABILITIES IN LANGUAGE MODELS

(2025)

Paper Information
arXiv ID: 2506.17180

Abstract

We introduce CLEAR-3K, a dataset of 3,000 assertion-reasoning questions designed to evaluate whether language models can determine if one statement causally explains another. Each question presents an assertion-reason pair and challenges language models to distinguish between semantic relatedness and genuine causal explanatory relationships. Through comprehensive evaluation of 21 state-of-the-art language models (ranging from 0.5B to 72B parameters), we identify two fundamental findings. First, language models frequently confuse semantic similarity with causality, relying on lexical and semantic overlap instead of inferring actual causal explanatory relationships. Second, as parameter size increases, models tend to shift from being overly skeptical about causal relationships to being excessively permissive in accepting them. Despite this shift, performance measured by the Matthews Correlation Coefficient plateaus at just 0.55, even for the best-performing models. Hence, CLEAR-3K provides a crucial benchmark for developing and evaluating genuine causal reasoning in language models, which is an essential capability for applications that require accurate assessment of causal relationships.
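
The task described in the abstract reduces to a binary judgment over assertion-reason pairs. The sketch below shows one plausible way to represent such an item and pose it as a yes/no causal-explanation query; the field names, prompt wording, and example pair are hypothetical illustrations, not the released CLEAR-3K schema.

```python
from dataclasses import dataclass


@dataclass
class AssertionReasonItem:
    # Hypothetical field names; the actual CLEAR-3K schema may differ.
    assertion: str  # statement whose explanation is being judged
    reason: str     # candidate explanation for the assertion
    label: bool     # True if the reason causally explains the assertion


def build_prompt(item: AssertionReasonItem) -> str:
    """Format a single item as a binary yes/no causal-explanation query."""
    return (
        "Assertion: " + item.assertion + "\n"
        "Reason: " + item.reason + "\n"
        "Does the reason causally explain the assertion? Answer Yes or No."
    )


# Illustrative example (not drawn from the dataset).
example = AssertionReasonItem(
    assertion="The pond froze overnight.",
    reason="The air temperature dropped well below 0 °C.",
    label=True,
)
print(build_prompt(example))
```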

Summary

This paper presents CLEAR-3K, a dataset of 3,000 assertion-reasoning questions aimed at evaluating causal explanatory reasoning in language models. The study finds that language models often confuse semantic similarity with genuine causal explanation, and that larger models become overly permissive in accepting causal relationships. Performance measured by the Matthews Correlation Coefficient (MCC) plateaus at 0.55 even for the best-performing models, across model sizes and families. The paper emphasizes the importance of improving models' understanding of causal relationships for effective application in critical domains such as education and policy analysis.

Methods

This paper employs the following methods:

  • Causal Explanation Task

Models Used

  • LLaMA3
  • Qwen2.5
  • Qwen3
  • Gemma3
  • Phi-4

Datasets

The following datasets were used in this research:

  • CLEAR-3K

Evaluation Metrics

  • Matthews Correlation Coefficient
  • Explanatory Accuracy
  • Rejection Accuracy
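
The paper reports the Matthews Correlation Coefficient alongside class-conditional accuracies. The minimal Python sketch below computes MCC from binary predictions and, as an assumption, treats "explanatory accuracy" as accuracy on gold-positive pairs and "rejection accuracy" as accuracy on gold-negative pairs; the paper's exact definitions may differ.

```python
import math


def matthews_corrcoef(y_true, y_pred):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0


def class_conditional_accuracy(y_true, y_pred, positive):
    """Accuracy restricted to items whose gold label equals `positive`."""
    pairs = [(t, p) for t, p in zip(y_true, y_pred) if t == positive]
    return sum(t == p for t, p in pairs) / len(pairs) if pairs else 0.0


# Toy predictions: True = "the reason causally explains the assertion".
gold = [True, True, False, False, True, False]
pred = [True, False, False, True, True, False]

print("MCC:", round(matthews_corrcoef(gold, pred), 3))
print("Explanatory accuracy (assumed: accuracy on gold-True pairs):",
      round(class_conditional_accuracy(gold, pred, True), 3))
print("Rejection accuracy (assumed: accuracy on gold-False pairs):",
      round(class_conditional_accuracy(gold, pred, False), 3))
```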

Results

  • MCC plateaued at 0.55 for best-performing models
  • Language models confuse semantic similarity with causality

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified
  • Compute Requirements: None specified

External Resources