Shared Nature, Unique Nurture: PRISM for Pluralistic Reasoning via In-context Structure Modeling
Abstract
Large language models are approaching a unified artificial hivemind that reduces diversity; PRISM addresses this by introducing individualized epistemic trajectories through dynamic on-the-fly epistemic graphs, enhancing both creative output and diagnostic accuracy.
Large Language Models (LLMs) are converging towards a singular Artificial Hivemind, where shared Nature (pre-training priors) results in a profound collapse of distributional diversity, limiting the distinct perspectives necessary for creative exploration and scientific discovery. To address this, we propose to equip models with inference-time Nurture (individualized epistemic trajectories) using an Epistemic Evolution paradigm that progresses through three stages: explore, internalize, and express. We instantiate this via PRISM (Pluralistic Reasoning via In-context Structure Modeling), a model-agnostic system that augments LLMs with dynamic On-the-fly Epistemic Graphs. On three creativity benchmarks, PRISM achieves state-of-the-art novelty and significantly expands distributional diversity. Moreover, we evaluate real-world utility on a challenging rare-disease diagnosis benchmark. Results demonstrate that PRISM successfully uncovers correct long-tail diagnoses that standard LLMs miss, confirming that its divergence stems from meaningful exploration rather than incoherent noise. Overall, this work establishes a new paradigm for Pluralistic AI, moving beyond monolithic consensus toward a diverse ecosystem of unique cognitive individuals capable of collective, multi-perspective discovery.
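To make the explore/internalize/express loop concrete, here is a minimal sketch of what an on-the-fly epistemic graph could look like. The paper does not specify PRISM's implementation; the class, method names, and the medical toy example below are illustrative assumptions, not the authors' API.

```python
# Hypothetical sketch of the Epistemic Evolution loop (explore ->
# internalize -> express). All names here are assumptions for illustration.
from dataclasses import dataclass, field


@dataclass
class EpistemicGraph:
    """A per-individual concept graph built at inference time."""
    nodes: set = field(default_factory=set)
    edges: dict = field(default_factory=dict)  # concept -> set of neighbors

    def explore(self, concepts):
        """Stage 1: add newly encountered concepts as candidate nodes."""
        self.nodes.update(concepts)

    def internalize(self, src, dst):
        """Stage 2: commit a directed relation between explored concepts."""
        self.edges.setdefault(src, set()).add(dst)

    def express(self, seed):
        """Stage 3: serialize the trajectory reachable from `seed` so it
        can be injected into the LLM's context as individualized Nurture."""
        visited, order, stack = set(), [], [seed]
        while stack:
            node = stack.pop()
            if node in visited or node not in self.nodes:
                continue
            visited.add(node)
            order.append(node)
            stack.extend(self.edges.get(node, ()))
        return " -> ".join(order)


# Toy rare-diagnosis trajectory (hypothetical concepts):
graph = EpistemicGraph()
graph.explore(["fatigue", "anemia", "rare-disease-X"])
graph.internalize("fatigue", "anemia")
graph.internalize("anemia", "rare-disease-X")
print(graph.express("fatigue"))  # fatigue -> anemia -> rare-disease-X
```

The expressed trajectory would then be prepended to the model's prompt, so two models sharing the same pre-training Nature diverge according to their individually accumulated graphs.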
Community
Moving beyond shared pre-training 'Nature', PRISM injects unique cognitive 'Nurture' into LLMs via dynamic inference-time epistemic graphs, instantly unlocking pluralistic reasoning and state-of-the-art novelty and discovery on multiple benchmarks without any retraining.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- DIVERGE: Diversity-Enhanced RAG for Open-Ended Information Seeking (2026)
- Circular Reasoning: Understanding Self-Reinforcing Loops in Large Reasoning Models (2026)
- Mirroring the Mind: Distilling Human-Like Metacognitive Strategies into Large Language Models (2026)
- From Atoms to Chains: Divergence-Guided Reasoning Curriculum for Unlabeled LLM Domain Adaptation (2026)
- Finding RELIEF: Shaping Reasoning Behavior without Reasoning Supervision via Belief Engineering (2026)
- Knowledge Integration Decay in Search-Augmented Reasoning of Large Language Models (2026)
- SIN-Bench: Tracing Native Evidence Chains in Long-Context Multimodal Scientific Interleaved Literature (2026)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend