NoesisLab advances machine learning research on deep contemplation and reflective reasoning to enable more profound, self-aware artificial intelligence.
Geilim-1B-SR-Instruct – Serbian Intelligence for Deep Reasoning
NoesisLab/Geilim-1B-SR-Instruct

Geilim-1B-SR-Instruct is a lightweight Large Language Model (LLM) designed to bring advanced reasoning capabilities to low-resource languages. It focuses on Serbian understanding and generation while maintaining robust English reasoning. Built on the Llama-3 architecture with a proprietary hybrid reasoning mechanism, it delivers deep logic while keeping outputs concise and natural.
Core Innovations
Implicit Deep Reasoning: Combines standard attention mechanisms with graph-structured reasoning components for rigorous logic and causal inference.
ASPP & π-flow Hybrid Design: High-efficiency structured propagation plus internal probability-space optimization for high-quality reasoning without long-winded intermediate steps.
Bilingual Adaptation: Primarily focused on Serbian while preserving English logic, making it well suited to multilingual chat and cross-lingual tasks.
Lightweight & Efficient: At ~1.3B parameters, it runs smoothly on consumer-grade GPUs, ideal for edge devices and research.
Use Cases
Serbian Chatbots: Intelligent assistants with local linguistic nuance.
Educational Tools: Multi-turn interactive tasks and learning support.
Key Advantages
Clean Output: Avoids messy "thinking" tags; reasoning happens internally, delivering clear and direct results.
Open Access: Licensed under Apache-2.0, making it easy to adopt for research and engineering integration.
AI Democratization: Empowering low-resource language ecosystems with cutting-edge intelligence.
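A minimal usage sketch with Hugging Face Transformers is shown below. The model ID comes from this card; the chat-template call, the Serbian example prompt, and the generation settings are illustrative assumptions rather than an official recipe.

```python
# Minimal sketch: chat-style generation with Geilim-1B-SR-Instruct via Hugging Face Transformers.
# The model ID is taken from this card; prompt and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NoesisLab/Geilim-1B-SR-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Serbian prompt: "Explain in two sentences why the sky is blue."
messages = [{"role": "user", "content": "Objasni u dve rečenice zašto je nebo plavo."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```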
Geilim-1B-Instruct – Implicit Deep Reasoning, Zero Verbosity
NoesisLab/Geilim-1B-Instruct
https://huggingface.co/collections/NoesisLab/geilim-large-language-models

No <think> tags. No long CoT. Reasoning happens inside the hidden states, not in the output.

What's different
Implicit reasoning: deep causal reasoning without exposing chains
ASPP (Adjacency-Structured Parallel Propagation): parent-only causal graph, O(n) message passing
π-flow: internal probability-space refinement instead of token-level deliberation
Hybrid gating: learns when to use structure vs. attention

Why it matters
Lower latency and token cost
Cleaner, production-ready outputs
CoT-level reasoning depth without the verbosity tax

Built on Llama-3.2-1B-Instruct, trained for math, logic, and commonsense. Designed for small-model reasoning at the edge.

#ImplicitReasoning #SmallLLM #EfficientAI #ReasoningModels #ASPP #PiFlow
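The post above names these mechanisms but does not publish their implementation, so the following is a purely illustrative sketch of what a parent-only, O(n) propagation step with hybrid gating could look like; the module name, tensor shapes, and gating formula are assumptions, not NoesisLab's code.

```python
# Illustrative sketch only: one ASPP-style update as described above
# ("parent-only causal graph, O(n) message passing" plus hybrid gating).
# All names, shapes, and formulas here are assumptions.
import torch
import torch.nn as nn


class ParentPropagation(nn.Module):
    """Each token receives a message from a single parent token, so one
    propagation step touches every position exactly once (O(n))."""

    def __init__(self, d_model: int):
        super().__init__()
        self.msg = nn.Linear(d_model, d_model)   # transform parent state into a message
        self.gate = nn.Linear(2 * d_model, 1)    # learned gate: structure vs. attention

    def forward(self, h: torch.Tensor, parent: torch.Tensor, attn_out: torch.Tensor):
        # h:        (batch, seq, d_model) hidden states
        # parent:   (batch, seq) index of each token's single parent
        # attn_out: (batch, seq, d_model) output of a standard attention block
        parent_states = torch.gather(
            h, 1, parent.unsqueeze(-1).expand(-1, -1, h.size(-1))
        )
        structured = torch.tanh(self.msg(parent_states))             # parent-only message
        g = torch.sigmoid(self.gate(torch.cat([h, attn_out], -1)))   # when to trust structure
        return g * structured + (1 - g) * attn_out


# Toy usage with random tensors
layer = ParentPropagation(d_model=16)
h = torch.randn(2, 5, 16)
attn_out = torch.randn(2, 5, 16)
parent = torch.tensor([[0, 0, 1, 2, 3], [0, 0, 0, 1, 1]])  # each token points at one earlier token
out = layer(h, parent, attn_out)
print(out.shape)  # torch.Size([2, 5, 16])
```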
We're excited to launch Asterisk, a cutting-edge language model by NoesisLab on Hugging Face! Built on top of SmolLM2-135M-Instruct, Asterisk integrates Adjacency-Structured Parallel Propagation (ASPP) with standard attention to bring structured reasoning power into language modeling.
Key Highlights:
- Hybrid Architecture – fuses graph-centric ASPP local reasoning with global attention for richer representations.
- Enhanced Reasoning – ASPP enables iterative local state evolution that complements traditional transformer layers.
- Efficient Design – ~171M parameters with supervised fine-tuning on the Capybara dataset.
- Flexible & Open – Apache-2.0 licensed and ready to integrate via Hugging Face Transformers.
Asterisk showcases how hybrid operators, inspired by theoretical frameworks like the Asterisk Operator, can bring structured reasoning into modern LMs in a scalable way.
Try it out, explore the code, and start building: huggingface.co/NoesisLab/Asterisk
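A minimal loading sketch is shown below. Because the hybrid ASPP layers presumably ship as custom modeling code on the Hub, trust_remote_code=True is included as an assumption (drop it if the model loads with stock Transformers classes); the prompt and generation settings are illustrative.

```python
# Minimal sketch: loading Asterisk with Hugging Face Transformers.
# trust_remote_code=True is an assumption for the custom ASPP layers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NoesisLab/Asterisk"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

messages = [{"role": "user", "content": "Give a one-sentence summary of graph-structured reasoning."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```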