AI research trends

=System 2 Reasoning=

See: [[Increasing AI Intelligence]]
  
 
=Memory=

==Reviews==

* 2024-04: [https://arxiv.org/abs/2404.13501 A Survey on the Memory Mechanism of Large Language Model based Agents]
* 2026-01: [https://arxiv.org/abs/2601.09113 The AI Hippocampus: How Far are We From Human Memory?]

==Big Ideas==

* 2026-02: [https://arxiv.org/abs/2602.07755 Learning to Continually Learn via Meta-learning Agentic Memory Designs]

==LLM Weights Memory==
 
* 2025-10: [https://arxiv.org/abs/2510.15103 Continual Learning via Sparse Memory Finetuning]
* 2026-01: [https://developer.nvidia.com/blog/reimagining-llm-memory-using-context-as-training-data-unlocks-models-that-learn-at-test-time/ Reimagining LLM Memory: Using Context as Training Data Unlocks Models That Learn at Test-Time] (Nvidia; see the sketch below)
* 2026-01: [https://arxiv.org/abs/2601.02151 Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting]
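The Nvidia post above frames the context window itself as training data: before answering, the model takes a few gradient steps on the current context so the information migrates into the weights rather than sitting in the KV cache. A minimal sketch of that general idea (illustrative only, not the recipe from the post or any paper above; the model choice and plain full-parameter SGD are assumptions):

<syntaxhighlight lang="python">
# Sketch: "context as training data" -- a few test-time gradient steps on the
# current context, so the weights (not the prompt) store the information.
# Illustrative only; hyperparameters and model are arbitrary stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any small causal LM works for the demo
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

context = "The launch code for the demo rocket is 7-4-1-9. " * 4
batch = tok(context, return_tensors="pt")

opt = torch.optim.SGD(model.parameters(), lr=1e-4)
model.train()
for _ in range(8):  # a few gradient steps on the context itself
    loss = model(**batch, labels=batch["input_ids"]).loss  # next-token loss
    loss.backward()
    opt.step()
    opt.zero_grad()

# Query WITHOUT the context in the prompt: it now (partially) lives in the weights.
model.eval()
query = tok("The launch code for the demo rocket is", return_tensors="pt")
out = model.generate(**query, max_new_tokens=8)
print(tok.decode(out[0]))
</syntaxhighlight>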
  
 
==Context Length==

==Context Remaking==

* 2021-01: [https://arxiv.org/abs/2101.00436 Baleen: Robust Multi-Hop Reasoning at Scale via Condensed Retrieval]
* 2025-08: [https://blog.plasticlabs.ai/blog/Memory-as-Reasoning Memory as Reasoning (Memory is Prediction)]
* 2025-09: [https://arxiv.org/abs/2509.25140 ReasoningBank: Scaling Agent Self-Evolving with Reasoning Memory]
* 2025-10: [https://arxiv.org/abs/2510.04618 Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models]
* 2025-12: [https://arxiv.org/abs/2512.24601 Recursive Language Models] (model searches/queries the full context)
* 2026-01: [https://arxiv.org/abs/2601.02553 SimpleMem: Efficient Lifelong Memory for LLM Agents]
* 2026-01: [https://arxiv.org/abs/2601.07190 Active Context Compression: Autonomous Memory Management in LLM Agents] (see the sketch after this list)
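A recurring pattern across the entries above (Agentic Context Engineering, SimpleMem, Active Context Compression): rather than letting the transcript grow without bound, the agent periodically rewrites its context into compact notes and continues from those. A minimal sketch of that loop; the `llm()` helper and the character budget are hypothetical stand-ins:

<syntaxhighlight lang="python">
# Sketch of context remaking: when the transcript exceeds a budget, have the
# model compress it into durable notes, then continue from the notes alone.
# Illustrative only; `llm()` is a hypothetical stand-in for a real model call.

MAX_CHARS = 8_000  # crude stand-in for a token budget

def llm(prompt: str) -> str:
    # Replace with your chat/completions API; returns a stub so this runs.
    return "(model output for %d-char prompt)" % len(prompt)

def compress(text: str) -> str:
    return llm("Rewrite as concise notes, keeping facts, decisions, and "
               "open questions; drop everything else:\n\n" + text)

def agent_step(notes: str, transcript: str, user_msg: str):
    transcript += "\nUser: " + user_msg
    if len(notes) + len(transcript) > MAX_CHARS:
        notes = compress(notes + "\n" + transcript)  # remake the context...
        transcript = ""                              # ...and reset the window
    reply = llm("Notes:\n%s\n\nTranscript:%s\nAssistant:" % (notes, transcript))
    transcript += "\nAssistant: " + reply
    return notes, transcript, reply

notes, transcript = "", ""
for msg in ["hi", "summarize our plan", "what's left to do?"]:
    notes, transcript, reply = agent_step(notes, transcript, msg)
</syntaxhighlight>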
  
 
==Retrieval beyond RAG==

See also: AI tools: Retrieval Augmented Generation (RAG)

* 2024-12: [https://arxiv.org/abs/2412.11919 RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation]
* 2025-03: Microsoft: [https://www.microsoft.com/en-us/research/blog/introducing-kblam-bringing-plug-and-play-external-knowledge-to-llms/ Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs]
* 2025-06: [https://arxiv.org/abs/2506.06266 Cartridges: Lightweight and general-purpose long context representations via self-study]
* 2025-07: [https://arxiv.org/pdf/2507.07957 MIRIX: Multi-Agent Memory System for LLM-Based Agents] ([https://mirix.io/ mirix])
* 2025-08: [https://arxiv.org/abs/2508.16153 Memento: Fine-tuning LLM Agents without Fine-tuning LLMs] (see the sketch below)
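Memento (above) is one instance of a broader pattern in this section: improve an agent by writing experiences into an external memory and retrieving them into the prompt, leaving the base model's weights untouched. A toy sketch of that pattern, with bag-of-words cosine standing in for a real embedding model:

<syntaxhighlight lang="python">
# Sketch of "memory instead of finetuning": store lessons from past tasks and
# retrieve the most similar ones into the prompt; base-model weights never
# change. Bag-of-words cosine is a toy stand-in for a real embedding model.
from collections import Counter
import math

case_bank: list[tuple[str, str]] = []  # (task description, lesson learned)

def _vec(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def remember(task: str, lesson: str) -> None:
    case_bank.append((task, lesson))

def recall(task: str, k: int = 3) -> list[str]:
    q = _vec(task)
    ranked = sorted(case_bank, key=lambda c: _cosine(q, _vec(c[0])), reverse=True)
    return [lesson for _, lesson in ranked[:k]]  # prepend these to the prompt

remember("parse dates from csv logs", "use errors='coerce', then drop NaT rows")
remember("scrape paginated api", "respect rate limits; persist a cursor")
print(recall("extract timestamps from a csv file"))
</syntaxhighlight>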
 
==Working Memory==

* 2024-12: [https://www.arxiv.org/abs/2412.18069 Improving Factuality with Explicit Working Memory]
* 2026-01: [https://arxiv.org/abs/2601.03192 MemRL: Self-Evolving Agents via Runtime Reinforcement Learning on Episodic Memory]
  
 
==Long-Term Memory==

===Storage and Retrieval===

* 2025-09: [https://arxiv.org/abs/2509.04439 ArcMemo: Abstract Reasoning Composition with Lifelong LLM Memory]
* 2026-01: [https://www.arxiv.org/abs/2601.07372 Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models]

===Episodic Memory===
  
 
==Continual Learning==

* 2022-02: [https://arxiv.org/abs/2202.00275 Architecture Matters in Continual Learning]
* 2025-10: [https://arxiv.org/abs/2510.15103 Continual Learning via Sparse Memory Finetuning] (see the sketch after this list)
* 2025-11: [https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/ Introducing Nested Learning: A new ML paradigm for continual learning]
* 2026-01: [https://arxiv.org/abs/2601.16175 Learning to Discover at Test Time]
* 2026-01: [https://arxiv.org/abs/2601.19897 Self-Distillation Enables Continual Learning]
* 2026-02: [https://arxiv.org/abs/2602.07755 Learning to Continually Learn via Meta-learning Agentic Memory Designs]
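Sparse memory finetuning (listed above) mitigates forgetting by updating only a small, input-relevant slice of parameters, so new learning overwrites little of the pretrained state. A toy illustration of the masking mechanics; selecting "rows touched by the batch" is a deliberate simplification standing in for the paper's actual slot-selection scheme:

<syntaxhighlight lang="python">
# Toy sketch of sparse finetuning: permit gradient updates on only a few rows
# of a "memory" embedding table; everything else stays frozen. The row
# selection rule here is a simplification, not the published method.
import torch
import torch.nn as nn

vocab, dim = 1000, 32
memory = nn.Embedding(vocab, dim)          # the trainable "memory" parameters
head = nn.Linear(dim, vocab)               # frozen output head
for p in head.parameters():
    p.requires_grad = False

tokens = torch.randint(0, vocab, (4, 16))  # toy batch of token ids
targets = torch.randint(0, vocab, (4, 16))

active = torch.unique(tokens)              # rows this batch actually touches
mask = torch.zeros(vocab, 1)
mask[active] = 1.0                         # only these rows may change

opt = torch.optim.SGD(memory.parameters(), lr=0.1)
logits = head(memory(tokens))              # (4, 16, vocab)
loss = nn.functional.cross_entropy(logits.flatten(0, 1), targets.flatten())
loss.backward()
memory.weight.grad *= mask                 # zero out grads on frozen rows
opt.step()                                 # sparse update; rest untouched
</syntaxhighlight>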
  
 
=Updating Weights at Inference-time=

=Daydreaming, brainstorming, pre-generation=

* Gwern: [https://gwern.net/ai-daydreaming Daydreaming]
* 2026-02: [https://arxiv.org/abs/2602.01689 What LLMs Think When You Don't Tell Them What to Think About?]
  
 
'''Pre-generation'''

* 2025-04: [https://arxiv.org/abs/2504.13171 Sleep-time Compute: Beyond Inference Scaling at Test-time] (see the sketch below)
* 2025-11: [https://inference.net/blog/project-aella Project OSSAS: Custom LLMs to process 100 Million Research Papers] ([https://huggingface.co/inference-net models], [https://aella.inference.net/embeddings visualization])
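Sleep-time compute (above) shifts work from query time to idle time: the system pre-digests raw context while no user is waiting, so later queries condition on a short cached digest instead of the full document. A minimal caching sketch; `llm()` is again a hypothetical stand-in for a real model call:

<syntaxhighlight lang="python">
# Sketch of sleep-time compute: pre-compute a digest of each document during
# idle time, so query-time calls run against the short digest rather than the
# long raw context. Illustrative only; `llm()` is a hypothetical stand-in.

def llm(prompt: str) -> str:
    return "(model output for %d-char prompt)" % len(prompt)  # placeholder

digests: dict[str, str] = {}  # doc_id -> precomputed inferences

def sleep_phase(docs: dict[str, str]) -> None:
    # Runs offline, before any query arrives.
    for doc_id, text in docs.items():
        digests[doc_id] = llm(
            "List the key facts and likely-useful inferences from:\n" + text
        )

def query_phase(doc_id: str, question: str) -> str:
    # Cheap: conditions on the cached digest, not the raw document.
    return llm("Context:\n%s\n\nQuestion: %s" % (digests[doc_id], question))

sleep_phase({"paper1": "raw document text " * 2000})
print(query_phase("paper1", "What did the authors conclude?"))
</syntaxhighlight>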
 
  
 
=Missing Elements=

* Memory
* Continuous learning/update
* Robust contextual model
* Long-time-horizon coherence
* Fluid intelligence
* Agency
* Modeling of self
* Daydreaming

=Memes=
 
** 2025-06: [https://x.com/karpathy/status/1938626382248149433 LLMs as "cognitive cores"]
** 2025-11: [https://x.com/karpathy/status/1990116666194456651?s=20 Software 1.0 easily automates what you can specify. Software 2.0 easily automates what you can verify.]
** 2026-01: [https://x.com/karpathy/status/2008664551445963083?s=20 The majority of the ruff ruff is people who look at the current point and people who look at the current slope]
** 2026-02: [https://x.com/karpathy/status/2019137879310836075 Agentic Engineering]
  
 
=See Also=

* [[Increasing AI Intelligence]]