AI Horizons: Memory Decoder - A Pretrained, Plug-and-Play Memory for Large Language Models

Speaker: Dr Zhouhan Lin (Associate Professor, Shanghai Jiao Tong University)

Date: Tuesday, March 3, 2026

Time: 10:00 AM SGT

Host: Dr Dianbo Liu (Assistant Professor, National University of Singapore)

Please register for the talk here

Abstract:

Large Language Models (LLMs) have shown strong abilities in general language tasks, yet adapting them to specific domains remains a challenge. Current methods like Domain-Adaptive Pretraining (DAPT) require costly full-parameter training and suffer from catastrophic forgetting. Meanwhile, Retrieval-Augmented Generation (RAG) introduces substantial inference latency due to expensive nearest-neighbor searches and longer contexts.

This paper introduces Memory Decoder, a plug-and-play pretrained memory that enables efficient domain adaptation without changing the original model's parameters. Memory Decoder employs a small transformer decoder that learns to imitate the behavior of an external non-parametric retriever. Once trained, Memory Decoder can be seamlessly integrated with any pretrained language model that shares the same tokenizer, requiring no model-specific modifications. Experimental results demonstrate that Memory Decoder enables effective adaptation of various Qwen and Llama models to three specialized domains (biomedicine, finance, and law), reducing perplexity by an average of 6.17 points. Overall, Memory Decoder introduces a novel paradigm centered on a specially pretrained memory component designed for domain-specific adaptation. This memory architecture can be integrated in a plug-and-play manner, consistently enhancing performance across multiple models within the target domain.
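To make the plug-and-play idea concrete, the sketch below illustrates one way such an integration could look at inference time: a frozen base LM and a small domain-adapted decoder that share a tokenizer each produce a next-token distribution, and the two are mixed in probability space, in the spirit of kNN-LM-style interpolation. This is a minimal illustration under stated assumptions, not the authors' implementation; the model names, the memory-decoder checkpoint path, and the mixing weight `lam` are placeholders.

```python
# Minimal sketch (not the authors' code): combining a frozen base LM with a
# small, separately trained "memory" decoder that shares the same tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "Qwen/Qwen2.5-0.5B"          # placeholder base model
MEMORY = "path/to/memory-decoder"   # hypothetical domain-adapted small decoder
lam = 0.3                           # mixing weight; an assumed, tunable knob

tok = AutoTokenizer.from_pretrained(BASE)
base_lm = AutoModelForCausalLM.from_pretrained(BASE).eval()
mem_lm = AutoModelForCausalLM.from_pretrained(MEMORY).eval()  # same tokenizer

@torch.no_grad()
def next_token_dist(prompt: str) -> torch.Tensor:
    """Interpolated next-token distribution; base LM parameters are untouched."""
    ids = tok(prompt, return_tensors="pt").input_ids
    p_base = torch.softmax(base_lm(input_ids=ids).logits[:, -1, :], dim=-1)
    p_mem = torch.softmax(mem_lm(input_ids=ids).logits[:, -1, :], dim=-1)
    # Probability-space interpolation: the memory decoder shifts mass toward
    # domain-specific continuations without any retrieval at inference time.
    return lam * p_mem + (1 - lam) * p_base
```

Because the memory component only contributes a next-token distribution, swapping in a different base model with the same tokenizer requires no retraining, which is what makes the approach plug-and-play.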
