Just Think AI

Glossary Term

Agent Memory

How agents retain information within and across sessions.

LLMs are stateless by default: every request starts fresh. "Memory" in an agent is the engineering scaffolding that makes it seem otherwise. There are three kinds:

1. In-context memory: include the conversation history directly in the prompt. Simple, but it only works until you hit the context limit.
2. External memory: write important facts to a database and retrieve them at the start of each session. Scales indefinitely, but depends on good retrieval.
3. Summary memory: periodically compress older conversation into a shorter summary and carry that forward instead of the full transcript.
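The three kinds can be combined in one object. Below is a minimal sketch; the `AgentMemory` class, its method names, and the truncation-based "summarizer" are all illustrative stand-ins (a real system would use a database for the store and an LLM for summarization), not any particular framework's API.

```python
class AgentMemory:
    """Toy combination of the three memory patterns."""

    def __init__(self, max_turns=4):
        self.turns = []        # (1) in-context memory: recent turns
        self.store = {}        # (2) external memory: durable facts
        self.summary = ""      # (3) summary memory: compressed history
        self.max_turns = max_turns

    def add_turn(self, role, text):
        self.turns.append((role, text))
        if len(self.turns) > self.max_turns:
            # Fold the oldest turn into the running summary instead of
            # silently dropping it. Naive truncation stands in for an
            # LLM-generated summary here.
            old_role, old_text = self.turns.pop(0)
            self.summary += f"{old_role}: {old_text[:40]}... "

    def remember(self, key, value):
        self.store[key] = value  # persist a fact across sessions

    def build_prompt(self, user_msg):
        parts = []
        if self.summary:
            parts.append(f"Summary of earlier conversation: {self.summary}")
        if self.store:
            facts = "; ".join(f"{k}={v}" for k, v in self.store.items())
            parts.append(f"Known facts: {facts}")
        parts += [f"{r}: {t}" for r, t in self.turns]
        parts.append(f"user: {user_msg}")
        return "\n".join(parts)
```

The point of the sketch is the prompt assembly order: compressed summary first, then retrieved facts, then the recent verbatim turns, so the freshest context sits closest to the new message.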

The right pattern depends on your use case. Customer support agents usually need a few turns of in-context memory plus a persistent user profile in a database. Research agents need multi-session memory about prior findings. Simple chat assistants often need nothing more than the last N turns.

The failure mode everyone hits: dumping the entire conversation history into context without any compression strategy. You'll blow through the context window on your second user session.
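A compression strategy can be as simple as a budget check before every request: keep the most recent turns that fit, and summarize the overflow. This is a hedged sketch; it counts characters as a proxy for tokens and uses truncation in place of a real LLM summarizer, so treat the numbers and names as placeholders.

```python
def fit_to_budget(turns, budget_chars=2000):
    """Keep the newest turns that fit the budget; compress the rest.

    `turns` is a list of (role, text) pairs, oldest first. Character
    counts stand in for tokens; a production system would use the
    model's tokenizer and an LLM to write the summary.
    """
    kept, used = [], 0
    for role, text in reversed(turns):       # walk newest to oldest
        if used + len(text) > budget_chars:
            break                            # budget exhausted
        kept.insert(0, (role, text))         # restore original order
        used += len(text)
    overflow = turns[: len(turns) - len(kept)]
    # Crude summary of everything that no longer fits verbatim.
    summary = " | ".join(text[:30] for _, text in overflow)
    return summary, kept
```

Running this before each model call bounds prompt growth: no matter how long the session runs, the prompt stays within the budget, and older turns degrade gracefully into the summary instead of causing a hard context-limit failure.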

Bring this to your business

Knowing the term is one thing. Shipping it is another.

We do two-week AI Sprints — one term, one workflow, into production by Day 10.