The following walkthrough shows a production workflow for LLM memory and context assembly — one of the many use cases HexxlaDB supports. Every code block is copy-pasteable.
When a new query arrives, embed it and search. HexxlaDB uses its HNSW graph for fast approximate nearest-neighbor lookup, then applies your filters as post-predicates.
```go
db.View(func(tx *hexxladb.Tx) error {
	results, err := tx.QueryCells(ctx, hexxladb.CellQuery{
		Embedding:     queryVector,            // "How do I test my HTTP handlers?"
		ExcludeTags:   []string{"preference"}, // keep preferences separate
		MinConfidence: 0.5,
		MaxResults:    8,
		SortBy:        hexxladb.SortByScore,
	})
	if err != nil {
		return err
	}
	// results: ranked cells with score, content, tags, provenance
	_ = results
	return nil
})
```
Preferences are just cells with a "preference" tag. Query them separately so they always appear in your context, regardless of what the user is asking about.
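For example, the preference lookup can be a second, tag-scoped query run alongside the main search. The sketch below assumes an `IncludeTags` filter on `CellQuery`, mirroring the `ExcludeTags` field shown above; that field name is an assumption, not confirmed API.

```go
// Hedged sketch: fetch preference cells separately so they can be prepended
// to every prompt. IncludeTags is an assumed filter, mirroring the
// ExcludeTags field used in the search query above.
db.View(func(tx *hexxladb.Tx) error {
	prefs, err := tx.QueryCells(ctx, hexxladb.CellQuery{
		IncludeTags:   []string{"preference"}, // assumption: inverse of ExcludeTags
		MinConfidence: 0.5,
		MaxResults:    4,
	})
	if err != nil {
		return err
	}
	_ = prefs // prepend these to the topical context assembled below
	return nil
})
```

Keeping the two queries separate means a question about HTTP testing never crowds the user's standing preferences out of the budget.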
Take the top search results as seed coordinates and expand outward. The assembler walks concentric rings, fills your budget, and automatically replaces superseded cells with their successors.
```go
db.View(func(tx *hexxladb.Tx) error {
	// Use the top-3 search results as seeds
	seeds := []hexxladb.Coord{
		results[0].Cell.Coord,
		results[1].Cell.Coord,
		results[2].Cell.Coord,
	}
	pack, err := tx.LoadContextPackFrom(ctx,
		2,    // max ring radius
		4096, // budget
		hexxladb.ByteLenBudgeter{},
		hexxladb.LoadContextBudgetConfig{
			FilterSuperseded: true, // old preferences auto-replaced by new ones
			IncludeSeams:     true, // surface contradictions for the system
		},
		seeds...,
	)
	if err != nil {
		return err
	}
	// pack.Cells: ordered context; pack.TotalTokens: fits your budget
	_ = pack
	return nil
})
```
When preferences change, HexxlaDB doesn’t silently overwrite — it records the relationship so context assembly can handle it automatically.
```go
db.Update(func(tx *hexxladb.Tx) error {
	// User now wants verbose explanations (previously wanted brevity)
	return tx.MarkSupersedes(newPrefCoord, oldPrefCoord, "User changed communication preference")
})

// Or flag an outright contradiction between two facts
db.Update(func(tx *hexxladb.Tx) error {
	return tx.MarkConflict(cellA, cellB, "Conflicting architecture recommendations")
})
```
That’s the full pipeline: embed → search → filter → assemble → output. Every step runs in-process, deterministically, with no network calls to the database layer. See the llm_context_engine example for a complete, runnable version of this LLM memory workflow, with advanced patterns including multi-signal retrieval, preference supersession, and full prompt assembly.
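To tie the steps together, here is a hedged end-to-end sketch built only from the calls shown above. `embed` and `renderPrompt` are hypothetical placeholders for your embedding model and prompt formatter, not HexxlaDB APIs, and the snippet assumes the same `hexxladb` import as the earlier blocks.

```go
// buildContext runs the full pipeline: embed -> search -> filter -> assemble.
// embed and renderPrompt are hypothetical helpers you would supply.
func buildContext(ctx context.Context, db *hexxladb.DB, userQuery string) (string, error) {
	queryVector := embed(userQuery) // your embedding model, outside HexxlaDB
	var prompt string
	err := db.View(func(tx *hexxladb.Tx) error {
		// Search: approximate nearest neighbors, preferences excluded.
		results, err := tx.QueryCells(ctx, hexxladb.CellQuery{
			Embedding:     queryVector,
			ExcludeTags:   []string{"preference"},
			MinConfidence: 0.5,
			MaxResults:    8,
			SortBy:        hexxladb.SortByScore,
		})
		if err != nil {
			return err
		}
		// Assemble: expand from the top hits, dropping superseded cells.
		top := 3
		if len(results) < top {
			top = len(results)
		}
		seeds := make([]hexxladb.Coord, 0, top)
		for _, r := range results[:top] {
			seeds = append(seeds, r.Cell.Coord)
		}
		pack, err := tx.LoadContextPackFrom(ctx, 2, 4096,
			hexxladb.ByteLenBudgeter{},
			hexxladb.LoadContextBudgetConfig{FilterSuperseded: true, IncludeSeams: true},
			seeds...,
		)
		if err != nil {
			return err
		}
		prompt = renderPrompt(pack.Cells) // hypothetical formatter
		return nil
	})
	return prompt, err
}
```

The llm_context_engine example linked above expands this skeleton with multi-signal retrieval and full prompt assembly.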