
# Utilities

Standalone functions exported from `@agent-layer-zero/dendrite`.

## checkWebGPU()

Check if the current browser supports WebGPU. Returns a browser-specific reason if not.

```ts
import { checkWebGPU } from '@agent-layer-zero/dendrite'

const { available, reason } = checkWebGPU()

if (!available) {
  showMessage(reason)
  // "Firefox does not support WebGPU. Try Chrome or Edge."
  // "Enable WebGPU in Safari: Settings > Feature Flags > WebGPU"
  // "Mobile browsers do not support WebGPU. Try Chrome on desktop."
}
```

Returns: `{ available: boolean, reason?: string }`

## deleteAllModelCaches()

Delete all WebLLM cached model data from both the Cache API and IndexedDB. Used for error recovery when model downloads are corrupted.

```ts
import { deleteAllModelCaches } from '@agent-layer-zero/dendrite'

await deleteAllModelCaches()
```

Clears three cache scopes: `webllm/model`, `webllm/wasm`, and `webllm/config`.

## classifyError(err)

Classify a WebLLM error into a user-friendly category with a message.

```ts
import { classifyError } from '@agent-layer-zero/dendrite'

try {
  await neuron.complete('test')
} catch (err) {
  const { type, message, canClearCache } = classifyError(err)
}
```

Returns:

| Field | Type | Description |
| --- | --- | --- |
| `type` | `'quota' \| 'network' \| 'webgpu' \| 'gpu_pipeline' \| 'unknown'` | Error category |
| `message` | `string` | User-friendly error message |
| `canClearCache` | `boolean` | Whether clearing the cache might fix it |
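The `canClearCache` flag pairs naturally with `deleteAllModelCaches()`. The sketch below shows one way to wire them together; the `withCacheRecovery` wrapper is hypothetical (not part of the library), and takes the classifier and cache-clearer as parameters so the pattern is easy to test in isolation:

```ts
interface ClassifiedError { type: string; message: string; canClearCache: boolean }

// Hypothetical wrapper: clear model caches and retry once when the
// classified error suggests a corrupted download.
async function withCacheRecovery<T>(
  run: () => Promise<T>,
  classify: (err: unknown) => ClassifiedError,
  clearCaches: () => Promise<void>,
): Promise<T> {
  try {
    return await run()
  } catch (err) {
    const { type, message, canClearCache } = classify(err)
    if (!canClearCache) throw new Error(`[${type}] ${message}`)
    await clearCaches() // wipe possibly-corrupted model data
    return run()        // the retry triggers a fresh download
  }
}
```

With the real exports this would be called as `withCacheRecovery(() => neuron.complete('test'), classifyError, deleteAllModelCaches)`.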

## buildSystemPrompt(config)

Build a system prompt from personality docs. Used internally by `createNeuron`, but exposed for custom use.

```ts
import { buildSystemPrompt } from '@agent-layer-zero/dendrite'

const prompt = buildSystemPrompt({
  personalityDocs: [
    { type: 'zero-shot', content: 'You are a chef.' },
    { type: 'knowledge', content: 'You specialize in Italian food.' },
  ],
  displayName: 'Chef Mario',
})
```

Config:

| Field | Type | Description |
| --- | --- | --- |
| `systemPrompt` | `string` | Simple prompt (overrides docs) |
| `personalityDocs` | `PersonalityDoc[]` | Typed documents |
| `displayName` | `string` | Used in default prompt if no zero-shot doc |
| `retrievedChunks` | `string[]` | RAG context |
| `retrievedMemories` | `string[]` | Memory context |

## buildMessages(config)

Build the full message array for the LLM, including system prompt, history, and identity reminder.

```ts
import { buildMessages } from '@agent-layer-zero/dendrite'

const messages = buildMessages({
  systemPrompt: 'You are a pirate.',
  history: [
    { role: 'user', content: 'Ahoy!' },
    { role: 'assistant', content: 'Arr, welcome aboard!' },
  ],
  userMessage: 'Where are we sailing?',
  maxHistoryTurns: 10,
})
// Returns ChatMessage[] ready for the LLM
```

## fetchPersonaConfig(apiUrl, username, slug)

Fetch persona config from an AgentLayerZero API. See API Connection.

```ts
import { fetchPersonaConfig } from '@agent-layer-zero/dendrite'

const persona = await fetchPersonaConfig(
  'https://synapse-xxr0cw.fly.dev',
  'shyaboi',
  'career-coach'
)
```

## Constants

### MODEL_OPTIONS

Array of available model options with metadata.

```ts
import { MODEL_OPTIONS } from '@agent-layer-zero/dendrite'

MODEL_OPTIONS.forEach(m => console.log(`${m.label} (${m.vram}) — ${m.tier}`))
// Qwen3 0.6B (~0.5 GB) — Tiny
// SmolLM2 360M (~0.3 GB) — Tiny
// Llama 3.2 1B (~0.9 GB) — Tiny
// Qwen3 1.7B (~1.5 GB) — Light
// Gemma 2 2B (~1.7 GB) — Light
// Qwen3 4B (~3.2 GB) — Standard
// Llama 3.2 3B (~2.3 GB) — Standard
// Phi 3.5 Mini (~2.3 GB) — Standard
// Qwen3 8B (~5.5 GB) — Heavy
// Llama 3.1 8B (~5.0 GB) — Heavy
// DeepSeek R1 1.5B (~1.4 GB) — Reasoning
// ...
```

Each option is `{ id: string, label: string, vram: string, tier: string }`. Tier strings: `'Tiny' | 'Light' | 'Standard' | 'Heavy' | 'Reasoning'`.

Many models ship in two quantizations (`q4f16_1` default, `q4f32_1` higher precision). The list shows both as separate entries; filter by `id` suffix if you want only one.
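As a sketch of that filtering, assuming every id embeds its quantization string (as the default `gemma-2-2b-it-q4f16_1-MLC` does), you could keep only the default-precision entries. The helper below is hypothetical and works on any option list:

```ts
interface ModelOption { id: string; label: string; vram: string; tier: string }

// Keep only the q4f16_1 (default precision) variants.
function defaultPrecisionOnly(options: ModelOption[]): ModelOption[] {
  return options.filter(m => m.id.includes('q4f16_1'))
}
```

Call it as `defaultPrecisionOnly(MODEL_OPTIONS)` to de-duplicate a model picker.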

### VALID_MODEL_IDS

Set of valid model ID strings for validation.

```ts
import { VALID_MODEL_IDS } from '@agent-layer-zero/dendrite'

VALID_MODEL_IDS.has('gemma-2-2b-it-q4f16_1-MLC') // true
VALID_MODEL_IDS.has('gpt-4') // false
```
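A common use is validating a user-supplied id and falling back to the default. A minimal sketch (the `resolveModelId` helper is hypothetical, not a library export):

```ts
// Return the requested id if it is valid, otherwise the fallback.
function resolveModelId(requested: string, validIds: Set<string>, fallback: string): string {
  return validIds.has(requested) ? requested : fallback
}
```

With the real exports: `resolveModelId(userChoice, VALID_MODEL_IDS, DEFAULT_MODEL_ID)`.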

### DEFAULT_MODEL_ID

The default model ID: `'gemma-2-2b-it-q4f16_1-MLC'`

Part of the AgentLayerZero platform