# Discord AI Companion
Nova is a friendly, slightly witty Discord companion that chats naturally in DMs or when mentioned in servers. It runs on Node.js, uses discord.js v14, and leans on OpenAI's cost-efficient models plus lightweight local memory for persistent personality.
## Features
- Conversational replies: automatic in DMs; in servers when mentioned or in the configured bot channel.
- OpenAI chat model (`gpt-4o-mini` by default) for dialogue and `text-embedding-3-small` for memory.
- Short-term, long-term, and summarized memory layers with cosine-similarity retrieval.
- Automatic memory pruning, importance scoring, and transcript summarization when chats grow long.
- Local JSON vector store (no extra infrastructure) plus graceful retries for OpenAI rate limits.
- Optional "miss u" pings that DM your coder at random intervals (0–6 hours) when `CODER_USER_ID` is set.
- Dynamic per-message prompt directives that tune Nova's tone (empathetic, hype, roleplay, etc.) before every OpenAI call.
- Lightweight DuckDuckGo scraping for "Google-like" answers without paid APIs (locally cached).
## Prerequisites
- Node.js 18+
- Discord bot token with Message Content Intent enabled
- OpenAI API key
## Setup
- Install dependencies:

  ```bash
  npm install
  ```

- Copy the environment template:

  ```bash
  cp .env.example .env
  ```

- Fill `.env` with your secrets:
  - `DISCORD_TOKEN`: Discord bot token
  - `OPENAI_API_KEY`: OpenAI key
  - `OPENAI_MODEL`: Optional chat model override (default `gpt-4o-mini`)
  - `OPENAI_EMBED_MODEL`: Optional embedding model (default `text-embedding-3-small`)
  - `BOT_CHANNEL_ID`: Optional guild channel ID where the bot can reply without mentions
  - `CODER_USER_ID`: Optional Discord user ID to receive surprise DMs every 0–6 hours
  - `ENABLE_WEB_SEARCH`: Set to `false` to disable DuckDuckGo lookups (default `true`)
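As a rough sketch, the env vars above could be centralized with defaults like this. The function name `loadConfig` and the key names on the returned object are illustrative, not necessarily what `src/config.js` exports:

```javascript
// Illustrative config loader: reads the .env keys documented above and
// applies the documented defaults. Actual src/config.js may differ.
function loadConfig(env = process.env) {
  return {
    discordToken: env.DISCORD_TOKEN,
    openaiApiKey: env.OPENAI_API_KEY,
    model: env.OPENAI_MODEL || "gpt-4o-mini",
    embedModel: env.OPENAI_EMBED_MODEL || "text-embedding-3-small",
    botChannelId: env.BOT_CHANNEL_ID || null,
    coderUserId: env.CODER_USER_ID || null,
    // Web search is on unless explicitly set to the string "false".
    webSearch: env.ENABLE_WEB_SEARCH !== "false",
  };
}
```

Only `DISCORD_TOKEN` and `OPENAI_API_KEY` are required; everything else falls back to the defaults listed above.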
## Running
- Development:

  ```bash
  npm run dev
  ```

- Production:

  ```bash
  npm start
  ```
### Optional PM2 Setup
```bash
npm install -g pm2
pm2 start npm --name nova-bot -- run start
pm2 save
```
PM2 restarts the bot if it crashes and keeps logs (`pm2 logs nova-bot`).
## File Structure
```
src/
  bot.js      # Discord client + routing logic
  config.js   # Environment and tuning knobs
  openai.js   # Chat + embedding helpers with retry logic
  memory.js   # Multi-layer memory engine
.env.example
README.md
```
## How Memory Works
- Short-term (recency buffer): the last 10 conversation turns, kept verbatim for style and continuity. Stored per user in `data/memory.json`.
- Long-term (vector store): every user message + bot reply pair becomes an embedding via `text-embedding-3-small`. Embeddings, raw text, timestamps, and heuristic importance scores are stored in the JSON vector store. Retrieval uses cosine similarity plus a small importance boost; the top 5 results feed the prompt.
- Summary layer: when the recency buffer grows past ~3000 characters, Nova asks OpenAI to condense the transcript to under 120 words, keeps the summary, and trims the raw buffer down to the last few turns. This keeps token usage low while retaining story arcs.
- Importance scoring: messages mentioning intent words ("plan", "remember", etc.), notable length, or emotional weight receive higher scores. When the store exceeds its cap, the lowest-importance/oldest memories are pruned. You can also call `pruneLowImportanceMemories()` manually if needed.
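The scoring and pruning heuristics could look roughly like this. The word lists, weights, and tie-breaking rules here are assumptions for illustration; only the general idea (intent words, length, emotion boost scores; lowest-importance entries pruned first) comes from the README:

```javascript
// Illustrative importance heuristic. The real scorer in src/memory.js
// may use different words and weights.
const INTENT_WORDS = ["plan", "remember", "promise", "important"];
const EMOTION_WORDS = ["love", "hate", "sad", "excited"];

function scoreImportance(text) {
  const lower = text.toLowerCase();
  let score = 0;
  if (INTENT_WORDS.some((w) => lower.includes(w))) score += 2;
  if (EMOTION_WORDS.some((w) => lower.includes(w))) score += 1;
  if (text.length > 200) score += 1; // long messages tend to matter
  return score;
}

// Keep the `cap` highest-importance memories, newest first on ties.
function pruneLowImportanceMemories(memories, cap) {
  if (memories.length <= cap) return memories;
  return [...memories]
    .sort((a, b) => b.importance - a.importance || b.timestamp - a.timestamp)
    .slice(0, cap);
}
```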
## Memory Deep Dive
- Embedding math: `text-embedding-3-small` returns 1,536 floating-point numbers for each text chunk. That giant array is a vector map of the message's meaning; similar moments land near each other in 1,536-dimensional space.
- What gets embedded: after every user→bot turn, `recordInteraction()` (see `src/memory.js`) bundles the pair, scores its importance, asks OpenAI for an embedding, and stores `{ content, embedding, importance, timestamp }` inside `data/memory.json`.
- Why so many numbers: cosine similarity needs raw vectors to compare new thoughts to past ones. When a fresh message arrives, `retrieveRelevantMemories()` embeds it too, calculates cosine similarity against every stored vector, adds a small importance boost, and returns the top five memories to inject into the system prompt.
- Self-cleaning: if the JSON file grows past the configured limits, low-importance items are trimmed, summaries compress the short-term transcript, and you can delete `data/memory.json` to reset everything cleanly.
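The retrieval step above can be sketched as plain vector math. The 0.1 importance-boost weight is an assumption; the cosine formula and the top-five cut are from the description:

```javascript
// Cosine similarity: dot product over the product of vector lengths.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Score every stored memory against the query embedding, add a small
// importance boost (weight is illustrative), keep the top `topK`.
function retrieveRelevantMemories(queryEmbedding, store, topK = 5) {
  return [...store]
    .map((m) => ({
      ...m,
      score: cosineSimilarity(queryEmbedding, m.embedding) + 0.1 * m.importance,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```

Real embeddings have 1,536 dimensions, but the math is identical at any length, which is why the toy two-dimensional vectors in a test behave the same way.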
## Conversation Flow
- Incoming message triggers only if it is a DM, mentions the bot, or appears in the configured channel.
- The user turn is appended to short-term memory immediately.
- The memory engine retrieves relevant long-term memories and summary text.
- A compact system prompt injects personality, summary, and relevant memories before passing short-term history to OpenAI.
- The reply is sent back to Discord. If Nova wants to send a burst of thoughts, she emits the `<SPLIT>` token and the runtime fans it out into multiple sequential Discord messages.
- Long chats automatically summarize; low-value memories eventually get pruned.
## Dynamic Prompting
- Each turn, Nova inspects the fresh user message (tone, instructions, roleplay cues, explicit “split this” requests) plus the last few utterances.
- A helper (`composeDynamicPrompt` in `src/bot.js`) emits short directives like "User mood: fragile, be gentle" or "They asked for roleplay—stay in character."
- These directives slot into the system prompt ahead of memories, so OpenAI gets real-time guidance tailored to the latest vibe without losing the core persona.
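A minimal sketch of what such a helper could do, assuming keyword-based cue detection (the cue lists and regexes here are invented; only the function's role and the example directives come from the text above):

```javascript
// Illustrative composeDynamicPrompt: inspect the latest user message for
// tone, roleplay, and split cues, then emit short prompt directives.
function composeDynamicPrompt(userMessage) {
  const lower = userMessage.toLowerCase();
  const directives = [];
  if (/(sad|tired|rough day|upset)/.test(lower)) {
    directives.push("User mood: fragile, be gentle.");
  }
  if (/(let's pretend|roleplay|in character)/.test(lower)) {
    directives.push("They asked for roleplay—stay in character.");
  }
  if (lower.includes("split this")) {
    directives.push("Use <SPLIT> to break the reply into bubbles.");
  }
  return directives.join(" ");
}
```

The returned string is prepended to the system prompt for that single turn, so it never contaminates long-term memory.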
## Local Web Search
- `src/search.js` scrapes DuckDuckGo's HTML endpoint with a normal browser user-agent, extracts the top results (title/link/snippet), and caches them for 10 minutes to avoid hammering the site.
- `bot.js` detects when a question sounds "live" (mentions today/news/google/etc.) and injects the formatted snippets into the prompt as "Live intel". No paid APIs involved—it's just outbound HTTPS from your machine.
- Toggle this via `ENABLE_WEB_SEARCH=false` if you don't want Nova to look things up.
## Proactive Pings
- When `CODER_USER_ID` is provided, Nova spins up a timer on startup that waits a random duration (anywhere from immediate to 6 hours) before DMing that user.
- Each ping goes through OpenAI with the prompt "you havent messaged your coder in a while, and you wanna chat with him!" so responses stay playful and unscripted.
- The ping gets typed out (`sendTyping`) for realism and is stored back into the memory layers so the next incoming reply has context.
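A scheduler like the one described could be sketched as follows. `client.users.fetch` and `user.send` are real discord.js v14 calls; the helper names and the rescheduling loop are assumptions:

```javascript
const SIX_HOURS_MS = 6 * 60 * 60 * 1000;

// Random delay anywhere from immediate to 6 hours (rand injectable for tests).
function nextPingDelay(rand = Math.random) {
  return Math.floor(rand() * SIX_HOURS_MS);
}

// Wait, DM the coder with an AI-generated opener, then reschedule forever.
function schedulePing(client, coderUserId, generateReply) {
  setTimeout(async () => {
    const user = await client.users.fetch(coderUserId);
    const text = await generateReply(
      "you havent messaged your coder in a while, and you wanna chat with him!"
    );
    await user.send(text);
    schedulePing(client, coderUserId, generateReply);
  }, nextPingDelay());
}
```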
## Update Log
- 2026-02-13 — Dynamic personality + multi-message riffs: added the instinctive persona prompt with tone mirroring, `<SPLIT>`-based multi-bubble replies, and proactive coder pings so Nova feels alive in DMs.
- 2026-02-13 — Memory intelligence: implemented embeddings-backed long-term memory, short-term buffers, transcript summarization, and heuristic importance pruning stored in `data/memory.json`.
- 2026-02-13 — Live intel & directives: introduced DuckDuckGo scraping, per-turn dynamic prompt directives (tone, roleplay, instruction compliance), and env toggles (`ENABLE_WEB_SEARCH`, `CODER_USER_ID`).
- 2026-02-13 — UX polish: added typing indicators, persona-aware fallback replies, mention cleaning, and README/docs covering setup, memory internals, web search, and deployment tips.
- 2026-02-13 — Conversational control: tuned the system prompt to avoid forced follow-up questions, raised temperature for looser banter, and reinforced Nova's awareness of DuckDuckGo lookups plus `<SPLIT>` usage.
## Notes
- The bot retries OpenAI requests up to 3 times with incremental backoff when rate limited.
- `data/memory.json` is ignored by git but will grow with usage; back it up if you want persistent personality.
- To reset persona, delete `data/memory.json` while the bot is offline.
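The retry behavior in the first note could be implemented along these lines. The exact delay schedule (1s, 2s, 3s) and the `err.status === 429` check are assumptions; only "up to 3 retries with incremental backoff on rate limits" comes from the README:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry `fn` on rate-limit errors only, with linearly growing delays.
async function withRetries(fn, maxRetries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const rateLimited = err && err.status === 429;
      if (!rateLimited || attempt >= maxRetries) throw err;
      await sleep(baseDelayMs * (attempt + 1)); // 1s, then 2s, then 3s
    }
  }
}
```

Non-429 errors are rethrown immediately so genuine failures surface instead of stalling behind pointless retries.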
Happy chatting!