I built ThreadLink after repeatedly losing context between long LLM sessions. If you're working on something across multiple sessions, you end up copying large chunks of prior conversation into every new chat.
ThreadLink compresses full transcripts into portable "context cards" that can be reused across platforms or sessions. It runs entirely client-side — no transcripts are sent to any server. You provide your own API keys (OpenAI, Gemini, Mistral, Groq).
How it works:
- Removes platform boilerplate
- Segments conversations into token-aware sequential chunks
- Processes chunks in parallel
- Reassembles results deterministically
- Preserves partial output if some chunks fail
- Includes adaptive rate limiting with exponential backoff
- Optional recency weighting to bias token allocation toward recent content
- Prompts are editable, so power users can repurpose the pipeline beyond summarization (e.g. extraction, transformation, structured rewriting)
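To make the chunking and recency-weighting steps concrete, here's a minimal sketch of what a token-aware splitter with budget allocation might look like. This is illustrative, not ThreadLink's actual code: the chars/4 token approximation, the paragraph-boundary split, and the linear recency weight are all assumptions.

```typescript
interface Chunk {
  index: number;  // position, so results can be reassembled deterministically
  text: string;
  budget: number; // output-token budget allocated to this chunk
}

// Rough token estimate (≈4 chars per token); a real tokenizer would differ.
const approxTokens = (s: string): number => Math.ceil(s.length / 4);

function chunkTranscript(
  transcript: string,
  maxTokensPerChunk: number,
  totalBudget: number,
  recencyWeight = 0 // 0 = uniform; higher biases budget toward later chunks
): Chunk[] {
  // Split on paragraph boundaries, then pack greedily up to the token cap.
  const paras = transcript.split(/\n\n+/);
  const texts: string[] = [];
  let current = "";
  for (const p of paras) {
    if (current && approxTokens(current + "\n\n" + p) > maxTokensPerChunk) {
      texts.push(current);
      current = p;
    } else {
      current = current ? current + "\n\n" + p : p;
    }
  }
  if (current) texts.push(current);

  // Allocate the total summary budget, optionally weighted toward recent chunks.
  const weights = texts.map(
    (_, i) => 1 + recencyWeight * (i / Math.max(1, texts.length - 1))
  );
  const weightSum = weights.reduce((a, b) => a + b, 0);
  return texts.map((text, index) => ({
    index,
    text,
    budget: Math.floor((totalBudget * weights[index]) / weightSum),
  }));
}
```

With `recencyWeight > 0`, later chunks simply get a proportionally larger share of the output budget, so recent context survives compression better.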
The focus wasn’t just summarization quality, but orchestration: chunking strategy, concurrency control, and failure handling.
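The orchestration side can be sketched in a few lines: run chunk jobs in parallel, retry transient failures with exponential backoff, and keep whatever succeeded so one bad chunk doesn't sink the whole card. The retry counts and delays below are illustrative defaults, not ThreadLink's actual configuration.

```typescript
// Retry a flaky async call with exponential backoff plus jitter.
async function withBackoff<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err;
      // Delays roughly double each attempt: ~500ms, ~1s, ~2s, ...
      const delay = baseMs * 2 ** attempt * (0.5 + Math.random() / 2);
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}

// Process all chunks concurrently; Promise.allSettled preserves order and
// partial output — failed chunks become null placeholders instead of
// aborting the whole run.
async function processChunks(
  chunks: string[],
  summarize: (chunk: string) => Promise<string>
): Promise<(string | null)[]> {
  const settled = await Promise.allSettled(
    chunks.map((c) => withBackoff(() => summarize(c)))
  );
  return settled.map((s) => (s.status === "fulfilled" ? s.value : null));
}
```

Because results carry their original order, reassembly is just a join over the non-null entries, which is what makes the output deterministic even under partial failure.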
Built for the Bolt.new 2025 Hackathon.
Live demo: https://threadlink.xyz
Source: https://github.com/Skragus/ThreadLink
Demo video: https://youtu.be/WNVgECm5cVc?si=yFYXMJxF6GBB0DAY