The Living
Context Engine.
Context infrastructure for AI agents. Adaptive memory, semantic graph, sub-millisecond retrieval. Deploy embedded, self-hosted, or in the cloud.
One engine, every capability.
Feather Core
Open source, embedded, zero-server. Single .feather file. Python + Rust SDK. Ships in 5 minutes.
Feather Cloud
Managed, scalable, API-first. Your context layer, delivered globally. Keep your data in your VPC if you want to.
Five layers. Complete context.
Most memory solutions give you one layer. We give you all five — working together.
Knowledge that evolves, not just stores.
Not another vector database. A custom engine, built from scratch in C++ and Rust, for one job: holding the living context your agents actually need.
Memory that ages gracefully.
Every record tracks recall count, last access, and inherent importance. At query time, three scores combine into one — no cron, no eviction queue.
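The blend of those three signals can be sketched in a few lines of plain Python. This is an illustration under assumed formulas (an exponential recency half-life and logarithmic recall reinforcement), not Feather's actual scoring code; `memory_score` and its parameters are hypothetical.

```python
import math
import time

def memory_score(importance, recall_count, last_access_ts,
                 now=None, half_life_s=7 * 24 * 3600):
    """Illustrative blend of the three per-record signals:
    inherent importance, usage reinforcement, and recency decay."""
    now = time.time() if now is None else now
    # Recency: exponential decay with a configurable half-life.
    age = max(0.0, now - last_access_ts)
    recency = 0.5 ** (age / half_life_s)
    # Reinforcement: frequently recalled records score higher,
    # with diminishing returns via log1p.
    reinforcement = math.log1p(recall_count)
    # One combined score, computed at query time: no cron, no eviction queue.
    return importance * (1.0 + reinforcement) * recency

fresh = memory_score(0.85, recall_count=3, last_access_ts=time.time())
stale = memory_score(0.85, recall_count=3,
                     last_access_ts=time.time() - 30 * 24 * 3600)
assert fresh > stale  # same record, older last access, lower score
```

Because the score is a pure function of the record's counters, nothing has to run in the background; relevance is recomputed on read.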
Typed edges. Real reasoning.
Weighted, directional edges with BFS traversal. Your knowledge doesn't live as isolated points — it becomes a graph the engine can walk.
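A hop-limited BFS over typed, weighted edges can be sketched in plain Python. The edge table and `traverse` helper below are hypothetical stand-ins for the engine's internal traversal, shown only to make the mechanic concrete.

```python
from collections import deque

# Hypothetical in-memory mirror of the engine's typed, weighted edges:
# edges[src] = list of (dst, rel_type, weight)
edges = {
    1001: [(1002, "informed_by", 0.9)],
    1002: [(1003, "cites", 0.7)],
    1003: [(1001, "contradicts", 0.4)],
}

def traverse(start, hops, min_weight=0.5):
    """Hop-limited BFS that only follows sufficiently strong edges."""
    seen = {start}
    frontier = deque([(start, 0)])
    reached = []
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # hop budget spent on this branch
        for dst, rel, weight in edges.get(node, []):
            if weight >= min_weight and dst not in seen:
                seen.add(dst)
                reached.append((dst, rel, weight))
                frontier.append((dst, depth + 1))
    return reached

# Two hops from 1001 reach 1002 and 1003; the weak 0.4 edge is skipped.
related = traverse(1001, hops=2)
```

Weights act as a relevance gate during the walk, so a low-confidence edge never drags unrelated records into the result set.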
context_chain
Sub-millisecond retrieval. Built in C++.
HNSW graph index with M=16, ef=200. Similarity kernels hand-written for AVX2 and AVX512. The .feather binary format is zero-copy — memory-mapped, not parsed.
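The zero-copy idea, mapping the file and reading values in place rather than parsing it, can be demonstrated with Python's standard mmap module. The toy fixed-layout vector file below is not the .feather format; it only illustrates the principle.

```python
import mmap
import os
import struct
import tempfile

# Write a toy fixed-layout file: two float32 vectors, back to back.
dim = 4
vecs = [[0.1, 0.2, 0.3, 0.4], [1.0, 2.0, 3.0, 4.0]]
path = os.path.join(tempfile.mkdtemp(), "toy.vec")
with open(path, "wb") as f:
    for v in vecs:
        f.write(struct.pack(f"={dim}f", *v))

# Read it back without a parse step: map the bytes, cast a float
# view over them, and index vectors directly in place.
with open(path, "rb") as f, \
        mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
    view = memoryview(m).cast("f")
    second = list(view[dim:2 * dim])  # the second vector, no file copy
    view.release()  # release the buffer before the mmap closes
```

With a fixed binary layout, "opening" the file is just a map call, which is why startup cost stays flat as the file grows.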
Setup in 5 minutes.
Install. Open a file. Add vectors with metadata. Link them. Query with context. That's it.
# Install: pip install feather-db
import feather_db

# Open an embedded .feather file (or create one)
db = feather_db.DB.open("context.feather", dim=768)

# Attach rich metadata: namespace, entity, attributes, importance
meta = feather_db.Metadata()
meta.importance = 0.85
meta.set_attribute("type", "campaign_brief")

# Add a vector + its metadata
db.add(id=1001, vec=embed("your context"), meta=meta)

# Connect it into the knowledge graph
db.link(from_id=1001, to_id=1002, rel_type="informed_by", weight=0.9)

# Semantic search + 2-hop graph traversal, in one call
chain = db.context_chain(query_vec, k=5, hops=2)
Plugs into every stack you already use
Deploy your way.
Start embedded. Scale to the cloud when you're ready. The context layer is always yours — same engine, same semantics, your choice of surface.
Feather Core
Open source · embedded
Feather Cloud
Managed · horizontally scalable
Feather Core
- Status
- Available now
- Deployment
- In-process, single file
- Latency
- <1ms
- Data
- 100% yours, on disk
- Scale
- Single node
- Ops
- Zero — it's a file
- Price
- Free · Open source
Feather Cloud
- Status
- Coming Q3 2026
- Deployment
- Managed API
- Latency
- <50ms
- Data
- Your VPC option
- Scale
- Horizontal, auto
- Ops
- Fully managed
- Price
- Usage-based
Built for every context-hungry system.
AI Agents
Memory that updates as the agent acts.
Agents fail when their context is stale. Feather writes back every retrieval, strengthens what worked, fades what didn't — so the next turn starts smarter.
- No hallucinations from outdated context
- Self-updating knowledge per run
- Plug-in layer for LangGraph, CrewAI
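The write-back loop described above can be sketched like this. The `write_back` helper and its boost/fade constants are hypothetical, shown only to illustrate the reinforce-and-fade behavior, not Feather's API.

```python
# Toy record store: each record carries the counters the engine
# would update after a retrieval.
memory = {
    "rec_a": {"importance": 0.6, "recall_count": 0},
    "rec_b": {"importance": 0.6, "recall_count": 0},
}

def write_back(record_id, helped, boost=0.1, fade=0.05):
    """After a retrieval, strengthen records that helped the answer
    and fade records that were retrieved but irrelevant."""
    rec = memory[record_id]
    rec["recall_count"] += 1
    if helped:
        rec["importance"] = min(1.0, rec["importance"] + boost)
    else:
        rec["importance"] = max(0.0, rec["importance"] - fade)

write_back("rec_a", helped=True)   # used in a good answer
write_back("rec_b", helped=False)  # retrieved but ignored
```

Run after every turn, this loop is what keeps the agent's context from going stale: useful records rise, noise sinks.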
Performance Marketing
Every brief knows every campaign.
Creative briefs, competitor ads, winning hooks, brand guardrails — stored as vectors, linked as a graph. One query surfaces the full campaign memory instantly.
- Multimodal: copy + creative + video
- Brand-safe context per namespace
- Hawky.ai native integration
Enterprise AI
The context layer your LLM stack is missing.
Wikis, specs, calls, tickets — the private knowledge that makes your business yours. Feather keeps it fresh, filtered, and sub-millisecond to retrieve.
- Multi-tenant per workspace
- Deploy in your VPC (Cloud tier)
- Role-based metadata filters
Developer Tools
Memory for the tools that build software.
IDE assistants, repo-aware agents, autonomous workflows. Feather's embedded mode drops into any toolchain — no server, no network hop, just a file.
- Embedded in-process
- Zero infra for CLI tools
- Works offline, syncs when online
What builders are saying.
Shipped in early preview. Open source since day one. Here's what the community has to say.
context_chain replaced 400 lines of our retrieval+rerank code. One call, and the agent has everything it needs.
Feather is weirdly fast. Sub-millisecond at 100k vectors without tuning anything. The C++ core is doing real work.
MIT license. C++ core. Python bindings. Rust CLI. It's every box ticked and then some.
The adaptive decay is the piece every other vector DB is missing. Our memory actually stays relevant week to week.
We ripped out Pinecone for local-first development. Ship speed is 3x.
I expected an early-stage OSS project. I got a production engine with clean APIs and benchmarks that hold up.
A single .feather file on disk. No server, no container. For our edge deployments this is genuinely the only thing that works.
The graph + vector unification is the right mental model. I stopped maintaining two stores.
Hawky.ai's creative memory runs on Feather. It's the core of why our agents know what they're doing.
Simple, transparent pricing.
Start free with Core. Pay only when you're at scale. No seat taxes, no surprises.
Feather Core
For solo devs, OSS, edge
- Embedded, single .feather file
- Python + Rust SDK, CLI
- Semantic + graph + metadata
- BM25 + hybrid RRF search (v0.8)
- SIMD AVX2/AVX512 core
- MIT license · Community support
Feather Cloud
For teams scaling up
- Everything in Core
- Managed API
- Horizontal auto-scale
- VPC deployment option
- Priority support
- 99.9% SLA
Enterprise
For regulated & large scale
- Everything in Cloud
- On-prem or VPC
- Custom SLAs
- Dedicated engineer
- Security review & SOC2
- Training & migration
Open source under MIT. Your .feather file is yours, forever.