Feather DB v0.8.0 · Part of Hawky

The Living
Context Engine.

Context infrastructure for AI agents. Adaptive memory, semantic graph, sub-millisecond retrieval. Deploy embedded, self-hosted, or in the cloud.

<1ms Retrieval
Open Source · MIT
Rust + C++ Core
Feather Cloud — Coming Soon
Scroll to explore
What we do

Context infrastructure for AI agents. One engine, every capability.

Available Now
v0.8.0

Feather Core

Open source, embedded, zero-server. Single .feather file. Python + Rust SDK. Ships in 5 minutes.

Embedded in-process
MIT licensed
Semantic + graph
SIMD AVX2/AVX512
Install now
Coming Q3 2026
Managed

Feather Cloud

Managed, scalable, API-first. Your context layer, delivered globally. Keep your data in your VPC if you want to.

Managed API
Horizontal scale
VPC deployment
Usage-based pricing
Join waitlist
<1ms
Retrieval
MIT
License
Rust + C++
Core
Python · Rust
SDKs
The Context Stack
[ 01 / 05 ]

Five layers. Complete context.

Most memory solutions give you one layer. We give you all five — working together.

01 ADAPTIVE MEMORY
02 CONTEXT GRAPH
03 SEMANTIC SEARCH
04 METADATA INTELLIGENCE
.feather
05 DEPLOY ANYWHERE
Under the hood

Knowledge that evolves, not just stores.

Not another vector database. A custom engine, built from scratch in C++ and Rust, for one job: holding the living context your agents actually need.

Adaptive Decay
01

Memory that ages gracefully.

Every record tracks recall count, last access, and inherent importance. At query time, three scores combine into one — no cron, no eviction queue.

# stickiness grows with use
stickiness = 1 + log(1 + recall_count)
# effective age shrinks as stickiness grows
effective_age = age_days / stickiness
# final blended score
final_score = similarity * recency * importance
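The decay math above can be sketched as one plain-Python function. The exponential recency curve and its `half_life_days` parameter are our assumptions for illustration; the engine's exact recency mapping isn't specified here.

```python
import math

def blended_score(similarity, age_days, recall_count, importance,
                  half_life_days=30.0):
    """Illustrative sketch of the blended decay score.

    The half-life and exponential curve are assumptions, not the
    engine's published internals.
    """
    # stickiness grows with use
    stickiness = 1 + math.log(1 + recall_count)
    # a frequently recalled record ages more slowly
    effective_age = age_days / stickiness
    # map effective age onto a 0..1 recency factor (assumed exponential)
    recency = 0.5 ** (effective_age / half_life_days)
    # final blended score
    return similarity * recency * importance
```

At equal similarity and age, a record recalled ten times outscores one never recalled, which is the "strengthens what worked" behavior described later on this page.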
Graph Engine
02

Typed edges. Real reasoning.

Weighted, directional edges with BFS traversal. Your knowledge doesn't live as isolated points — it becomes a graph the engine can walk.

informed_by · contradicts
Bidirectional index
BFS via context_chain
Ontology-aware
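A BFS walk over typed, weighted, directional edges can be sketched in plain Python. This is a conceptual stand-in for the engine's `context_chain` traversal, not its actual implementation; the adjacency format and `min_weight` filter are our assumptions.

```python
from collections import deque

def context_chain(graph, start, hops=2, min_weight=0.0):
    """BFS over a typed, weighted, directional edge list.

    `graph` maps node id -> list of (neighbor_id, rel_type, weight).
    Returns (node, rel_type, weight, depth) for every node reachable
    within `hops` edges, skipping edges below `min_weight`.
    """
    seen = {start}
    frontier = deque([(start, 0)])
    chain = []
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:          # stop expanding past the hop limit
            continue
        for neighbor, rel_type, weight in graph.get(node, []):
            if neighbor in seen or weight < min_weight:
                continue
            seen.add(neighbor)
            chain.append((neighbor, rel_type, weight, depth + 1))
            frontier.append((neighbor, depth + 1))
    return chain
```

Because edges are directional, `A → informed_by → B` is walkable from A but not from B unless a reverse edge (or a bidirectional index, as above) exists.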
SIMD Search
03

Sub-millisecond retrieval. Built in C++.

HNSW graph index with M=16, ef=200. Similarity kernels hand-written for AVX2 and AVX512. The .feather binary format is zero-copy — memory-mapped, not parsed.

10K nodes · 0.3ms
100K nodes · 0.9ms
1M nodes · 3.2ms
10M nodes · 9.8ms
HNSW · AVX2 · AVX512 · C++17
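For reference, here is the quantity those SIMD kernels compute, written as plain Python: cosine similarity of a query against stored vectors, top-k first. (The real engine pairs this kernel with an HNSW index so it never scans every vector; this brute-force sketch does, and exists only to show the math.)

```python
import math

def top_k_cosine(query, vectors, k=5):
    """Brute-force top-k cosine search, for illustration only.

    `vectors` maps node id -> embedding (list of floats).
    Returns [(node_id, similarity), ...], highest similarity first.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    scored = [(node_id, cosine(query, vec)) for node_id, vec in vectors.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]
```

The dot products and norms inside `cosine` are exactly the loops that AVX2/AVX512 vectorize, which is where the sub-millisecond numbers above come from.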
Developer Experience

Setup in 5 minutes.

Install. Open a file. Add vectors with metadata. Link them. Query with context. That's it.

# Install
pip install feather-db

import feather_db

# Open an embedded .feather file (or create one)
db = feather_db.DB.open("context.feather", dim=768)

# Attach rich metadata — namespace, entity, attributes, importance
meta = feather_db.Metadata()
meta.importance = 0.85
meta.set_attribute("type", "campaign_brief")

# Add a vector + its metadata
db.add(id=1001, vec=embed("your context"), meta=meta)

# Connect it into the knowledge graph
db.link(from_id=1001, to_id=1002, rel_type="informed_by", weight=0.9)

# Semantic search + 2-hop graph traversal, in one call
chain = db.context_chain(query_vec, k=5, hops=2)

Plugs into every stack you already use

Python
Rust
LangChain
LangGraph
CrewAI
OpenAI
Anthropic
Gemini
Vercel AI SDK
Deployment

Deploy your way.

Start embedded. Scale to the cloud when you're ready. The context layer is always yours — same engine, same semantics, your choice of surface.

Available

Feather Core

Status
Available now
Deployment
In-process, single file
Latency
<1ms
Data
100% yours, on disk
Scale
Single node
Ops
Zero — it's a file
Price
Free · Open source
Waitlist

Feather Cloud

Status
Coming Q3 2026
Deployment
Managed API
Latency
<50ms
Data
Your VPC option
Scale
Horizontal, auto
Ops
Fully managed
Price
Usage-based
Use cases

Built for every context-hungry system.

Autonomous systems

AI Agents

Memory that updates as the agent acts.

Agents fail when their context is stale. Feather writes back every retrieval, strengthens what worked, fades what didn't — so the next turn starts smarter.

  • No hallucinations from outdated context
  • Self-updating knowledge per run
  • Plug-in layer for LangGraph, CrewAI
Creative intelligence

Performance Marketing

Every brief knows every campaign.

Creative briefs, competitor ads, winning hooks, brand guardrails — stored as vectors, linked as a graph. One query surfaces the full campaign memory instantly.

  • Multimodal: copy + creative + video
  • Brand-safe context per namespace
  • Hawky.ai native integration
Business knowledge

Enterprise AI

The context layer your LLM stack is missing.

Wikis, specs, calls, tickets — the private knowledge that makes your business yours. Feather keeps it fresh, filtered, and sub-millisecond to retrieve.

  • Multi-tenant per workspace
  • Deploy in your VPC (Cloud tier)
  • Role-based metadata filters
Coding agents · IDEs

Developer Tools

Memory for the tools that build software.

IDE assistants, repo-aware agents, autonomous workflows. Feather's embedded mode drops into any toolchain — no server, no network hop, just a file.

  • Embedded in-process
  • Zero infra for CLI tools
  • Works offline, syncs when online
Community

What builders are saying.

Shipped in early preview. Open source since day one. Here's what the community has to say.

Builder

context_chain replaced 400 lines of our retrieval+rerank code. One call, and the agent has everything it needs.

LM
Lea M.
Staff Eng, autonomous agents
GitHub

Feather is weirdly fast. Sub-millisecond at 100k vectors without tuning anything. The C++ core is doing real work.

V
@vectornerd
on GitHub
GitHub

MIT license. C++ core. Python bindings. Rust CLI. It's every box ticked and then some.

TB
Tomás B.
OSS maintainer
GitHub

The adaptive decay is the piece every other vector DB is missing. Our memory actually stays relevant week to week.

DK
Daniel K.
Founder, AI copilot startup
Builder

We ripped out Pinecone for local-first development. Ship speed is 3x.

AS
Ana S.
Head of AI, fintech
Builder

I expected an early-stage OSS project. I got a production engine with clean APIs and benchmarks that hold up.

MW
Marcus W.
Platform eng, enterprise SaaS
Builder

A single .feather file on disk. No server, no container. For our edge deployments this is genuinely the only thing that works.

PR
Priya R.
Infra lead
Community

The graph + vector unification is the right mental model. I stopped maintaining two stores.

D
@davidz
Researcher
Builder

Hawky.ai's creative memory runs on Feather. It's the core of why our agents know what they're doing.

AR
Ashwath R.
Founder, Hawky.ai
Pricing

Simple, transparent pricing.

Start free with Core. Pay only when you're at scale. No seat taxes, no surprises.

Available

Feather Core

For solo devs, OSS, edge

Free
forever
Deployment
Self-hosted
Support
Community
  • Embedded, single .feather file
  • Python + Rust SDK, CLI
  • Semantic + graph + metadata
  • BM25 + hybrid RRF search (v0.8)
  • SIMD AVX2/AVX512 core
  • MIT license · Community support
Install now
Most Loved
Coming Q3 2026

Feather Cloud

For teams scaling up

Usage-based
pay-as-you-go
Deployment
Managed
Support
Priority
  • Everything in Core
  • Managed API
  • Horizontal auto-scale
  • VPC deployment option
  • Priority support
  • 99.9% SLA
Join waitlist

Enterprise

For regulated & large scale

Custom
tailored to you
Deployment
VPC / On-prem
Support
Dedicated
  • Everything in Cloud
  • On-prem or VPC
  • Custom SLAs
  • Dedicated engineer
  • Security review & SOC2
  • Training & migration
Talk to us

Open source under MIT. Your .feather file is yours, forever.