2025

Project Aeon: Local-First Assistant Platform

FastAPI · ChromaDB · Vue 3 · TypeScript · Naive UI · Ollama

Project Aeon grew out of frustration with cloud-based AI assistants that require sending your documents to external servers. The idea: build something that gives you RAG-powered conversations over your own files, with everything running locally.

The architecture has three clear layers. FastAPI serves the backend API and orchestrates requests. ChromaDB handles vector storage and similarity search over document embeddings. Ollama runs the LLM locally — no API keys, no cloud calls. The Vue 3 frontend with Naive UI provides a clean chat interface.
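The request flow through those layers can be sketched end to end. This is a minimal illustration, not the project's actual code: `embed`, `retrieve`, and the prompt-building step are hypothetical stand-ins for the embedding model, the ChromaDB similarity query, and the payload that would be sent to Ollama.

```python
# Sketch of the RAG request flow: embed the query, retrieve similar
# documents from the vector store, and build a grounded prompt.
# All names here are illustrative stand-ins, not Project Aeon's API.

def embed(text: str) -> list[float]:
    # Toy stand-in for a real embedding model: hash characters
    # into a small normalized vector.
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, store: dict[str, list[float]], k: int = 2) -> list[str]:
    # Cosine similarity over stored vectors -- ChromaDB's role in the stack.
    q = embed(query)
    ranked = sorted(
        store,
        key=lambda doc: sum(a * b for a, b in zip(q, store[doc])),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, store: dict[str, list[float]]) -> str:
    # A real system would send this prompt to the local Ollama runtime.
    context = "\n".join(retrieve(query, store))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = ["Aeon runs fully locally.", "ChromaDB stores document embeddings."]
store = {d: embed(d) for d in docs}
prompt = build_prompt("Where are embeddings stored?", store)
```

In the real stack, FastAPI would wrap `build_prompt` in an endpoint and stream Ollama's completion back to the Vue client.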

The interesting part was getting retrieval quality right with local models. Cloud APIs can brute-force relevance with bigger models; locally, you have to be smarter about chunking, embedding quality, and context window management.
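One of the simpler levers mentioned above is chunking. A common baseline is a fixed-size window with overlap, so that text near a chunk boundary still appears with context in the neighboring chunk. The sketch below uses character counts for brevity; a production chunker would count tokens and the sizes here are illustrative, not the project's settings.

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size windows.

    Overlap keeps boundary sentences retrievable from more than one
    chunk. Units are characters for simplicity; a real chunker would
    measure tokens against the local model's context window.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Smaller chunks raise retrieval precision but cost more of the limited local context window per retrieved passage, so the two parameters have to be tuned together.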

Security focus

The goal was making the privacy guarantees real rather than aspirational: all data flows stay local by default, service boundaries are enforced, and runtime configuration is locked down.
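Concretely, "locked down by default" in this stack can look like the fragment below: a minimal sketch, not the project's actual configuration. The storage path and port are placeholder values.

```python
# Sketch: privacy-first defaults for a FastAPI + ChromaDB service.
# The path and port are placeholders for illustration.
import chromadb
from chromadb.config import Settings
import uvicorn
from fastapi import FastAPI

app = FastAPI()

# ChromaDB: persist to a local directory and opt out of telemetry,
# so no usage data ever leaves the machine.
client = chromadb.PersistentClient(
    path="./data/chroma",
    settings=Settings(anonymized_telemetry=False),
)

if __name__ == "__main__":
    # Bind only to the loopback interface so the API is never
    # reachable from other machines on the network.
    uvicorn.run(app, host="127.0.0.1", port=8000)
```

Binding to `127.0.0.1` rather than `0.0.0.0` is the service-boundary half of the guarantee; the telemetry opt-out is the data-flow half.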

Key learnings

  • Vector search and retrieval flows
  • FastAPI for ML-adjacent services
  • Vue 3 Composition API with TypeScript
  • Local LLM runtime integration
  • Privacy-first architecture