
VeeAI

Intelligent document engine

VeeAI turns your document vault into a searchable, summarizable knowledge base. Hybrid search combines keyword precision with semantic understanding. AI summarization extracts insights from any document. A RAG pipeline powers an intelligent research assistant.

BM25 · Vector Search · RRF · RAG Pipeline · LLM Orchestration

Architecture

Search and AI Architecture

VeeAI runs a dual-index search architecture. BM25 keyword search (via PostgreSQL full-text) handles exact matches. Vector search (pgvector embeddings) captures semantic similarity. Results are fused using Reciprocal Rank Fusion (RRF) for best-of-both-worlds ranking. For AI features, documents are chunked, embedded, and fed into a RAG pipeline that grounds LLM responses in your actual data.

Hybrid search: BM25 keyword + pgvector semantic, fused via RRF
Map-reduce summarization for documents of any length
Chunking with configurable overlap and strategy
Pluggable LLM providers (local or cloud — Ollama, OpenAI, Anthropic)
RAG pipeline with context-grounded responses
OCR pipeline: Tesseract + AI extraction with confidence scoring

Key Capabilities


01

Hybrid Search (BM25 + Vector)

Keyword search finds exact matches. Semantic search understands meaning. Reciprocal Rank Fusion (RRF) merges both result sets into a single ranked list. Best precision, best recall.
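The fusion step can be sketched in a few lines. This is a minimal illustration of Reciprocal Rank Fusion, not VeeAI's actual implementation; the function name is made up, and k=60 is the constant proposed in the original RRF paper.

```python
def rrf_fuse(rankings, k=60):
    """Fuse ranked result lists with Reciprocal Rank Fusion.

    Each ranking is a list of document IDs, best first. A document's
    fused score is the sum of 1 / (k + rank) over every list in which
    it appears, so items ranked highly by either search rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative inputs: top hits from BM25 and from vector search.
bm25_hits = ["doc3", "doc1", "doc7"]
vector_hits = ["doc1", "doc9", "doc3"]
fused = rrf_fuse([bm25_hits, vector_hits])  # "doc1" wins: near the top of both lists
```

A document that appears in both lists outranks one that tops only a single list, which is why RRF works well without any score normalization between the two search backends.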

02

Map-Reduce Summarization

Long documents are split into chunks, each summarized independently (map), then combined into a coherent summary (reduce). Works on documents of any length without hitting context-window limits.
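The map-reduce shape looks roughly like this. A hedged sketch: the `llm` callable, prompts, and chunk size are placeholders, not VeeAI's real interface.

```python
def summarize_map_reduce(text, llm, chunk_size=2000):
    """Summarize text of any length via map-reduce.

    `llm` is any prompt -> completion callable (placeholder for a
    real provider client). Chunking here is naive fixed-size slicing.
    """
    # Map: summarize each chunk independently.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partial = [llm(f"Summarize:\n{c}") for c in chunks]
    # Reduce: merge the partial summaries into one coherent summary.
    if len(partial) == 1:
        return partial[0]
    return llm("Combine these partial summaries into one summary:\n"
               + "\n".join(partial))
```

Because each map call sees only one chunk, no single prompt ever exceeds the model's context window, regardless of total document length.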

03

OCR and Extraction Pipeline

Tesseract-based OCR with automatic language detection. AI-powered metadata extraction from text. Document classification and tagging. Confidence scores for quality assessment.
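Confidence scoring can be as simple as aggregating Tesseract's per-word confidences. A sketch under assumptions: the input mimics the parallel-list shape of pytesseract's `image_to_data` output, where `conf` is `-1` for non-word boxes; the function name and threshold are illustrative.

```python
def page_confidence(ocr_data, min_conf=0):
    """Aggregate per-word OCR confidences into a page-level score.

    `ocr_data` mimics pytesseract.image_to_data(...) dict output:
    parallel lists, with conf == -1 for boxes that hold no word.
    """
    confs = [int(c) for c in ocr_data["conf"] if int(c) >= min_conf]
    return sum(confs) / len(confs) if confs else 0.0

# Illustrative OCR output for a two-word scan.
sample = {"text": ["Invoice", "", "#42"], "conf": ["96", "-1", "88"]}
score = page_confidence(sample)  # (96 + 88) / 2 = 92.0
```

A low page score can then route the document to AI-based re-extraction or a human review queue instead of being indexed as-is.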

04

RAG Research Assistant

Retrieval-Augmented Generation pipeline. User questions are embedded, matched against document chunks via vector similarity, and fed to an LLM with source context. Answers are grounded in your actual documents.
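The retrieval-then-generation flow above can be sketched end to end. Everything here is a placeholder: `embed`, `search` (e.g. a pgvector similarity query), and `llm` stand in for real components, and the prompt wording is illustrative.

```python
def answer(question, embed, search, llm, top_k=4):
    """RAG flow: embed the question, retrieve similar chunks,
    and ground the LLM's answer in that retrieved context.

    embed:  question -> vector (placeholder embedding model)
    search: (vector, k) -> list of chunk texts (placeholder
            for a pgvector similarity query)
    llm:    prompt -> completion (placeholder provider client)
    """
    q_vec = embed(question)
    chunks = search(q_vec, top_k)
    context = "\n---\n".join(chunks)
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm(prompt)
```

Because the retrieved chunks are injected into the prompt, the model answers from your documents rather than from its training data, which is what "grounded" means here.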

05

Document Embeddings

Automatic vector embedding generation using configurable models. Chunking strategies optimized for different document types. Stored in pgvector for efficient similarity search.
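One common chunking strategy is a fixed-size sliding window with overlap, sketched below. The sizes and function name are illustrative, not VeeAI's actual defaults.

```python
def chunk_text(text, size=500, overlap=50):
    """Fixed-size chunking with configurable overlap.

    Overlapping windows keep sentences that straddle a chunk
    boundary retrievable from at least one chunk.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]
```

Each resulting chunk would then be embedded and stored as one pgvector row; the overlap trades a little index size for better recall at chunk boundaries.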

06

Pluggable LLM Providers

Run local models via Ollama for full data sovereignty. Or connect to OpenAI, Anthropic, or any OpenAI-compatible API. Switch providers without code changes — configuration only.
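Provider pluggability usually comes down to a common interface plus a registry keyed by configuration. A hypothetical sketch — the registry, names, and `complete` helper are invented for illustration, not VeeAI's API:

```python
from typing import Callable, Dict

# Hypothetical registry: each provider is a prompt -> completion
# callable; which one runs is a config value, not a code change.
PROVIDERS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that adds a provider callable to the registry."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        PROVIDERS[name] = fn
        return fn
    return wrap

@register("ollama")
def ollama_chat(prompt: str) -> str:
    # Would call a local Ollama server here (body omitted).
    ...

def complete(prompt: str, provider: str = "ollama") -> str:
    """Route a prompt to whichever provider the config names."""
    return PROVIDERS[provider](prompt)
```

Adding OpenAI, Anthropic, or any OpenAI-compatible endpoint is then just another registered callable, selected by the same configuration key.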

Request a Demo

See VeeAI in action — deployed on your infrastructure.
