AI Functions
Panofind currently supports two AI tools for working with your documents:
Document Summary
Select a file, choose Summarise, and Panofind sends the document (either as text or as page images) to the LLM you have configured. Within seconds you receive a concise overview and the key points of the document.
Chat with Your Document
Open the new chat pane, ask follow-up questions (“Explain section 4 in plain English”, “Compare chapter 3 with chapter 7”), and receive contextual answers that reference the original file.
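Behind the scenes, a document chat is typically sent to the provider as a list of messages that pairs the file's text with your question. The sketch below is illustrative only, assuming a generic role-based message format; the function name and prompt wording are our assumptions, not Panofind's actual implementation:

```python
def build_chat_messages(document_text: str, question: str) -> list[dict]:
    """Assemble a provider-agnostic chat payload: the document is supplied
    as context in a system message, and the user's question follows it."""
    return [
        {"role": "system",
         "content": "Answer questions using only the document below.\n\n"
                    + document_text},
        {"role": "user", "content": question},
    ]

messages = build_chat_messages(
    "Section 4: ...",                        # document text (truncated here)
    "Explain section 4 in plain English",    # your follow-up question
)
```

Because the full document travels with every question, long files consume more tokens per exchange, which is one reason chat costs vary with document size.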
Pick the LLM Provider That Suits You
Each service has its own strengths; the listing below will help you choose the one that best fits your use case and budget.
- OpenAI offers a balanced trade-off between answer quality and cost. The service is available in many countries. Every prompt and completion is stored for up to 30 days for abuse monitoring, yet — unless you explicitly opt in — none of your documents is used to train new models.
- Anthropic focuses on safety and likewise does not use your data to train new models. Your data is, however, held for up to 30 days for abuse monitoring. Anthropic’s service is among the best available but also the most expensive. It is offered in most countries.
- Google Gemini has a free tier that provides these services at no charge, but it uses all submissions (for summarising or chatting) to train future AI models and other Google services. This could make your documents (or parts of them) public; specific prompts might reproduce much of your submitted content. To quote Google: “Do not submit sensitive, confidential, or personal information to the Unpaid Services.” Google also offers an upgrade to a paid tier in which your data is not used for training, although submissions are still logged for a limited time to detect policy violations. Even so, the paid tier is the cheapest option in this selection while still providing decent quality. It is available in the majority of countries.
- Ollama is the choice when total data sovereignty matters: it is an MIT-licensed server that runs open-weight models (Llama 3 & Nemotron, DeepSeek, Mistral, Microsoft …) entirely on your own Mac, PC, or Linux machine — no cloud account and no rate limits. There are zero per-token fees; your only costs are local hardware and electricity, so expenses stay predictable even under heavy use. Because Panofind communicates only with the local computer, no document ever leaves the machine, though you are responsible for GPU VRAM, updates, and uptime. Be aware, however, that inference on consumer hardware can be noticeably slower than with commercial cloud providers, and the overall answer quality of the available (quantised) open models typically lags behind the top-tier paid services. For meaningful performance and accuracy you should use a powerful GPU — ideally an Nvidia RTX 4090 or 5090 — even when hosting a quantised model.
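To illustrate why no document leaves the machine with Ollama, here is a hedged sketch of a request a client could send to Ollama's local HTTP API. The endpoint `http://localhost:11434/api/generate` and the `stream` field come from Ollama's documented API; the helper name and prompt wording are our own assumptions, not Panofind's actual code:

```python
import json

# Ollama's default local endpoint; nothing is sent beyond localhost.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_summary_request(model: str, document_text: str) -> bytes:
    """Build the JSON body for a one-shot (non-streaming) summary request."""
    payload = {
        "model": model,          # e.g. "llama3" after running `ollama pull llama3`
        "prompt": "Summarise the following document:\n\n" + document_text,
        "stream": False,         # ask for a single complete response
    }
    return json.dumps(payload).encode("utf-8")

body = build_summary_request("llama3", "...document text...")
```

POSTing `body` to `OLLAMA_URL` (for example with `urllib.request`) only ever contacts the local server, which is the technical basis for the data-sovereignty claim above.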
How to Enable AI Features in Panofind
- Open Settings → Summary & Chat in Panofind.
- Tick Activate AI functionality to summarise texts or ask questions about them.
- Select the appropriate LLM provider from the dropdown list.
- Paste your API key (see the per-provider instructions below).
- Click Save.
The Summarise and Chat buttons will appear in the document actions (for supported document types).
Panofind never stores your keys on our servers. They are encrypted locally and transmitted only to the selected provider when needed.
Provider-Specific Setup Guides
Follow these links for detailed, step-by-step instructions with screenshots:
Cost & Budget Tips
- Monitor usage – most providers offer dashboards that show daily spending. The costs shown in Panofind are only estimates; actual charges are usually lower, but they can occasionally be higher.
- Control spend – set hard limits in each provider’s console (OpenAI organisation limit, Anthropic workspace budget, etc.).
- Cached summaries & chats – Panofind stores generated summaries and chats, so you’re billed only once per document unless you refresh them.
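Per-token pricing is why in-app figures can only be estimates: the provider meters the exact tokens it processed, while the app must approximate them in advance. The toy calculation below shows how such an estimate works; the prices are made-up placeholders, not any provider's real rates:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimate a request's cost from token counts and per-1k-token prices.
    Providers usually price input (prompt) and output (completion) differently."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

# An 8,000-token document summarised into 500 tokens at placeholder prices:
cost = estimate_cost(8000, 500, price_in_per_1k=0.001, price_out_per_1k=0.002)
# 8.0 * 0.001 + 0.5 * 0.002 = 0.009
```

Because cached summaries and chats are reused (see above), this per-request cost is incurred only once per document unless you refresh it.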