Overview
Welcome to the official user documentation for Privata Ai version 1.0. Privata Ai is a cutting-edge, privacy-centric Artificial Intelligence assistant engineered to operate entirely offline. By leveraging powerful open-source AI models locally on your device, Privata Ai ensures that your data, prompts, and conversations remain confidential and under your complete control.
This guide provides comprehensive instructions covering installation, configuration, core features, advanced usage, and troubleshooting to help you maximize the potential of Privata Ai. Whether you are a new user or exploring advanced functionalities like the API or document integration, this documentation serves as your primary resource.
Table of Contents
- Overview
- System Requirements
- Installation Guide
- Getting Started with Privata Ai
- User Interface Overview
- Core Features Explained
- Advanced Usage Techniques
- Troubleshooting Common Issues
- Frequently Asked Questions (FAQs)
- Support and Feedback
- Version Changelog
System Requirements
Ensure your system meets the minimum specifications below; the recommended values give the best performance:
- Operating System: Windows 10/11, macOS (Intel & Apple Silicon), Linux (Debian/Ubuntu-based recommended)
- RAM: Minimum 4GB required. 8GB or more strongly recommended for most models (e.g., Llama 3 8B). Some larger models may require 16GB+.
- Storage: At least 10GB of free disk space is recommended for the application and downloaded AI models. Model sizes vary significantly (typically 2GB - 10GB+ each).
- Processor: A modern multi-core CPU (Intel i5/Ryzen 5 or equivalent) is recommended.
- GPU (Optional): For GPU acceleration, an NVIDIA graphics card with CUDA support (Compute Capability 3.5+) and sufficient VRAM (typically 4GB+ depending on the model) is required. Ensure the latest NVIDIA drivers are installed.
- Internet Connection: Required only for the initial download of the Privata Ai application and for downloading AI models. Core functionality is fully offline.
Installation Guide
Privata Ai v1.0 is distributed primarily as a standalone application for ease of use.
- Download Privata Ai:
  - Visit the official Privata Ai website (Homepage) and navigate to the Downloads section.
  - Select the appropriate installer/package for your operating system (e.g., `.exe` for Windows, `.dmg` for macOS, or a `.zip` containing the executable for Linux). (Note: downloads marked as 'SOON' on the homepage are not yet available.)
- Install the Application:
  - Windows: Double-click the downloaded `.exe` file and follow the on-screen installation wizard.
  - macOS: Open the downloaded `.dmg` file and drag the Privata Ai application icon into your Applications folder. You may need to grant permissions upon first launch (System Settings > Privacy & Security).
  - Linux: Extract the downloaded `.zip` file. Open a terminal in the extracted directory, make the application executable (e.g., `chmod +x PrivataAi_Linux`), and run it (`./PrivataAi_Linux`).
- First Launch & Configuration Directory:
  - Upon launching Privata Ai for the first time, it will automatically create a configuration directory to store settings, chat history, and models (a short script for locating this directory follows these steps). The default locations are:
    - Windows: `%APPDATA%\PrivataAi` (e.g., `C:\Users\YourUser\AppData\Roaming\PrivataAi`)
    - macOS: `~/.config/PrivataAi` (e.g., `/Users/YourUser/.config/PrivataAi`)
    - Linux: `~/.local/share/PrivataAi` (e.g., `/home/YourUser/.local/share/PrivataAi`)
  - You may be prompted to download or select an initial AI model if none are found.
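If you ever need to locate this directory programmatically (for example, to back up `chat_history.json` or check which models are stored), the short sketch below resolves the default paths listed above. It is only a convenience script and assumes the default locations have not been changed.

```python
import os
import sys
from pathlib import Path

def privata_config_dir() -> Path:
    """Return the default Privata Ai configuration directory for this OS.

    Uses the default locations documented above; adjust if your installation
    stores its data elsewhere.
    """
    if sys.platform.startswith("win"):
        # Windows: %APPDATA%\PrivataAi
        return Path(os.environ["APPDATA"]) / "PrivataAi"
    if sys.platform == "darwin":
        # macOS: ~/.config/PrivataAi
        return Path.home() / ".config" / "PrivataAi"
    # Linux: ~/.local/share/PrivataAi
    return Path.home() / ".local" / "share" / "PrivataAi"

if __name__ == "__main__":
    config_dir = privata_config_dir()
    print(f"Configuration directory: {config_dir}")
    print(f"Exists: {config_dir.exists()}")
```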
Getting Started with Privata Ai
Follow these steps to begin using Privata Ai:
- Launch the Application: Open Privata Ai from your Applications folder (macOS/Linux) or Start Menu (Windows). The main window displaying the web-based interface will appear.
- Load an AI Model:
- Privata Ai requires a compatible AI model file (GGUF format) to function.
- Navigate to the Settings section using the sidebar menu.
- Under Model Settings, click Browse to select a GGUF model file already downloaded to your computer.
- Alternatively, go to the Downloads section in the sidebar. Select a download location using Browse, then click Download next to a desired model from the list.
- Once a model is selected or downloaded, return to Settings > Model Settings and click Load Model. The status bar should indicate the model is loading and then display its name when ready.
- Start Your First Chat:
- Navigate back to the Chat section (usually the default view).
- Type your question or prompt into the text input area at the bottom.
- Click the Send button or press `Ctrl+Enter` (or `Cmd+Enter` on macOS).
- The AI's response will appear in the chat window, typically streamed word by word. Your conversation is automatically saved.
User Interface Overview
Privata Ai v1.0 features a modern web-based interface rendered within a desktop application window. Key components include:
- Sidebar: Provides navigation between major sections:
- Chat Dropdown: Access Chat view, Chat History, Clear Chat, Export options (JSON/PDF), and Import Chat.
- Downloads: Interface for downloading supported AI models.
- Settings: Configure application behavior, models, documents, API, and themes.
- About: Links to the Privata Ai website and resources.
- Status Bar: Displays real-time information: Loaded Model, GPU Status, Document Status, Memory Usage, and Software Version.
- Main Content Area: Displays the currently selected section (Chat, Settings, etc.).
- Chat View: The primary area for interacting with the AI, displaying the conversation history.
- Input Area: Text box for typing prompts, Send button, and character counter.
- Messages: Individual chat bubbles for user prompts and AI responses, including timestamps and action buttons (Copy, Edit).
- Code Blocks: Automatically syntax-highlighted code snippets within AI responses, featuring a dedicated copy button.
- Error/Notification Area: Displays messages for errors (e.g., model loading failure) or confirmations (e.g., chat exported).
Core Features Explained
Using the Chat Interface
The chat interface is where you interact directly with the loaded AI model.
- Sending Prompts: Type your query into the input box. A character counter shows usage (limit: 4096 characters). Press `Ctrl+Enter` or click Send.
- Streaming Responses: AI responses are displayed in real-time as they are generated, often with a typing indicator.
- Copying Text:
- Click the clipboard icon next to any message bubble to copy its full text.
- Click the dedicated copy button within a code block to copy only the code snippet.
- Editing Messages: Click the edit icon on your own messages to modify them. Resubmit the edited message to get a revised AI response based on the change.
- Contextual Awareness: If a document is loaded (see Document Upload), the AI will use its content when relevant to your query. The status bar will indicate "Document: Loaded".
Managing Chat History
Privata Ai automatically saves your conversations locally.
- Accessing History: Select Chat History from the Chat dropdown menu in the sidebar.
- Viewing Chats: A list of saved conversations is displayed, showing titles (auto-generated or edited) and timestamps.
- Searching: Use the search bar at the top to filter chats by keywords in their titles or content.
- Loading Chats: Click on a chat title to load the full conversation back into the main chat view.
- Editing Titles: Click the "Edit Title" button next to a chat to rename it for better organization.
- Deleting Chats: Click the "Delete" button next to a chat. A confirmation prompt will appear before deletion.
- Starting New Chats: Click the "New Chat" button in the history view or select Chat from the sidebar dropdown to begin a fresh conversation.
- Clearing Current Chat: Select Clear Chat from the Chat dropdown to erase the messages in the currently active chat window (this does not delete it from history unless it was never saved).
Model Selection & Management
Privata Ai supports various open-source AI models in the GGUF format.
- Supported Models List: The Downloads section lists curated models tested for compatibility, along with their estimated RAM requirements and license information. Current examples include:
| Model Name | Est. RAM | License | Description |
| --- | --- | --- | --- |
| Reasoner v1 | ~8GB | Apache 2.0 | Strong logical reasoning & coding. |
| Llama 3 8B Instruct | ~8GB | LLaMA License | Excellent instruction following & natural language. |
| DeepSeek-R1-Distill-Qwen-7B | ~8GB | MIT | Specialized for knowledge & complex topics. |
| Phi-3 Mini Instruct | ~4GB | MIT | Fast, lightweight model for general tasks. |
- Downloading Models: In the Downloads section, choose a download folder, then click "Download" next to the desired model. Progress is shown in the UI.
- Loading Local Models: Go to Settings > Model Settings. Click Browse next to "Model File" and select a `.gguf` file from your computer.
- Activating a Model: After selecting or downloading, click Load Model in the Settings section. The status bar will confirm when the model is active.
- Switching Models: Simply load a different model via the Settings section. The previously loaded model will be unloaded to free up resources.
- Model Parameters: Adjust inference settings like Temperature (creativity vs. predictability) and Max Tokens (response length) in the Model Settings section.
Uploading & Using Documents
Enhance AI responses by providing context from your documents.
- Supported Formats: `.txt` (Plain Text) and `.pdf` (Portable Document Format).
- How to Upload:
- Go to Settings > Document Upload.
- Click Browse and select the desired `.txt` or `.pdf` file.
- Click Upload Document. The application will process the file (extracting text, limited to approx. 10,000 characters for performance; see the size check after this section).
- The status bar will update to "Document: Loaded [Filename]".
- Usage in Chat: Once loaded, the AI will automatically consider the document's content when answering relevant questions. You might explicitly refer to it (e.g., "Summarize the uploaded document") or ask questions related to its content.
- Clearing Document Context: To remove the document context, return to Settings > Document Upload and click Clear Document.
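Because only roughly the first 10,000 characters of a document are processed, it can be worth checking large text files before uploading them. The sketch below is a simple pre-check for plain-text files; the limit value reflects the approximate figure mentioned above, and `notes.txt` is just a placeholder filename.

```python
from pathlib import Path

# Approximate processing limit described above.
CHAR_LIMIT = 10_000

def check_document(path: str, limit: int = CHAR_LIMIT) -> None:
    """Report whether a plain-text document fits within the processing limit."""
    text = Path(path).read_text(encoding="utf-8", errors="replace")
    if len(text) <= limit:
        print(f"{path}: {len(text)} characters - within the limit.")
    else:
        print(f"{path}: {len(text)} characters - only roughly the first "
              f"{limit} characters will be used; consider splitting the file.")

if __name__ == "__main__":
    check_document("notes.txt")  # placeholder filename
```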
Configuring Settings
Customize Privata Ai's behavior via the Settings section.
- Theme: Toggle between Light and Dark visual themes. Your preference is saved automatically.
- Model Settings:
- Model File: Select the path to your desired `.gguf` model.
- Temperature: Slider (0.1 - 1.0) to control response randomness. Lower values = more deterministic, higher values = more creative. Default: 0.7.
- Max Tokens: Slider (100 - 4096) to set the maximum length of AI responses. Default: 2048.
- Use GPU: Checkbox to enable NVIDIA GPU acceleration (requires compatible hardware and drivers).
- System Prompt: Text area to define the AI's persona or instructions (see Advanced Usage).
- Document Upload: Manage document loading and clearing (see previous section).
- API Settings: Configure the optional REST API (see next section).
- Saving Changes: Click the Save Settings button at the bottom of the Settings page to apply any modifications to model parameters, API settings, etc. Theme changes are applied instantly.
Utilizing the REST API
Privata Ai includes an optional local REST API for programmatic control (disabled by default).
- Enabling the API:
- Go to Settings > API Settings.
- Check the Enable API box.
- Set a desired API Port (default: 5000, must be between 1024-65535 and not in use).
- Optionally, enter an API Key for basic authentication.
- Click Save Settings and restart Privata Ai for changes to take effect.
- Accessing the API: The API runs on `http://localhost:{port}`. Send requests using tools like `curl` or Postman, or integrate it into your own scripts/applications.
- Authentication: If an API key is set, include it in the request headers: `X-API-Key: your_secret_key`.
- Key Endpoints (Examples; a scripted call follows this list):
- `POST /chat`: Send a prompt, receive a streamed response. Body: `{"prompt": "Your query"}`
- `GET /config`: Retrieve current application settings.
- `POST /load_model`: Load a specific model file. Body: `{"model_path": "/path/to/model.gguf"}`
- `GET /chat_history`: List saved chats.
- `POST /upload_document`: Upload a document via API (requires file handling).
- Security Note: The API is intended for local use. Exposing it to external networks requires careful security considerations (firewall rules, reverse proxy, etc.).
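For example, the following Python sketch sends a prompt to the `POST /chat` endpoint and prints the reply. It is illustrative only: it assumes the API is enabled on the default port 5000, that an API key has been configured, and that the endpoint accepts and returns data as described above (the exact response format may differ in your installation). Only the Python standard library is used.

```python
import json
import urllib.request

API_URL = "http://localhost:5000"   # default port; change if you chose another
API_KEY = "your_secret_key"         # as configured in Settings > API Settings

def ask_privata(prompt: str) -> str:
    """Send a prompt to the local Privata Ai REST API and return the raw reply."""
    request = urllib.request.Request(
        f"{API_URL}/chat",
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-API-Key": API_KEY,   # omit this header if no API key is set
        },
        method="POST",
    )
    # /chat streams its response; for simplicity this sketch reads the whole
    # body at once rather than processing it chunk by chunk.
    with urllib.request.urlopen(request, timeout=300) as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    print(ask_privata("In one sentence, what is the GGUF model format?"))
```

The same pattern applies to the other endpoints, e.g. a `GET` request to `/config` or `/chat_history` with the same `X-API-Key` header.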
Advanced Usage Techniques
Customizing System Prompts
Tailor the AI's personality, role, or response style using custom system prompts.
- Location: Edit the System Prompt text area under Settings > Model Settings.
- Purpose: Provide high-level instructions that guide the AI's behavior across the entire conversation.
- Examples:
- `You are a helpful assistant that always provides answers in the form of bullet points.`
- `Act as a senior software engineer reviewing code. Be critical and provide detailed suggestions.`
- `Translate any user input into French.`
- Model Compatibility: The effectiveness of system prompts can vary between different AI models. Experiment to find what works best for your chosen model. Privata Ai sets reasonable defaults based on the loaded model type.
- Saving: Remember to click Save Settings after modifying the prompt.
Exporting Conversations
Save your chats for backup, sharing, or external analysis.
- Export to JSON:
- From the Chat dropdown menu, select Export Chat.
- Choose a location and filename (e.g., `my_research_chat.json`).
- The JSON file contains the chat ID, title, timestamp, and a list of all messages (role, content, timestamp); see the sketch after this section for reading an export programmatically.
- Export to PDF:
- From the Chat dropdown menu, select Export to PDF.
- Choose a location and filename (e.g., `project_summary.pdf`).
- A formatted PDF document is generated, presenting the conversation clearly with user/AI roles and timestamps.
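As a rough illustration of working with a JSON export, the snippet below loads an exported chat and prints its messages. The key names used here (`title`, `messages`, `role`, `content`, `timestamp`) are assumptions based on the description above; open one of your own exports to confirm the exact structure and adjust as needed.

```python
import json
from pathlib import Path

def print_exported_chat(path: str) -> None:
    """Print the messages of a Privata Ai JSON chat export.

    Field names are assumed from the documented contents (chat ID, title,
    timestamp, and messages with role/content/timestamp); verify them against
    a real export file before relying on this script.
    """
    data = json.loads(Path(path).read_text(encoding="utf-8"))
    print(f"Chat: {data.get('title', '(untitled)')}")
    for message in data.get("messages", []):
        role = message.get("role", "?")
        timestamp = message.get("timestamp", "")
        content = message.get("content", "")
        print(f"[{timestamp}] {role}: {content}")

if __name__ == "__main__":
    print_exported_chat("my_research_chat.json")  # filename from the example above
```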
Importing Conversations
Load previously exported chats back into Privata Ai.
- How to Import:
- From the Chat dropdown menu, select Import Chat.
- Select a valid `.json` file that was previously exported from Privata Ai.
- The chat will be added to your Chat History list and loaded into the main chat view.
- Note: Only JSON files exported from Privata Ai are guaranteed to be compatible.
Enabling GPU Acceleration
Leverage your NVIDIA graphics card for faster AI processing.
- Prerequisites:
- An NVIDIA GPU compatible with CUDA (check NVIDIA's documentation for your card's Compute Capability).
- Up-to-date NVIDIA drivers installed on your system.
- Sufficient VRAM on the GPU (varies by model, often 4GB+).
- Activation:
- Go to Settings > Model Settings.
- Check the box labeled Use GPU.
- Click Save Settings.
- Reload the AI model by clicking Load Model again (or loading a new one).
- Verification: The status bar in the sidebar should update to show "GPU: Active". If it shows "Inactive" or an error occurs, your system may not meet the requirements, or there might be a driver issue.
- Performance: GPU acceleration significantly speeds up response generation, especially for larger models. However, it also increases system power consumption and VRAM usage.
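Before enabling the Use GPU option, you can quickly confirm that your NVIDIA GPU, driver, and VRAM are visible to the operating system. The sketch below simply calls `nvidia-smi` (installed alongside the NVIDIA driver); it is a convenience check and not part of Privata Ai itself.

```python
import shutil
import subprocess

def check_nvidia_gpu() -> None:
    """Print basic NVIDIA GPU details (name, total VRAM, driver version)."""
    if shutil.which("nvidia-smi") is None:
        print("nvidia-smi not found - the NVIDIA driver may not be installed.")
        return
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,driver_version",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=False,
    )
    if result.returncode != 0:
        print(f"nvidia-smi reported an error: {result.stderr.strip()}")
        return
    for line in result.stdout.strip().splitlines():
        name, vram, driver = (part.strip() for part in line.split(","))
        print(f"GPU: {name} | VRAM: {vram} | Driver: {driver}")

if __name__ == "__main__":
    check_nvidia_gpu()
```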
Troubleshooting Common Issues
- Model Fails to Load:
- Verify the model file path in Settings is correct and the file exists.
- Ensure the model is a compatible GGUF format.
- Check if your system meets the minimum RAM requirements for the selected model (see Model Management).
- Look for specific error messages in the application's error notification area or console/log files.
- Try loading a known working, smaller model (like Phi-3 Mini) to isolate the issue.
- No Response / Stuck "Generating":
- Confirm a model is successfully loaded (check status bar).
- Ensure the prompt is not excessively long or complex for the model/settings.
- Try clearing the chat and sending a simple prompt like "Hello".
- Restart Privata Ai.
- Check system resources (RAM/CPU usage) - the system might be overloaded.
- Slow Performance:
- Enable GPU acceleration if you have a compatible NVIDIA card (see above).
- Use a smaller, less demanding AI model.
- Reduce the "Max Tokens" setting for shorter responses.
- Close other applications consuming significant RAM or CPU resources.
- API Not Accessible:
- Confirm the API is enabled in Settings and Privata Ai was restarted after enabling.
- Verify you are using the correct port number (`http://localhost:{port}`).
- Ensure no other application or firewall is blocking the specified port.
- If using an API key, double-check that it is included correctly in the `X-API-Key` header.
- Document Upload Issues:
- Ensure the file is in `.txt` or `.pdf` format.
- Verify the file is not corrupted and can be opened normally.
- Check file permissions if running on Linux/macOS.
- If the document is very large, processing might take time or fail if it exceeds internal limits (approx. 10k characters). Try a smaller document.
- UI Glitches / Freezes:
- Try restarting Privata Ai.
- Ensure your operating system and graphics drivers are up-to-date.
- Report the issue with details about your system and steps to reproduce (see Support).
Frequently Asked Questions (FAQs)
- Q: Is Privata Ai truly offline?
  A: Yes. Once the application and desired AI models are downloaded, all AI processing happens locally on your device. No prompts or data are sent over the internet during core operation. An internet connection is only needed for initial downloads or optional software updates.
- Q: Is Privata Ai free to use?
  A: Yes, the core Privata Ai software is free and intended to remain open-source. The associated $Prvta token is a separate utility mechanism to fund development.
- Q: Can I use models other than the ones listed in Downloads?
  A: Yes, Privata Ai is designed to work with models in the GGUF format. You can download compatible models from sources like Hugging Face and load them manually via the Settings page. Compatibility and performance may vary.
- Q: How is my chat history stored? Is it secure?
  A: Chat history is stored locally on your computer in a `chat_history.json` file within the application's configuration directory. The security of this file depends on your overall system security. Privata Ai does not encrypt this file by default.
- Q: What data does Privata Ai collect?
  A: The offline application itself collects no user prompts or conversation data; nothing is sent to external servers. If you interact with the website or use the optional update checker, standard anonymous usage data might be collected (refer to the Privacy Policy).
- Q: Does GPU acceleration work on AMD or Intel graphics cards?
  A: Currently, GPU acceleration in Privata Ai v1.0 relies on NVIDIA CUDA, so it primarily supports compatible NVIDIA GPUs. Support for other GPU types (like AMD ROCm or Intel OpenVINO) may be considered in future updates depending on library support and community demand.
Support and Feedback
We value your input and are here to help!
- Community Support: Join our official Telegram group (link available on the Homepage) to ask questions and interact with other users and the development team.
- Bug Reports & Feature Requests: Please submit detailed reports or suggestions via our official contact channels (see Contact Page) or community forums. Include your OS, Privata Ai version, steps to reproduce the issue, and any relevant logs.
- Email: For direct inquiries, contact us at `support [at] privata-ai.com` (replace `[at]` with `@`).
Version Changelog
For a detailed list of changes between different versions of Privata Ai, please refer to the Changelog Page.
Privata Ai Documentation v1.0 | Last Updated: April 26, 2025