ResourceFlow AI Sandbox Pricing
Trust and control built-in. Privacy-first AI that scales with your needs.
Key Principles (All Tiers)
Privacy by design. Your data, your control.
Privacy by Design
Every user sandbox is logically or physically isolated
PII Redaction
Built-in preprocessing removes identifiable personal data (see the sketch below)
No Data Sharing
Your documents never train shared models
GDPR Compliance
Data processed in-region with configurable storage
RAG Isolation
Even if the LLM runtime is shared (Free tier), vectors and documents remain private. Your knowledge base is always yours alone.
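The exact redaction pipeline isn't documented on this page, but as a rough illustration of the kind of preprocessing the PII Redaction principle describes, a minimal pass might strip obvious identifiers before any document is indexed. The patterns and function below are illustrative placeholders, not ResourceFlow's actual implementation.

```python
import re

# Illustrative patterns only -- a production redactor would use far more robust
# detection (NER models, locale-aware formats, allow-lists, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with typed placeholders before the text is indexed."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

print(redact_pii("Reach Jane at jane.doe@example.com or +1 (555) 123-4567."))
# Reach Jane at [REDACTED_EMAIL] or [REDACTED_PHONE].
```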
Enterprise Extras
Additional capabilities available with Enterprise tier
Bring-Your-Own-Model
Use Claude, GPT-4o, Gemini, Mistral's Mixtral, or any custom model of your choice.
Multi-Sandbox Orchestration
Manage team sandboxes with centralized control and monitoring.
Custom SSO / RBAC
Integrate with your identity provider and implement role-based access control with audit logging.
Hybrid Architecture
Combine an OSS LLM with an OpenAI API fallback for an optimal balance of cost and performance.
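The page doesn't spell out how the hybrid routing is wired, so the sketch below is only one plausible shape: try a self-hosted, OpenAI-compatible OSS endpoint first and fall back to the OpenAI API on error or timeout. The base URL, model names, and timeout are placeholders, and the snippet assumes the openai>=1.0 Python SDK with OPENAI_API_KEY set in the environment.

```python
from openai import OpenAI

# Placeholder endpoints and model names -- adjust to your own deployment.
local = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")  # e.g. a self-hosted vLLM server
cloud = OpenAI()  # reads OPENAI_API_KEY from the environment

def complete(prompt: str) -> str:
    """Prefer the self-hosted OSS model; fall back to the OpenAI API on failure."""
    messages = [{"role": "user", "content": prompt}]
    try:
        resp = local.chat.completions.create(
            model="mixtral-8x7b-instruct",  # hypothetical local model name
            messages=messages,
            timeout=10,  # seconds before giving up on the local model
        )
    except Exception:
        resp = cloud.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content
```

Falling back on failure is the simplest policy; a real deployment might also route on prompt length, latency budget, or cost.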
Frequently Asked Questions
What makes the Free Tier truly private?
Even in our shared runtime, each user gets a logically isolated sandbox with their own vector store namespace. PII is automatically redacted, and your data never trains external models or mixes with other users.
When should I upgrade to Pro?
Upgrade when you need persistent data storage, WebSocket support for real-time features, or better performance with your private Docker container. Pro tier gives you dedicated resources and longer runtime.
What's included in Enterprise?
Enterprise includes everything: bring-your-own-model (Claude, GPT-4o, Gemini), dedicated microVMs with GPU support, multi-sandbox orchestration, custom SSO, and hybrid architecture options. Plus full compliance certifications and dedicated engineering support.
Is document upload really unlimited?
Yes, all tiers support unlimited document uploads. Free tier has a 5MB per-file limit, while Enterprise has no file size restrictions. Content generation is also unlimited across all plans.
How does RAG isolation work?
Each user's documents and embeddings are stored in isolated vector spaces. Even in the Free tier with shared compute, your RAG queries only access your own data—never another user's content.
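The underlying vector database isn't named here, so the following is just a toy, in-memory illustration of the namespace idea: every upsert and query is keyed by the owning user, so a query can only ever rank documents that user uploaded. The class and method names are illustrative, not ResourceFlow's API.

```python
import math
from collections import defaultdict

class NamespacedVectorStore:
    """Toy in-memory store: every vector lives under its owner's namespace."""

    def __init__(self) -> None:
        # user_id -> list of (embedding, document) pairs
        self._spaces: dict[str, list[tuple[list[float], str]]] = defaultdict(list)

    def upsert(self, user_id: str, embedding: list[float], document: str) -> None:
        self._spaces[user_id].append((embedding, document))

    def query(self, user_id: str, embedding: list[float], top_k: int = 3) -> list[str]:
        # Only the caller's own namespace is searched; other users' vectors are unreachable.
        scored = [(self._cosine(embedding, vec), doc) for vec, doc in self._spaces[user_id]]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [doc for _, doc in scored[:top_k]]

    @staticmethod
    def _cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0
```

With this layout, store.query("alice", vector) never sees documents that "bob" uploaded, which is the guarantee the Free tier's shared compute still has to honor.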
Can I customize the LLM models?
Pro tier offers optional fine-tuning as an add-on. Enterprise includes fine-tuning and lets you bring your own models (Mixtral, GPT-4, Claude, etc.) or use hybrid architectures combining OSS and API models.
Ready to Get Started?
Start free or talk to our team about Pro and Enterprise options