ResourceFlow AI Sandbox Pricing

Trust and control, built in. Privacy-first AI that scales with your needs.

| Feature | Free Tier (Starter) | Pro Tier (Growth) | Enterprise Tier (Scale) |
| --- | --- | --- | --- |
| Price | Free | $10/mo | Custom |
| Get started | Get Started Free | Start Pro Trial | Contact Sales |
| Sandbox Type | Shared multi-tenant runtime | Private container (Docker) | Dedicated microVM or ECS task |
| Model (LLM) | Gemma 3 (1B, quantized) | Gemma 3 (1B–2B, full precision) | Gemma 7B / Mixtral / ChatGPT-compatible (OSS or API hybrid) |
| Performance | Shared CPU | 4 GB RAM / 2 vCPU | 16–64 GB RAM / 4–16 vCPU (optional GPU) |
| Data Privacy & Security | All data private per user; automatic PII redaction; isolated vector store (unique namespace per user); GDPR-compliant processing | Same guarantees as Free | Same guarantees + VPC network isolation + custom compliance addenda |
| RAG & Vector Database | Private logical isolation (SQLite/Redis) | Dedicated local Redis/SQLite | Persistent Redis + EFS + cross-app federation |
| Document Uploads | Unlimited (≤ 5 MB per file) | Unlimited | Unlimited (no file size limits) |
| Content Generation | Unlimited (shared compute) | Unlimited (private runtime) | Unlimited + custom templates + fine-tuning |
| Auto-shutdown Time | 15 min idle | 60 min idle | Configurable / always-on |
| Data Persistence | Ephemeral sandbox | Persistent per user | Persistent + daily encrypted backups |
| API & Integrations | REST only | REST + WebSocket + n8n/Zapier | REST + WebSocket + private API keys + enterprise connectors (CRM, HRIS, etc.) |
| Security Isolation Level | App-level auth + logical sandbox | Container-level | MicroVM / VPC network / enclave |
| Fine-tuning / Custom Models | Not available | Optional add-on | Included |
| Compliance & Audit | GDPR-ready / EU data residency | GDPR + SOC 2 alignment | GDPR + SOC 2 + ISO 27001 (optional) |
| Support | Community forum | Priority email + chat | Dedicated engineer + SLA |

Key Principles (All Tiers)

Privacy by design. Your data, your control.

🔒 Privacy by Design

Every user sandbox is logically or physically isolated

👁️ PII Redaction

Built-in preprocessing removes identifiable personal data
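As a rough illustration of the idea (the patterns below are placeholders, not ResourceFlow's actual redaction pipeline), a preprocessing pass can mask common PII shapes before text ever reaches the model or the vector store:

```python
import re

# Illustrative patterns only; a production redactor would combine rules like
# these with a trained PII detector and locale-aware formats.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders before indexing."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Contact Jane at jane.doe@example.com or +1 555 010 9999."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```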

🛡️ No Data Sharing

Your documents never train shared models

GDPR Compliance

Data processed in-region with configurable storage

RAG Isolation

Even if the LLM runtime is shared (Free tier), vectors and documents remain private. Your knowledge base is always yours alone.
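A minimal sketch of the namespace idea, assuming a simple per-user bucket (the real store is SQLite/Redis-backed and the key layout here is hypothetical): every read and write is scoped to the calling user's ID, so a query can only ever search that user's vectors.

```python
from dataclasses import dataclass, field

@dataclass
class IsolatedVectorStore:
    """Toy illustration of per-user namespacing: every key is scoped to a
    user ID, so one tenant's queries can never reach another tenant's data."""
    _data: dict[str, dict[str, list[float]]] = field(default_factory=dict)

    def _ns(self, user_id: str) -> dict[str, list[float]]:
        # One namespace (bucket) per user, created on first use.
        return self._data.setdefault(f"user:{user_id}", {})

    def upsert(self, user_id: str, doc_id: str, embedding: list[float]) -> None:
        self._ns(user_id)[doc_id] = embedding

    def query(self, user_id: str, embedding: list[float], k: int = 3):
        # Similarity scoring is omitted; the point is that the search space
        # is only this user's namespace.
        namespace = self._ns(user_id)
        return list(namespace.keys())[:k]

store = IsolatedVectorStore()
store.upsert("alice", "doc-1", [0.1, 0.2])
store.upsert("bob", "doc-9", [0.9, 0.8])
print(store.query("alice", [0.1, 0.2]))  # ['doc-1'] -- never sees bob's docs
```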

Enterprise Extras

Additional capabilities available with Enterprise tier

Bring-Your-Own-Model

Use Claude, GPT-4o, Gemini, Mixtral, or any custom model of your choice.

Multi-Sandbox Orchestration

Manage team sandboxes with centralized control and monitoring.

Custom SSO / RBAC

Integrate with your identity provider and implement role-based access control with audit logging.
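To make the shape of this concrete, here is a hedged sketch of an application-level RBAC check with audit logging; the role names, permissions, and log format are placeholders, and in practice the roles would come from your identity provider's SSO claims:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Placeholder roles and permissions; an IdP (Okta, Entra ID, etc.) would
# normally supply the role via SSO claims.
ROLE_PERMISSIONS = {
    "viewer": {"sandbox:read"},
    "editor": {"sandbox:read", "sandbox:write"},
    "admin":  {"sandbox:read", "sandbox:write", "sandbox:delete"},
}

def authorize(user: str, role: str, action: str) -> bool:
    """Check the requested action against the role and record an audit entry."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.info(
        "%s user=%s role=%s action=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, action, allowed,
    )
    return allowed

authorize("jane@example.com", "viewer", "sandbox:delete")  # logged and denied
```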

Hybrid Architecture

Combine an OSS LLM with an OpenAI API fallback to balance cost and performance.
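A sketch of the routing idea under assumed helpers (call_local_model, call_hosted_api, and the confidence threshold are illustrative stand-ins, not the shipped implementation): serve from the local OSS model when possible and fall back to a hosted API otherwise.

```python
import random

def call_local_model(prompt: str) -> tuple[str, float]:
    # Stand-in for a local OSS model call (e.g. a Mixtral or Gemma server);
    # returns (reply, confidence). Purely illustrative.
    return f"[local] {prompt[:40]}", random.uniform(0.5, 1.0)

def call_hosted_api(prompt: str) -> str:
    # Stand-in for a hosted API call (e.g. via the OpenAI SDK).
    return f"[hosted] {prompt[:40]}"

def generate(prompt: str, min_confidence: float = 0.7) -> str:
    """Prefer the local OSS model; fall back to the hosted API when the
    local call fails or its confidence is below the threshold."""
    try:
        reply, confidence = call_local_model(prompt)
        if confidence >= min_confidence:
            return reply
    except RuntimeError:
        pass  # local runtime unavailable; fall through to the hosted API
    return call_hosted_api(prompt)

print(generate("Summarize this quarter's hiring plan."))
```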

Frequently Asked Questions

What makes the Free Tier truly private?

Even in our shared runtime, each user gets a logically isolated sandbox with their own vector store namespace. PII is automatically redacted, and your data never trains external models or mixes with other users' content.

When should I upgrade to Pro?

Upgrade when you need persistent data storage, WebSocket support for real-time features, or better performance from a private Docker container. Pro gives you dedicated resources and a longer idle window (60 minutes vs. 15) before auto-shutdown.
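For instance, WebSocket support lets you stream sandbox output instead of polling over REST; here is a minimal client sketch, with a hypothetical endpoint URL and message shape (not the documented API):

```python
import asyncio
import json

import websockets  # pip install websockets

async def stream_generation(prompt: str) -> None:
    # Hypothetical endpoint and payload shape, for illustration only.
    async with websockets.connect("wss://sandbox.example.com/v1/stream") as ws:
        await ws.send(json.dumps({"prompt": prompt}))
        async for message in ws:
            # Print streamed tokens as they arrive.
            print(json.loads(message).get("token", ""), end="", flush=True)

asyncio.run(stream_generation("Draft a weekly status update."))
```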

What's included in Enterprise?

Enterprise includes everything: bring-your-own-model (Claude, GPT-4, Gemini), dedicated microVMs with GPU support, multi-sandbox orchestration, custom SSO, and hybrid architecture options. Plus full compliance certifications and dedicated engineering support.

Is document upload really unlimited?

Yes, all tiers support unlimited document uploads. The Free tier has a 5 MB per-file limit, while Enterprise has no file size restrictions. Content generation is also unlimited across all plans.

How does RAG isolation work?

Each user's documents and embeddings are stored in isolated vector spaces. Even in the Free tier with shared compute, your RAG queries only access your own data—never another user's content.

Can I customize the LLM models?

Pro tier offers optional fine-tuning as an add-on. Enterprise includes fine-tuning and lets you bring your own models (Mixtral, GPT-4, Claude, etc.) or use hybrid architectures combining OSS and API models.

Ready to Get Started?

Start free or talk to our team about Pro and Enterprise options