Documentation Index

Fetch the complete documentation index at: https://docs.loraverse.io/llms.txt

Use this file to discover all available pages before exploring further.

This is the problem Loraverse was built to solve. We call it Generative Amnesia.

Most AI tools treat every generation as a fresh start. You describe a character, generate an image, and the next time you ask for that character, the system has no memory of who they are. Eyes change. Hair drifts. Wardrobe resets. For one image, that's fine. For a film, a series, or a campaign, it's a wall.

Loraverse is built around the opposite premise. Characters, environments, props, and styles aren't prompts you re-describe; they're entities with structured identity (DNA) and trusted visual references (canonical). When you cast an entity into a scene, Loraverse uses what you've already established. The same character looks like themselves across every shot, because they are the same character to the system.

The shift is from describing a character every time to building a character once.
Most generative AI tools are prompt-based. You describe what you want, get an output, then start over. Each generation is its own moment, mostly disconnected from what came before. That's fast for single images but breaks down the moment you need continuity, structure, or a real production.

Loraverse is entity-based and production-graph-shaped. Characters, environments, props, and styles are persistent structured objects with identity and references. Scenes, beats, and shots organize your work as a real production. Every generation lives on a graph that remembers what made it.

The result: you can build cinematic work that holds together across hundreds of shots, and you can hand it off to a collaborator or a downstream tool without losing the production behind it. That's what Loraverse was built for.
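The contrast between prompt-based and entity-based generation can be sketched in a few lines of Python. This is an illustrative data model only: the names (`Entity`, `dna`, `canonical_refs`, `Shot`, `cast`) are hypothetical and not Loraverse's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the entity-based model described above.
# All names and fields are illustrative, not Loraverse's real schema.

@dataclass
class Entity:
    """A persistent character, environment, prop, or style."""
    name: str
    dna: dict                     # structured identity traits
    canonical_refs: list = field(default_factory=list)  # trusted reference images

@dataclass
class Shot:
    """One generation node on the production graph."""
    prompt: str
    cast: list                    # entities reused, not re-described
    parents: list = field(default_factory=list)  # what made this shot

hero = Entity("Mara", dna={"eyes": "green", "hair": "short auburn"})
hero.canonical_refs.append("mara_front_v3.png")

# Prompt-based: identity is restated in text each time, free to drift.
prompt_only = Shot(prompt="a green-eyed woman with short auburn hair on a rooftop",
                   cast=[])

# Entity-based: the shot references the same persistent character object,
# so every generation resolves to one established identity.
entity_based = Shot(prompt="on a rooftop at dusk", cast=[hero])
```

In the entity-based shot, the identity lives on `hero`, not in the prompt: change the character's DNA once and every downstream shot that casts them picks it up.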
Loraverse is model-agnostic. We don't hide the models behind workflows; we label them clearly so you always know what's making your work.

Image models include the Flux family (1.1 Pro, 1.1 Pro Ultra, Dev, 2 Dev, 2 Flex, 2 Pro, Multi-Angle Pro), Nano Banana, Nano Banana 2, Gemini 3 Pro Image, Seedream, Qwen Image variants, and more.

Video models include Kling V3 Pro, Kling O3 Pro, Seedance 2.0, Hailuo, and a growing catalog. Different models have different capabilities (first-frame, first-and-last-frame, omni-reference, multi-shot), and the Bench Bar adapts to whichever you pick.

When new models reach production quality, they land in the model picker.
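A capability-adaptive UI like the one described can be sketched as a lookup from model to capability tags to controls. Note this is purely illustrative: the capability assignments below are placeholders, not the actual feature lists of these models, and the control names are invented.

```python
# Hypothetical sketch of capability-driven UI adaptation.
# The per-model capability sets here are PLACEHOLDERS, not the real
# feature lists; control names are likewise invented for illustration.

VIDEO_MODEL_CAPS = {
    "Kling V3 Pro": {"first-frame", "first-and-last-frame"},
    "Seedance 2.0": {"first-frame", "multi-shot"},
}

CONTROLS_BY_CAP = {
    "first-frame": "start-frame picker",
    "first-and-last-frame": "start/end-frame pickers",
    "omni-reference": "reference-image slots",
    "multi-shot": "shot-list editor",
}

def bench_bar_controls(model: str) -> list:
    """Return the controls a bench-bar-style UI would show for this model."""
    caps = VIDEO_MODEL_CAPS.get(model, set())
    return sorted(CONTROLS_BY_CAP[c] for c in caps)
```

The point of the pattern: the UI never hard-codes per-model layouts; it renders whatever the selected model declares, so a new model only needs a capability entry to get the right controls.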
Not yet, but it's on the way.

Today, Loraverse handles model access centrally. You generate, we route to the right provider, and credits are deducted from your account. This keeps things simple for creative users and consistent for cost tracking.

Bring Your Own Keys (BYOK) is on the roadmap, especially for studio and enterprise users who want to use their own provider contracts. When it ships, you'll be able to attach your API keys to specific workflows or models and route generations through your account directly.

If BYOK is critical for your workflow, let us know; it helps shape priorities.
We're currently drafting the full Terms of Service with our lawyer.

For closed beta, Loraverse is working with experienced creative professionals to shape the platform's next stage. Beta participants receive a credit grant in exchange for helping us build this together. The full ownership and commercial-use terms will be published before the closed beta concludes, and well before public launch.

If you have specific commercial requirements during the beta period, reach out and we'll work through them with you.
Also part of the Terms being drafted with our lawyer.

In closed beta, our focus is the platform, the workflows, and the creative output the cohort produces with us. We are not currently training models on user-generated content as a default policy. The full training, retention, and data-use terms will be published before they take effect, and you'll have clear visibility into what's used for what.

We won't change any of this quietly. Transparency on data is a foundation, not an afterthought.
Credits are how Loraverse meters generation cost.

For the closed beta, each cohort participant receives a credit grant designed around the beta challenge: enough to produce roughly 200 images, 60 seconds of video, and 2 worlds. That's the scale of project we want you to actually build during beta: not a token allowance, but enough to make something real and tell us what works.

On pricing philosophy: we're committed to transparency on cost. Every generation in Loraverse itemizes its spend in the Ledger, so you always know what each image, take, or world cost. When paid plans launch publicly, the pricing will be published openly, and the cost-per-generation breakdown stays visible.

We don't believe creative tools should hide what they cost.
Today, Loraverse's primary surfaces are CREATE, COMPOSE, DAILIES, and DISPATCH, designed for production creatives working inside the app.

For developers and technical users, more is coming. Our own node canvas (with JSON sharing for pipeline portability) is in the works. Bring Your Own Keys will follow. A public API for triggering generations and reading project data is further out, but it's where we're heading.

If you're a developer or technical lead who wants early access to the engineer-facing surface, reach out. We're shaping it with the people who'll actually use it.