Refracturing Media: The Quality-Control Path to VR Abundance
VR adoption isn’t primarily stuck because headsets aren’t good enough. VR is stuck because catalog supply still doesn’t compound. Conversion manifests will make conversion investable because they make it governable.

The loop is rational, not broken
If VR feels “stuck,” it’s tempting to blame everyone for being unimaginative. That’s the lazy read.
The real read is that the incentives line up exactly the way you’d expect: users want more “this is why I own a headset” content before they commit; studios and creators want a bigger audience before they take on a heavier production stack; and platforms want growth without flooding the medium with uncomfortable output that trains users to churn. Nobody is confused. Everyone is optimizing.
There is proof that VR demand is real, even if it’s unevenly distributed. Meta’s developer-facing ecosystem update (GDC 2025) reported that cumulative spend on Quest titles has surpassed $2B, payments increased in 2024, and monthly time spent in VR was up materially year-over-year.
But demand signals don’t magically create catalog abundance — especially when the supply-side cost stack is still heavy. You can feel this in the financials: Meta’s Reality Labs segment continues to run large operating losses. That’s an institutional reminder that “people will buy and spend” does not automatically mean “content supply scales into a self-funding flywheel.”
The stalemate persists for logical reasons. That’s the important shift: if the behavior is rational, the fix isn’t more hype. The fix is changing the unit economics of supply so all three parties can say “yes” at the same time.
The inventory mismatch is the entire plot
2D isn’t “plentiful.” It’s industrial-scale, planetary-scale, borderline absurd-scale.
On video alone: YouTube has publicly stated that creators upload about 500 hours of video every minute. (That single statistic implies roughly 720,000 hours per day — before you count everything else: streaming back catalogs, broadcast archives, short-form platforms, corporate libraries, and personal storage.)
On photos: a major imaging market research firm estimates global photo capture in the trillions per year, including ~1.6 trillion photos in 2023, ~1.8 trillion in 2024, and an expected ~1.9 trillion in 2025.
Now compare that with comfort-safe immersive inventory — stereo 3D variants that hold up in motion, don’t violate perceptual constraints, and are actually pleasant to watch in a headset or theatre. Relative to the 2D universe, that catalog is tiny.
That imbalance isn’t a creativity problem. It’s a conversion-operations problem.
Why native 3D doesn’t backfill the catalog: the economics
3D production & conversion are f#cking expensive
Native immersive production can be incredible. It can also be structurally mismatched to “catalog compounding.” The math is unfriendly: more gear, more specialized staff, more constraints, and more failure modes that don’t exist in 2D.
Even in the older stereoscopic TV production era (not "VR", but the closest mass-market analog), a detailed budgeting case study for a "typical" factual episode reported that matching 2D production values in 3D could require materially more time in the field, citing 2.5× field days as a minimum based on earlier experience, plus additional specialist roles (e.g., stereographer and engineering support) that simply don't exist in a standard 2D crew.
Studios did what capital-intensive industries always do: reserve native 3D bets for the narrow slice of projects where the upside is obvious (tentpoles, showcase experiences, special live events). That's rational. It's just not a recipe for abundance.
“What about converting the back catalog?” Historically, high-end conversion has also been expensive — because “high-end” means labor, iteration, and supervision, not just a button-push. Trade reporting during the 3D boom quoted a major vendor’s charges at around $50,000–$100,000 per minute (a range echoed by other industry coverage), which is why conversion decisions became board-level calls, not “we’ll try it this weekend” experiments.
The outliers are instructive. The 3D conversion of Titanic was reported as a multi-month effort costing around $18 million; spread over the film's three-hour-plus running time, that is on the order of $90,000 per minute, squarely inside the quoted range. Again: not a casual experiment, but a major production in its own right.
This is why “convert the catalog” hasn’t been the default strategy. The old conversion stack behaves like a bespoke service business: it can produce beautiful results, but it’s hard to scale.
Why now, and why “slop” is a VR-specific failure mode
The reason this conversation matters now is not that “AI exists.” It’s that the cost curve moved enough that large-scale conversion is no longer obviously impossible.
The Stanford HAI AI Index Report (2025) summarizes a set of hardware/economics trends that matter for any compute-bound media pipeline: at the hardware level, costs have declined around 30% annually while energy efficiency has improved around 40% each year. That’s the good news.
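Taking the hardware figure at face value, the compounding is easy to check (a back-of-envelope only, assuming the ~30% annual decline holds steady):

$$c(t) = c_0(1 - 0.30)^t \quad\Rightarrow\quad c(3) = 0.7^3\,c_0 \approx 0.34\,c_0$$

In other words, a compute-bound conversion pipeline that was uneconomic three years ago may cost roughly a third as much to run today.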
The bad news is that cheaper compute also makes it easier to mass-produce low-quality output. In flat screen media, bad output is aesthetic. In VR, bad output can be physical. There is a well-studied body of work on visual discomfort in stereoscopic displays: discomfort is influenced by factors like vergence–accommodation issues, excessive disparity, and cue conflicts; it is widely treated as a barrier to broader adoption of stereo 3D experiences.
Perceptual research explicitly calls out that automated 2D-to-3D conversion can generate depth-sign and cue-dissociation errors that are “strange and uncomfortable.”
Platforms already treat comfort as first-class. Meta’s own Quest guidance includes comfort ratings (Comfortable → Moderate → Intense) to help users anticipate motion/comfort intensity, recommending that new users begin on the more comfortable end of the scale.
Put that together and you get a blunt conclusion: scaling conversion without quality gates doesn’t create abundance. It creates distrust. Scarcity makes people complain; slop makes people quit.
Manifest, factory, flywheel
At small scale, conversion is easy to describe: “run the model.” At catalog scale, that mindset becomes an expensive hobby.
The difference between a hobby and infrastructure is governance: predictable cost, reproducibility, auditability, and continuous improvement.
That is exactly what our “conversion manifests” are built to operationalize. Our definition is simple and strict: a conversion manifest is the unit of scale.
Every asset gets a decision record. Not a vibes-based greenlight. A decision record that answers, in writing: what we observed, what we recommend, what it will cost, what quality bar it must clear, and what happens if it fails.
Why it exists is equally simple: at catalog scale, manifests turn conversion from “we rendered it again” into an auditable system. We can't improve what we don't measure.
Finance gets predictable spend and stop-loss controls instead of unbounded GPU drift.
Trust & safety gets scoring now (and enforcement later) without losing history, which matters because policy regimes change but audit trails shouldn’t.
Engineering gets reproducible recipes and model swappability, because the manifest describes the work, not a single vendor’s magic. Monthly, quarterly, or yearly re-releases become routine.
Content ops gets a ranked backlog instead of vibes.
The manifest is what lets you build a factory that turns into a flywheel.
The compounding loop is the core operating rhythm:
Score → Recipe → Convert → QA → Publish → Measure → Update
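Written down as a toy sketch (every function below is a placeholder, not Anelo's API), the rhythm is just a loop whose decision policy tightens as results come in:

```python
# Toy skeleton of the Score → Recipe → Convert → QA → Publish → Measure → Update
# loop. Every function, field, and threshold here is illustrative, not Anelo's API.

def score(asset):
    return asset["suitability"]           # stand-in for real suitability/comfort scoring

def choose_recipe(suitability):
    return "high_touch" if suitability > 0.8 else "low_touch"

def convert(asset, recipe):
    return {"asset": asset["id"], "recipe": recipe}   # the only GPU-heavy step

def qa(output):
    return True                           # real gates check comfort/artifact thresholds

def publish(output):
    pass                                  # stand-in for packaging and distribution

def measure(output):
    return {"completion": 0.62}           # placeholder engagement metric

def run_cycle(backlog, floor=0.5):
    for asset in backlog:
        s = score(asset)                              # Score
        if s < floor:
            continue                                  # reject before spending compute
        recipe = choose_recipe(s)                     # Recipe
        output = convert(asset, recipe)               # Convert
        if not qa(output):
            continue                                  # QA gate; persist reasons in practice
        publish(output)                               # Publish
        metrics = measure(output)                     # Measure
        if metrics["completion"] < 0.5:
            floor += 0.05                             # Update: tighten the decision policy
    return floor

run_cycle([{"id": "ep-101", "suitability": 0.9}, {"id": "ep-102", "suitability": 0.3}])
```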
What’s inside a conversion manifest
A manifest doesn’t need to be complicated.
It needs to be complete enough that (a) you can reproduce the output, and (b) you can explain the decision to a skeptical operator, not just a hopeful creator.
| Manifest component | What it contains | Why it matters at scale |
|---|---|---|
| Asset | Location/IDs, rights/entitlements pointer, technical metadata (resolution, frame rate, codec), plus derived features (motion intensity, scene complexity proxies, cut density, estimated depth ambiguity) | Enables preflight cost estimation and predicts failure modes before spend |
| Scores | Suitability score, comfort-risk flags, artifact-risk flags, policy scoring (as applicable), confidence intervals | Filters “will-fail” content early; creates a defensible backlog order |
| Recipe | Lane (low/med/high touch), steps, model family (model-agnostic), target formats (SBS, MV-HEVC, etc.), required QA thresholds | Makes runs reproducible; lets you swap models without rewriting the factory |
| Governance | Spend caps, stop-loss triggers, idempotency keys, audit keys, provenance fields, retry policy | Prevents GPU bonfires; enables audit and rollback; supports compliance |
| Outputs | Expected artifact index: stereo deliverable(s), depth/disparity outputs if stored, QA report, thumbnails/previews, metrics hooks | Downstream systems can validate completeness and trace every output to a decision record |
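As a sketch of what such a record might look like in code (field names are illustrative, not a published Anelo schema), the table above maps cleanly onto a few small types:

```python
from dataclasses import dataclass, field

# Illustrative manifest schema mirroring the table above.
# Field names are hypothetical, not a published Anelo format.

@dataclass
class Scores:
    suitability: float               # 0–1 fit for conversion
    comfort_risk: list[str]          # e.g. ["excessive_disparity"]
    artifact_risk: list[str]         # e.g. ["depth_sign_error"]
    confidence: tuple[float, float]  # interval around the suitability estimate

@dataclass
class Recipe:
    lane: str                        # "low" | "med" | "high" touch
    steps: list[str]
    model_family: str                # model-agnostic: swap models, keep the factory
    target_formats: list[str]        # e.g. ["SBS", "MV-HEVC"]
    qa_thresholds: dict[str, float]

@dataclass
class Governance:
    spend_cap_usd: float
    stop_loss_usd: float
    idempotency_key: str
    provenance: dict[str, str] = field(default_factory=dict)

@dataclass
class Manifest:
    asset_id: str
    rights_ref: str                  # pointer to entitlements, not the rights themselves
    tech_meta: dict[str, str]        # resolution, frame rate, codec
    scores: Scores
    recipe: Recipe
    governance: Governance
    expected_outputs: list[str]      # artifact index for downstream validation
```

The exact fields matter less than the property they buy: every output can be traced back to, and regenerated from, one record.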
How platforms scale supply using manifests without flooding the world with slop
When platforms do this correctly, the playbook looks less like a creative studio and more like any other high-throughput production system.
Build the backlog with first-party data — ranked by engagement in the headset cohort, not global popularity. The point is to convert where incremental immersive value is most likely.
Run preflight scoring at intake to reject “will-fail” content before compute is spent. Candidate selection is where most ROI is won.
Assign lanes based on (a) business value and (b) suitability. High-value + high-fit assets earn higher-touch recipes; lower-fit assets either route to cheap recipes or get skipped with a recorded reason (a toy version of this logic is sketched after this list).
Calibrate per category. Sports, sitcoms, animation, handheld low-light, and CGI-heavy content do not share the same failure modes, so they should not share the same default recipes.
Enforce QA gates like a factory. If it fails, you reroute or skip — but you persist failure reason codes so your scoring improves. This is how “uncomfortable content” gets pushed out of the system, not merely noticed after release.
Measure outcomes post-publish (watch time, completion, retention, comfort feedback proxies) and feed them back into ranking and recipe choice. Budgets follow outcomes. Supply compounds because every cycle improves your decision policy and quality, not just your output volume.
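A minimal sketch of that lane-and-gate logic, assuming nothing about Anelo's internals; thresholds, field names, and reason codes are all invented for illustration:

```python
# Hypothetical lane assignment and QA gating for the playbook above.
# Thresholds and reason codes are illustrative, not production values.

def assign_lane(business_value: float, suitability: float) -> str | None:
    """Route an asset to a recipe lane, or return None to skip it."""
    if suitability < 0.3:
        return None                        # "will-fail" content: reject before compute
    if business_value > 0.7 and suitability > 0.7:
        return "high_touch"                # earns supervision and iteration
    return "low_touch"                     # cheap automated recipe

failure_log: list[dict] = []               # persisted reason codes feed future scoring

def qa_gate(asset_id: str, metrics: dict[str, float],
            thresholds: dict[str, float]) -> bool:
    """Factory-style gate: fail closed, and always record why."""
    reasons = [name for name, bar in thresholds.items()
               if metrics.get(name, 0.0) < bar]
    if reasons:
        failure_log.append({"asset": asset_id, "reasons": reasons})
        return False
    return True

# Example: a handheld low-light clip clears intake but fails the disparity bar.
print(assign_lane(business_value=0.9, suitability=0.8))    # high_touch
print(qa_gate("clip-17", {"disparity_stability": 0.4},
              {"disparity_stability": 0.8}))               # False, reason persisted
```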
If you want to summarize the whole thesis in one line: manifests make conversion investable because they make it governable.
Applied data and scalable, auditable operations are the moat
When VR abundance arrives, it won’t be because someone found a single, permanently superior model.
Model quality will keep moving. Costs will keep shifting. Rendering quality will improve. The “best model this month” will be replaced.
The durable advantage is the applied data layer plus the operating discipline to use it.
In practice, that means four things:
You use engagement and performance signals to select candidates, instead of converting everything and praying.
You predict conversion success pre-run (fit scoring), because high-scale conversion without rejection is just a new kind of spam.
You choose the most economic conversion plan per asset to hit a quality bar, because “maximum quality always” is not a scalable budget policy (see the sketch after this list).
You produce auditable workstreams (manifests + recipes + QA reports) so the operation can be continuously improved — and defended.
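For the plan-economics point, the selection rule can be as simple as “cheapest recipe predicted to clear the bar.” A toy version, with costs and quality predictions invented for illustration (a real system would predict per-asset rather than use a fixed table):

```python
# Toy "most economic plan" chooser: pick the cheapest recipe whose predicted
# quality clears the bar. All numbers here are invented for illustration.

RECIPES = [
    {"name": "low_touch",  "cost_per_min": 40,   "quality_boost": 0.00},
    {"name": "med_touch",  "cost_per_min": 400,  "quality_boost": 0.15},
    {"name": "high_touch", "cost_per_min": 4000, "quality_boost": 0.30},
]

def pick_plan(predicted_base_quality: float, quality_bar: float):
    """Return the cheapest recipe expected to clear the bar, else None (skip)."""
    viable = [r for r in RECIPES
              if predicted_base_quality + r["quality_boost"] >= quality_bar]
    return min(viable, key=lambda r: r["cost_per_min"]) if viable else None

print(pick_plan(0.70, 0.80))  # med_touch: low_touch misses the bar, high_touch overpays
print(pick_plan(0.40, 0.80))  # None: skip with a recorded reason rather than burn GPU
```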
Why? Because the economic advantages will accrue to whoever:
Scores candidates best
Predicts failure before spending
Controls cost variance
Enforces QA gates rigorously
Closes the feedback loop between publish → measure → re-rank
This is a rare moment in film and digital media: an applied-systems moat may be worth more than a creative one.
The oil and gas refracturing analogy
Anelo is for media what refracturing is for oil and gas
The Society of Petroleum Engineers describes refracturing as extracting additional value from existing wellbores without drilling new wells; that framing is useful precisely because it forces candidate selection and economic discipline.
The technical literature on refracturing is blunt that candidate selection is a primary challenge — and that structured evaluation is central to success.
Refracturing:
Doesn’t assume every well is viable
Requires structured candidate selection
Is economics-first, not optimism-first
That’s the mindset behind Anelo: treat 2D-to-3D conversion as governed operations, not a boutique art project:
Not all 2D assets should be converted.
Candidate selection is the real moat.
Governance determines profitability.
In Anelo’s own product framing, the emphasis is on scaling conversion with operational controls like audit logs and scoring metadata, and tools for extracting frames and scoring candidates — not just producing a render.
Sources:
https://developers.meta.com/horizon/blog/gdc-2025-past-present-future-developing-vr-mr-meta-audience-insights/
https://investor.atmeta.com/investor-news/press-release-details/2026/Meta-Reports-Fourth-Quarter-and-Full-Year-2025-Results/default.aspx
https://blog.youtube/news-and-events/youtube-at-15-my-personal-journey/
https://riseaboveresearch.com/2023-worldwide-image-capture-forecast-2022-2027-new-report/
https://documentarytelevision.com/producers-tool-kit/3d-versus-2dhd-whats-the-impact-on-budgets-part-1-field-production/
https://www.hollywoodreporter.com/business/business-news/debate-waging-over-2d-3d-22262/
https://www.hollywoodreporter.com/news/general-news/titanic-3d-nab-producer-jon-landau-james-cameron-312631/
https://academic.oup.com/jge/article/16/4/789/5539061
https://hai.stanford.edu/ai-index/2025-ai-index-report
https://www.sciencedirect.com/science/article/pii/S0923596516301096
https://pmc.ncbi.nlm.nih.gov/articles/PMC3490636/
https://about.fb.com/wp-content/uploads/2022/10/VR-Comfort-and-Safety.pdf
https://www.researchgate.net/publication/262237667_Depth_estimation_for_semi-automatic_2D_to_3D_conversion
https://www.postmagazine.com/Publications/Post-Magazine/2010/November-1-2010/Dispelling-the-Myths-2D-to-3D-conversion.aspx
https://www.spe-events.org/workshop/refracturing-proven-strategy-maximize-economic-recovery
https://anelo.ai/
