
VSE: Render scene strips in original resolution
Needs Review · Public

Authored by Richard Antalik (ISS) on Dec 1 2021, 3:48 PM.

Details

Summary

When a scene strip is used to add a composition from another sequencer, it is rendered at the "host" scene's resolution. This means a larger composition can't be added to a scene with a smaller resolution without the result being effectively cropped. When a scene strip is used to add a 3D render, however, it is rendered at the input scene's resolution.

This patch makes scene strips always render at the resolution specified by the input scene.
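To sketch the resolution choice this patch makes, a minimal Python model (the real logic lives in Blender's C sequencer code; the function and parameter names here are hypothetical):

```python
def strip_render_size(host_scene, input_scene, use_original_resolution):
    """Pick the resolution a scene strip is rendered at.

    host_scene / input_scene are (width, height) tuples here; in Blender
    they would come from each Scene's render settings.
    """
    if use_original_resolution:
        # Patched behavior: always render at the input scene's own size,
        # so a 1000x1000 composition keeps all of its pixels.
        return input_scene
    # Old behavior for sequencer input: render at the host scene's size,
    # cropping or deforming anything that does not fit.
    return host_scene

# A 1000x1000 composition used inside an 800x500 edit:
print(strip_render_size((800, 500), (1000, 1000), True))   # (1000, 1000)
print(strip_render_size((800, 500), (1000, 1000), False))  # (800, 500)
```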


To explain visually:

This is the composition (1000x1000) I would like to use as a background, for example:

If I add this as a scene strip to a new scene with an 800x500 resolution and scale it to fit, I get this result:

The whole image is cropped, and the 4 rectangles are deformed, because they are scaled color strips that always fill the whole image, so they are "sensitive" to resolution.
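To make the cropping concrete, a small sketch (a hypothetical helper, not Blender code) of how much of the source survives when a larger composition is composited into a smaller canvas without scaling:

```python
def cropped_fraction(src, dst):
    """Fraction of the source image lost when it is composited into a
    smaller canvas without scaling (strips are placed in canvas pixels)."""
    sw, sh = src
    dw, dh = dst
    visible = min(sw, dw) * min(sh, dh)
    return 1 - visible / (sw * sh)

# 1000x1000 composition in an 800x500 scene: only 800*500 of the
# 1,000,000 source pixels survive, i.e. 60% of the image is lost.
print(cropped_fraction((1000, 1000), (800, 500)))  # 0.6
```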

Example .blend file

Diff Detail

Repository
rB Blender
Branch
scene_size (branched from master)
Build Status
Buildable 19125
Build 19125: arc lint + arc unit

Event Timeline

Richard Antalik (ISS) requested review of this revision. Dec 1 2021, 3:48 PM
Richard Antalik (ISS) created this revision.
Richard Antalik (ISS) edited the summary of this revision. Dec 1 2021, 3:55 PM

Not sure what the "host" scene is. The render pipeline is currently supposed to use the active scene's resolution and scale when rendering all required scenes (compositor nodes, scene strips and so on). I am not currently convinced this is a good idea to change.

What I'm also not sure about is why a difference in resolution causes cropping. How can rendering with fewer pixels cause cropping?

The idea behind this is to support a slightly more advanced Ken Burns effect. This is currently possible with images, but not over a composition. So I would like to have a means to preserve the original quality of a composition and transform it as one image.
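As an illustration of why the original resolution matters for a Ken Burns style move, a small sketch (a hypothetical helper, not Blender code) of interpolating a crop window over the full-resolution source:

```python
def ken_burns_window(t, start, end):
    """Linearly interpolate a crop rectangle (x, y, w, h) between two
    framings for t in [0, 1]. Sampling the full-resolution source at
    each frame keeps the zoomed-in view sharp, which is why the scene
    strip needs to be rendered at its original resolution."""
    return tuple(a + (b - a) * t for a, b in zip(start, end))

# Zoom from the whole 1000x1000 frame into a centered 500x500 detail:
print(ken_burns_window(0.0, (0, 0, 1000, 1000), (250, 250, 500, 500)))
print(ken_burns_window(0.5, (0, 0, 1000, 1000), (250, 250, 500, 500)))
```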

As an alternative workflow I was thinking of using a meta strip, but I'm not sure that would be very feasible. The biggest problem is that meta strips share the same render size as the scene, so a meta strip would have to have its own. This is fine as long as you don't move strips into or out of such a meta strip. Perhaps this could be handled with a toolkit that either preserves the visual size or just moves strips and preserves their properties. In any case, this would need some design first.
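The "preserve visual size" idea could be sketched as a per-axis scale compensation (a hypothetical helper, not Blender code):

```python
def compensate_scale(strip_scale, old_size, new_size):
    """When a strip moves into a container with a different render size,
    multiply its transform scale so its on-screen size is unchanged.
    Sizes are (width, height) tuples; returns the new per-axis scale."""
    sx, sy = strip_scale
    ow, oh = old_size
    nw, nh = new_size
    return (sx * ow / nw, sy * oh / nh)

# Moving a strip from an 800x500 scene into a 1000x1000 meta strip:
print(compensate_scale((1.0, 1.0), (800, 500), (1000, 1000)))  # (0.8, 0.5)
```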

Technically, rendering a scene strip and rendering a meta strip are almost identical processes - both use do_render_strip_seqbase(). Using meta strips could be more practical for users, but with scene strips this is a rather simple change, so I started with that.

> Not sure what the "host" scene is. The render pipeline is currently supposed to use the active scene's resolution and scale when rendering all required scenes (compositor nodes, scene strips and so on).

By "host scene" I meant the scene in which the scene strip is used. This principle of using the active scene's resolution seems to be violated for 3D scene strip renders; not sure if that was an accident or intentional.

> What I'm also not sure about is why a difference in resolution causes cropping. How can rendering with fewer pixels cause cropping?

Because in the VSE, images currently don't scale with the render size. And even if that weren't the case, if you change the aspect ratio, you must crop the image.
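A sketch of the aspect-ratio point (a hypothetical helper, not Blender code): filling a canvas of a different aspect ratio inevitably discards part of the source.

```python
def crop_to_fill(src, dst):
    """Region (x, y, w, h) of the source kept when filling a canvas of a
    different aspect ratio (centered, no letterboxing)."""
    sw, sh = src
    dw, dh = dst
    scale = max(dw / sw, dh / sh)    # "fill": scale up to cover the canvas
    kw, kh = dw / scale, dh / scale  # source pixels that remain visible
    return ((sw - kw) / 2, (sh - kh) / 2, kw, kh)

# Filling an 800x500 frame with a square 1000x1000 image crops the top
# and bottom: only a 1000x625 horizontal band of the source is visible.
print(crop_to_fill((1000, 1000), (800, 500)))
```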

> I am not currently convinced this is a good idea to change.

I think this should at least be an option. For the VSE, I am pretty sure most users would expect this to be the default behavior.

The expected behavior depends on the specific workflow; I can't say whether it matches most users' expectations or not, since it depends.
Having it as an option could work, but it needs to be done as a more generic change to the render pipeline. Such things need to fit a common mental model across all steps of the rendering pipeline (rendering, render layers, compositor, sequencer).

> The expected behavior depends on the specific workflow; I can't say whether it matches most users' expectations or not, since it depends.
> Having it as an option could work, but it needs to be done as a more generic change to the render pipeline. Such things need to fit a common mental model across all steps of the rendering pipeline (rendering, render layers, compositor, sequencer).

I can see how this depends on the workflow. Technically, even in the compositor, if you want to overlay a fly on an elephant it's unreasonable to render both at the same resolution. So I will discuss this and add an option for both if it makes sense.