
[WIP] VSE: Allow playback of multiple framerates
Needs Review · Public

Authored by Richard Antalik (ISS) on Mar 24 2021, 5:35 PM.

Details

Summary

This patch implements the possibility to edit videos with varying frame rates.

Currently, a user must use a speed effect to change the speed of a movie strip whose frame rate doesn't match the scene frame rate. Editing such a strip has some issues, and ideally you wouldn't need to use a speed effect at all.

In this patch, strip playback rate is stored as an absolute value in frames per second. Strip length is calculated from content length (in frames), the stored playback rate, and the scene frame rate. This method was chosen over a relative playback rate updated on frame-rate change, because it needs no update function. With an update function, recalculating the relative playback rate is susceptible to drift when changing the frame rate (e.g. dragging a slider with a custom frame rate) because of precision issues, as was the case in D4067.
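A minimal sketch of that calculation (the helper name is hypothetical, not the actual patch code):

```python
def strip_length_in_scene_frames(content_frames, playback_rate_fps, scene_fps):
    """Derive timeline length from content length and the stored absolute
    playback rate. Nothing frame-rate-dependent is stored, so no update
    function is needed when the scene frame rate changes."""
    duration_seconds = content_frames / playback_rate_fps
    return round(duration_seconds * scene_fps)

# A 30 fps clip with 300 frames occupies 240 frames of a 24 fps timeline:
print(strip_length_in_scene_frames(300, 30.0, 24.0))  # 240
```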

This works well until the scene frame rate is changed after strips have been edited.

There are two main reasons why things break after changing the frame rate:

  • Strip length and position on the timeline are not changed.
  • Strip animation is frame-based, so keyframes remain on the same frame while the time axis stretches or shrinks.

The problem with strip length on the timeline and cut positions could be solved in two ways:

  • Recalculate these positions when the scene frame rate is changed. This approach may lose precision, as mentioned earlier, but it would be a relatively simple change.
  • Store these positions in a unit independent of the scene frame rate: seconds. No update function is needed, but a lot of drawing and set/get functions would have to be changed.

Storing edit points in seconds may cause one issue that is a very rare edge case, but it can happen: start/end points must be editable at any frame rate. If a user edits the endpoint of one strip so that it touches another strip at a low frame rate, then switches to a higher frame rate, a gap or strip collision may appear, because the untouched strip was edited at a different frame rate and its endpoint position was "rounded" at the lower rate. So this method may be error-prone as well. A solution is to make strips "aware of each other" and adjust the edit points of all strips on every user edit, so they can never collide.
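The edge case can be illustrated with exact rational arithmetic (the snapping rule is an assumption about how editing would behave, not the patch's code):

```python
from fractions import Fraction
from math import floor

# Assumed behaviour: edit points live in seconds, but edits always snap
# to whole frames of the *current* scene frame rate.
FPS_LOW, FPS_HIGH = Fraction(24), Fraction(30)

# Strip B's start was snapped to frame 10 while editing at 24 fps:
b_start = Fraction(10) / FPS_LOW                  # stored in seconds

# Viewed at 30 fps, that stored time falls between frames 12 and 13:
print(b_start * FPS_HIGH)                         # 25/2

# Dragging strip A's end to "touch" strip B at 30 fps must snap to a
# whole 30 fps frame, so a small gap (or overlap) appears:
a_end = Fraction(floor(b_start * FPS_HIGH)) / FPS_HIGH
print(b_start - a_end)                            # 1/60 s gap
```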

The problem with animation keyframe positions is different: their position is rigidly frame-based, and there is no way to move them other than running an update function on frame-rate change.
Scaling time has much more intricate issues that will doubtfully ever be resolved. For example, an F-curve with a generated sine wave does not scale with the scene frame rate.

Therefore, to proceed further, I think it is necessary to better outline the goals.


In summary:
If the goal is just to support adding movie strips with different frame rates, the patch is available for functional review.
If the goal is the above plus changing the scene frame rate effortlessly, then depending on the method agreed on, this can be a relatively quick (but imprecise) or quite involved change. In either case, I definitely wouldn't attempt it after BCON1.

Diff Detail

Event Timeline

Richard Antalik (ISS) requested review of this revision. Mar 24 2021, 5:35 PM
Richard Antalik (ISS) created this revision.
Richard Antalik (ISS) edited the summary of this revision. (Show Details) Mar 24 2021, 6:06 PM
  • Initialize playback_rate to 0 if the scene rate should be used. Also missed the sound playback rate.
  • Initialize playback rate in SEQ_sequence_alloc, so no strip can be missed.
Richard Antalik (ISS) edited the summary of this revision. (Show Details) Mar 31 2021, 7:25 AM
Richard Antalik (ISS) edited the summary of this revision. (Show Details) Mar 31 2021, 7:31 AM
Richard Antalik (ISS) edited the summary of this revision. (Show Details) Mar 31 2021, 7:33 AM

I'm not really sure how this is intended to be used in practice.

Imagine I've got 2 strips, placed linearly, one of them is 24 fps footage, the other one is 30 fps:

[ 24 fps footage.mkv ][ 30 fps footage.mkv ]

If my scene frame rate is set to 24 fps, then the first footage will play nicely, but the second one will skip every 4th frame? And if the scene frame rate is set to 30 fps, then the second footage will play fine, but the first one will "duplicate" every 3rd frame? And let's not even attempt to do the math when our scene frame rate is set to 29.99! ;)

I'm also not sure why there needs to be a per-strip user-editable frame rate?

Overall, to me this seems to be a problem which deserves a more thorough investigation and better focus, possibly involving frame interpolation. But it will always be a tradeoff between uneven frame drops and blurry footage caused by interpolation. Neither of those is desirable for video editing, so I'm not even sure this is something we should prioritize.

> I'm not really sure how this is intended to be used in practice.
>
> Imagine I've got 2 strips, placed linearly, one of them is 24 fps footage, the other one is 30 fps:
>
> [ 24 fps footage.mkv ][ 30 fps footage.mkv ]
>
> If my scene frame rate is set to 24 fps, then the first footage will play nicely, but the second one will skip every 4th frame? And if the scene frame rate is set to 30 fps, then the second footage will play fine, but the first one will "duplicate" every 3rd frame? And let's not even attempt to do the math when our scene frame rate is set to 29.99! ;)

Yes, this is basically how it should work.
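A tiny sketch of that mapping (the floor-based lookup is my guess at the behaviour, not the patch's actual code):

```python
def source_frame(scene_frame, source_fps, scene_fps):
    """Source frame shown on a given scene frame, assuming a simple
    floor mapping from scene time to source time."""
    return int(scene_frame * source_fps / scene_fps)

# 30 fps footage in a 24 fps scene: some source frames are skipped.
print([source_frame(n, 30, 24) for n in range(8)])  # [0, 1, 2, 3, 5, 6, 7, 8]

# 24 fps footage in a 30 fps scene: some source frames are repeated.
print([source_frame(n, 24, 30) for n in range(8)])  # [0, 0, 1, 2, 3, 4, 4, 5]
```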

> I'm also not sure why there needs to be a per-strip user-editable frame rate?

It's mostly for debugging now; ideally I wouldn't want this to be editable. There could be a use case for image sequence strips, where you could set a desired frame rate, but I'm not sure having a directly editable field is a good idea.

> Overall, to me this seems to be a problem which deserves a more thorough investigation and better focus, possibly involving frame interpolation. But it will always be a tradeoff between uneven frame drops and blurry footage caused by interpolation. Neither of those is desirable for video editing, so I'm not even sure this is something we should prioritize.

I won't pick a side here, though there are definitely users who want to edit footage from different sources effortlessly and probably don't care too much about frame dropping or periodic frame duplication.
This is something I have always wanted to resolve but did not have planned immediately, so I wouldn't mind either working on this or postponing it. A big part of the ongoing refactoring effort is aimed at unifying logic so changes like this can be done easily.

This might be something users want, but the uneven frame drop is not a good solution here. I am not really aware of people for whom such a solution would be acceptable.

I think we had a frame interpolation patch for the speed effect. Not sure if it was committed or not. But is it something that could be handled in a similar manner (the speed effect interpolation and "frame rate" interpolation)? Sure, it will be more work this way, but then users will have something that works somewhat better.

An alternative approach would be to utilize ffmpeg's frame rate conversion.

So on import, if a mismatch between the source and the project fps is found, an offer is made: the user can either change the project frame rate to match the source (if it is the first movie file imported), or render an intermediate (high-quality) file with the corrected fps (including corrected audio, which is also a problem in Blender) and use that.

This functionality could also be used for variable frame rate source files, which make editing impossible if not corrected in an intermediate file.
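For reference, such an intermediate transcode could look roughly like this (filenames and the ProRes codec choice are placeholders; ffmpeg's output `-r` option drops or duplicates frames to reach the target rate):

```shell
# Illustrative only: re-render 30 fps footage as a 24 fps intermediate file.
ffmpeg -i source_30fps.mkv -r 24 -c:v prores_ks -c:a copy intermediate_24fps.mov
```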

I will have to clarify the purpose of this patch, but @Francesco Siddi (fsiddi) said that he doesn't want to "sanitize" switching scene frame rates.

> This might be something users want, but the uneven frame drop is not a good solution here. I am not really aware of people for whom such a solution would be acceptable.
>
> I think we had a frame interpolation patch for the speed effect. Not sure if it was committed or not. But is it something that could be handled in a similar manner (the speed effect interpolation and "frame rate" interpolation)? Sure, it will be more work this way, but then users will have something that works somewhat better.

Frame interpolation works only when video is effectively slowed down (frames are duplicated).
I am not quite sure whether using interpolation is a good idea. Imagine importing a 24 fps strip into a 60 fps project and then using a speed effect with a 6x playback rate to casually speed up some otherwise boring footage.
First, the footage is slowed down 2.5x, so only every 5th frame is not interpolated. Then you take every 6th frame, which would result in 2 Hz artifacting between interpolated and non-interpolated frames.

So if interpolation is to be used, it should only be used by the last "node" responsible for "retiming". But even then, I don't think this interpolation is desirable for general-purpose retiming; it was implemented for artistic purposes.

As for frame dropping, if I understand you correctly, what you mean is:
For a 25 FPS video that is to be played at 24 FPS, ideally you would play 12 frames, then drop 1 frame and play another 12 frames? In other words, make sure that dropped frames are evenly distributed?
Or do you mean that this even distribution of dropped frames means that only every n-th frame is dropped, so that x-th frames (x = FPS) are always in sync?
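The question can be made concrete with a small sketch (the floor mapping is an assumption, and `used_source_frames` is a made-up helper):

```python
def used_source_frames(source_fps, scene_fps):
    """Source frames actually displayed during one second of scene time,
    assuming a simple floor mapping from scene frame to source frame."""
    return sorted({n * source_fps // scene_fps for n in range(scene_fps)})

# 25 fps played at 24 fps: exactly one source frame per second is dropped,
# and with this mapping it is always the last frame of each second.
dropped = [f for f in range(25) if f not in used_source_frames(25, 24)]
print(dropped)  # [24]
```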

> An alternative approach would be to utilize ffmpeg's frame rate conversion.
>
> So on import, if a mismatch between the source and the project fps is found, an offer is made: the user can either change the project frame rate to match the source (if it is the first movie file imported), or render an intermediate (high-quality) file with the corrected fps (including corrected audio, which is also a problem in Blender) and use that.
>
> This functionality could also be used for variable frame rate source files, which make editing impossible if not corrected in an intermediate file.

In https://trac.ffmpeg.org/wiki/ChangingFrameRate they say:

> When the frame rate is changed, ffmpeg will drop or duplicate frames as necessary to achieve the targeted output frame rate.

So that approach may be a bit different as far as distribution goes, but not much different overall. FFmpeg has to do some more math, because the source frame rate may be variable.

Just a quick (and dirty) rebase.