Previously, we didn't encode audio up to and including the current frame. This led to the last video frame's worth of audio not being encoded.
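As a rough illustration of the idea (hypothetical names and a stub encoder, not the actual writeffmpeg.c code): flushing audio only up to the *start* of the current frame leaves the final frame's chunk unencoded, while targeting the *end* of the current frame includes it.

```c
/* Illustrative sketch only (hypothetical names, not Blender's writeffmpeg.c). */
#include <stdio.h>

typedef struct {
  double audio_time;     /* seconds of audio already sent to the encoder */
  int samples_per_chunk; /* samples encoded per call */
  int sample_rate;
} ExampleContext;

/* Stub standing in for the real per-chunk encode call. */
static int encode_one_audio_chunk(ExampleContext *ctx)
{
  printf("encoding audio chunk starting at %.4f s\n", ctx->audio_time);
  return 1;
}

/* Encode audio chunks until audio_time catches up with target_time. */
static void write_audio_upto(ExampleContext *ctx, double target_time)
{
  while (ctx->audio_time < target_time) {
    if (!encode_one_audio_chunk(ctx)) {
      break;
    }
    ctx->audio_time += (double)ctx->samples_per_chunk / (double)ctx->sample_rate;
  }
}

int main(void)
{
  ExampleContext ctx = {0.0, 1024, 48000};
  const double fps = 24.0;
  const int start_frame = 1, end_frame = 24; /* one second of video */

  for (int frame = start_frame; frame <= end_frame; frame++) {
    /* Old behavior: flush only up to the start of the current frame;
     * the audio covering the final frame is never encoded:
     *   write_audio_upto(&ctx, (frame - start_frame) / fps);
     * Patched behavior: flush up to the end of the current frame. */
    write_audio_upto(&ctx, (frame - start_frame + 1) / fps);
  }
  printf("audio encoded: %.4f s (video: %.4f s)\n",
         ctx.audio_time, (end_frame - start_frame + 1) / fps);
  return 0;
}
```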
Event Timeline
Either I can't reproduce the issue or this is not correct. I can see some delay in the audio stream in master, but this patch doesn't resolve it.
When I apply D11916, the delay is gone.
But if I apply this patch on top of D11916, I get an audio stream that is longer than the video stream by one "Blender frame".
To reproduce the issue, the audio has to end on the last video frame. In your example the audio ends in the middle of the video, so you won't be able to see the issue.
Then you should be able to see that the last frame of audio data was not written (because it wasn't sent to the encoder).
I completely missed the logic of write_audio_frames: it effectively ignores a timestamp of 0, which happens to be the first frame, and that is the reason for the + 1.
So this patch seems to be OK.
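To make that concrete, here is a minimal, hypothetical illustration (not Blender's actual code) of why a flush target of 0 writes nothing, and why offsetting the target by one frame fixes it:

```c
/* With a loop guarded by `audio_time < target`, the very first call
 * (target = (frame - start_frame) / fps = 0) exits immediately, so the
 * target is shifted one frame forward -- the "+ 1". */
#include <stdio.h>

static int chunks_written(double target, double chunk_len)
{
  int n = 0;
  for (double t = 0.0; t < target; t += chunk_len) {
    n++; /* one encoded audio chunk per iteration */
  }
  return n;
}

int main(void)
{
  const double fps = 24.0, chunk_len = 1024.0 / 48000.0;
  /* First frame, target computed without the +1 offset: nothing is encoded. */
  printf("target 0/fps     -> %d chunks\n", chunks_written(0.0 / fps, chunk_len));
  /* With the +1 offset, the first frame's worth of audio is encoded. */
  printf("target (0+1)/fps -> %d chunks\n", chunks_written(1.0 / fps, chunk_len));
  return 0;
}
```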
I was focusing on the portion of audio missing in renders, which still happens, as well as the sound strip in the VSE being too long. I haven't found the cause of the missing samples, though.
I have extracted the .wav from a file where samples are missing, added the missing samples manually, and found that there may also be issues with the waveform drawing not representing the duration very accurately. Also, any small overshoot will cause the sound strip to extend to the next frame, so since this is annoying in the VSE, we may want to consider trimming excess frames from the sound strip when the difference is small.
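A minimal sketch of the rounding behavior being described, with assumed names and an assumed tolerance value (not actual VSE code): converting a sample count to a strip length in frames with a plain ceil() lets a tiny overshoot add a whole frame, while a small tolerance would trim it.

```c
/* Hypothetical illustration of strip-length rounding; link with -lm. */
#include <math.h>
#include <stdio.h>

static int strip_length_frames(long long samples, int sample_rate, double fps, double tolerance)
{
  double frames = (double)samples / (double)sample_rate * fps;
  double rounded = round(frames);
  /* If the overshoot past a whole frame is tiny, snap down instead of ceiling up. */
  if (frames - rounded > 0.0 && frames - rounded <= tolerance) {
    return (int)rounded;
  }
  return (int)ceil(frames);
}

int main(void)
{
  const int rate = 48000;
  const double fps = 24.0;
  /* Exactly 24 frames of audio plus a 10-sample overshoot. */
  long long samples = 48000 + 10;
  printf("ceil only      : %d frames\n", (int)ceil((double)samples / rate * fps));
  printf("with tolerance : %d frames\n", strip_length_frames(samples, rate, fps, 0.01));
  return 0;
}
```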

