**I've created a video that thoroughly explains what this issue is about:**
https://www.youtube.com/watch?v=Y6PD6Zsh1bo
**Below is a written version.**
I feel that the performance of Blender's Video Sequence Editor (VSE) is badly handicapped, for two reasons:
1. It uses the CPU to composite frames, not the GPU. It should be perfectly possible to replace the CPU processing with GLSL and delegate the costly image compositing to the GPU, which is dedicated hardware that deals with this kind of work blazingly fast. Even a Gaussian Blur or more complicated effects could be done with GLSL shaders, making them very fast and responsive to work with. Blender's Game Engine does this in realtime.
2. It uses only one core of the CPU to render frames. If the VSE could utilize all the threads on my CPU, I probably wouldn't need to use proxies. (The sketch below illustrates both points.)
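To make these two points concrete, here is a minimal Python/NumPy sketch (not Blender's actual code) of the standard "over" compositing operator. The same per-pixel arithmetic maps directly onto a GLSL fragment shader, and on the CPU it splits trivially into independent horizontal bands, one per core. The frame size and worker count here are made up.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def alpha_over(bands):
    """Blend one horizontal band: fg over bg, premultiplied alpha."""
    fg, bg = bands
    a = fg[..., 3:4]            # foreground alpha channel
    return fg + bg * (1.0 - a)  # the standard "over" operator

def composite(fg, bg, workers=8):
    """Split the frame into bands and blend them on separate cores."""
    pairs = list(zip(np.array_split(fg, workers),
                     np.array_split(bg, workers)))
    with ProcessPoolExecutor(workers) as pool:
        return np.concatenate(list(pool.map(alpha_over, pairs)))

if __name__ == "__main__":
    fg = np.random.rand(1080, 1920, 4).astype(np.float32)
    bg = np.random.rand(1080, 1920, 4).astype(np.float32)
    print(composite(fg, bg).shape)  # (1080, 1920, 4)
```

Every pixel is computed independently of every other pixel, which is exactly the kind of work that both GPUs and thread pools excel at.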
I'm using OBS (obsproject.com) for screencasting and video capture. It has some basic compositing features (transparency masks, color correction, even chroma keying), all done with GLSL, because otherwise it wouldn't be possible to process hundreds of frames every second, encode them to H.264 in realtime, and still leave plenty of CPU time for other things.
I've heard the VSE won't allow using the Compositor inside it for performance reasons, but the performance is very limited anyway. And if the Compositor could use GLSL, maybe realtime compositing in the VSE could work (at 25% frame size, for example, to make it faster). I don't suggest CUDA, because CUDA is often hard to get working consistently.
If frame rendering in the VSE could go through OpenGL (or Vulkan?), it would also make rendering the final video orders of magnitude faster.
Right now it can be very, very slow.
Example: for my recent unfa vlog videos, I'm capturing 3840x1080 60 FPS footage with OBS and editing/compositing it down to a 1920x1080 60 FPS final render in the VSE.
I'm running a Ryzen 7 1700 (8-core, 16-thread) CPU and an Nvidia GTX 1060 GPU. The resources are there, but the VSE isn't using them.
Rendering an 80-minute video in a single Blender instance took about 12 hours on this machine.
Rendering a 10-minute Full HD 60 FPS video took about 90 minutes (36,000 frames in 5,400 seconds, or roughly 150 ms per frame, under 7 fps). Meanwhile my total CPU load rests at 20%, my GPU load at 30%, and Blender isn't contributing much to either.
I found there's a script called Pulverize that runs multiple Blender instances in parallel to speed up rendering from the sequencer. The problem is that it uses a lot of memory, because every Blender instance works completely independently. With 16 GB of RAM I can't use more than 6 instances for complex projects, or I'll run out of memory and bog down my system. At least I can get near-100% CPU utilisation that way, but it still takes 4 hours to render an hour-long video, and it requires 6x the disk I/O to read the input video. Hopefully the system cache helps, but with all my memory in use, probably not much. This is far, far away in a distant galaxy from optimal.
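For reference, this is roughly what such a script boils down to: a sketch using Blender's documented -b/-s/-e/-a command-line flags, with a placeholder file name, frame range, and instance count. Each instance loads the whole project on its own, which is exactly why the memory cost multiplies.

```python
import subprocess

BLEND = "project.blend"       # placeholder project file
START, END, N = 1, 288000, 6  # 80 min at 60 FPS, split over 6 instances

step = (END - START + 1) // N
procs = []
for i in range(N):
    s = START + i * step
    e = END if i == N - 1 else s + step - 1
    # Render frames s..e of the sequence in a background Blender instance.
    procs.append(subprocess.Popen(
        ["blender", "-b", BLEND, "-s", str(s), "-e", str(e), "-a"]))
for p in procs:
    p.wait()  # the per-instance outputs still need concatenating afterwards
```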
Blender's VSE is very capable and I love it, but as I try to raise my video quality I can see that it needs a major overhaul to start using a modern PC's full computational power.
In my opinion this is currently the biggest obstacle to using the VSE in serious production; it can handle such work as it is now, but with bad hiccups.
(The second obstacle is the lack of an audio mixer, but that's a different issue.)
I once taught the Blender VSE to a friend who worked in a film studio. He was disappointed that Blender couldn't play Full HD video fluidly on a 32-core, 64-thread Xeon-based video rendering/transcoding server. The VSE performs very slowly even on extremely capable hardware, and that's a big shame.
The only option for editing high-quality video in Blender is to use proxies, and those have their problems too (sometimes I simply can't get them to work).
Proxy generation also isn't multithreaded, so it takes a long time to finish for big video files. And because the proxy files sometimes get lost afterwards, I often need to rebuild the same proxy footage again and again to get all of it done. For multi-gigabyte input files this is a real pain.
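A workaround I can sketch in the meantime is to build the 25% proxies outside Blender with ffmpeg, one process per clip, so every core stays busy. The BL_proxy/<clip>/proxy_25.avi layout below is my assumption about where Blender looks for its custom proxies, so verify it against your Blender version; the clip names are placeholders.

```python
import os
import subprocess
from concurrent.futures import ProcessPoolExecutor

def build_proxy(clip):
    # Assumed proxy location: BL_proxy/<clip file name>/proxy_25.avi
    out_dir = os.path.join("BL_proxy", os.path.basename(clip))
    os.makedirs(out_dir, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-y", "-i", clip,
         "-vf", "scale=iw/4:ih/4",      # 25% frame size
         "-c:v", "mjpeg", "-q:v", "5",  # intra-only, cheap to seek
         "-an",                         # proxies don't need audio
         os.path.join(out_dir, "proxy_25.avi")],
        check=True)

if __name__ == "__main__":
    clips = ["capture_01.mkv", "capture_02.mkv"]  # placeholder inputs
    with ProcessPoolExecutor() as pool:
        list(pool.map(build_proxy, clips))
```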
Another (smaller) problem is audio waveform previews: after any undo, they all disappear and are regenerated from scratch. On my laptop this bogs down Blender so much that I'm unable to work; every time I press Ctrl+Z, I have to wait 2 minutes before I can continue. On my desktop it's not as big a deal.
Still, these waveform previews are regenerated every time I reload the project: needless disk I/O and CPU work. If this data were cached on disk, I could start working sooner after loading the project.
Ardour, an open-source digital audio workstation, handles this with a disk cache: no waveform preview is calculated twice when it doesn't need to be. It's efficient and fast, and it can draw hundreds of audio regions with waveforms on screen without much trouble. Blender, by contrast, constantly drops the cached waveforms and regenerates them when I scroll or zoom the VSE, needlessly slowing down the work.
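The core of such a peak cache is small. Below is a minimal sketch with made-up file naming and block size: min/max pairs are computed once per block of samples, saved next to the audio file, and memory-mapped on every later load instead of being recomputed.

```python
import os
import numpy as np

BLOCK = 512  # samples summarized per drawn waveform column

def peaks(audio_path, samples):
    """Return (n_blocks, 2) min/max peaks for `samples`, cached on disk."""
    cache = audio_path + ".peaks.npy"
    if (os.path.exists(cache)
            and os.path.getmtime(cache) >= os.path.getmtime(audio_path)):
        return np.load(cache, mmap_mode="r")  # cache hit: no recompute
    n = len(samples) // BLOCK * BLOCK
    blocks = samples[:n].reshape(-1, BLOCK)
    pk = np.stack([blocks.min(axis=1), blocks.max(axis=1)], axis=1)
    np.save(cache, pk)  # written once; reused on every reload, zoom, or undo
    return pk
```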
**Summing up all the ideas:**
# Multithreaded CPU utilisation
# GLSL frame rendering
# Multithreaded Proxy encoding
# Fixing Proxy generation problems
# Multithreaded waveform preview generation
# Waveform preview disk cache