
Benchmark: Add eevee viewport playback tests.
ClosedPublic

Authored by Jeroen Bakker (jbakker) on Jun 27 2022, 12:27 PM.

Details

Summary

This commit adds the ability to run Eevee viewport playback performance tests.

Tests should be placed in lib/benchmarks/eevee/*/*.blend. rBL62962 (Add test files for viewport playback performance.) added the
initial test files. See https://wiki.blender.org/wiki/Tools/Tests/Performance for how
to set it up.

To record playback performance, the test starts viewport playback and adds
a post frame change handler.

This handler performs the following steps:

  • Ensure the viewport is set to rendered mode.
  • Wait for shaders to be compiled. This uses the bpy.app.is_job_running function when available (v3.3) to wait for shader compilation to finish; when not available, it waits for one minute.
  • Draw several warmup frames.
  • Record for 10 seconds, tracking the number of frames drawn and performance counters.
  • When done, print the result to the console. The results are extracted after the benchmark has run.
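The steps above amount to a small state machine driven by the frame change handler. A minimal pure-Python sketch of that structure (the class, phase names, and `shaders_ready` stand-in are hypothetical; the real handler drives bpy state instead):

```python
import time

# Hypothetical phase names; the real handler manipulates bpy state instead.
WAIT_SHADERS, WARMUP, RECORD, DONE = range(4)

class PlaybackBenchmark:
    """Sketch of the frame-change-handler state machine (not the actual test code)."""

    def __init__(self, warmup_frames=10, record_seconds=10.0):
        self.phase = WAIT_SHADERS
        self.warmup_left = warmup_frames
        self.record_seconds = record_seconds
        self.frames = 0
        self.start = None

    def shaders_ready(self):
        # Stand-in for bpy.app.is_job_running("SHADER_COMPILATION").
        return True

    def on_frame(self):
        """Called once per drawn frame (frame_change_post in Blender)."""
        if self.phase == WAIT_SHADERS:
            if self.shaders_ready():
                self.phase = WARMUP
        elif self.phase == WARMUP:
            self.warmup_left -= 1
            if self.warmup_left == 0:
                self.phase = RECORD
                self.start = time.perf_counter()
        elif self.phase == RECORD:
            self.frames += 1
            if time.perf_counter() - self.start >= self.record_seconds:
                self.phase = DONE
                # The real test prints the result for the harness to parse.
                print(f"frames: {self.frames}")
```

Each viewport redraw advances the machine one step, which is why the whole measurement can live inside a single handler callback.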

Example report

                                         master               v3.0                 v3.1                 v3.2                 
T88219                                   0.0860s              0.0744s              0.0744s              0.0851s              
blender290-fox                           1.3056s              0.8744s              0.7994s              1.2809s

Diff Detail

Repository
rB Blender
Branch
temp-T99136-benchmark-viewport-playback
Build Status
Buildable 22708
Build 22708: arc lint + arc unit

Event Timeline

Jeroen Bakker (jbakker) requested review of this revision. Jun 27 2022, 12:27 PM
Jeroen Bakker (jbakker) created this revision.
Jeroen Bakker (jbakker) retitled this revision from "WIP: add eevee benchmark module." to "Benchmark: Add eevee viewport playback tests.". Jun 27 2022, 12:44 PM
Jeroen Bakker (jbakker) edited the summary of this revision. (Show Details)
Jeroen Bakker (jbakker) edited the summary of this revision. (Show Details)
Brecht Van Lommel (brecht) requested changes to this revision. (Edited) Jun 27 2022, 1:30 PM

Like the animation.py test, I think it should do the entire frame range, and then do that multiple times until the limit has been exceeded. The reason being that some parts of the animation may be faster/slower than others, and different machines will then end up rendering different frames within the same time limit, making the result not comparable. The test files will then need to have a relatively short frame range.
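The suggestion above — time whole passes over the frame range rather than a fixed wall-clock window — can be sketched in plain Python. The function name and the `draw_frame` callback are hypothetical stand-ins for the actual playback:

```python
import time

def benchmark_full_range(draw_frame, frame_start, frame_end, min_seconds=10.0):
    """Time complete frame-range iterations so every machine renders the
    same set of frames; only whole passes count toward the average."""
    iterations = 0
    start = time.perf_counter()
    while True:
        for frame in range(frame_start, frame_end + 1):
            draw_frame(frame)
        iterations += 1
        if time.perf_counter() - start >= min_seconds:
            break
    elapsed = time.perf_counter() - start
    num_frames = iterations * (frame_end + 1 - frame_start)
    return elapsed / num_frames  # average seconds per frame
```

Because slow machines simply complete fewer whole iterations, the per-frame average stays comparable across hardware, which is the point of the review comment.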

tests/performance/tests/eevee.py
108

See tests/animation.py for a simple way to return a dictionary from _run, rather than parsing command line output.

114

Raise an exception on failure like the Cycles test, so the benchmark can record this test as failed.

This revision now requires changes to proceed. Jun 27 2022, 1:30 PM
Jeroen Bakker (jbakker) edited the summary of this revision. (Show Details) Jun 27 2022, 3:01 PM
Jeroen Bakker (jbakker) updated this revision to Diff 52932. (Edited) Jun 27 2022, 3:03 PM
Jeroen Bakker (jbakker) marked 2 inline comments as done.
  • Play all frames for 3 iterations.

I updated the scenes locally, but didn't push to keep the number of changes limited.

tests/performance/tests/eevee.py
108

I looked at it, and to my understanding it could not be used for drawing performance. The drawing only happens after the _run callback has executed, making the results incorrect, as we want the measurement to include drawing.

My solution was to use handlers to overcome this issue; _run returns early, and hence I print a custom result line and parse the log in EeveeTest.run.
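That parsing step could look roughly like this; the `RESULT:` line format and function name here are hypothetical, not the format the actual handler prints:

```python
import re

# Hypothetical marker format; the line printed by the real handler may differ.
RESULT_RE = re.compile(
    r"^RESULT: time=(?P<time>[0-9.]+) fps=(?P<fps>[0-9.]+)$", re.MULTILINE
)

def parse_result(log: str) -> dict:
    """Extract the benchmark result dictionary from captured console output."""
    match = RESULT_RE.search(log)
    if match is None:
        # Fail loudly, as suggested for the Cycles test, so the benchmark
        # harness can record this test as failed.
        raise Exception("Benchmark result not found in output")
    return {"time": float(match.group("time")), "fps": float(match.group("fps"))}
```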

The approach seems good, although I cannot comment on the test code itself.

I don't understand why this implementation needs to be in a frame change handler. Perhaps the only thing it needs to do is stop playback on the end frame; other than that, it could all be in _run without the state machine and global variables? In that case it would also be easy to return the dictionary from _run.

Brecht Van Lommel (brecht) requested changes to this revision. Jun 27 2022, 4:39 PM
This revision now requires changes to proceed. Jun 27 2022, 4:39 PM

I will try to see if I can get it to work in just a for loop, where we expect that frame_set will not only tag the area for redraw, but also redraw it.

That's possible too, but not required for the changes I'm suggesting.

In case it wasn't clear, what I'm suggesting is to run bpy.ops.screen.animation_play() multiple times, but make it stop at the end frame with the frame change handler.

Run it once for warmup, then wait for shader compilation with time.sleep, then run it a few times for benchmarking as needed.

Thanks for the clarification. Will try it.

I tried several implementations, but there are two that work:

  • My original patch.
  • Perform multiple env.run_in_blender calls with a smaller state machine. This could trigger shader compilation multiple times on certain platforms.

As I understand it, what you're suggesting is:

import bpy
import time

def frame_change_handler_stop_at_frame_end(scene):
    print(" - Frame change handler stop at frame end invoked")
    if scene.frame_current == scene.frame_end:
        bpy.ops.screen.animation_cancel()

def _run(args):
    scene = bpy.context.scene
    bpy.app.handlers.frame_change_post.append(frame_change_handler_stop_at_frame_end)

    print(" - Perform dry-run")
    scene.frame_set(scene.frame_start)
    bpy.ops.screen.animation_play()
    while scene.frame_current != scene.frame_end:
        time.sleep(1)

This will only invoke the frame change handler for the scene.frame_set(scene.frame_start) call. It isn't called during playback, because
the animation playback only starts after the Python script goes out of scope and we are back in the main loop of Blender.
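This behavior can be reproduced outside Blender with any cooperative single-threaded loop: handlers fire only between tasks, so a task that busy-waits on a value only a handler can change never sees it change. A toy illustration (nothing here is Blender API):

```python
# Toy single-threaded "main loop": handlers fire only between tasks,
# just as Blender's frame change handlers fire only in its main loop.
frame = {"current": 0}
observed = []

def frame_handler():
    frame["current"] += 1

def main_loop(tasks, ticks=5):
    for task in tasks:
        task()               # a script/task runs to completion first...
    for _ in range(ticks):
        frame_handler()      # ...only then does "playback" advance frames

def blocking_task():
    # Equivalent of `while scene.frame_current != scene.frame_end: sleep(1)`:
    # the handler cannot fire while we spin, so the frame never advances here.
    spins = 0
    while frame["current"] < 3 and spins < 10:
        spins += 1
    observed.append(frame["current"])  # still 0 when the wait gives up

main_loop([blocking_task])
```

The task observes frame 0 no matter how long it spins; the frames only advance after it returns, which is exactly why the blocking `while` loops above cannot work inside `_run`.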

I also tried another implementation without a frame change handler, but that one also doesn't work, for the same reason.

def _run2(args):
    """
    This implementation isn't working, as drawing isn't triggered during
    time.sleep. time.sleep doesn't return control to the main loop of
    Blender, which will not perform any drawing or updating until this
    function is finished.
    """
    import bpy
    import time

    # Module-level constants in the actual test file; the values follow the
    # summary (one minute fallback, 3 playback iterations).
    SHADER_FALLBACK_SECONDS = 60
    RECORD_PLAYBACK_ITER = 3

    screen = bpy.context.window_manager.windows[0].screen
    scene = bpy.context.scene

    print(" - Setup scene.")

    # Set playback mode to draw all frames.
    scene.sync_mode = 'NONE'

    # Set rendered shading mode on all 3D viewports.
    for area in screen.areas:
        if area.type == 'VIEW_3D':
            space = area.spaces[0]
            space.shading.type = 'RENDERED'
            space.overlay.show_overlays = False

    # Wait for shader compilation to finish.
    print(" - Wait for shader compilation.")
    if hasattr(bpy.app, 'is_job_running'):
        # Sleep one frame to start the draw manager and trigger shader compilation.
        time.sleep(0)
        while bpy.app.is_job_running("SHADER_COMPILATION"):
            time.sleep(1)
    else:
        time.sleep(SHADER_FALLBACK_SECONDS)

    # Dry-run one cycle.
    print(" - Perform dry-run")
    scene.frame_set(scene.frame_start)
    bpy.ops.screen.animation_play()
    while scene.frame_current != scene.frame_end:
        time.sleep(0)
    bpy.ops.screen.animation_cancel()

    print(" - Start playback")
    current_iter = 0
    scene.frame_set(scene.frame_start)
    start_time = time.perf_counter()
    bpy.ops.screen.animation_play()
    while current_iter < RECORD_PLAYBACK_ITER:
        if scene.frame_current == scene.frame_end:
            current_iter += 1
            print(f" - Playback iteration {current_iter}")
        time.sleep(0)
    end_time = time.perf_counter()
    # animation_play toggles playback, so calling it again stops it.
    bpy.ops.screen.animation_play()

    print(" - Playback stopped")

    num_frames = RECORD_PLAYBACK_ITER * (scene.frame_end + 1 - scene.frame_start)
    frame_time = (end_time - start_time) / num_frames
    fps = 1.0 / frame_time
    return {"time": frame_time, "fps": fps}

The ways I know of in Python to give control back to Blender's main loop are:

  • Use a frame change handler with a state machine (first approach).
  • Use a Python thread; _run would leave scope early, and EeveeTest.run would still parse the console output.
  • Use Python async; _run would leave scope early, and EeveeTest.run would still parse the console output.

All three of them are just technical variations of the same thing. It could be that you're aware of a different way to solve this issue, or I am missing the essence of what you're suggesting.
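The thread variation mentioned above can be sketched without bpy; the function name and the `RESULT:` line are hypothetical, and the measured work is a stand-in for viewport playback:

```python
import threading
import time

def _run_early_return(results):
    """Sketch of the thread variation: the function returns immediately
    while a worker measures in the background and reports a line the
    harness could later parse from the captured log."""
    def worker():
        start = time.perf_counter()
        time.sleep(0.01)      # stand-in for viewport playback
        frames = 10           # stand-in for the counted redraws
        frame_time = (time.perf_counter() - start) / frames
        results.append(f"RESULT: time={frame_time:.4f}")
    thread = threading.Thread(target=worker)
    thread.start()
    return thread             # the harness joins and parses later

results = []
_run_early_return(results).join()
```

In Blender the main loop would keep drawing while the worker waits, which is the whole point of returning from _run early.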

Jeroen Bakker (jbakker) requested review of this revision. Jun 28 2022, 8:54 AM

I didn't understand before that the problem was that the modal operator can't be executed in a blocking way. It seems fine as is then.

This revision is now accepted and ready to land. Jun 28 2022, 6:51 PM