
Tests: add OpenGL UI drawing tests.
ClosedPublic

Authored by Brecht Van Lommel (brecht) on Feb 14 2018, 10:20 PM.

Details

Summary

This reuses the Cycles regression test code to also work for OpenGL UI drawing.
We launch Blender with a set of .blend files, take a screenshot of each, compare
it with a reference screenshot, and generate an HTML report showing the failed
tests and their differences.

For Cycles we keep small reference renders to compare to in svn, but for OpenGL
UI drawing this seems impractical. The results are quite platform dependent, and
even if they weren't, maintaining the reference images could take too much time.

Still, I think this is useful for developers who work specifically on OpenGL
drawing to use during development. The steps to set it up are:

  • Set WITH_OPENGL_DRAW_TESTS=ON in cmake.
  • Run BLENDER_TEST_UPDATE=1 ctest -R opengl_draw
  • .. make code changes ..
  • Run ctest -R opengl_draw and open build_dir/tests/opengl_draw/report.html
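The comparison at the heart of these tests can be sketched roughly as follows. This is a minimal illustration, not the actual harness: `screenshot.py` and the function names are hypothetical, and the real code compares PNG screenshots rather than raw pixel lists.

```python
import subprocess


def screenshot_blend(blender_bin, blend_file, output_png):
    """Launch Blender on a .blend file and save a screenshot.
    'screenshot.py' is a hypothetical helper, not part of this patch."""
    subprocess.run(
        [blender_bin, blend_file, "--python", "screenshot.py", "--", output_png],
        check=True,
    )


def images_match(pixels_a, pixels_b, tolerance=0):
    """Return True if two equally sized pixel sequences match within a
    per-channel tolerance; this is just the pass/fail decision, the real
    tests also generate diff images for the report."""
    if len(pixels_a) != len(pixels_b):
        return False
    return all(abs(a - b) <= tolerance for a, b in zip(pixels_a, pixels_b))
```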

This renames test environment variables from CYCLESTEST_* to BLENDER_TEST_*,
which then work for both Cycles and OpenGL drawing tests. WITH_OPENGL_TESTS
is now WITH_OPENGL_RENDER_TESTS.

This should also replace the OpenGL regression testing script that probably
no one except me ever used:
https://wiki.blender.org/index.php/Dev:2.8/Source/OpenGL#Automated_Testing

Diff Detail

Repository
rB Blender

Event Timeline

This is what the report looks like (comparing master and blender2.8):

Generally LGTM.

One thing I'd suggest is to have each comparison be a separate test, otherwise re-running tests in the case of failure always needs to run each one - and we can't take advantage of ctest's ability to run multiple tests at once.

This might be tricky when it comes to generating the report, though I suspect it could be made to work, albeit not easily.


All other suggestions are picky things; for small scripts they're not so important, but I added them nevertheless.

tests/python/opengl_draw_tests.py
28 ↗(On Diff #10024)

This should be except ImportError, so as not to hide some other unrelated error.
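A minimal example of the suggested pattern, catching ImportError only, so that a genuine failure inside the imported module still propagates (the module name here is hypothetical):

```python
# Catch only ImportError: if the optional module exists but is broken
# (say, it raises a NameError at import time), that error should surface
# rather than be mistaken for "module not installed". A bare "except:"
# would swallow it.
try:
    import some_optional_module  # hypothetical optional dependency
except ImportError:
    some_optional_module = None
```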

37 ↗(On Diff #10024)

*picky* - prefer tuples when not mutating.
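For instance (with a hypothetical constant), a tuple makes the read-only intent explicit and fails fast on accidental mutation:

```python
# Tuples signal that the sequence is never mutated.
BLEND_EXTENSIONS = (".blend",)  # hypothetical constant

# Accidental in-place mutation raises TypeError immediately:
mutation_blocked = False
try:
    BLEND_EXTENSIONS[0] = ".png"
except TypeError:
    mutation_blocked = True
```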

tests/python/render_report.py
88 ↗(On Diff #10024)

*picky* best to use __slots__; the main advantage is that typos in assignments don't go unnoticed.
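A small illustration of the __slots__ suggestion, with hypothetical attribute names:

```python
class TestResult:
    # With __slots__, assigning to a misspelled attribute raises
    # AttributeError instead of silently creating a new attribute.
    __slots__ = ("name", "error")  # hypothetical attributes

    def __init__(self, name):
        self.name = name
        self.error = None


result = TestResult("opengl_draw")
try:
    result.eror = "failed"  # typo caught at assignment time
    typo_caught = False
except AttributeError:
    typo_caught = True
```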

200 ↗(On Diff #10024)

*picky* I realize this is just moved over from previous code, but I would use named args here:

"""blah blah {name} ... blah blah {message} """.format(name=self.name, message=message)
tests/python/render_report.py
1 ↗(On Diff #10024)

We could place this in a modules/ subdir, so tests/python contains only executable tests.

Brecht Van Lommel (brecht) marked 5 inline comments as done.

Address comments.

One thing I'd suggest is to have each comparison be a separate test, otherwise re-running tests in the case of failure always needs to run each one - and we can't take advantage of ctest's ability to run multiple tests at once.

This might be tricky when it comes to generating the report, though I suspect it could be made to work, albeit not easily.

The report is actually updated incrementally, you can run one test and it will still contain all the other test results from previous runs. Failed tests show at the top.

Each test corresponds to one folder, and if those don't contain too many files running the test is quick. We could add one test per .blend, but Cycles currently has 450 of those. With a few dozen test results for categories like "sss" or "displacement" it's easier to see at a glance where the problem is, without scrolling through long console output.

One thing I forgot to mention: running these tests pops up a Blender window, so you can't do anything else. It would be nice if we could run them in the background, or perhaps place the Blender window below others without it getting focus. But I'll leave that for the future, when I'm sufficiently annoyed by it.

And of course this code could eventually be reused more. It should be relatively straightforward to add Eevee tests, perhaps starting from the Cycles .blends with all the Eevee effects turned on with a Python script.

Even the compositor or modifiers could use it. Or operators: imagine, for example, recording a sculpt stroke operation and replaying it, then comparing a render of the result. That kind of thing is quite difficult to test any other way.

Use .blends from all lib/tests directories, because why not. If we ever store
reference screenshots in svn then it might make sense to limit the tests, but
for now it seems pretty convenient for regression testing blender2.8.

One thing I'd suggest is to have each comparison be a separate test, otherwise re-running tests in the case of failure always needs to run each one - and we can't take advantage of ctest's ability to run multiple tests at once.

This might be tricky when it comes to generating the report, though I suspect it could be made to work, albeit not easily.

The report is actually updated incrementally, you can run one test and it will still contain all the other test results from previous runs. Failed tests show at the top.

Each test corresponds to one folder, and if those don't contain too many files running the test is quick. We could add one test per .blend, but Cycles currently has 450 of those. With a few dozen test results for categories like "sss" or "displacement" it's easier to see at a glance where the problem is, without scrolling through long console output.

Ah, I see why that's a bit tedious. OTOH I could make the argument that this is only 450 lines in CMakeLists.txt, each calling a macro, for a list that doesn't change often and can easily be updated as needed.
With the advantage of being able to run individual tests in parallel, and removing the possibility that people accidentally run different tests on the same Blender version *.
Just making the case for using ctest to split individual tests; +1 to apply this patch, since I think larger changes can be made in git.

* Not entirely true since the same tests could be modified between SVN revisions.

This revision is now accepted and ready to land. Feb 15 2018, 11:34 PM
tests/python/CMakeLists.txt
572 ↗(On Diff #10027)

*picky* - could use a variable for the COMMAND instead of two different test calls.

tests/python/CMakeLists.txt
572 ↗(On Diff #10027)

See D2367#55668, that doesn't work for some reason.

I think P615 would be a cleaner solution anyway, but I should test that on Windows before committing.

This revision was automatically updated to reflect the committed changes.