
eevee: add FXAA post-processing pass
Abandoned · Public

Authored by Corey Richardson (cmr) on Jun 20 2017, 1:30 AM.

Details

Summary

This uses the FXAA "PC Quality" shader taken from NVIDIA's GraphicsSamples repository. The default parameters in this patch are the highest quality FXAA has to offer, which on my GPU (R9 380) is quite fast! The settings recommended by NVIDIA average 0.028ms per frame, with this histogram of timings:

The highest settings, which are the default with this patch, average 0.026ms per frame, with this histogram:

Someone from the Blender Discord ran a quick "benchmark" (a Suzanne with a high-contrast checkerboard texture) on an i5-6200U iGPU and got a max pass time of 0.42ms per frame, min 0.014ms, average 0.031ms, with this histogram:

All x-axes are in milliseconds. So overall, the performance impact seems quite small on modern-ish GPUs; I don't think we necessarily need to expose the performance-vs-quality tuning options.
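The min/max/average numbers quoted above are just simple aggregates over sampled per-frame pass times. As a minimal sketch (the helper name and sample values are illustrative, not from the patch), the aggregation amounts to:

```c
#include <assert.h>
#include <stddef.h>

/* Aggregate GPU pass timings (in milliseconds) into min/max/mean,
 * as in the informal benchmarks quoted above. Assumes n >= 1. */
typedef struct PassStats {
    double min, max, mean;
} PassStats;

PassStats pass_stats(const double *samples, size_t n)
{
    PassStats s = {samples[0], samples[0], 0.0};
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (samples[i] < s.min) s.min = samples[i];
        if (samples[i] > s.max) s.max = samples[i];
        sum += samples[i];
    }
    s.mean = sum / (double)n;
    return s;
}
```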

Here's a visual comparison with this scene (modified from https://www.blendswap.com/blends/view/83399).

No FXAA:

FXAA:

Diff Detail

Repository
rB Blender
Branch
blender2.8
Build Status
Buildable 704
Build 704: arc lint + arc unit

Event Timeline

I'm guessing I should remove fxaa_shader_option, which is just for debugging (and isn't that useful, unless for some reason you really want to see where FXAA is being applied?) and also maybe the tuning parameters, which shouldn't really need to be touched.

Some minor comments inline, I will test the patch later.

source/blender/makesrna/intern/rna_scene.c
6287

UI text should follow these rules:

  • First letters are always capitalized.
  • No unnecessary abbreviations.

So my suggestions would be:

  • "Sub-Pixel"
  • "Edge Threshold"
  • "Edge Threshold Minimum"

And remove the debug option indeed.

Ray Molenkamp (LazyDodo) added inline comments.
source/blender/draw/engines/eevee/eevee_effects.c
546

What's going on with the braces here?

source/blender/draw/engines/eevee/eevee_effects.c
546

They're blocks. They scope the local variables (e.g., grp).
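For anyone skimming: bare brace blocks like these open nested scopes, so a local such as grp can be declared fresh for each pass without name clashes or leaking past its use. A standalone sketch (names are illustrative, not the actual eevee code):

```c
#include <assert.h>

/* Each bare brace block opens a new scope, so a local like `grp`
 * can be re-declared per pass without colliding with the previous one. */
int scoped_blocks(void)
{
    int total = 0;
    {
        int grp = 1; /* setup for the first pass */
        total += grp;
    }
    {
        int grp = 2; /* same name, different variable: new scope */
        total += grp;
    }
    return total;
}
```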

source/blender/makesrna/intern/rna_scene.c
6287

I'm thinking of removing these tuning parameters too. By my measurements they don't meaningfully impact performance on modern-ish cards, and their exact effects are hard to understand for anyone who isn't familiar with the shader (or has the debug option ;)). What do you think?

One important thing: please start using Arcanist for your patch submissions. It makes testing way simpler. As it is, the patch does not apply cleanly.

source/blender/makesrna/intern/rna_scene.c
6287

I think this is a great idea. If performance is really an issue, the user can simply run without anti-aliasing altogether.

Just to clarify, when using arcanist we get:

  • Context for the patch, so we can look at the patched files and see the lines before/after
  • The exact revision the patch was created against. This way, when I do arc patch D2717, it always works flawlessly. Otherwise it tries to apply to the current branch as-is, which often generates conflicts.

Remove all of the old tuning parameters.

Remove accidental col.separator() from old FXAA tuning options

Remove unnecessary vestigial change to viewport_size location

Corey Richardson (cmr) marked 3 inline comments as done. Jun 22 2017, 5:17 AM

One question I have about this is the second clause of the BSD license in the FXAA shader. Is there someplace in the manual or whatever this can be placed?

One question I have about this is the second clause of the BSD license in the FXAA shader. Is there someplace in the manual or whatever this can be placed?

I am not sure we would have to place this in the manual. I am no lawyer, but I think we can do something similar to what we do with the other license agreements included in the distributed binaries.

One question I have about this is the second clause of the BSD license in the FXAA shader. Is there someplace in the manual or whatever this can be placed?

Don't worry about that. The first thing I did once I saw your patch was to reach out to NVIDIA. They cleared this file (though not the entire repository) as BSD and fully GPL2-compatible.

I finally managed to test this (thanks for using arcanist). I like the results. However I wonder:

  • Have you tried altering the position of FXAA in the effect stack? For example, I wonder if it would work better if you ran it before DoF.
  • Be sure to use high-frequency textures for the final tests and to determine which values to use as defaults for edge/edge threshold.

As for the settings, if we really need to expose them, maybe we could think of an enum with presets (low-medium-high).
We can even use DRW_state_is_image_render() as a test and use a higher quality when rendering (you may need to re-merge 2.8 in for that).
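The preset idea could look something like the following. DRW_state_is_image_render() is the real Blender call being referred to; the enum, struct, and threshold values here are purely hypothetical placeholders, not eevee's actual parameters:

```c
#include <assert.h>

/* Hypothetical low/medium/high preset scheme, bumped to the highest
 * quality for final (image) renders. The tunable values are placeholders. */
typedef enum eFXAAPreset {
    FXAA_PRESET_LOW,
    FXAA_PRESET_MEDIUM,
    FXAA_PRESET_HIGH,
} eFXAAPreset;

typedef struct FXAAParams {
    float subpix;
    float edge_threshold;
} FXAAParams;

FXAAParams fxaa_params_get(eFXAAPreset preset, int is_image_render)
{
    /* In Blender this flag would come from DRW_state_is_image_render(). */
    if (is_image_render) {
        preset = FXAA_PRESET_HIGH;
    }
    switch (preset) {
        case FXAA_PRESET_LOW:    return (FXAAParams){0.25f, 0.250f};
        case FXAA_PRESET_MEDIUM: return (FXAAParams){0.50f, 0.166f};
        default:                 return (FXAAParams){0.75f, 0.125f};
    }
}
```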

I finally managed to test this (thanks for using arcanist). I like the results. However I wonder:

  • Have you tried altering the position of FXAA in the effect stack? For example, I wonder if it would work better if you ran it before DoF.

Well, FXAA really needs to work on LDR, and DoF happens on HDR, so I'm not sure how to reconcile these :) I haven't tried to see how well FXAA interacts with other effects, though.
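The constraint being described is an ordering one: HDR effects like DoF run before tone mapping, and FXAA must run on the tone-mapped (LDR) result. A toy model of that stack-order check (stage names are illustrative, not eevee identifiers):

```c
#include <assert.h>
#include <string.h>

/* Toy model of the effect stack ordering implied above:
 * HDR effects (e.g. DoF) -> tone mapping -> LDR effects (FXAA).
 * Returns 1 if FXAA appears after tone mapping in the given stack. */
int fxaa_runs_on_ldr(const char *const *stack, int n)
{
    int tonemap = -1, fxaa = -1;
    for (int i = 0; i < n; i++) {
        if (strcmp(stack[i], "tonemap") == 0) tonemap = i;
        if (strcmp(stack[i], "fxaa") == 0) fxaa = i;
    }
    return tonemap >= 0 && fxaa > tonemap;
}
```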

  • Be sure to use high-frequency textures for the final tests and to determine which values to use as defaults for edge/edge threshold.

Do you have any good example scenes for Eevee? I can of course just play around with things, but I've seen fairly impressive scenes of yours in some YouTube videos.

As for the settings, if we really need to expose them, maybe we could think of an enum with presets (low-medium-high).
We can even use DRW_state_is_image_render() as a test and use a higher quality when rendering (you may need to re-merge 2.8 in for that).

I'm not sure. Hopefully MSAA or TAA would be the default for rendering, not crappy FXAA. The performance impact of even the highest setting on a Haswell Intel GPU is so small...

I'm not sure. Hopefully MSAA or TAA would be the default for rendering, not crappy FXAA. The performance impact of even the highest setting on a Haswell Intel GPU is so small...

So why make AA optional at all? Why don't we do it like tone-mapping and have it running all the time?

Also, @Clément Foucault (fclem), shouldn't we have this for all modes/engines, instead of only Eevee?

I'm not sure. Hopefully MSAA or TAA would be the default for rendering, not crappy FXAA. The performance impact of even the highest setting on a Haswell Intel GPU is so small...

So why make AA optional at all? Why don't we do it like tone-mapping and have it running all the time?

At least in the gaming world a lot of people really dislike FXAA's results (especially texture blurring), and some even prefer to play without any anti-aliasing rather than with FXAA...

I just tested it and it does clearly improve quality. Well done!

I think we can all agree that FXAA should go at the end of the pipeline and be done on the final image.

To do this you should move the shader file to source/blender/gpu/shaders/ and add it to gpu_shader.c.
Then do the transformation in draw_ofs_to_screen.

I agree on the quality settings not being important.

The setting should go inside the User Preference Panel under the System tab, alongside / replacing the multi-samples options.

Once this is done, I'm all fine with accepting it.

My only concern is about the offline renderers (Cycles). I can see FXAA conflicting with their rendering, but I don't think it will be that much of a problem.
And if it's too much of a problem for the user, one can simply disable it. This won't be a problem when using TAA.

This patch is too old and no longer applies to the current codebase.

We already implemented a TAA solution for EEVEE.