
Cycles: improved Beckmann sampling using precomputed data
ClosedPublic

Authored by Brecht Van Lommel (brecht) on Jun 21 2014, 12:35 AM.

Details

Summary

It turns out that the new Beckmann sampling function doesn't work well with quasi-Monte Carlo sampling, mainly near normal incidence, where it can be worse than the previous sampler. In the new sampler the random number pattern gets split in two, warped and overlapped, which hurts the stratification.
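The split-and-warp problem can be seen in a toy 1D sketch (illustrative only, not the actual Cycles mapping): warping the two halves of a stratified pattern back onto the unit interval makes them land on overlapping points, so half of the stratification is wasted.

```python
# Toy 1D illustration (not the actual Cycles code): split a stratified
# pattern at 0.5 and warp each half back onto [0, 1).
N = 8
samples = [(i + 0.5) / N for i in range(N)]            # stratified in [0,1)
low  = sorted(2 * u     for u in samples if u <  0.5)  # first half, warped
high = sorted(2 * u - 1 for u in samples if u >= 0.5)  # second half, warped
print(low)   # [0.125, 0.375, 0.625, 0.875]
print(high)  # the exact same points: the two halves overlap completely
```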

Now we use a precomputed table instead, which is much better behaved.

GGX does not seem to benefit from using a precomputed table.
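As a rough sketch of the table-based approach (not the actual Cycles implementation; the roughness `ALPHA`, resolution `N`, and the use of the analytic inverse CDF at normal incidence are assumptions for illustration), sampling through a monotone precomputed inverse-CDF lookup maps a stratified input pattern to a stratified output, which is what the split-and-warp sampler lost:

```python
import math

ALPHA = 0.5  # hypothetical roughness
N = 256      # hypothetical table resolution

def build_table(alpha=ALPHA, n=N):
    # Precompute a 1D inverse-CDF table: entry i stores the microfacet
    # slope tan(theta) whose CDF value is i/(n-1). For the classic
    # isotropic Beckmann NDF the inverse CDF is analytic:
    #   tan^2(theta) = -alpha^2 * ln(1 - u)
    table = []
    for i in range(n):
        u = min(i / (n - 1), 1.0 - 1e-6)  # avoid log(0) at the last entry
        table.append(math.sqrt(-alpha * alpha * math.log(1.0 - u)))
    return table

def sample_slope(table, u):
    # Linear interpolation into the table; a monotone lookup like this
    # keeps a stratified input pattern stratified in the output.
    x = u * (len(table) - 1)
    i = min(int(x), len(table) - 2)
    t = x - i
    return table[i] * (1.0 - t) + table[i + 1] * t
```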

The disadvantage is that this table adds 1MB of memory usage and 0.03s of startup time to every render (measured on my quad-core CPU).
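For context, the 1MB figure is consistent with, for example, a table of 256 roughness slices of 1024 entries each, stored as 32-bit floats (these dimensions are an assumption for illustration; the patch's actual layout may differ):

```python
entries_per_slice = 1024   # hypothetical resolution per roughness value
roughness_slices = 256     # hypothetical number of roughness steps
bytes_per_entry = 4        # 32-bit float

total = entries_per_slice * roughness_slices * bytes_per_entry
print(total, total / (1024 * 1024))  # 1048576 bytes = 1.0 MB
```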

Diff Detail

Branch
beckmann-table

Event Timeline

[Comparison renders: 2.71 / master / precomputed tables]

To be clear, the example above is an extreme case, with roughness = 1 and a flat background light; that's where the loss of stratification is most visible.

Looks very good, and the listed disadvantages seem small to me.

Two questions:

  • I'm not sure about the combined size of all the precomputed tables or the time needed to compute them, but should we add functionality to preserve them between render sessions?
  • Is it worth trying to put the pointers to the tables in constant memory on the GPU?

Tested with some spheres and a monkey, and my Generator file. Render time was slightly better and the results a bit less noisy. No issues found so far.

@Martijn Berger (juicyfruit): I think that should just be part of the Persistent Data option that we already have (but which gets only used for Image Textures atm). Can be done later. I think the 1MB and precompute time is fine.

@Martijn Berger (juicyfruit): for CUDA, the pointer to the table in global memory is in constant memory, and "KernelData", which contains the offset into the table, is in constant memory as well. We could add a separate pointer for this Beckmann table, but then we run into texture-count limits on older GPUs.