This commit moves the luminance calculation for the background CDF inside the kernel, which allows the buffer to store floats instead of float3s.
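To illustrate the idea, here is a minimal sketch of reducing RGB radiance to a scalar luminance inside the kernel so the CDF buffer only needs one float per entry. The `float3` struct, the function names, and the Rec. 709 weights are illustrative assumptions; Cycles has its own vector types and conversion helpers.

```cpp
#include <cassert>
#include <cmath>

// Illustrative stand-in for the Cycles float3 type.
struct float3 { float x, y, z; };

// Rec. 709 luma weights for linear RGB (hypothetical helper, not the
// actual Cycles function).
static float luminance(const float3 &c)
{
    return 0.2126f * c.x + 0.7152f * c.y + 0.0722f * c.z;
}

// Before this patch the buffer held one float3 per entry; after it, the
// kernel reduces each sample to a single float before writing, so the
// background CDF buffer shrinks to a third of its size.
static void eval_background_row(const float3 *radiance, float *cdf_buffer, int n)
{
    for (int i = 0; i < n; i++)
        cdf_buffer[i] = luminance(radiance[i]);
}
```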
Diff Detail
- Repository
- rB Blender
- Branch
- arcpatch-D1377
Event Timeline
| intern/cycles/kernel/kernel_bake.h | |
|---|---|
| 442 | The background code below uses only float, not float4, so one of the two must involve a cast. IMO, passing the pointer as float* and then casting it to float4* here, as is done for the path tracing kernels' buffer, is cleaner than passing float4* and casting to float* below. |
| 444 | Yes, I'll change it. |
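A minimal sketch of the variant discussed in the inline comment above: the kernel entry point takes a plain float* buffer, the displacement path reinterprets it as float4*, and the background path writes single floats directly. The names `kernel_shader_evaluate`, `SHADER_EVAL_DISPLACE`, and `SHADER_EVAL_BACKGROUND` are illustrative assumptions, not the actual Cycles API, and the written values are placeholders.

```cpp
#include <cassert>

struct float4 { float x, y, z, w; };

// Hypothetical output modes, mirroring the two cases in the discussion.
enum ShaderEvalType { SHADER_EVAL_DISPLACE, SHADER_EVAL_BACKGROUND };

static void kernel_shader_evaluate(ShaderEvalType type, float *output, int i)
{
    if (type == SHADER_EVAL_DISPLACE) {
        // One float4 result per sample: cast the buffer once, with a comment
        // at the cast site. In real code this relies on the buffer being
        // allocated with a layout compatible with float4.
        float4 *output4 = reinterpret_cast<float4 *>(output);
        output4[i] = float4{1.0f, 2.0f, 3.0f, 0.0f};
    }
    else {
        // Background: a single scalar luminance per sample, no cast needed.
        output[i] = 0.5f;
    }
}
```

This is the "error prone" part the reviewer objects to below: nothing in the pointer's type says which layout a given call site expects.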
Still not convinced such casting is a good thing to have; it's a rather error-prone approach. It's also a bit arbitrary to return float4 for displacement but luma for the background, for what is rather a niche application.
Not sure whether this optimization gives any improvement in the actual memory peak, but I still think it's better to have more explicit control over what a shader evaluation is expected to return. A couple of ideas:
- Have separate kernel_shader_evaluate() and kernel_shader_evaluate_luma() functions
- Have an argument defining the type of the output, like SHADER_EVAL_RGBA or SHADER_EVAL_LUMA
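The second idea above could look roughly like this: the caller declares the output type, and the host can size the buffer from that declaration instead of relying on pointer casts. `SHADER_EVAL_RGBA` and `SHADER_EVAL_LUMA` are taken from the suggestion; the helper function is a hypothetical illustration.

```cpp
#include <cassert>
#include <cstddef>

// Explicit output types, as suggested above.
enum ShaderEvalOutput { SHADER_EVAL_RGBA, SHADER_EVAL_LUMA };

// Hypothetical helper: the number of floats each evaluation writes per
// sample, so buffer sizing and kernel writes agree by construction.
static size_t shader_eval_output_floats(ShaderEvalOutput out)
{
    return (out == SHADER_EVAL_RGBA) ? 4 : 1;
}
```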
Okay, to revive this diff, I've redone the patch on top of the current master.
Actually, keeping the pointer as float4 and only casting it for the background case does indeed seem to be the cleaner solution; this way, the one place where the code might be confusing can be clearly commented.
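A minimal sketch of that arrangement, under the assumption of a tightly packed four-float struct: the buffer pointer stays float4* everywhere, and only the background/luma path casts it down to float*, with a comment at the cast site. The names are illustrative, not the actual Cycles code.

```cpp
#include <cassert>

struct float4 { float x, y, z, w; };

// Background/luma case: the CDF needs one float per entry, so reinterpret
// the float4 buffer as a tightly packed float array here. This is the one
// deliberately "confusing" spot, kept behind a clear comment.
static void write_background_luma(float4 *output, int i, float luma)
{
    float *output_f = reinterpret_cast<float *>(output);
    output_f[i] = luma;
}
```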
Functionality-wise, nothing changes, just like in the old diff.
Still feel bad about such pointer magic. Anything against the following plan:
- Apply D1702
- Revert changes outside of light.cpp from this patch
- Apply this patch
IMO this will keep things really clear, avoiding possible issues when following the code, and perhaps some compiler-optimization craziness.
Rebased onto current master; the patch now uses the changes from D1702 to pass the average value to the host side.