The main rationale for this change is to be able to fine-tune the step
size for different volumes, especially gridded ones (e.g. smoke
simulations), where you would typically want the step size to be
no lower than the voxel size.
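The clamping idea above can be sketched as follows. This is a minimal illustration, not the actual Cycles implementation; the function name and signature are hypothetical.

```python
def effective_step_size(user_step_size, voxel_size=None):
    """Return the step size actually used for ray marching.

    For gridded volumes (e.g. smoke simulations), stepping finer than one
    voxel wastes samples without revealing extra detail, so the requested
    step size is clamped to the voxel size. Procedural volumes have no
    natural lower bound, so the user value is used as-is.
    """
    if voxel_size is None:  # procedural volume: no grid to clamp against
        return user_step_size
    return max(user_step_size, voxel_size)
```

For example, requesting a step of 0.1 on a grid with voxel size 0.5 would still march in steps of 0.5.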
Diff Detail
- Repository: rB Blender
- Branch: cycles_volume_stepsize
Event Timeline
Here is a test render, step size: left: 0.1, right: 1.0 (same smoke simulation for both):
intern/cycles/kernel/kernel_volume.h:111
Here I'm not too sure how to compute the shader index, so I tried to revert the operation happening in ShaderManager::get_shader_id.
Not really sure how useful such an extra know actually is for artists, at least not in such an implementation. The way you did it prevents a quick switch from preview settings to final ones. So even if we want something like that, it should be more like a step divider or multiplier which you can tweak on a more granular level. It also doesn't really feel like a per-shader option, more like a per-object one, because you can have the same procedural volume used by objects with different scales, for which you might want different step sizes.
Could you rephrase that please :) Not sure what you mean.
The way you did it prevents a quick switch from preview settings to final ones.
Call me an extremist but I don't think you would want a separation here, you'll still need to do some test final render to tweak the step size. Maybe it's possible to add an override setting for this in case of preview render.
So even if we want something like that, it should be more like a step divider or multiplier which you can tweak on a more granular level.
I would be against that, it feels less straightforward than simply plugging in the right value. (But I'm open-minded ;) )
It also doesn't really feel like a per-shader option, more like a per-object one, because you can have the same procedural volume used by objects with different scales, for which you might want different step sizes.
I'm a bit confused, are you sure it's not the other way around, where this patch is per shader and you would prefer a per-object setting? Because if different objects use the same procedural volume, they'll use the same shader and the step size will thus be the same. Although I admit you have a good point there, I don't think it is an issue: if you have a bigger object, you'll want the step size to remain the same, so more samples are taken from the volume and you still have good quality (provided that the step size is not relative to the object's size).
In any case some feedback from users can indeed be desirable here.
Could you rephrase that please :) Not sure what you mean.
It's a "knob" instead of "know", sorry :)
Call me an extremist but I don't think you would want a separation here, you'll still need to do some test final render to tweak the step size. Maybe it's possible to add an override setting for this in case of preview render.
Sure you do want a separation. You never ever send a final shot to an external farm without first sending it with really quick preview settings, to be sure all the textures, caches, and simulations were packed correctly and you don't waste hours of render time on hundreds of machines because something went wrong during the file transfer.
I'm a bit confused, are you sure it's not the other way around, where this patch is per shader and you would prefer a per-object setting? Because if different objects use the same procedural volume, they'll use the same shader and the step size will thus be the same. Although I admit you have a good point there, I don't think it is an issue: if you have a bigger object, you'll want the step size to remain the same, so more samples are taken from the volume and you still have good quality (provided that the step size is not relative to the object's size).
Imagine for example a procedural cloud shader, which you apply on an object with a small scale (to get some cute little clouds) and also on a much bigger object which would represent, say, a storm cloud. In the context of optimization you would want small steps for the small cloud and bigger steps for the bigger cloud.
But again, these are all theoretical possibilities. I'm really skeptical of such an extra setting which in practice only makes things more complicated to set up rather than achieving real benefits. What would really be convincing here is getting a real shot from, say, Cosmos Laundromat and showing how much faster volume rendering becomes without loss of visual detail.
I want to improve the volume step sizes, and something like this patch is part of that, but it's more complicated.
- For voxel grids we should automatically use a step size equal to the voxel size. There can be a user-controlled multiplier, to increase it for speed, or decrease it if volume displacement adds detail.
- For procedurally textured volumes we don't have an estimate like this though.
- For world volumes, exponential stepping like Eevee's would be nice (increasing the step size as you get further away).
- Step sizes could be in object space to handle scaling (so affected by the object transform).
- Some software has a separate step size multiplier for volume shadows, as an optimization.
- If we ever add unbiased volume ray marching the step size would be determined automatically, though in practice this still needs some bounds or initial guess I think.
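The exponential stepping mentioned for world volumes can be sketched like this. A hypothetical illustration only (names and growth factor are made up, and the real Eevee/Cycles schemes may differ): the step size is multiplied by a constant factor after each step, so distant regions of an unbounded world volume cost fewer samples.

```python
def exponential_steps(start, end, base_step, growth=1.1):
    """Return sample positions along a ray from `start` to `end`,
    multiplying the step size by `growth` after each step."""
    positions = []
    t, step = start, base_step
    while t < end:
        positions.append(t)
        t += step
        step *= growth  # steps get coarser further from the camera
    return positions
```

With `growth=2.0` the samples thin out quickly: `exponential_steps(0.0, 10.0, 1.0, growth=2.0)` yields positions at 0, 1, 3, and 7, rather than the ten samples uniform stepping would take.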
Currently I'm thinking of this:
- Scale the step size by the object transform.
- Add a per scene volume step rate, as a global multiplier. This could be split into Render and Viewport like the dicing rate?
- Add a per material volume step rate to shaders, that is a multiplier on the automatically determined step size. Per shader because it would be together with the other volume quality settings (linear/cubic), and with object scaling it's more reusable between different objects.
- Automatically determine the step size for objects:
- For voxel grids this is the voxel size.
- For procedurally textured volumes this could be 1/10th of the bounding box size.
- Add a per world volume step size, and maybe in the future something extra for exponential sampling.
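Putting the proposed controls together, the per-object step size would be computed roughly as below. This is a sketch under the assumptions in the list above; the names are illustrative and are not actual Cycles settings.

```python
def object_step_size(voxel_size, bbox_size, object_scale,
                     material_step_rate=1.0, scene_step_rate=1.0):
    """Combine the automatic base step with the proposed multipliers.

    - base step: voxel size for grids, else 1/10th of the bounding box
      for procedurally textured volumes
    - object_scale: step sizes are in object space, so the object
      transform scales them
    - material_step_rate / scene_step_rate: the proposed per-material
      and per-scene multipliers on the automatic step size
    """
    if voxel_size is not None:
        base = voxel_size
    else:
        base = bbox_size / 10.0
    return base * object_scale * material_step_rate * scene_step_rate
```

So a smoke grid with 0.5-sized voxels on an object scaled by 2 would march in steps of 1.0 by default, and the artist only touches the rates to trade quality for speed.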
So from a user point of view, you basically get an extra setting in materials and the world. A downside is that we'd be breaking backwards compatibility; we could add extra settings to switch between absolute/relative step sizes, but that's kind of annoying and still doesn't handle the case of linked objects or materials.
Reviving old patch to be committed along with the new volume object.
Automatic step size estimation is important, especially as imported volume
objects may be at arbitrary scales, so a fixed global step size may either
lose too much detail or render very slowly.
