intern/cycles/render/bake.cpp
bool BakeManager::bake(Device *device, DeviceScene *dscene, Scene *scene, Progress& progress, ShaderEvalType shader_type, BakeData *bake_data, float result[])
 	task.shader_input = d_input.device_pointer;
 	task.shader_output = d_output.device_pointer;
 	task.shader_eval_type = shader_type;
 	task.shader_x = 0;
 	task.shader_w = d_output.size();
 	task.num_samples = is_aa_pass(shader_type)? scene->integrator->aa_samples: 1;
 	task.get_cancel = function_bind(&Progress::get_cancel, &progress);
+	for(size_t i = 0; i < task.num_samples; i++) {
+		task.sample = i;
 		device->task_add(task);
 		device->task_wait();
+		/* update progress bar */
+		progress.increment_sample();
+		progress.set_update();
+		if(progress.get_cancel())
+			break;
+	}
 	if(progress.get_cancel()) {
 		device->mem_free(d_input);
 		device->mem_free(d_output);
 		m_is_baking = false;
 		return false;
 	}
 	device->mem_copy_from(d_output, 0, 1, d_output.size(), sizeof(float4));
dfelinto: My main concern with this approach (and one I would like to hear from other devs about) is whether we are adding too much overhead by dispatching a task job per sample.

An alternative is to treat the individual tasks as tiles, so that when they increment the sample it is counted against the overall total (num_samples * num tasks). That said, I think this approach is better down the road, when we want to preview the baking result back in Blender: for baking I believe it is better to show a preview of the entire image per sample rather than per part (though again, we may need a threshold so we do not update on every sample).

Thoughts?