intern/cycles/render/bake.cpp
bool BakeManager::bake(Device *device, DeviceScene *dscene, Scene *scene, Progress& progress, ShaderEvalType shader_type, BakeData *bake_data, float result[])
{
	/* ... */
	DeviceTask task(DeviceTask::SHADER);
	task.shader_input = d_input.device_pointer;
	task.shader_output = d_output.device_pointer;
	task.shader_eval_type = shader_type;
	task.shader_x = 0;
	task.shader_w = d_output.size();
	task.num_samples = is_aa_pass(shader_type)? scene->integrator->aa_samples: 1;
	task.get_cancel = function_bind(&Progress::get_cancel, &progress);
	task.update_progress_sample = function_bind(&Progress::increment_sample_update, &progress);

	this->num_parts = device->get_split_task_count(task);
	this->num_samples = task.num_samples;

	device->task_add(task);
	device->task_wait();

	if(progress.get_cancel()) {
		device->mem_free(d_input);
		device->mem_free(d_output);
		m_is_baking = false;

		return false;
	}
	/* ... */
}
dfelinto: My main concern with this approach (and one I would like to hear other devs' opinions on) is whether we are adding too much overhead by dispatching a task job per sample.

An alternative is to treat the individual tasks as tiles, so that when each one increments a sample, it is counted against the overall total (num_samples * num_tasks). That said, I think the current approach is better for the future, when we want to send a preview of the baking result back to Blender: for baking it seems better to preview the entire image per sample rather than per part (though, again, we may need a threshold so we don't update after every single sample).

Thoughts?
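To make the trade-off concrete, here is a minimal sketch of the two progress-accounting schemes described above. The `ProgressCounter`, `bake_per_sample`, and `bake_per_part` names are illustrative only, not Cycles API; the real code uses `Progress::increment_sample_update` and the device's split task count.

```cpp
#include <cassert>

/* Hypothetical progress counter illustrating the two accounting schemes.
 * Not the Cycles Progress class -- just the arithmetic under discussion. */
struct ProgressCounter {
	int total;     /* total units of work expected */
	int done = 0;

	explicit ProgressCounter(int total) : total(total) {}

	void increment() { ++done; }

	/* Fraction complete, in [0, 1]. */
	double fraction() const { return double(done) / double(total); }
};

/* Scheme A (current patch): progress advances once per sample across the
 * whole image, so the preview would refresh per full-image sample. */
double bake_per_sample(int num_samples)
{
	ProgressCounter progress(num_samples);
	for (int s = 0; s < num_samples; s++)
		progress.increment();  /* one update per sample */
	return progress.fraction();
}

/* Scheme B (alternative): split tasks treated as tiles; each part increments
 * per sample, counted against num_samples * num_parts overall. */
double bake_per_part(int num_samples, int num_parts)
{
	ProgressCounter progress(num_samples * num_parts);
	for (int p = 0; p < num_parts; p++)
		for (int s = 0; s < num_samples; s++)
			progress.increment();  /* finer-grained, but per-part updates */
	return progress.fraction();
}
```

Both schemes report 100% at the end; the difference is the granularity and ordering of the intermediate updates, which is what matters for preview refresh and dispatch overhead.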