Implementation of the cryptomatte render passes in EEVEE. The implementation follows the design in {T81058} and is similar to, and compatible with, the implementation in Cycles. The original specification can be found at https://raw.githubusercontent.com/Psyop/Cryptomatte/master/specification/IDmattes_poster.pdf
Cryptomatte is a standard to efficiently create mattes for compositing. The renderer outputs the required render passes, which can then be used in a compositor to create masks for specified objects. Unlike the Material and Object Index passes, the objects to isolate are selected during compositing, and the mattes will be anti-aliased.
{F9049876}
**Hair (Particle + Object)**
{F9133001}
**Asset Layer**
{F9133615}
**Volumetric Transmittance**
{F9251210}
# Deviation
The Cryptomatte specification is based on a path tracing approach where samples and coverage are calculated at the same time. In EEVEE a sample is an exact match on top of a prepared depth buffer, so the coverage of an individual sample is always 1.0. By sampling multiple times, the number of surface hits determines the actual surface coverage of a matte per pixel. Afterwards the coverage is post-processed with the volumetric coverage that is extracted from the Volumetric Transmittance pass.
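A minimal sketch of this per-pixel coverage calculation (hypothetical names, not the actual EEVEE code): every sample either hits a surface with the matte's hash (coverage 1.0) or misses it, the hits are averaged over the sample count, and the result is modulated by the volumetric transmittance:

```c
#include <stddef.h>

/* Hypothetical sketch: anti-aliased coverage for one matte in one pixel.
 * Each EEVEE sample is a hard hit (1.0) or a miss (0.0); averaging over
 * all samples yields the surface coverage, which is then attenuated by
 * the volumetric coverage from the Volumetric Transmittance pass. */
static float matte_coverage(const float *sample_hashes, /* one hash per sample */
                            size_t num_samples,
                            float matte_hash,
                            float volumetric_transmittance)
{
  if (num_samples == 0) {
    return 0.0f;
  }
  size_t hits = 0;
  for (size_t i = 0; i < num_samples; i++) {
    if (sample_hashes[i] == matte_hash) {
      hits++;
    }
  }
  return ((float)hits / (float)num_samples) * volumetric_transmittance;
}
```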
# Implementation Overview
When drawing to the cryptomatte GPU buffer, the depth of the fragment is matched against the active depth buffer. The hash of each cryptomatte layer is stored in the GPU buffer; the exact layout depends on the active cryptomatte layers. After drawing each sample, the GPU buffer is downloaded to CPU RAM and integrated into the cryptomatte accumulation buffer.
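To illustrate the layout dependence, here is a small sketch (assumed structure, not the actual code) in which each enabled layer stores one float hash per pixel, so the per-pixel stride of the GPU buffer follows directly from the active layers:

```c
#include <stdbool.h>

/* Hypothetical sketch: each enabled cryptomatte layer (object, material,
 * asset) contributes one float hash per pixel, so the per-pixel stride
 * of the GPU buffer is the number of enabled layers. */
typedef struct CryptomatteLayers {
  bool object;
  bool material;
  bool asset;
} CryptomatteLayers;

static int cryptomatte_pixel_stride(const CryptomatteLayers *layers)
{
  return (layers->object ? 1 : 0) + (layers->material ? 1 : 0) +
         (layers->asset ? 1 : 0);
}
```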
The cryptomatte accumulation buffer stores the hashes + weights for a number of levels and layers per pixel. When a hash already exists, its weight is increased; when the hash doesn't exist yet, it is added to the buffer.
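A sketch of this insert-or-increment step with a hypothetical struct (the actual accumulation buffer is stored as a flat float buffer):

```c
/* Hypothetical sketch: per pixel and per layer the accumulation buffer
 * holds a fixed number of (hash, weight) levels. Slots are filled
 * front-to-back, so the first zero-weight slot means the hash is new. */
typedef struct CryptomatteLevel {
  float hash;
  float weight;
} CryptomatteLevel;

static void accumulate_sample(CryptomatteLevel *levels, int num_levels,
                              float sample_hash, float sample_weight)
{
  for (int i = 0; i < num_levels; i++) {
    if (levels[i].weight == 0.0f) {
      /* First free slot: the hash was not seen before, store it. */
      levels[i].hash = sample_hash;
      levels[i].weight = sample_weight;
      return;
    }
    if (levels[i].hash == sample_hash) {
      /* Hash already present: just accumulate its weight. */
      levels[i].weight += sample_weight;
      return;
    }
  }
  /* All levels occupied: the sample is dropped. */
}
```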
After all the samples have been calculated, the accumulation buffer is post-processed. During this phase the total pixel weight of each layer is mapped to the range between 0 and 1 (the coverage is divided by the total number of samples used to construct the pixel). The hashes are also sorted (highest weight first) and the coverage is extracted from the volumetric buffer.
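A sketch of this post-processing phase, reusing the hypothetical `CryptomatteLevel` from the previous sketch: normalize the accumulated weights by the sample count, then sort the levels with the highest coverage first:

```c
#include <stdlib.h>

/* Hypothetical sketch: map accumulated weights into [0, 1] by dividing
 * by the number of samples, then sort the levels by descending coverage
 * as the Cryptomatte specification expects. */
static int compare_levels(const void *a, const void *b)
{
  const CryptomatteLevel *level_a = a;
  const CryptomatteLevel *level_b = b;
  return (level_a->weight < level_b->weight) -
         (level_a->weight > level_b->weight);
}

static void postprocess_pixel(CryptomatteLevel *levels, int num_levels,
                              int num_samples)
{
  for (int i = 0; i < num_levels; i++) {
    levels[i].weight /= (float)num_samples;
  }
  qsort(levels, (size_t)num_levels, sizeof(CryptomatteLevel), compare_levels);
}
```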
# Known Limitations
* Volumetric transmittance is only computed when the Volumetric Transmittance render pass is enabled. {D9048} will add a mechanism that can be used to overcome this limitation.
* Motion blur is a screen space effect and isn't supported. It needs some research into how we could support this.
* Depth of field is a screen space effect and isn't supported; at this point stripes will appear. In the future, when we have a sample-based depth of field, this should be easy to support.
* Alpha blended materials aren't supported. Alpha blended material support in render passes needs research into how to implement it in a maintainable way for any render pass.
# Future work
This is a list of work that needs to be done in the same release that this patch lands in (expected Blender 2.92):
* Add render tests.
* Documentation, including the accumulation buffer data structure (now a flat buffer).
* Store hashes + Object names in the render result header.
* Use threading to increase performance in accumulation and post processing.
* Merge the Cycles and EEVEE settings as they are identical.