Implementation of the cryptomatte render passes in EEVEE. The implementation follows the design in {T81058} and is similar to the implementation in Cycles. The original specification can be found at https://raw.githubusercontent.com/Psyop/Cryptomatte/master/specification/IDmattes_poster.pdf
Cryptomatte is a standard for efficiently creating mattes for compositing. The renderer outputs the required render passes, which can then be used in the compositor to create masks for specified objects. Unlike the Material and Object Index passes, the objects to isolate are selected during compositing, and the mattes are anti-aliased.
{F9049876}
**Hair (Particle + Object)**
{F9133001}
**Asset Layer**
{F9133615}
**Volumetric Transmittance**
{F9251210}
# Accurate mode
Following Cycles, there are two accuracy modes. The difference between them is the number of render samples taken into account when creating the render passes. When accurate mode is off, the number of levels is used as the number of cryptomatte samples to evaluate; when accurate mode is on, the number of render samples is used.
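The mode selection boils down to picking a sample count; a minimal sketch (function name is illustrative, not the actual EEVEE code):

```python
def cryptomatte_sample_count(accurate: bool, levels: int, render_samples: int) -> int:
    """Number of cryptomatte samples to evaluate.

    Illustrative sketch: with accurate mode off, only `levels` samples
    are evaluated; with it on, every render sample contributes.
    """
    return render_samples if accurate else levels
```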
# Deviation
The Cryptomatte specification is based on a path-tracing approach where samples and coverage are calculated at the same time. In EEVEE, a sample is an exact match on top of a prepared depth buffer, so at that moment the coverage is always 1.0. By sampling multiple times, the number of surface hits determines the actual surface coverage for a matte per pixel. Afterwards the coverage is post-processed with the volumetric coverage that is extracted from the Volumetric Transmittance pass.
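The per-pixel idea can be sketched as follows (hypothetical helper names; the combination with the volumetric coverage is assumed to be a simple multiplication):

```python
def surface_coverage(hits: int, samples: int) -> float:
    """Fraction of samples in which the matte's object was the exact
    match on top of the prepared depth buffer; each individual hit
    counts as coverage 1.0."""
    return hits / samples

def final_coverage(hits: int, samples: int, volumetric_transmittance: float) -> float:
    """Post-process the surface coverage with the volumetric coverage
    extracted from the Volumetric Transmittance pass (assumed here to
    be a multiplication)."""
    return surface_coverage(hits, samples) * volumetric_transmittance
```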
# Implementation Overview
When drawing to the cryptomatte GPU buffer, the depth of the fragment is matched against the active depth buffer. The hashes of each cryptomatte layer are stored in the GPU buffer; the exact layout depends on the active cryptomatte layers. After each sample is drawn, the GPU buffer is downloaded to CPU RAM and integrated into the cryptomatte accumulation buffer.
The cryptomatte accumulation buffer stores the hashes and weights for a number of levels and layers per pixel. When a hash already exists, its weight is increased; when it doesn't exist yet, it is added to the buffer.
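A sketch of the per-pixel integration step, with the accumulation buffer modelled here as a dict mapping hash to accumulated weight (the real buffer is a flat array of hash/weight pairs per level and layer):

```python
def accumulate(pixel: dict, hash_value: float, weight: float = 1.0) -> None:
    """Integrate one downloaded sample into the per-pixel accumulation
    buffer: increase the weight of an existing hash, or add a new one."""
    if hash_value in pixel:
        pixel[hash_value] += weight  # hash already present: increase weight
    else:
        pixel[hash_value] = weight   # new hash: add it to the buffer
```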
After all samples have been calculated, the accumulation buffer is processed. During this phase the total pixel weights of each layer are mapped to the range 0–1, the hashes are sorted (highest weight first), and the coverage is extracted from the volumetric buffer.
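The normalization and sorting step can be sketched like this (a simplified per-pixel view; the real pass also folds in the volumetric coverage, which is omitted here):

```python
def post_process(pixel: dict, levels: int) -> list:
    """Map accumulated weights into the 0..1 range and sort hashes by
    weight, highest first, keeping at most `levels` (hash, coverage)
    pairs. Sketch only; volumetric coverage is not applied here."""
    total = sum(pixel.values())
    ranked = sorted(pixel.items(), key=lambda item: item[1], reverse=True)
    return [(h, w / total) for h, w in ranked[:levels]]
```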
# Known Limitations
* Volumetric transmittance is only computed when the Volumetric Transmittance render pass is enabled. {D9048} will add a mechanism to overcome this limitation.
* Motion blur is a screen-space effect and isn't supported. How we could support it needs some research.
* Depth of field is a screen-space effect and isn't supported. Once we have a sample-based depth of field, this should be easy to support.
* Alpha-blended materials aren't supported. Supporting them in render passes needs research into how to implement it in a maintainable way for any render pass.
# Future work
This is a list of work that needs to be done for the same release this patch lands in (expected: Blender 2.92).
* T82571 Add render tests.
* T82572 Documentation.
* T82573 Store hashes + Object names in the render result header.
* T82574 Use threading to increase performance in accumulation and post processing.
* T82575 Merge the Cycles and EEVEE settings, as they are identical.
* T82576 Add RNA to extract the cryptomatte hashes to use in python scripts.