Introduce a packed_float3 type for smaller storage: exactly 3 floats instead of 4. For computation, float3 is still used since it can use SIMD instructions.
Details
Diff Detail
- Repository
- rB Blender
- Build Status
Buildable 18707 Build 18707: arc lint + arc unit
Event Timeline
| intern/cycles/util/types_float3.h | |
|---|---|
| Line 60 | Is CUDA's float3 12 bytes in size / 4 byte aligned (e.g. https://github.com/ROCm-Developer-Tools/HIP/issues/706)? In which case could this be `#define packed_float3 float3` on CUDA? |