Outcome of running the tests and analyzing the data on Linux with an RX 5700:
- MIP levels don't make a difference.
- The following formats fail when either dimension is larger than GL_MAX_TEXTURE_SIZE:
- GPU_R16, GPU_R16F, GPU_R16I, GPU_R16UI, GPU_R32F, GPU_R32I, GPU_R32UI, GPU_R8, GPU_R8I, GPU_R8UI, GPU_RG16, GPU_RG16F, GPU_RG16I, GPU_RG16UI, GPU_RG32I, GPU_RG32UI, GPU_RG8, GPU_RGBA32I, GPU_RGBA32UI, GPU_RGBA8, GPU_RGBA8I, GPU_RGBA8UI.
- The following formats fail based on the total buffer size (width * height):
- GPU_RG32F, GPU_RG8I, GPU_RG8UI, GPU_RGBA16, GPU_RGBA16F.
- The following formats fail when either dimension is larger than 8k:
- GPU_RGBA16I, GPU_RGBA16UI, GPU_RGBA32F.
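The three observed failure classes could be encoded as a pre-flight check before allocating. This is only a sketch: the constants below (and especially the buffer-size threshold, which was not measured precisely) are placeholders based on the RX 5700 observations above, and all names are hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>

/* Placeholder limits inferred from the RX 5700 observations.
 * MAX_BUFFER_BYTES in particular is a guess; the exact threshold
 * was not determined. */
#define MAX_DIM_DEFAULT 16384                   /* e.g. GL_MAX_TEXTURE_SIZE */
#define MAX_DIM_8K 8192                         /* formats that fail above 8k */
#define MAX_BUFFER_BYTES (512ull * 1024 * 1024) /* placeholder threshold */

typedef enum {
  LIMIT_MAX_TEXTURE_SIZE, /* fails when either dimension > GL_MAX_TEXTURE_SIZE */
  LIMIT_BUFFER_SIZE,      /* fails based on width * height * bytes per pixel */
  LIMIT_8K,               /* fails when either dimension > 8192 */
} FormatLimit;

static bool texture_alloc_would_fail(FormatLimit limit,
                                     size_t width,
                                     size_t height,
                                     size_t bytes_per_pixel)
{
  switch (limit) {
    case LIMIT_MAX_TEXTURE_SIZE:
      return width > MAX_DIM_DEFAULT || height > MAX_DIM_DEFAULT;
    case LIMIT_BUFFER_SIZE:
      return width * height * bytes_per_pixel > MAX_BUFFER_BYTES;
    case LIMIT_8K:
      return width > MAX_DIM_8K || height > MAX_DIM_8K;
  }
  return true; /* unknown limit class: assume failure */
}
```

The mapping from format to limit class would have to come from the measurements above (e.g. GPU_RGBA32F → LIMIT_8K, GPU_RGBA16F → LIMIT_BUFFER_SIZE), and would likely differ per platform.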
Looking at the case reported in T82042 (Crash when rendering huge images on CPU, 2.92.0 alpha), the render result could use either GPU_RGBA32F or GPU_RGBA16F. For GPU_RGBA32F we should limit the dimensions to 8k, but for GPU_RGBA16F there is a different limit based on the buffer size.
I also assume this is platform dependent, so we might want to query the maximum supported dimension based on the texture format and the requested dimensions.
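One way to discover such a limit at runtime is to binary-search for the largest dimension a probe accepts, where the probe would in practice be a format-specific allocation test (e.g. an OpenGL proxy-texture check). The sketch below mocks the probe with a fixed 8k cap so it is self-contained; all names here are hypothetical, not existing Blender API.

```c
#include <stdbool.h>

/* Hypothetical probe: returns true if a texture of the given size in the
 * format under test would be accepted. A real implementation would test
 * the actual allocation; here it is mocked. */
typedef bool (*AllocProbe)(int width, int height);

/* Binary search for the largest square dimension the probe accepts,
 * between 1 and hard_max (e.g. GL_MAX_TEXTURE_SIZE). Assumes the probe
 * is monotonic: if size N fails, every size above N fails too. */
static int max_supported_dimension(AllocProbe probe, int hard_max)
{
  int lo = 1, hi = hard_max, best = 0;
  while (lo <= hi) {
    int mid = lo + (hi - lo) / 2;
    if (probe(mid, mid)) {
      best = mid;
      lo = mid + 1; /* supported: try larger */
    }
    else {
      hi = mid - 1; /* unsupported: try smaller */
    }
  }
  return best;
}

/* Mock probe emulating a format that fails above 8k in either dimension. */
static bool probe_8k(int width, int height)
{
  return width <= 8192 && height <= 8192;
}
```

The monotonicity assumption holds for a per-dimension cap and, for square probes, for a buffer-size cap as well, so the same search could serve both failure classes observed above.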