
Eevee allocates entire render framebuffer texture for region renders
Closed, Archived · Public

Description

System Information
Operating system: Windows 10 64bit
Graphics card: EVGA GeForce GTX 1080 Ti SC Black Edition

Blender Version
Broken: v 2.80 (release)

Short description of error
I need to render high-DPI images for print and display materials. Eevee renders fail (Blender stops responding and never finishes the render) when I try resolutions at or above 19200x10800. So I thought to work around this by partitioning my render into 4 quadrants with Render Region, rendering them separately, and then combining them into one large image. However, these smaller renders fail as well. According to --debug-gpu, even choosing a very small region of a 19200x10800 render will still attempt to allocate the entire 19200x10800 framebuffer.

If this allocation issue is corrected, I can use partitioned rendering to produce images that exceed the 16384x16384 texture-size limit on modern graphics cards.

If you'd like a real-world example of the need for this: in the past we had a 20-foot by 10-foot pop-up display produced that required a 150 DPI image. At 20 ft × 12 in/ft × 150 dpi, that image works out to 36000x18000 pixels.



Exact steps for others to reproduce the error
Open the attached .blend (launch Blender with --debug-gpu if you want to see the allocation for yourself)

Hit F12 or choose Render->Render Image

This may also be related to T70305: Eevee out of GPU memory on large render

Event Timeline

Philipp Oeser (lichtwerk) lowered the priority of this task from 90 to 50. Nov 21 2019, 2:06 PM

@Clément Foucault (fclem): will confirm for now [not sure this can be avoided?]

@Philipp Oeser (lichtwerk) I think this can be avoided by clipping the viewport to the render region, like Cycles does. You can see this when drawing in the viewport too: Eevee doesn't clip to the render region the way Cycles will if one is set. I looked at the code, and it appears Eevee uses very similar code for final renders as for setting up the viewport, which is probably where this slipped in.

The solution, or I should say workaround, for this is: set the image resolution to 19200x10800, keyframe the camera shift settings to (-1, -1), (-1, 1), (1, 1), and (1, -1), and just render it as a 4-frame animation.

Check https://docs.blender.org/manual/en/latest/render/eevee/limitations.html

In practice, using too much GPU memory can make the GPU driver crash, freeze, or kill the application. So be careful what you ask for.

IMO we could clarify this specific case better in the manual. The actual development is a feature request, or rather the removal of a limitation, not a bug.

@Maciej Jutrzenka (Kramon) Unfortunately, it's not that simple. Try it yourself: if you shift the camera without changing the focal length, you just render more of the view, not a higher-resolution version of the current camera view. I am working on an actual workaround, though, if I can figure out the math to generalize it, in case they decide not to fix this.
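For reference, a minimal sketch of the math being described, assuming a perspective camera, the scene camera set as the active camera, and a landscape render where the width is the larger dimension (Blender's shift values are expressed in units of the larger frame dimension, hence the aspect factor on shift_y). The render_tiles helper and its n_tiles/out_dir parameters are my own names, not anything in Blender; treat this as an untested sketch of the approach, not a confirmed fix:

```python
import bpy

def render_tiles(n_tiles=2, out_dir="/tmp/tiles/"):
    """Render an n_tiles x n_tiles grid of camera-shifted tiles."""
    scene = bpy.context.scene
    cam = scene.camera.data
    render = scene.render

    base_lens = cam.lens
    base_shift_x, base_shift_y = cam.shift_x, cam.shift_y
    full_x, full_y = render.resolution_x, render.resolution_y
    aspect = full_y / full_x  # shift_y is measured in frame-width units

    # Each tile is rendered at 1/n of the full resolution; stitching the
    # n*n tiles back together reproduces the full-size image.
    render.resolution_x = full_x // n_tiles
    render.resolution_y = full_y // n_tiles
    cam.lens = base_lens * n_tiles  # zoom in so one tile fills the frame

    try:
        for j in range(n_tiles):      # rows, bottom to top
            for i in range(n_tiles):  # columns, left to right
                # Any pre-existing shift rescales with the focal length;
                # the (i + 0.5 - n/2) term centers the view on tile i.
                cam.shift_x = n_tiles * base_shift_x + (i + 0.5 - n_tiles / 2)
                cam.shift_y = n_tiles * base_shift_y + (j + 0.5 - n_tiles / 2) * aspect
                render.filepath = f"{out_dir}tile_{i}_{j}.png"
                bpy.ops.render.render(write_still=True)
    finally:
        # Restore the original camera and render settings.
        cam.lens = base_lens
        cam.shift_x, cam.shift_y = base_shift_x, base_shift_y
        render.resolution_x, render.resolution_y = full_x, full_y
```

Geometrically the tiles partition the original frustum; whether the screen-space effects line up at the seams is exactly the open question discussed below.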

@Jeroen Bakker (jbakker) Seems like a bug to me to use 10x the memory actually required to render a section of the view. Sure, it requires more math to set up the camera properly, but what it's doing now is just brute-forcing it and wasting a ton of GPU memory.

@Ted Milker (TedMilker)
Ah yes, forgot about that. A long time ago I did the math, but I don't have it now because these days I just use a VM when I render huge stuff. But it wasn't that complex.

@Ted Milker (TedMilker),

If you have math that will work with all the screen-space effects that Eevee provides, and it is complete and production-ready, we are interested. Until then I am not convinced this could be classified as a bug.

@Jeroen Bakker (jbakker) You don't need any math to fix the screen-space effects, just math to clip the camera to the region. Screen-space artifacts from region renders can be dealt with by adjusting the overscan option, assuming that's properly integrated into the fix.

How would you solve SSR with overscan?

The same way Blender solves it now when you change the camera's focal length and aspect ratio? Nothing is changing with the render path here; you're swapping the camera being rendered for one that matches the render region's aspect ratio, with the focal length adjusted to reproduce the original camera's framing.
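For a single arbitrary region rather than a regular grid, that remap looks roughly like this. The fit_camera_to_region helper is my own hypothetical name; region borders are normalized [0, 1] values like scene.render.border_min_x, and the sketch assumes the region keeps the frame's aspect ratio and that the frame width is the larger dimension:

```python
def fit_camera_to_region(cam, x0, x1, y0, y1, aspect):
    """Adjust a perspective camera so a normalized region fills its view."""
    w = x1 - x0    # region width as a fraction of the frame width
    cam.lens /= w  # zoom so the region fills the frame
    # Shift is in frame-width units, so the vertical term needs the
    # height/width aspect factor; existing shift rescales with the zoom.
    cam.shift_x = (cam.shift_x + (x0 + x1) / 2 - 0.5) / w
    cam.shift_y = (cam.shift_y + ((y0 + y1) / 2 - 0.5) * aspect) / w
```

For example, fit_camera_to_region(bpy.context.scene.camera.data, 0.5, 1.0, 0.5, 1.0, 10800 / 19200) would frame the top-right quadrant of the attached scene; the framebuffer allocated would then match the region instead of the full 19200x10800.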

Clément Foucault (fclem) changed the task status from Unknown Status to Unknown Status.Nov 21 2019, 4:09 PM
Clément Foucault (fclem) claimed this task.

Like @Jeroen Bakker (jbakker) said, the screen-space effects will have problems with this way of rendering. And even if you use overscan, it won't match a fullscreen render and will always have discontinuities between tiles (unless you increase overscan to a ridiculously high value, but that defeats the purpose and basically increases the overall VRAM usage).

What I'm planning to do to fix this limitation is to divide the render in a checkerboard pattern. This will lower the actual VRAM requirement for really high resolutions, but it will make all screen-space effects blurry instead of having discontinuities (I also need to check that they actually converge correctly in this case and don't create blobs/blocky artifacts).

The pattern could also be scanlines, but I'm not sure which is best (maybe a future option?).

Anyway, I consider this a limitation for the time being. I created a TODO task for this: T71733.

Thanks for the speedy decision at least. I'll pursue my workaround using multiple cameras for now; maybe a useful tiling add-on will even come out of it.

One way to murder this bug would be to use RTX / Radeon Rays instead of SSR.

It's a bit like shooting a fly with a shotgun, but it also kills many other flies.