
gl_FragCoord has a different orientation when rendering into the viewport vs. a texture buffer




Posted

Outputting vec4(gl_FragCoord.xy / DrawViewport.zw, 0.f, 1.f) as outColor[0] for all scene geometry fragments directly into the viewport:
[screenshot: output when rendering directly to the viewport]

Outputting vec4(gl_FragCoord.xy / DrawViewport.zw, 0.f, 1.f) as outColor[0] for all scene geometry fragments into a texture buffer,
and then assigning that texture's values to surface.basecolor:

[screenshot: output when rendering to the texture buffer]

I had to reorient gl_FragCoord to (gl_FragCoord.x, DrawViewport.w - gl_FragCoord.y) for one of the cameras in order to achieve a pixel-perfect dithering effect between the two cameras.
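For reference, here is a minimal sketch of the visualization fragment in question. It is not the actual engine shader; DrawViewport and outColor are declared locally only so the snippet stands alone (in the real shaders they come from the engine's shared headers):

#version 450

// Placeholder declarations for illustration only.
layout(location = 0) out vec4 outColor[2];
uniform vec4 DrawViewport;   // assumed: zw = viewport width and height in pixels

void main()
{
    // Normalized window coordinates: red encodes X, green encodes Y.
    vec2 uv = gl_FragCoord.xy / DrawViewport.zw;
    outColor[0] = vec4(uv, 0.f, 1.f);
}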

Posted

You might want to try getting the camera projection matrix instead. I believe that will account for the flipping of the Y coordinate that happens, because OpenGL considers +1 to face up in screen coordinates.

If it's an ortho camera, just use the CameraProjectionViewMatrix variable.

If it's a perspective camera, I believe you can still get the ortho projection with this function: ExtractCameraOrthoMatrix(in uint cameraID)
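Roughly, the idea would be something like this (a sketch only; the function and parameter names are placeholders):

// Sketch: derive the screen position from the camera matrix, so the Y direction is
// defined by the projection rather than by the render target.
vec2 ScreenCoordFromWorld(in vec3 worldPosition, in mat4 projectionViewMatrix, in vec4 drawViewport)
{
    vec4 clipPos = projectionViewMatrix * vec4(worldPosition, 1.f);   // world -> clip space
    vec2 ndc = clipPos.xy / clipPos.w;                                // -1..+1, +Y up
    vec2 uv = ndc * 0.5f + 0.5f;                                      // 0..1
    return uv * drawViewport.zw;                                      // pixel coordinates
}

You would feed CameraProjectionViewMatrix, or the matrix returned by ExtractCameraOrthoMatrix, in as projectionViewMatrix.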

My job is to make tools you love, with the features you want, and performance you can't live without.

Posted

Also see Shaders/Utilities/ReconstructPosition.glsl, as you might find some of those functions useful.


Posted
6 hours ago, Josh said:

You might want to try getting the camera projection matrix instead. I believe that will account for the flipping of the Y coordinate that happens, because OpenGL considers +1 to face up in screen coordinates.
If it's an ortho camera, just use the CameraProjectionViewMatrix variable.
If it's a perspective camera, I believe you can still get the ortho projection with this function: ExtractCameraOrthoMatrix(in uint cameraID)

OK, I'm not sure how that would help, but I will see if I can use the projection-view matrix for it. For now I work around the problem with the following trick:

if (CameraRange.x == 0.125f)
{
  // NOTE: gl_FragCoord has different orientation when drawn into texture buffer
  ivec2    FragCoord_Reoriented = ivec2(gl_FragCoord.x, DrawViewport.w - gl_FragCoord.y);
  ...
}
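If the magic-number check ever becomes a problem, the same flip could probably be driven by an explicit flag instead (RenderTargetIsTexture is a made-up uniform name, just to illustrate the idea):

// Hypothetical flag set by the application for cameras that render into a texture buffer.
uniform bool RenderTargetIsTexture;

ivec2 OrientedFragCoord()
{
    vec2 fc = gl_FragCoord.xy;
    if (RenderTargetIsTexture) fc.y = DrawViewport.w - fc.y;   // reorient so both paths agree
    return ivec2(fc);
}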

 

2 hours ago, Josh said:

Also see Shaders/Utilities/ReconstructPosition.glsl, as you might find some of those functions useful.

I saw these utility methods while thinking about how to reconstruct a position from a depth cubemap. They transform from screen space to world space, but in the case described here what I need is simply consistent screen coordinates between drawing to the viewport and drawing to a texture-buffer render target.

I also have a gut feeling that the depth pre-pass doesn't work while rendering to a texture buffer. Maybe I will put together some minimal samples to reproduce both issues, but for now it doesn't stop me from moving on to more difficult scenarios for my visibility filter.
[screenshot]

  • Solution
Posted

Okay, I think this problem is solved then. Thank you for reporting.

