
Posted

Outputting vec4(gl_FragCoord.xy / DrawViewport.zw, 0.f, 1.f) as outColor[0] for all scene geometry fragments, rendered directly into the viewport:
[screenshot: output rendered directly to the viewport]

Outputting vec4(gl_FragCoord.xy / DrawViewport.zw, 0.f, 1.f) as outColor[0] for all scene geometry fragments into a texture buffer, and then assigning that texture's values to surface.basecolor:

[screenshot: the same output rendered through a texture buffer]

I had to reorient gl_FragCoord to (gl_FragCoord.x, DrawViewport.w - gl_FragCoord.y) for one of the cameras in order to achieve a pixel-perfect dithering effect between the two cameras.

Posted

You might want to try getting the camera projection matrix instead. I believe that will account for the flipping of the Y coordinate that happens, because OpenGL considers +1 to face up in screen coordinates.

If it's an ortho camera, just use the CameraProjectionViewMatrix variable.

If it's a perspective camera, I believe you can still get the ortho projection with this function: ExtractCameraOrthoMatrix(in uint cameraID)

Let's build cool stuff and have fun. :)

Posted
6 hours ago, Josh said:

You might want to try getting the camera projection matrix instead. I believe that will account for the flipping of the Y coordinate that happens, because OpenGL considers +1 to face up in screen coordinates.
If it's an ortho camera, just use the CameraProjectionViewMatrix variable.
If it's a perspective camera, I believe you can still get the ortho projection with this function: ExtractCameraOrthoMatrix(in uint cameraID)

OK, I'm not sure how that would help, but I will see if I can utilize the projection view matrix for it. Currently I work around the problem with the following trick:

if (CameraRange.x == 0.125f) // identify the render-to-texture camera by its near-plane value
{
  // NOTE: gl_FragCoord has a different vertical orientation when drawn into a texture buffer
  ivec2 FragCoord_Reoriented = ivec2(gl_FragCoord.x, DrawViewport.w - gl_FragCoord.y);
  ...
}

 

2 hours ago, Josh said:

Also see Shaders/Utilities/ReconstructPosition.glsl, as you might find some of those functions useful.

I saw these utility methods while thinking about how to reconstruct position from a depth cubemap. They transform from screen space to world space, but in the case described here what I need is simply consistent screen coordinates between drawing to the viewport and drawing to a texture buffer render target.

I also have a gut feeling that the depth pre-pass doesn't work while rendering to a texture buffer. Maybe I will make some minimal samples to reproduce both issues, but for now neither stops me from moving towards more difficult scenarios for my visibility filter.
[screenshot]

  • 3 months later...
Posted

Hello, I would like to revive this topic, because I've stumbled over the same issue again, and this time I can provide a small sample that reproduces it.

main.cpp:

#include "Leadwerks.h"

namespace lwe = Leadwerks;

int main(int argc, const char* argv[])
{
  auto displays    = lwe::GetDisplays();
  auto window      = lwe::CreateWindow("gl_FragCoord", 0, 0, 720, 720, displays[0], lwe::WINDOW_DEFAULT);
  auto framebuffer = lwe::CreateFramebuffer(window);
  auto world       = lwe::CreateWorld();

  auto light  = lwe::CreateDirectionalLight(world);
  auto cone_0 = lwe::CreateCone(world, 0.5f, 1.f, 64);
  auto cube_f = lwe::CreateBox(world);

  cube_f->SetScale(100.f, 1.f, 100.f);
  cube_f->SetPosition(0.f, -3.001f, 0.f);
  cone_0->SetPosition(0.0f, +1.f, +2.f);

  light->SetRotation(60, 0, 0);
  light->SetColor(0.5f);
  world->SetAmbientLight(0.1f, 0.1f, 0.1f);

  auto camera_0 = lwe::CreateCamera(world);
  auto camera_1 = lwe::CreateCamera(world);

  auto camera_0_texture_0 = lwe::CreateTexture(lwe::TEXTURE_2D, 720, 720);
  auto camera_0_texbuff   = lwe::CreateTextureBuffer(720, 720, 1, true);
  camera_0_texbuff->SetColorAttachment(camera_0_texture_0, 0);

  camera_0->SetRenderTarget(camera_0_texbuff);
  camera_0->SetOrder(0);

  camera_1->SetLighting(false);
  camera_1->SetRenderLayers(0);
  camera_1->SetOrder(1);

  auto camera_1_effect    = lwe::LoadPostEffect("Shaders/BaseColor.fx");
  auto camera_1_effect_id = camera_1->AddPostEffect(camera_1_effect);
  camera_1->EnablePostEffect(camera_1_effect_id);
  camera_1->SetUniform(camera_1_effect_id, "ColorBuffer", camera_0_texture_0);

  auto texturebuff_rt = lwe::CreateTextureBuffer(2, 2, 1, false);
  auto framebuffer_rt = nullptr;

  while (window->Closed() == false &&
         window->KeyDown(lwe::KEY_ESCAPE) == false)
  {
#if 1
    if (window->KeyHit(lwe::KEY_D1))
    { // Output to framebuffer from camera_1 (post effect Shaders/BaseColor.fx)
      camera_0->SetRenderTarget(camera_0_texbuff);
      camera_1->SetRenderTarget(framebuffer_rt);
    }
    if (window->KeyHit(lwe::KEY_D2))
    { // Output to framebuffer from camera_0 (post effect is ignored)
      camera_0->SetRenderTarget(framebuffer_rt);
      camera_1->SetRenderTarget(texturebuff_rt);
    }
#endif

    world->Update();
    world->Render(framebuffer);
  }

  return 0;
}

BaseColor.fx:

{
  "posteffect": {
    "subpasses": [
      {
        "shader": {
          "float32": {
            "vertex":   "Shaders/PostEffects/PostEffect.vert",
            "fragment": "Shaders/PostEffects/BaseColor.frag"
          }
        }
      }
    ]
  }
}


lwe::KEY_D1:

[screenshot: framebuffer output from camera_1 via the post effect]

lwe::KEY_D2:

[screenshot: framebuffer output from camera_0]

  • 5 months later...
Posted

This example will not work the way you are expecting, for a couple of reasons:

  • The ColorBuffer uniform in the post-processing shader is a texture slot, not a bindless texture handle. When SetUniform is used with a texture, the bindless texture handle is passed to a shader as a uvec2, which can be reconstructed into a sampler.
  • The post-processing shader, as it is written, expects ColorBuffer to be a sampler2DMS, not a sampler2D.

Here is the corrected post-processing shader file BaseColor.frag:

#version 450
#extension GL_ARB_bindless_texture : enable

// Uniforms
//layout(binding = 0) uniform sampler2DMS ColorBuffer;
layout(location = 1) uniform uvec2 ColorBuffer;
layout(location = 0) uniform ivec4 DrawViewport;

// Outputs
layout(location = 0) out vec4 outColor;

void main()
{
    ivec2 coord = ivec2(gl_FragCoord.x, gl_FragCoord.y);
    outColor = texelFetch(sampler2D(ColorBuffer), coord, 0);
}

This still flips the image when you press the 2 button.

I am trying to figure out what is going on in your example, whether there is really a problem, and whether it can be changed without breaking other behavior...

 


Posted

It looks like the orientation here is correct. In OpenGL, screen coordinates (0, 0) are in the lower-left corner.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjREEMZrCibHsXaeOHUDaNLQ4twX7l9aDFNR0QYupluGHVyYUjAtauWfGnmKcyGUHUOo9c8Be4gyd8xokBHM2dgHMURsyf5IdK_v2SM2u-E5l1Ym6dWXA6SehuKHOH448y8IrgDy1LFnxhc/s1600/20100531_DX_OpenGL.png

But texture coordinates point down in the positive direction.

#include "Leadwerks.h"

namespace lwe = Leadwerks;

int main(int argc, const char* argv[])
{
    auto displays = lwe::GetDisplays();
    auto window = lwe::CreateWindow("gl_FragCoord", 0, 0, 720, 720, displays[0], lwe::WINDOW_DEFAULT);
    auto framebuffer = lwe::CreateFramebuffer(window);
    auto world = lwe::CreateWorld();

    auto light = lwe::CreateDirectionalLight(world);
    auto cone_0 = lwe::CreateCone(world, 0.5f, 1.f, 64);
    auto cube_f = lwe::CreateBox(world);

    cube_f->SetScale(100.f, 1.f, 100.f);
    cube_f->SetPosition(0.f, -3.001f, 0.f);
    cone_0->SetPosition(0.0f, +1.f, +2.f);

    light->SetRotation(60, 0, 0);
    light->SetColor(0.5f);
    world->SetAmbientLight(0.1f, 0.1f, 0.1f);

    auto camera_0 = lwe::CreateCamera(world);
    auto camera_1 = lwe::CreateCamera(world);

    auto camera_0_texture_0 = lwe::CreateTexture(lwe::TEXTURE_2D, 720, 720);
    auto camera_0_texbuff = lwe::CreateTextureBuffer(720, 720, 1, true);
    camera_0_texbuff->SetColorAttachment(camera_0_texture_0, 0);

    camera_0->SetRenderTarget(camera_0_texbuff);
    camera_0->SetOrder(0);

    /*
    camera_1->SetLighting(false);
    camera_1->SetRenderLayers(0);
    camera_1->SetOrder(1);

    auto camera_1_effect = lwe::LoadPostEffect("Effects/BaseColor.fx");
    auto camera_1_effect_id = camera_1->AddPostEffect(camera_1_effect);
    camera_1->EnablePostEffect(camera_1_effect_id);
    camera_1->SetUniform(camera_1_effect_id, "ColorBuffer", camera_0_texture_0);
    */

    auto texturebuff_rt = lwe::CreateTextureBuffer(2, 2, 1, false);
    auto framebuffer_rt = nullptr;

    auto box = lwe::CreateBox(world);
    box->SetPosition(0, 0, 3);
    auto mtl = lwe::CreateMaterial();
    mtl->SetTexture(camera_0_texture_0);
    
    // Uncomment to verify the box UV mapping is correct:
    //mtl->SetTexture(lwe::LoadTexture("https://leadwerksstorage.s3.us-east-2.amazonaws.com/monthly_2026_02/test.png.d230cd11a6904ef36c91256aa96b243d.png"));
    
    box->SetMaterial(mtl);

    // Make the texcoord V axis point down relative to the vertex position
    for (int v = 0; v < box->lods[0]->meshes[0]->CountVertices(); ++v)
    {
        auto pos = box->lods[0]->meshes[0]->vertices[v].position;
        auto tc = box->lods[0]->meshes[0]->vertices[v].texcoords.xy();
        tc.y = 1.0f - (pos.y + 0.5f);
        box->lods[0]->meshes[0]->SetVertexTexCoords(v, tc);
    }  
  
    while (window->Closed() == false &&
        window->KeyDown(lwe::KEY_ESCAPE) == false)
    {
#if 1
        if (window->KeyHit(lwe::KEY_D1))
        { // Output to framebuffer from camera_1 (post effect Shaders/BaseColor.fx)
            //camera_0->SetRenderTarget(camera_0_texbuff);
            //camera_1->SetRenderTarget(framebuffer_rt);
        }
        if (window->KeyHit(lwe::KEY_D2))
        { // Output to framebuffer from camera_0 (post effect is ignored)
            //camera_0->SetRenderTarget(framebuffer_rt);
            //camera_1->SetRenderTarget(texturebuff_rt);
        }
#endif
        box->Turn(0, 1, 0);

        world->Update();
        world->Render(framebuffer);
    }

    return 0;
}

You can see here that render-to-texture has the correct orientation. There's a line you can uncomment to load a different texture to verify the UV mapping of the box.

[screenshot: rotating box displaying the render-to-texture result with correct orientation]


Posted
On 2/18/2026 at 11:13 PM, Josh said:

whether there is really a problem


I also remember a case where I used three cameras:
camera0 rendering to a render target,
camera1 rendering to a render target,
camera2 rendering to the framebuffer.

camera1 used camera0's depth texture in a custom shader, and for the shader used by camera1 I didn't have to do a y-flip; but to use the same depth texture from camera0 in a shader used by camera2, I did have to y-flip.

So you cannot use the same shader with camera1 and camera2...
 

On 2/18/2026 at 11:22 PM, Josh said:

You can see here that render-to-texture has the correct orientation.

Yeah, I've had no issues displaying rendered textures on objects. The issue is only observed when trying to access the texture using gl_FragCoord.
