
Leadwerks 5 beta 2D Redux

Josh


Previously, we saw how the new renderer can combine multiple cameras and even multiple worlds in a single render to mix 3D and 2D graphics. While implementing Z-sorting for multiple layers of transparency, I found that Vulkan does in fact respect rasterization order. That is, objects are drawn in the same order their draw calls are recorded into a command buffer.

From the Vulkan specification:

Within a subpass of a render pass instance, for a given (x,y,layer,sample) sample location, the following operations are guaranteed to execute in rasterization order, for each separate primitive that includes that sample location:

  1. Scissor test
  2. Sample mask generation
  3. Depth bounds test
  4. Stencil test, stencil op and stencil write
  5. Depth test and depth write
  6. Sample counting for occlusion queries
  7. Coverage reduction
  8. Blending, logic operations, and color writes

Each of these operations is atomically executed for each primitive and sample location.

Execution of these operations for each primitive in a subpass occurs in primitive order.

Furthermore, individual primitives (polygons) are also rendered in the order they are stored in the index buffer:

Again, from the specification:

Primitives generated by drawing commands progress through the stages of the graphics pipeline in primitive order. Primitive order is initially determined in the following way:

  1. Submission order determines the initial ordering
  2. For indirect draw commands, the order in which accessed instances of VkDrawIndirectCommand are stored in the buffer, from lower indirect buffer addresses to higher addresses.
  3. If a draw command includes multiple instances, the order in which instances are executed, from lower numbered instances to higher.
  4. The order in which primitives are specified by a draw command:
    • For non-indexed draws, from vertices with a lower numbered vertexIndex to a higher numbered vertexIndex.
    • For indexed draws, vertices sourced from lower index buffer addresses to higher addresses.

Within this order implementations further sort primitives:

  1. If tessellation shading is active, by an implementation-dependent order of new primitives generated by tessellation.
  2. If geometry shading is active, by the order new primitives are generated by geometry shading.
  3. If the polygon mode is not VK_POLYGON_MODE_FILL, by an implementation-dependent ordering of the new primitives generated within the original primitive.

Primitive order is later used to define rasterization order, which determines the order in which fragments output results to a framebuffer.
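
In practice, this guarantee means a 2D renderer can record its draw calls back to front and rely on blending alone, with no depth sorting pass. Here is a minimal sketch of the idea at the Vulkan level; the command buffer, render pass, alpha-blended pipeline, and vertex buffers are assumed to already exist:

//Record two quads into the same subpass. Blending executes in primitive
//order, so the quad recorded second is composited over the first.
vkCmdBeginRenderPass(commandbuffer, &renderpassinfo, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindPipeline(commandbuffer, VK_PIPELINE_BIND_POINT_GRAPHICS, alphablendpipeline);

VkDeviceSize offset = 0;

//Background quad, drawn first
vkCmdBindVertexBuffers(commandbuffer, 0, 1, &backgroundbuffer, &offset);
vkCmdDraw(commandbuffer, 6, 1, 0, 0);

//Overlay quad, drawn second, blends over the background
vkCmdBindVertexBuffers(commandbuffer, 0, 1, &overlaybuffer, &offset);
vkCmdDraw(commandbuffer, 6, 1, 0, 0);

vkCmdEndRenderPass(commandbuffer);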

Now, if you were making a 2D game with 1000 zombie sprites onscreen, you would undoubtedly want to use 3D-in-2D rendering with an orthographic camera: batching and depth discard give much faster performance as the number of objects goes up. However, the 2D aspect of most games is relatively simple, with only a dozen or so 2D sprites making up the user interface. Given that 2D graphics are not normally going to be much of a bottleneck, and that the biggest performance savings we achieved came from making text a static object, I decided to rework the 2D rendering system into something a little simpler to use.

Sprites are no longer a 3D entity, but a new type of pure 2D object. They behave much like entities, with position, rotation, and scale commands, but they use only 2D coordinates:

//Create a sprite
auto sprite = CreateSprite(world,100,100);

//Make blue
sprite->SetColor(0,0,1);

//Position in upper-left corner of screen
sprite->SetPosition(10,10);
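
For context, here is a sketch of how those calls might fit into a complete program. The window, framebuffer, world, and camera setup follows the pattern used in other Leadwerks 5 beta examples, but the exact signatures here are my assumption of the current beta API:

#include "Leadwerks.h" //assumed header name
using namespace Leadwerks;

int main(int argc, const char* argv[])
{
    //Create a window and rendering framebuffer
    auto displays = ListDisplays();
    auto window = CreateWindow(displays[0], "2D Sprites", 0, 0, 1280, 720, WINDOW_TITLEBAR);
    auto framebuffer = CreateFramebuffer(window);

    //Create a world and camera
    auto world = CreateWorld();
    auto camera = CreateCamera(world);

    //Create a blue sprite in the upper-left corner of the screen
    auto sprite = CreateSprite(world, 100, 100);
    sprite->SetColor(0, 0, 1);
    sprite->SetPosition(10, 10);

    //Main loop
    while (!window->Closed())
    {
        world->Update();
        world->Render(framebuffer);
    }
    return 0;
}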

Sprites have a handle you can set. By default this is the upper-left corner of the sprite, but you can change it, for example to center it. Sprites can also be rotated around the Z axis:

//Center the handle
sprite->SetHandle(0.5,0.5);

//Rotation around center
sprite->SetRotation(45);

SVG vector images are great for 2D drawing and GUIs because they scale to different display resolutions. We support these as well, with an optional scale value the image can be rasterized at:

auto sprite = LoadSprite(world, "tiger.svg", 0, 2.0);

[Image: NanoSVG logo]

Text is now just another type of sprite:

auto text = CreateSprite(world, font, L"Hello, how are you today?\nI am fine.", 72, TEXT_LEFT);

These sprites are all displayed within the same world as the 3D rendering, so unlike the approach I previously wrote about:

  • You do not have to create extra cameras or worlds just to draw 2D graphics. (If you are doing something advanced, the multi-camera method I previously described is still a good option, but you need very demanding requirements for it to make a difference.)
  • The regular screen coordinates you are used to apply, with [0,0] at the top-left. (See the sketch below.)
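
For example, the handle system combined with top-left screen coordinates makes it easy to anchor UI elements to any corner. A small sketch, assuming the framebuffer exposes a size member and that a handle of (1,1) means the sprite's lower-right corner (both are my assumptions):

//Anchor a sprite 10 pixels in from the bottom-right corner of the screen
auto sprite = CreateSprite(world, 100, 100);
sprite->SetHandle(1.0, 1.0); //handle at the lower-right corner (assumed)
sprite->SetPosition(framebuffer->size.x - 10, framebuffer->size.y - 10);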

By default, sprites are drawn in the order they are created. However, I definitely see a need for additional control here, and I am open to ideas. Should there be a sprite order value, a MoveToFront() method, or a system of different layers? I'm not sure yet.
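
In the meantime, creation order alone provides basic layering, since later sprites draw on top of earlier ones:

//The panel is created first, so the text created afterward draws on top
auto panel = CreateSprite(world, 300, 100);
panel->SetColor(0, 0, 0);
panel->SetPosition(20, 20);

//Assumes a font has already been loaded with LoadFont()
auto label = CreateSprite(world, font, L"Health: 100", 24, TEXT_LEFT);
label->SetPosition(30, 30);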

I'm also not sure how per-camera sprites will be controlled. At this time sprites are stored in a per-world list, but we will want some 2D elements to only appear on some cameras. I have not settled on a design for this yet.

I am going to try to get an update out soon with these features so you can try them out yourself.



Comments

Quote

Sprites are no longer a 3D entity, but a new type of pure 2D object. They behave much like entities, with position, rotation, and scale commands, but they use only 2D coordinates.

So, what about the 3D sprites? What would they be called? Billboards?

Overall, this seems much easier to understand. With SVGs, we can now have round HUD elements without fear of rasterization! :D

34 minutes ago, reepblue said:

So, what about the 3D sprites? What would they be called? Billboards?

Something like that. Yeah, I think this accounts for 99% of use cases.


Quote

I'm also not sure how per-camera sprites will be controlled. At this time sprites are stored in a per-world list, but we will want some 2D elements to only appear on some cameras.

One possibility could be linking sprites to cameras? If they're not linked to any camera, they are visible in all cameras (the default).


