Multipass Rendering in Leadwerks 5 beta

Josh

733 views

Current-generation graphics hardware supports at most a 32-bit floating-point depth buffer, and that isn't adequate for large-scale rendering: at great distances there isn't enough precision to make objects appear in the correct order, and z-fighting appears.

[Image: z-fighting artifacts]

After trying out a few different approaches, I found that the best way to support large-scale rendering is to allow the user to create several cameras. The first camera might have a range of 0.1-1000 meters; the second starts where the first leaves off, with a depth range of 1000-10,000 meters. Because what matters for precision is the ratio of the near to far distances, not the absolute distances, the numbers can get very big very fast. A third camera could extend the range out to 100,000 kilometers!

The trick is to set the new Camera::SetClearMode() command to make it so only the furthest-range camera clears the color buffer. Additional cameras clear the depth buffer and then render on top of the previous draw. You can use the new Camera::SetOrder() command to ensure that they are drawn in the order you want.

//The furthest camera is drawn first and clears the color buffer
auto camera3 = CreateCamera(world);
camera3->SetRange(10000, 100000000);
camera3->SetClearMode(CLEAR_COLOR | CLEAR_DEPTH);
camera3->SetOrder(1);

//The middle camera clears only the depth buffer and draws on top
auto camera2 = CreateCamera(world);
camera2->SetRange(1000, 10000);
camera2->SetClearMode(CLEAR_DEPTH);
camera2->SetOrder(2);

//The nearest camera is drawn last
auto camera1 = CreateCamera(world);
camera1->SetRange(0.1, 1000);
camera1->SetClearMode(CLEAR_DEPTH);
camera1->SetOrder(3);

Using this technique I was able to render the Earth, sun, and moon to-scale. The three objects are actually sized correctly, at the correct distance. You can see that from Earth orbit the sun and moon appear roughly the same size. The sun is much bigger, but also much further away, so this is exactly what we would expect.

[Image: the Earth, moon, and sun rendered to scale]

You can also use these features to render several cameras in one pass to show different views. For example, we can create a rear-view mirror easily with a second camera:

auto mirrorcam = CreateCamera(world);
mirrorcam->SetParent(maincamera);
mirrorcam->SetRotation(0,180,0);
mirrorcam->SetClearMode(CLEAR_COLOR | CLEAR_DEPTH);

//Set the camera viewport to only render to a small rectangle at the top of the screen:
mirrorcam->SetViewport(framebuffer->GetSize().x/2-200,10,400,50);

This creates a "picture-in-picture" effect like what is shown in the image below:

[Image: rear-view mirror rendered as picture-in-picture]

Want to render some 3D HUD elements on top of your scene? This can be done with an orthographic camera:

auto uicam = CreateCamera(world);
uicam->SetClearMode(CLEAR_DEPTH);
uicam->SetProjectionMode(PROJECTION_ORTHOGRAPHIC);

This will make 3D elements appear on top of your scene without clearing the previous render result. You would probably want to move the UI camera far away from the scene so only your HUD elements appear in the last pass.



25 Comments


Recommended Comments

Quote

You would probably want to move the UI camera far away from the scene so only your HUD elements appear in the last pass.

Would it be possible to use two different worlds for this? One "real" world and one for your HUD and then first rendering only the first world and after that rendering the HUD-world.

4 minutes ago, Ma-Shell said:

Would it be possible to use two different worlds for this? One "real" world and one for your HUD and then first rendering only the first world and after that rendering the HUD-world.

This would probably not be practical, because the rendering thread is so disconnected from the game logic thread.


You mean, because unlike the previous version the user does not directly call Render() anymore, since it is run on a different thread and thus the user can not correctly orchestrate the rendering? This should not be a problem, since in CreateCamera you can specify the world.

auto realCamera = CreateCamera(realWorld);
auto HUDCamera = CreateCamera(HUDWorld);
realCamera->setOrder(1);
HUDCamera->setOrder(2);
realCamera->SetClearMode(CLEAR_DEPTH | CLEAR_COLOR);
HUDCamera->SetClearMode(CLEAR_DEPTH);

 

3 minutes ago, Ma-Shell said:

You mean, because unlike the previous version the user does not directly call Render() anymore, since it is run on a different thread and thus the user can not correctly orchestrate the rendering? This should not be a problem, since in CreateCamera you can specify the world.


auto realCamera = CreateCamera(realWorld);
auto HUDCamera = CreateCamera(HUDWorld);
realCamera->setOrder(1);
HUDCamera->setOrder(2);
realCamera->SetClearMode(CLEAR_DEPTH | CLEAR_COLOR);
HUDCamera->SetClearMode(CLEAR_DEPTH);

 

There is a World::Render() command which basically says "tell the rendering thread to start using this world for rendering". So rendering two different worlds in one pass would be sort of difficult to manage.


Ah, I see... So the rendering always uses the last world that called World::Render(), and uses all cameras from that world in the specified order?

Would it be possible to implement the same ordering as with cameras for worlds then? Like the following

realWorld->setOrder(1);
HUDWorld->setOrder(2);

where it would first render all cameras from realWorld and after that all cameras from HUDWorld.

This would probably mean it is more intuitive to have the render call on the context instead of the world, since all worlds would be rendered.

 

By the way, is there any way to disable certain cameras, so they get skipped? Like setting a negative order or something like this. What happens if you set two cameras to the same order?


What might work better is to have a layer / tag system that lets different cameras have a filter to render certain types of objects.

To skip a camera, you can either hide it or set the projection mode to zero.


 

1 hour ago, Josh said:

layer / tag system that lets different cameras have a filter to render certain types of objects

Isn't that basically what a world is? Each camera renders only the objects that are in its world, so the world is basically a filter for the camera

1 hour ago, Ma-Shell said:

 

Isn't that basically what a world is? Each camera renders only the objects that are in its world, so the world is basically a filter for the camera

In an abstract sense, yes, but there’s a thousand more details than that. The strictness of both Vulkan and the multithreaded architecture mean I can’t design things that are “fast and loose” anymore.

11 hours ago, Ma-Shell said:

Isn't that basically what a world is? Each camera renders only the objects that are in its world, so the world is basically a filter for the camera

I'm going to try adding a command like this:

void World::AddRenderPass(shared_ptr<World> world, const int order)

The order value can be -1 (background), 1 (foreground), or 0 (mix with current world). No guarantee yet but I will try and see how well it works.


Okay, it looks like that will probably not work. The data management is just too complicated. I think a filter value will probably work best, because that can be handled easily in the culling routine.


I see two possible issues with filters:
1. I understand a filter as having some sort of "layer ID" on the objects, with a camera rendering only objects that carry a given layer ID. Suppose you have two objects, A and B, and you want one camera to render both but a second camera to render only B. For example, you have a vampire in front of a mirror: the main view should include the vampire, but when rendering the mirror you want the scene from that perspective without the vampire. This is not easily possible with a single layer ID.

2. Performance: You have to run through all objects and check whether they match the filter in order to decide whether to cull them. If you instead just walk the world's list of entities, this check never happens.

 

Why not using something like this:

camWorldA->SetClearMode(ColorBit | DepthBufferBit);
camWorldB->SetClearMode(DepthBufferBit);
camWorldC->SetClearMode(DepthBufferBit);

context->ClearWorlds(); // nothing is being rendered
context->AddWorld(worldA); // All cameras of world A are rendered
context->AddWorld(worldB); // First all cameras of world A are rendered, then all of world B
context->AddWorld(worldC); // First all cameras of world A are rendered, then all of world B, then all of world C

This would give the user maximum flexibility and only require the context to hold a list of worlds, which are rendered one after another instead of having a single active world.

For compatibility and convenience, you could additionally define

void World::Render(Context* ctx)
{
	ctx->ClearWorlds();
	ctx->AddWorld(this);
}

This way, you could still use the system as before without ever knowing of being able to render multiple worlds.

 

EDIT: I see, my vampire example wouldn't work with this system either (unless you allow a mesh to be in multiple worlds), but I still think this is a flexible, easy-to-use system without much overhead.

Edited by Ma-Shell

I will have to experiment some more with this. Interestingly, this overlaps with some problems with 2D drawing. In reality there is no such thing as 2D graphics on your computer; it is always 3D graphics. I think my approach here will be to stop trying to hide the fact that 2D is really 3D, even if that does not fit our conceptions of how it should be. Stay tuned...

2 hours ago, Josh said:

I think my approach here will be to stop trying to hide the fact that 2D is really 3D even if it does not fit our conceptions of how it should be

I'm curious what this would allow the end-user to do in exchange for making a straightforward system more complex.  I know in the past people wanted to play movies on textures and map cameras to textures.  And a lot of games do 3D-ish UI (font and images on textures).  I wonder if this system would let us do them with reasonable ease.

3 hours ago, gamecreator said:

I'm curious what this would allow the end-user to do in exchange for making a straightforward system more complex.  I know in the past people wanted to play movies on textures and map cameras to textures.  And a lot of games do 3D-ish UI (font and images on textures).  I wonder if this system would let us do them with reasonable ease.

I started digging into 2D drawing recently because @Lethal Raptor Games was asking about it in the private forum. I found that our model rendering system works well for 2D sprites as well, but in Vulkan you have no guarantee what order objects will be drawn in. I realized how stupid it is to do back-to-front rendering of 2D objects and that we should just use the depth buffer to handle this. I mean, we don't render 3D objects in order by distance, so why are 2D objects any different?

I think the 2D rendering will take place with a separate framebuffer, and then the results will be drawn on top of the 3D view. I think DOOM 2016 did this, for the same reasons. See the section here on "User Interface": http://www.adriancourreges.com/blog/2016/09/09/doom-2016-graphics-study/

Naturally this means that 3D-in-2D elements are very simple to add, but it also means you only have a Z-position for ordering. The rendering speed of this will be unbelievably fast. 100,000 unique sprites each with a different image would be absolutely no problem.


I thought perhaps 2D rendering would require an orthographic camera to be created and rendered on top of the 3D camera, but that would invalidate the depth buffer contents, and we want to hang onto that for post-processing effects. Unless we insert a post-processing step in between camera passes like this:

  1. Render perspective camera.
  2. Draw post-processing effects.
  3. Clear depth buffer and render 2D elements.

Ok, it's just a Vulkan limitation that makes things more complex.  So, to draw the layers of a clock in the proper order, for example, we'd have to assign proper Z coordinates to the back, the hour hand, minute hand and second hand?  Is that how the engine would know what goes on top of what?
[Image: clock face built from layered back, hour, minute, and second-hand sprites]

 


You would probably have three sprites with a material with alpha masking enabled, and position them along the Z axis the way you would want them to appear. Imagine if you were doing it in 3D. Which you are.

Rotation of sprites is absolutely no problem with this approach, along with many other options.

You can also use polygon meshes very easily for 2D rendering. For example, the clock hands could be a model, perhaps loaded from an SVG file.


Okay, I have rendering with multiple worlds working now. The command is like this:

void World::Combine(shared_ptr<World> world)

All the cameras in the added world will be used in rendering, using the contents of the world they belong to. There is no ordering of the worlds, instead the cameras within the world are drawn with the same rules as a single world with multiple cameras:

  • By default, cameras are rendered in the order they are created.
  • A camera order setting can be used to override this and become the primary sorting method. (If two cameras have the same order value, then the creation order is used to sort them.)

So you can do something like this:

//Create main world
auto mainworld = CreateWorld();
auto maincamera = CreateCamera(mainworld);

//Create world for HUD rendering
auto foreground = CreateWorld();
auto fgcam = CreateCamera(foreground);
fgcam->SetProjectionMode(PROJECTION_ORTHOGRAPHIC);
fgcam->SetClearMode(CLEAR_DEPTH);

auto healthbar = CreateSprite(foreground);

mainworld->Combine(foreground);

//Draw both worlds. Ortho HUD camera will be drawn on top since it was created last.
mainworld->Render(framebuffer);

That means that drawing 2D graphics on top of 3D graphics requires a world and camera created for that purpose. There are no "2D" commands really, just orthographic camera projection. This is also extremely flexible, and the same fast rendering path the normal 3D graphics use will make 2D graphics ridiculously fast.

Leadwerks 4 used a lot of render-to-texture and caching to make the GUI fast, but that will be totally unnecessary here, I think.


Is it possible to combine more than 2 worlds (possibly by daisy chaining them)?


Yes, you would just call Combine again. It just adds the indicated world to a list. It’s one-way, so “Combine” might not be the best nomenclature.


Curious: is there a reason this isn't all done under the hood to keep the commands as simple as they are now?  In other words, getting back to my previous question, what extra functionality does this offer the end-user?  What practical things will you be able to do with images and text that you can't now?

17 minutes ago, gamecreator said:

Curious: is there a reason this isn't all done under the hood to keep the commands as simple as they are now?  In other words, getting back to my previous question, what extra functionality does this offer the end-user?  What practical things will you be able to do with images and text that you can't now?

I think it mostly boils down to making 3D models in the UI possible. I could see some other things working nicely like particles. So if we need to account for this, let’s just cut the foreplay and have one method that handles everything.

There's also a fundamental shift in my approach to design. I'm not trying to make the easiest-to-use game engine anymore, because the easiest game to develop is a pre-built asset flip.

I am not interested in making Timmy's first game maker, so if that means Turbo Engine trades ease of use in the simple case for more speed and functionality, I am okay with that. I almost feel like people don't respect an API that isn't difficult to use.

We'll see, there is still some time before this is finalized.


Appreciate it.  And I think you know I'm not trying to bust your balls.  I just feel like if there's sacrifices to be made, it should be for a good cause, which it sounds like it will be.  And I also appreciate that building a good foundation now, even if the rewards aren't immediately seen, is still a good thing.  Was mostly just curious about what Turbo might be shaping into.

16 hours ago, gamecreator said:

Appreciate it.  And I think you know I'm not trying to bust your balls.  I just feel like if there's sacrifices to be made, it should be for a good cause, which it sounds like it will be.  And I also appreciate that building a good foundation now, even if the rewards aren't immediately seen, is still a good thing.  Was mostly just curious about what Turbo might be shaping into.

Definitely, this is the best time to discuss everything.

I think this is the right way to go, even though it makes simple drawing harder. But which is harder, learning one system that does everything you want, or learning two systems that have some overlapping capabilities but work differently? 😨

I am working on text now. It works by creating a textured model for the text you want to draw. For an FPS counter, for example, I recommend creating models for each number you display, and storing them in a C++ map or Lua table, like this:

void UpdateFPSDisplay(int fps)
{
	//Hide the previously displayed model, if any
	if (currentFPSDisplayModel != nullptr) currentFPSDisplayModel->Hide();

	//Create and cache a text model the first time this value is shown
	if (textcache[fps] == nullptr) textcache[fps] = CreateText(world, font, String(fps), 12);

	currentFPSDisplayModel = textcache[fps];
	currentFPSDisplayModel->Show();
}

This is more complicated than just calling DrawText(fps) but it is far more powerful. Text rendering with this system will be instantaneous, whereas it can be quite slow in Leadwerks 4, which performs one draw call for each character.



       
Now tolua++ should have generated two files.
       
      Generated CppTest_.h

/*
** Lua binding: CppTest
** Generated automatically by tolua++-1.0.92 on 08/18/16 11:36:39.
*/

/* Exported function */
TOLUA_API int tolua_CppTest_open (lua_State* tolua_S);
       
Generated CppTest_.cpp (shortened)

/*
** Lua binding: CppTest
** Generated automatically by tolua++-1.0.92 on 08/18/16 11:36:39.
*/

#ifndef __cplusplus
#include "stdlib.h"
#endif
#include "string.h"

#include "tolua++.h"

/* Exported function */
TOLUA_API int tolua_CppTest_open (lua_State* tolua_S);

#include <string>
#include "CppTest.h"

/* function to release collected object via destructor */
#ifdef __cplusplus

static int tolua_collect_CppTest (lua_State* tolua_S)
{
	CppTest* self = (CppTest*) tolua_tousertype(tolua_S,1,0);
	delete self;
	return 0;
}
#endif

--- snip --- snip --- snip ---
--- snip --- snip --- snip ---
--- snip --- snip --- snip ---

/* Open function */
TOLUA_API int tolua_CppTest_open (lua_State* tolua_S)
{
	tolua_open(tolua_S);
	tolua_reg_types(tolua_S);
	tolua_module(tolua_S,NULL,0);
	tolua_beginmodule(tolua_S,NULL);
#ifdef __cplusplus
	tolua_cclass(tolua_S,"CppTest","CppTest","",tolua_collect_CppTest);
#else
	tolua_cclass(tolua_S,"CppTest","CppTest","",NULL);
#endif
	tolua_beginmodule(tolua_S,"CppTest");
		tolua_function(tolua_S,"new",tolua_CppTest_CppTest_new00);
		tolua_function(tolua_S,"new_local",tolua_CppTest_CppTest_new00_local);
		tolua_function(tolua_S,".call",tolua_CppTest_CppTest_new00_local);
		tolua_function(tolua_S,"delete",tolua_CppTest_CppTest_delete00);
		tolua_function(tolua_S,"get_value",tolua_CppTest_CppTest_get_value00);
		tolua_function(tolua_S,"set_value",tolua_CppTest_CppTest_set_value00);
		tolua_function(tolua_S,"get_string",tolua_CppTest_CppTest_get_string00);
		tolua_function(tolua_S,"set_string",tolua_CppTest_CppTest_set_string00);
	tolua_endmodule(tolua_S);
	tolua_endmodule(tolua_S);
	return 1;
}

#if defined(LUA_VERSION_NUM) && LUA_VERSION_NUM >= 501
TOLUA_API int luaopen_CppTest (lua_State* tolua_S)
{
	return tolua_CppTest_open(tolua_S);
};
#endif
      You should include both of those in your project. You also have to call tolua_CppTest_open somewhere after Leadwerks has initialized LUA. I do it here in my App.cpp. Remember to #include "CppTest_.h" at the top of App.cpp.
       
App.cpp (shortened)

#include "App.h"
#include "CppTest_.h"

using namespace Leadwerks;

App::App() : window(NULL), context(NULL), world(NULL), camera(NULL) {}

App::~App() { delete world; delete window; }

bool App::Start()
{
	int stacksize = Interpreter::GetStackSize();

	//Get the global error handler function
	int errorfunctionindex = 0;
#ifdef DEBUG
	Interpreter::GetGlobal("LuaErrorHandler");
	errorfunctionindex = Interpreter::GetStackSize();
#endif

	//Create new table and assign it to the global variable "App"
	Interpreter::NewTable();
	Interpreter::SetGlobal("App");

	std::string scriptpath = "Scripts/Main.lua";
	if (FileSystem::GetFileType("Scripts/App.Lua") == 1) scriptpath = "Scripts/App.Lua";

	// ADDED to initialize the CppTest LUA implementation
	tolua_CppTest_open(Interpreter::L);

	//Invoke the start script
	if (!Interpreter::ExecuteFile(scriptpath))
	{
		System::Print("Error: Failed to execute script \"" + scriptpath + "\".");
		return false;
	}

	--- snip --- snip --- snip ---
	--- snip --- snip --- snip ---
	--- snip --- snip --- snip ---
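Once the open function has run, the class is available from Lua. A hypothetical usage sketch; the exact script is mine, but the method names follow the functions registered by the generated open function above:

-- in a Leadwerks Lua script, after tolua_CppTest_open has been called
local test = CppTest:new()       -- heap object; you must delete it yourself
test:set_value(42)
test:set_string("hello")
System:Print(test:get_string())
test:delete()                    -- runs the C++ destructor

-- alternatively, new_local hands ownership to the Lua garbage collector,
-- which calls the destructor via tolua_collect_CppTest
local test2 = CppTest:new_local()
test2:set_value(7)

Whether to prefer new/delete or new_local depends on who owns the object; for short-lived script-side objects, new_local avoids leaks when a script errors out before reaching delete.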
      LuaParser
I read somewhere that Josh has a parser that automates this a bit by parsing lines in the header files that are commented with //lua and generating pkg files. What a good idea. As I think programming is better than watching lousy programs on TV, I made my own version of this and called it "LuaParser". I have attached the program for those who would like to use it. Here is what it does:
       
      1. Parses all C++ header files (*.h) in the folder and subfolders for lines commented with //lua
      2. For such files it creates a pkg file suitable for tolua++ compilation
3. It compiles the generated pkg files into '_.h' and '_.cpp' files to be included in your project
       
Here is the same example from above, prepared for LuaParser.
       
      C++ header file: CppTest.h prepared for LuaParser
#pragma once

#include <string>//lua

class CppTest
{
	int _value;
	std::string _string;
public:
	CppTest();//lua
	virtual ~CppTest();//lua

	int get_value() const;//lua
	void set_value( int value );//lua

	std::string get_string() const;//lua
	void set_string( const std::string& value );//lua
};
       
Extract the LuaParser.zip into your Source folder and open a command prompt there. Then just type LuaParser and hit Enter. A number of files with '_' in their names will be generated. Include them in your project and call the function in each of the '_.h' files as described above.
       
      Windows version
      LuaParserWin-1.5.zip
       
      Linux version
      LuaParserLinux-1.0.tar.gz - support discontinued
       
      History
      1.0 Initial version
      1.1 Comment header in PKG files was inside class declaration instead of at top of file
      1.2 Didn't handle class inheritance
      1.3 - 1.5 Various minor fixes
       
      You can read more about tolua++ here
      tolua++ - Reference Manual