
Multipass Rendering in Leadwerks 5 beta

Josh


Current-generation graphics hardware supports at most a 32-bit floating-point depth buffer, and that isn't adequate for large-scale rendering: there isn't enough precision to make distant objects appear in the correct order and prevent z-fighting.

[Image: z-fighting between distant surfaces]

After trying out a few different approaches, I found that the best way to support large-scale rendering is to allow the user to create several cameras. The first camera should have a range of 0.1-1,000 meters; the second would use the same near/far ratio and start where the first one left off, with a depth range of 1,000-10,000 meters. Because the ratio of the near to far range is what matters, not the actual distance, the numbers can get very big very fast. A third camera could be added with a range out to 100,000 kilometers!

The trick is to set the new Camera::SetClearMode() command to make it so only the furthest-range camera clears the color buffer. Additional cameras clear the depth buffer and then render on top of the previous draw. You can use the new Camera::SetOrder() command to ensure that they are drawn in the order you want.

auto camera1 = CreateCamera(world);
camera1->SetRange(0.1, 1000);
camera1->SetClearMode(CLEAR_DEPTH);
camera1->SetOrder(3); //nearest range draws last, on top of the others

auto camera2 = CreateCamera(world);
camera2->SetRange(1000, 10000);
camera2->SetClearMode(CLEAR_DEPTH);
camera2->SetOrder(2);

auto camera3 = CreateCamera(world);
camera3->SetRange(10000, 100000000);
camera3->SetClearMode(CLEAR_COLOR | CLEAR_DEPTH);
camera3->SetOrder(1); //furthest range draws first and clears the color buffer

Using this technique I was able to render the Earth, sun, and moon to-scale. The three objects are actually sized correctly, at the correct distance. You can see that from Earth orbit the sun and moon appear roughly the same size. The sun is much bigger, but also much further away, so this is exactly what we would expect.

[Image: the Earth, moon, and sun rendered to scale]

You can also use these features to render several cameras in one pass to show different views. For example, we can create a rear-view mirror easily with a second camera:

auto mirrorcam = CreateCamera(world);
mirrorcam->SetParent(maincamera);
mirrorcam->SetRotation(0,180,0);
mirrorcam->SetClearMode(CLEAR_COLOR | CLEAR_DEPTH);

//Set the camera viewport to only render to a small rectangle at the top of the screen:
mirrorcam->SetViewport(framebuffer->GetSize().x/2-200,10,400,50);

This creates a "picture-in-picture" effect like what is shown in the image below:

[Image: rear-view mirror rendered with a second camera]

Want to render some 3D HUD elements on top of your scene? This can be done with an orthographic camera:

auto uicam = CreateCamera(world);
uicam->SetClearMode(CLEAR_DEPTH);
uicam->SetProjectionMode(PROJECTION_ORTHOGRAPHIC);

This will make 3D elements appear on top of your scene without clearing the previous render result. You would probably want to move the UI camera far away from the scene so only your HUD elements appear in the last pass.



25 Comments


Recommended Comments

Quote

You would probably want to move the UI camera far away from the scene so only your HUD elements appear in the last pass.

Would it be possible to use two different worlds for this? One "real" world and one for your HUD and then first rendering only the first world and after that rendering the HUD-world.

4 minutes ago, Ma-Shell said:

Would it be possible to use two different worlds for this? One "real" world and one for your HUD and then first rendering only the first world and after that rendering the HUD-world.

This would probably not be practical, because the rendering thread is so disconnected from the game logic thread.


You mean because, unlike the previous version, the user does not directly call Render() anymore, since it runs on a different thread, and thus the user cannot correctly orchestrate the rendering? This should not be a problem, since in CreateCamera you can specify the world.

auto realCamera = CreateCamera(realWorld);
auto HUDCamera = CreateCamera(HUDWorld);
realCamera->SetOrder(1);
HUDCamera->SetOrder(2);
realCamera->SetClearMode(CLEAR_DEPTH | CLEAR_COLOR);
HUDCamera->SetClearMode(CLEAR_DEPTH);

 

3 minutes ago, Ma-Shell said:

You mean, because unlike the previous version the user does not directly call Render() anymore, since it is run on a different thread and thus the user can not correctly orchestrate the rendering? This should not be a problem, since in CreateCamera you can specify the world.


auto realCamera = CreateCamera(realWorld);
auto HUDCamera = CreateCamera(HUDWorld);
realCamera->SetOrder(1);
HUDCamera->SetOrder(2);
realCamera->SetClearMode(CLEAR_DEPTH | CLEAR_COLOR);
HUDCamera->SetClearMode(CLEAR_DEPTH);

 

There is a World::Render() command which basically says "tell the rendering thread to start using this world for rendering". So rendering two different worlds in one pass would be sort of difficult to manage.


Ah, I see... So the rendering always uses the last world, which called World::Render() and uses all cameras from that world in the specified order?

Would it be possible to implement the same ordering for worlds as with cameras, then? Something like the following:

realWorld->SetOrder(1);
HUDWorld->SetOrder(2);

where it would first render all cameras from realWorld and after that all cameras from HUDWorld.

This would probably mean it is more intuitive to have the render call on the context instead of the world, since all worlds would be rendered.

 

By the way, is there any way to disable certain cameras, so they get skipped? Like setting a negative order or something like this. What happens if you set two cameras to the same order?


What might work better is to have a layer / tag system that lets different cameras have a filter to render certain types of objects.

To skip a camera, you can either hide it or set the projection mode to zero.


1 hour ago, Josh said:

layer / tag system that lets different cameras have a filter to render certain types of objects

Isn't that basically what a world is? Each camera renders only the objects that are in its world, so the world is basically a filter for the camera.

1 hour ago, Ma-Shell said:

 

Isn't that basically what a world is? Each camera renders only the objects that are in its world, so the world is basically a filter for the camera

In an abstract sense, yes, but there’s a thousand more details than that. The strictness of both Vulkan and the multithreaded architecture mean I can’t design things that are “fast and loose” anymore.

11 hours ago, Ma-Shell said:

Isn't that basically what a world is? Each camera renders only the objects that are in its world, so the world is basically a filter for the camera

I'm going to try adding a command like this:

void World::AddRenderPass(shared_ptr<World> world, const int order)

The order value can be -1 (background), 1 (foreground), or 0 (mix with current world). No guarantee yet but I will try and see how well it works.


Okay, it looks like that will probably not work. The data management is just too complicated. I think a filter value will probably work best, because that can be handled easily in the culling routine.


I see two possible issues with filters:
1. I understand a filter as having some sort of "layer-id" on the objects, with a camera only rendering objects that have a given layer-id. Suppose you have two objects A and B, and you want to first render both A and B, then render only B from a different camera (e.g. you have a vampire in front of a mirror: you want the main view to include the vampire, but the mirror should render the scene from its perspective without the vampire). This would not be easily possible with a layer-id.

2. Performance: you have to run through all objects and check whether they match the filter in order to decide whether to cull them or not. If you instead just walk the world's list of entities, this does not happen.

 

Why not use something like this:

camWorldA->SetClearMode(ColorBit | DepthBufferBit);
camWorldB->SetClearMode(DepthBufferBit);
camWorldC->SetClearMode(DepthBufferBit);

context->ClearWorlds(); // nothing is being rendered
context->AddWorld(worldA); // All cameras of world A are rendered
context->AddWorld(worldB); // First all cameras of world A are rendered, then all of world B
context->AddWorld(worldC); // First all cameras of world A are rendered, then all of world B, then all of world C

This would give the user maximum flexibility and only require the context to hold a list of worlds, which are rendered one after another instead of having a single active world.

For compatibility and convenience reasons, you could additionally define

void World::Render(Context* ctx)
{
	ctx->ClearWorlds();
	ctx->AddWorld(this);
}

This way, you could still use the system as before without ever knowing that multiple worlds can be rendered.

 

EDIT: I see, my vampire example wouldn't work with this system either (unless you allow a mesh to be in multiple worlds), but I still think this is quite a flexible and easy-to-use system without much overhead.

Edited by Ma-Shell


I will have to experiment some more with this. Interestingly, this overlaps with some problems with 2D drawing. In reality there is no such thing as 2D graphics on your computer; it is always 3D graphics. I think my approach here will be to stop trying to hide the fact that 2D is really 3D, even if it does not fit our conceptions of how it should be. Stay tuned...


2 hours ago, Josh said:

I think my approach here will be to stop trying to hide the fact that 2D is really 3D even if it does not fit our conceptions of how it should be

I'm curious what this would allow the end-user to do in exchange for making a straightforward system more complex.  I know in the past people wanted to play movies on textures and map cameras to textures.  And a lot of games do 3D-ish UI (font and images on textures).  I wonder if this system would let us do them with reasonable ease.

3 hours ago, gamecreator said:

I'm curious what this would allow the end-user to do in exchange for making a straightforward system more complex.  I know in the past people wanted to play movies on textures and map cameras to textures.  And a lot of games do 3D-ish UI (font and images on textures).  I wonder if this system would let us do them with reasonable ease.

I started digging into 2D drawing recently because @Lethal Raptor Games was asking about it in the private forum. I found that our model rendering system works well for 2D sprites as well, but in Vulkan you have no guarantee what order objects will be drawn in. I realized how stupid it is to do back-to-front rendering of 2D objects and that we should just use the depth buffer to handle this. I mean, we don't render 3D objects in order by distance, so why are 2D objects any different?

I think the 2D rendering will take place with a separate framebuffer, and then the results will be drawn on top of the 3D view. I think DOOM 2016 did this, for the same reasons. See the section here on "User Interface": http://www.adriancourreges.com/blog/2016/09/09/doom-2016-graphics-study/

Naturally this means that 3D-in-2D elements are very simple to add, but it also means you only have a Z-position for ordering. The rendering speed of this will be unbelievably fast. 100,000 unique sprites each with a different image would be absolutely no problem.



I thought perhaps 2D rendering would require an orthographic camera to be created and rendered on top of the 3D camera, but that would invalidate the depth buffer contents, and we want to hang onto that for post-processing effects. Unless we insert a post-processing step in between camera passes like this:

  1. Render perspective camera.
  2. Draw post-processing effects.
  3. Clear depth buffer and render 2D elements.


Ok, it's just a Vulkan limitation that makes things more complex.  So, to draw the layers of a clock in the proper order, for example, we'd have to assign proper Z coordinates to the back, the hour hand, minute hand and second hand?  Is that how the engine would know what goes on top of what?
[Image: a clock face with separate layers for the back, hour hand, minute hand, and second hand]

 


You would probably have three sprites with a material with alpha masking enabled, and position them along the Z axis the way you would want them to appear. Imagine if you were doing it in 3D. Which you are.

Rotation of sprites is absolutely no problem with this approach, along with many other options.

You can also use polygon meshes very easily for 2D rendering. For example, the clock hands could be a model, perhaps loaded from an SVG file.



Okay, I have rendering with multiple worlds working now. The command is like this:

void World::Combine(shared_ptr<World> world)

All the cameras in the added world will be used in rendering, using the contents of the world they belong to. There is no ordering of the worlds; instead, the cameras within them are drawn with the same rules as a single world with multiple cameras:

  • By default, cameras are rendered in the order they are created.
  • A camera order setting can be used to override this and become the primary sorting method. (If two cameras have the same order value, then the creation order is used to sort them.)

So you can do something like this:

//Create main world
auto mainworld = CreateWorld();
auto maincamera = CreateCamera(mainworld);

//Create world for HUD rendering
auto foreground = CreateWorld();
auto fgcam = CreateCamera(foreground);
fgcam->SetProjectionMode(PROJECTION_ORTHOGRAPHIC);
fgcam->SetClearMode(CLEAR_DEPTH);

auto healthbar = CreateSprite(foreground);

mainworld->Combine(foreground);

//Draw both worlds. Ortho HUD camera will be drawn on top since it was created last.
mainworld->Render(framebuffer);

That means that drawing 2D graphics on top of 3D graphics requires a world and camera to be created for this purpose. There are no "2D" commands, really; there is just orthographic camera projection. This is also extremely flexible, and the same fast rendering the normal 3D graphics use will make 2D graphics ridiculously fast.

Leadwerks 4 used a lot of render-to-texture and caching to make the GUI fast, but that will be totally unnecessary here, I think.



Is it possible to combine more than 2 worlds (possibly by daisy chaining them)?



Yes, you would just call Combine again. It just adds the indicated world to a list. It’s one-way, so “Combine” might not be the best nomenclature.



Curious: is there a reason this isn't all done under the hood to keep the commands as simple as they are now?  In other words, getting back to my previous question, what extra functionality does this offer the end-user?  What practical things will you be able to do with images and text that you can't now?


17 minutes ago, gamecreator said:

Curious: is there a reason this isn't all done under the hood to keep the commands as simple as they are now?  In other words, getting back to my previous question, what extra functionality does this offer the end-user?  What practical things will you be able to do with images and text that you can't now?

I think it mostly boils down to making 3D models in the UI possible. I could see some other things working nicely like particles. So if we need to account for this, let’s just cut the foreplay and have one method that handles everything.

There's also a fundamental shift in my approach to design. I'm not trying to make the easiest-to-use game engine anymore, because the easiest game to develop is a pre-built asset flip.

I am not interested in making Timmy's first game maker, so if that means Turbo Engine trades ease of use in the simple case for more speed and functionality, I am okay with that. I almost feel like people don't respect an API that isn't difficult to use.

We'll see; there is still some time before this is finalized.



Appreciate it.  And I think you know I'm not trying to bust your balls.  I just feel like if there's sacrifices to be made, it should be for a good cause, which it sounds like it will be.  And I also appreciate that building a good foundation now, even if the rewards aren't immediately seen, is still a good thing.  Was mostly just curious about what Turbo might be shaping into.

16 hours ago, gamecreator said:

Appreciate it.  And I think you know I'm not trying to bust your balls.  I just feel like if there's sacrifices to be made, it should be for a good cause, which it sounds like it will be.  And I also appreciate that building a good foundation now, even if the rewards aren't immediately seen, is still a good thing.  Was mostly just curious about what Turbo might be shaping into.

Definitely, this is the best time to discuss everything.

I think this is the right way to go, even though it makes simple drawing harder. But which is harder, learning one system that does everything you want, or learning two systems that have some overlapping capabilities but work differently? 😨

I am working on text now. It works by creating a textured model for the text you want to draw. For an FPS counter, for example, I recommend creating models for each number you display, and storing them in a C++ map or Lua table, like this:

void UpdateFPSDisplay(int fps)
{
	currentFPSDisplayModel->Hide();
	//Create and cache a text model the first time each FPS value is displayed
	if (textcache[fps] == nullptr) textcache[fps] = CreateText(world, font, String(fps), 12);
	currentFPSDisplayModel = textcache[fps];
	currentFPSDisplayModel->Show();
}

This is more complicated than just calling DrawText(fps) but it is far more powerful. Text rendering with this system will be instantaneous, whereas it can be quite slow in Leadwerks 4, which performs one draw call for each character.



