Blog Entries posted by Josh

  1. Josh
    Previously, we saw how the new renderer can combine multiple cameras and even multiple worlds in a single render to combine 3D and 2D graphics. During the process of implementing Z-sorting for multiple layers of transparency, I found that Vulkan does in fact respect rasterization order. That is, objects are in fact drawn in the same order you provide draw calls to a command buffer.
    Furthermore, individual primitives (polygons) are also rendered in the order they are stored in the index buffer:
    Now if you were making a 2D game with 1000 zombie sprites onscreen you would undoubtedly want to use 3D-in-2D rendering with an orthographic camera. Batching and depth discard would give you much faster performance when the number of objects goes up. However, the 2D aspect of most games is relatively simple, with only a dozen or so 2D sprites making up the user interface. Given that 2D graphics are not normally going to be much of a bottleneck, and that the biggest performance savings we have achieved was in making text a static object, I decided to rework the 2D rendering system into something that was a little simpler to use.
    Sprites are no longer a 3D entity, but are a new type of pure 2D object. They behave similarly to entities, with position, rotation, and scale commands, but they only use 2D coordinates:
    //Create a sprite
    auto sprite = CreateSprite(world,100,100);

    //Make blue
    sprite->SetColor(0,0,1);

    //Position in upper-left corner of screen
    sprite->SetPosition(10,10);

    Sprites have a handle you can set. By default this is in the upper-left corner of the sprite, but you can change it to recenter them. Sprites can also be rotated around the Z axis:
    //Center the handle
    sprite->SetHandle(0.5,0.5);

    //Rotation around center
    sprite->SetRotation(45);

    SVG vector images are great for 2D drawing and GUIs because they can scale for different display resolutions. We support these as well, with an optional scale value the image can be rasterized at.
    auto sprite = LoadSprite(world, "tiger.svg", 0, 2.0);
    Text is now just another type of sprite:
    auto text = CreateSprite(world, font, L"Hello, how are you today?\nI am fine.", 72, TEXT_LEFT);

    These sprites are all displayed within the same world as the 3D rendering, so unlike what I previously wrote about...
    You do not have to create extra cameras or worlds just to draw 2D graphics. (If you are doing something advanced then the multi-camera method I previously described is a good option, but you have to have very demanding needs for it to make a difference.) The regular screen coordinates you are used to will be used ([0,0] is the top-left corner). By default sprites will be drawn in the order they are created. However, I definitely see a need for additional control here and I am open to ideas. Should there be a sprite order value, a MoveToFront() method, or a system of different layers? I'm not sure yet.
    I'm also not sure how per-camera sprites will be handled. At this time sprites are stored in a per-world list, but we will want some 2D elements to appear only on certain cameras.
    I am going to try to get an update out soon with these features so you can try them out yourself.
  2. Josh
    Previously I described how multiple cameras can be combined in the new renderer to create an unlimited depth buffer. That discussion led into multi-world rendering and 2D drawing. Surprisingly, there is a lot of overlap in these features, and it makes sense to solve all of it at one time.
    Old 2D rendering systems are designed around the idea of storing a hierarchy of state changes. The renderer would crawl through the hierarchy and perform commands as it went along, rendering all 2D elements in the order they should appear. It made sense for the design of the first graphics cards, but this style of rendering is really inefficient on modern graphics hardware. Today's hardware works best with batches of objects, using the depth buffer to handle which object appears on top. We don't sort 3D objects back-to-front because it would be monstrously inefficient, so why should 2D graphics be any different?
    We can get much better results if we use the same fast rendering techniques we use for 3D graphics and apply them to 2D shapes. After all, the only difference between 3D and 2D rendering is the shape of the camera projection matrix. For this reason, Turbo Engine will use 2D-in-3D rendering for all 2D drawing. You can render a pure 2D scene by setting the camera projection mode to orthographic, or you can create a second orthographic camera and render it on top of your 3D scene. This has two big implications:
    Performance will be incredibly fast. I predict 100,000 uniquely textured sprites will render pretty much instantaneously. In fact, anyone making a 2D PC game who is having trouble with performance will be interested in using Turbo Engine.

    Advanced 3D effects will be possible that we aren't used to seeing in 2D. For example, lighting works with 2D rendering with no problems, as you can see below. Mixing of 3D and 2D elements will be possible to make inventory systems and other UI items. Particles and other objects can be incorporated into the 2D display.
    The big difference you will need to adjust to is that there are no 2D drawing commands. Instead you have persistent objects that use the same system as the 3D rendering.
    Sprites
    The primary 2D element you will work with is the Sprite entity, which works the same as the 3D sprites in Leadwerks 4. Instead of drawing rectangles in the order you want them to appear, you will use the Z position of each entity and let the depth buffer take care of the rest, just like we do with 3D rendering. I am also adding support for animation frames and other features, and these can be used with 2D or 3D rendering.
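    For example, here is a minimal sketch of depth-based ordering using the sprite calls shown later in this post; the Z convention is an assumption:

    //Two overlapping sprites; the depth buffer decides which appears on top,
    //regardless of creation order
    auto back = CreateSprite(world);
    back->SetColor(1, 0, 0);
    back->SetScale(200, 200);
    back->SetPosition(10, -10, 0.5); //assumed to be further from the orthographic camera

    auto front = CreateSprite(world);
    front->SetColor(0, 1, 0);
    front->SetScale(200, 200);
    front->SetPosition(60, -60, 0.0); //nearer, so it draws on top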

    Rotation and scaling of sprites is of course trivial. You could even use effects like distance fog! Add a vector joint to each entity to lock the Z axis in the same direction and Newton will transform into a nice 2D physics system.
    Camera Setup
    By default, with a zoom value of 1.0 an orthographic camera maps so that one meter in the world equals one screen pixel. We can position the camera so that world coordinates match screen coordinates, as shown in the image below.
    auto camera = CreateCamera(world);
    camera->SetProjectionMode(PROJECTION_ORTHOGRAPHIC);
    camera->SetRange(-1,1);

    iVec2 screensize = framebuffer->GetSize();
    camera->SetPosition(screensize.x * 0.5, -screensize.y * 0.5);

    Note that unlike screen coordinates in Leadwerks 4, world coordinates point up in the positive direction.

    We can create a sprite and reset its center point to the upper left hand corner of the square like so:
    auto sprite = CreateSprite(world);
    sprite->mesh->Translate(0.5,-0.5,0);
    sprite->mesh->Finalize();
    sprite->UpdateBounds();

    And then we can position the sprite in the upper left-hand corner of the screen and scale it:
    sprite->SetColor(1,0,0);
    sprite->SetScale(200,50);
    sprite->SetPosition(10,-10,0);
    This would result in an image something like this, with precise alignment of screen pixels:

    Here's an idea: Remember the opening sequence in Super Metroid on SNES, when the entire world starts tilting back and forth? You could easily do that just by rotating the camera a bit.
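    A rough sketch of that effect, assuming the orthographic camera set up above and a per-frame tilt update:

    #include <cmath>

    //Inside the main loop: rock the camera back and forth a few degrees
    static float t = 0.0f;
    t += 0.01f;
    float tilt = std::sin(t) * 5.0f; //oscillates between -5 and +5 degrees
    camera->SetRotation(0, 0, tilt);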
    Displaying Text
    Instead of drawing text with a command, you will create a text model. This is a series of rectangles of the correct size with their texture coordinates set to display a letter for each rectangle. You can include a line return character in the text, and it will create a block of multiple lines of text in one object. (I may add support for text made out of polygons at a later time, but it's not a priority right now.)
    shared_ptr<Model> CreateText(shared_ptr<World> world, shared_ptr<Font> font, const std::wstring& text, const int size)

    The resulting model will have a material with the rasterized text applied to it, shown below with alpha blending disabled so you can see the mesh background. Texture coordinates are used to select each letter, so the font only has to be rasterized once for each size it is used at:

    Every piece of text you display needs to have a model created for it. If you are displaying the framerate or something else that changes frequently, then it makes sense to create a cache of models you use so your game isn't constantly creating new objects. If you wanted, you could modify the vertex colors of a text model to highlight a single word.
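    A minimal sketch of such a cache, using the CreateText function described above (keying only on the string is a simplification; a real cache would include the font and size in the key):

    #include <map>
    #include <string>

    //Reuse text models for strings we have already rasterized
    std::map<std::wstring, shared_ptr<Model>> textcache;

    shared_ptr<Model> GetCachedText(shared_ptr<World> world, shared_ptr<Font> font, const std::wstring& text, const int size)
    {
        auto it = textcache.find(text);
        if (it != textcache.end()) return it->second;
        auto model = CreateText(world, font, text, size);
        textcache[text] = model;
        return model;
    }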

    And of course all kinds of spatial transformations are easily achieved.

    Because the text is just a single textured mesh, it will render very fast. This is a big improvement over the DrawText() command in Leadwerks 4, which performs one draw call for each character.
    The font loading command no longer accepts a size. You load the font once and a new image will be rasterized for each text size the engine requests internally:
    auto font = LoadFont("arial.ttf");
    auto text = CreateText(foreground, font, "Hello, how are you today?", 18);

    Combining 2D and 3D
    By using two separate worlds we can control which items the 3D camera draws and which items the 2D camera draws. (The foreground camera will be rendered on top of the perspective camera, since it is created after it.) We need a second camera so that 2D elements are rendered in a second pass with a fresh depth buffer:
    //Create main world and camera
    auto world = CreateWorld();
    auto camera = CreateCamera(world);
    auto scene = LoadScene(world,"start.map");

    //Create world for 2D rendering
    auto foreground = CreateWorld();
    auto fgcam = CreateCamera(foreground);
    fgcam->SetProjection(PROJECTION_ORTHOGRAPHIC);
    fgcam->SetClearMode(CLEAR_DEPTH);
    fgcam->SetRange(-1,1);

    auto UI = LoadScene(foreground,"UI.map");

    //Combine rendering
    world->Combine(foreground);

    while (true)
    {
        world->Update();
        world->Render(framebuffer);
    }

    Overall, this will take more work to set up and get started with than the simple 2D drawing in Leadwerks 4, but the performance and additional control you get are well worth it. This whole approach makes so much sense to me, and I think it will lead to some really cool possibilities.
    As I have explained elsewhere, performance has replaced ease of use as my primary design goal. I like the results I get with this approach because I feel the design decisions are less subjective.
  3. Josh
    I'm putting together ideas for a racing game template to add to Leadwerks. We already support vehicles. The challenge is to put together something that looks and feels slick and professional, like a real game people want to play. The finished demo will be submitted to Greenlight, GameJolt, IndieDB, itch.io, etc.
     
    Gameplay
    First, I wanted to think about what style of racing I want this to be. I don't want street racing because it's kind of boring, and the level design is more involved. I don't want Spintires-style technical offroading because it's too specialized. I want some fun medium-paced 4x4 racing like in the video below, but modern.


     
    This single-player game will pit you against seven computer-controlled opponents. You win by coming in the top three places. A time-trial option will allow you to compare your scores to other players via Steam leaderboards.
     
    The HUD will display a speedometer, your place in the race, current lap, and total and current lap time.
     
    Cars will be 4x4 trucks, identical except with a different texture.
     
    The player can turn headlights on and off, honk their horn, and drive. The transmission will always be automatic.
     
    Pressing the C key will alternate between views, including 3rd person, 3rd person further away, first-person (in-car), and a free third person camera that doesn't rotate with the car.
     
    Checkpoints will be placed throughout the level, with a sound when you pass through.
     
    After the race is complete, a replay will be performed from data recorded during the race, and scores will be shown on the screen.
     
    Environment
    I want the environment to be scrubby arid desert with big dramatic crags in the background.
     

     
    Roads will be painted on with a dirt texture, and decals will be used to add tire tracks sporadically. Decals will fade out at a fairly close distance, as I plan on having lots of them in the map.
     
    The game will allow you to set the time of day and weather. I have not decided if the weather and time of day will change as the race progresses. Time of day includes night, morning, afternoon, and evening.
     
    Weather can also be set, with options for sunny, rainy, and snowy. Snow will use a post-processing effect to add snow on all upwards-facing surfaces. Tire grip will be reduced in snowy and rainy conditions.
     
    The vehicles will throw up a cloud of dirt, mud, water, or other material, based on the primary texture of the terrain where they are contacting. Dirt, water, raindrops, ice, snow, and other effects will hit the camera and remain for a moment before fading.
     
    Screen-space reflection will be showcased heavily on the vehicle bodies.
     
    One song will play for the menu and one for the race. The song will sound something like this at 0:44 because it sounds modern:


     
    Or maybe this:

     
    Scope Limits
    The game is single-player only.
    I'm not going to bother with changes to the terrain or vehicles leaving tread marks.
    There will be no arms visible when the camera is inside the car.
    The environment will be static. There will be no destruction of the environment, and no moving objects or physically interactive items except for the cars.
    I am not going to implement an overhead map.
    I am not going to implement vehicle damage.
    Other than finishing the game GUI, I do not want to implement any new features in Leadwerks to complete this.
    The game will not attempt to be realistic or follow any real-world racing events.
    The race will not portray an audience or people standing around.
    No weapons.

  4. Josh
    Current generation graphics hardware only supports up to a 32-bit floating point depth buffer, and that isn't adequate for large-scale rendering because there isn't enough precision to make objects appear in the correct order and prevent z-fighting.

    After trying out a few different approaches I found that the best way to support large-scale rendering is to allow the user to create several cameras. The first camera should have a range of 0.1-1000 meters, the second would use the same near / far ratio and start where the first one left off, with a depth range of 1000-10,000 meters. Because the ratio of near to far ranges is what matters, not the actual distance, the numbers can get very big very fast. A third camera could be added with a range out to 100,000 kilometers!
    The trick is to set the new Camera::SetClearMode() command to make it so only the furthest-range camera clears the color buffer. Additional cameras clear the depth buffer and then render on top of the previous draw. You can use the new Camera::SetOrder() command to ensure that they are drawn in the order you want.
    auto camera1 = CreateCamera(world);
    camera1->SetRange(0.1,1000);
    camera1->SetClearMode(CLEAR_DEPTH);
    camera1->SetOrder(1);

    auto camera2 = CreateCamera(world);
    camera2->SetRange(1000,10000);
    camera2->SetClearMode(CLEAR_DEPTH);
    camera2->SetOrder(2);

    auto camera3 = CreateCamera(world);
    camera3->SetRange(10000,100000000);
    camera3->SetClearMode(CLEAR_COLOR | CLEAR_DEPTH);
    camera3->SetOrder(3);

    Using this technique I was able to render the Earth, sun, and moon to scale. The three objects are actually sized correctly, at the correct distances. You can see that from Earth orbit the sun and moon appear roughly the same size. The sun is much bigger, but also much further away, so this is exactly what we would expect.

    You can also use these features to render several cameras in one pass to show different views. For example, we can create a rear-view mirror easily with a second camera:
    auto mirrorcam = CreateCamera(world);
    mirrorcam->SetParent(maincamera);
    mirrorcam->SetRotation(0,180,0);
    mirrorcam->SetClearMode(CLEAR_COLOR | CLEAR_DEPTH);

    //Set the camera viewport to only render to a small rectangle at the top of the screen:
    mirrorcam->SetViewport(framebuffer->GetSize().x/2-200,10,400,50);

    This creates a "picture-in-picture" effect like what is shown in the image below:

    Want to render some 3D HUD elements on top of your scene? This can be done with an orthographic camera:
    auto uicam = CreateCamera(world);
    uicam->SetClearMode(CLEAR_DEPTH);
    uicam->SetProjectionMode(PROJECTION_ORTHOGRAPHIC);

    This will make 3D elements appear on top of your scene without clearing the previous render result. You would probably want to move the UI camera far away from the scene so only your HUD elements appear in the last pass.
  5. Josh
    A new build is available on the beta branch on Steam.
    Updated to Visual Studio 2019.
    Updated to the latest versions of the OpenVR and Steamworks SDKs.
    Fixed object tracking with seated VR mode. Note that the way seated VR works now is a little different: you need to use the VR orientation instead of just positioning the camera (see below).
    Added VR::SetRotation() so you can rotate the world around in VR. The VRPlayer script has rotation added to the left controller. Press the touchpad to turn left and right. Any arbitrary rotation will work, including roll and pitch.

    Here are the offset commands:
    static void VR::SetOffset(const Vec3& position);
    static void VR::SetOffset(const float x, const float y, const float z);
    static void VR::SetOffset(const float x, const float y, const float z, const float pitch, const float yaw, const float roll);
    static void VR::SetOffset(const Vec3& position, const Vec3& rotation);
    static void VR::SetOffset(const Vec3& position, const Quat& rotation);
    static Vec3 VR::GetOffset();
    static Vec3 VR::GetRotation();
    static Quat VR::GetQuaternion();
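    As a usage sketch, snap turning could be built on these commands; the 45-degree step and the wrapper function are my own illustration, not the actual VRPlayer script:

    //Rotate the VR playspace in 45-degree steps (call with direction = 1 or -1
    //when the touchpad is pressed)
    void SnapTurn(const float direction)
    {
        Vec3 rotation = VR::GetRotation();
        rotation.y += 45.0f * direction; //turn left or right
        VR::SetOffset(VR::GetOffset(), rotation);
    }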
  6. Josh
    Still a lot of things left to do. Now that I have very large-scale rendering working, people want to fill it up with very big terrains. A special system will be required to handle this, which adds another layer to the terrain system. Also, I want to resume work on the voxel GI system, as I feel its results are much better than paying the performance penalty of ray-tracing. There are a few odds and ends like AI navigation and cascaded shadow maps to finish up.
    I am planning to have the engine more or less finished in the spring, and begin work on the new editor. Our workflow isn't going to change much. The new editor is just going to be a more refined version of what we already have, although it is a completely new program written from scratch, this time in C++.
    It's kind of overwhelming but I have confidence in the whole direction and strategy of this new product.
  7. Josh
    If you were a user of BlitzMax in the past, you will love these convenience commands in Turbo Engine:
    int Notify(const std::wstring& title, const std::wstring& message, shared_ptr<Window> window = nullptr, const int icon = 0)
    int Confirm(const std::wstring& title, const std::wstring& message, shared_ptr<Window> window = nullptr, const int icon = 0)
    int Proceed(const std::wstring& title, const std::wstring& message, shared_ptr<Window> window = nullptr, const int icon = 0)
  8. Josh
    Gamers have always been fascinated with the idea of endless areas to roam.  It seems we are always artificially constrained within a small area to play in, and the possibility of an entire world outside those bounds is tantalizing.  The game FUEL captured this idea by presenting the player with an enormous world that took hours to drive across:
    In the past, I always implemented terrain with one big heightmap texture, which had a fixed size like 1024x1024, 2048x2048, etc. However, our vegetation system, featured in the book Game Engine Gems 3, required a different approach. There were far too many instances of grass, trees, and rocks to store them all in memory, and I wanted to do something really radical. The solution was to create an algorithm that could instantly calculate all the vegetation instances in a given area. The algorithm would always produce the same result, but the actual data would never be saved; it was just retrieved in the area where you needed it, when you needed it. So with a few modifications, our vegetation system is already set up to generate infinite instances far into the distance.
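    As an illustration of the idea (not the engine's actual code), a deterministic generator might hash the cell coordinates and a seed, so the same instances can be recreated for any area on demand:

    #include <cstdint>
    #include <vector>

    struct Instance { float x, z, rotation; };

    //A small integer hash; the exact function just needs to be deterministic
    uint32_t Hash(uint32_t x)
    {
        x ^= x >> 16; x *= 0x45d9f3b; x ^= x >> 16;
        return x;
    }

    //The same (cellx, cellz, seed) input always yields the same instances,
    //so nothing ever needs to be stored
    std::vector<Instance> GenerateCell(int cellx, int cellz, uint32_t seed, int count)
    {
        std::vector<Instance> instances;
        uint32_t h = Hash(seed ^ Hash(cellx * 73856093u) ^ Hash(cellz * 19349663u));
        for (int n = 0; n < count; ++n)
        {
            h = Hash(h); float fx = (h & 0xffff) / 65535.0f; //0..1 within the cell
            h = Hash(h); float fz = (h & 0xffff) / 65535.0f;
            h = Hash(h); float r = (h & 0xffff) / 65535.0f * 360.0f;
            instances.push_back({ cellx + fx, cellz + fz, r });
        }
        return instances;
    }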

    However, terrain is problematic.  Just because an area is too far away to see doesn't mean it should stop existing.  If we don't store the terrain in memory then how do we prevent far away objects from falling into the ground?  I don't like the idea of disabling far away physics because it makes things very complex for the end user.  There are definitely some tricks we can add like not updating far away AI agents, but I want everything to just work by default, to the best of my ability.
    It was during the development of the vegetation system that I realized the MISSING PIECE to this puzzle. The secret is in the way collision works with vegetation. When any object moves, all the collidable vegetation instances around it are retrieved and collision is performed on this fetched data. We can do the exact same thing with terrain. Imagine a log rolling across the terrain. We could use an algorithm to generate all the triangles it potentially could collide with, like in the image below.

    You can probably imagine how it would be easy to lay out an infinite grid of flat squares around the player, wherever he is standing in the world.

    What if we only save heightmap data for the squares the user modifies in the editor?  They can't possibly modify the entire universe, so let's just save their changes and make the default terrain flat.  It won't be very interesting, but it will work, right?
    What if instead of being flat by default, there was a function we had that would procedurally calculate the terrain height at any point?  The input would be the XZ position in the world and the output would be a heightmap value.
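    Here is a sketch of what such a function might look like, using a few octaves of interpolated value noise; the specific noise is illustrative, since any deterministic function of X and Z (plus a seed) would work:

    #include <cmath>
    #include <cstdint>

    //Deterministic lattice noise: same (x, z, seed) always returns the same value in 0..1
    float Noise2D(int x, int z, uint32_t seed)
    {
        uint32_t h = (uint32_t)x * 374761393u + (uint32_t)z * 668265263u + seed;
        h = (h ^ (h >> 13)) * 1274126177u;
        return ((h ^ (h >> 16)) & 0xffff) / 65535.0f;
    }

    float SmoothNoise(float x, float z, uint32_t seed)
    {
        int ix = (int)std::floor(x), iz = (int)std::floor(z);
        float fx = x - ix, fz = z - iz;
        //Bilinear interpolation of the four surrounding lattice values
        float a = Noise2D(ix, iz, seed),     b = Noise2D(ix + 1, iz, seed);
        float c = Noise2D(ix, iz + 1, seed), d = Noise2D(ix + 1, iz + 1, seed);
        float top = a + (b - a) * fx, bottom = c + (d - c) * fx;
        return top + (bottom - top) * fz;
    }

    //Procedural terrain height: input is the XZ position, output is a height value
    float TerrainHeight(float x, float z, uint32_t seed)
    {
        float height = 0.0f, amplitude = 64.0f, frequency = 1.0f / 256.0f;
        for (int octave = 0; octave < 4; ++octave)
        {
            height += SmoothNoise(x * frequency, z * frequency, seed + octave) * amplitude;
            amplitude *= 0.5f;
            frequency *= 2.0f;
        }
        return height;
    }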

    If we used this, then we would have an entire procedurally generated terrain combined with parts that the developer modifies by hand with the terrain tools.  Only the hand-modified parts would have to be saved to a series of files that could be named "mapname_x_x.patch", i.e. "magickingdom_54_72.patch".  These patches could be loaded from disk as needed, and deleted from memory when no longer in use.
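    Building on that, the height lookup might check for a hand-edited patch first and fall back to the procedural function; LoadPatchFromDisk and the 64x64 patch size here are hypothetical:

    #include <cmath>
    #include <cstdint>
    #include <map>
    #include <memory>

    struct Patch { float height[64][64]; }; //hypothetical 64x64 grid of saved heights

    std::map<std::pair<int, int>, std::shared_ptr<Patch>> loadedpatches;

    //Hypothetical: reads "mapname_x_x.patch" from disk, returns null if never edited
    std::shared_ptr<Patch> LoadPatchFromDisk(int px, int pz);

    float GetHeight(float x, float z, uint32_t seed)
    {
        int px = (int)std::floor(x / 64.0f), pz = (int)std::floor(z / 64.0f);
        auto it = loadedpatches.find({ px, pz });
        if (it == loadedpatches.end())
        {
            auto patch = LoadPatchFromDisk(px, pz);
            if (patch) it = loadedpatches.insert({ { px, pz }, patch }).first;
        }
        if (it != loadedpatches.end())
        {
            int lx = (int)std::floor(x) - px * 64;
            int lz = (int)std::floor(z) - pz * 64;
            return it->second->height[lx][lz];
        }
        return TerrainHeight(x, z, seed); //procedural fallback from the sketch above
    }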
    The real magic would be in developing an algorithm that could quickly generate a height value given an XZ position.  A random seed could be introduced to allow us to create an endless variety of procedural landscapes to explore.  Perhaps a large brush could even be used to assign characteristics to an entire region like "mountainy", "plains", etc.
    The possibilities of what we can do in Leadwerks Engine 5 are intriguing.  Granted I don't have all the answers right now, but implementing a system like this would be a major step forward that unlocks an enormous world to explore.  What do you think?

  9. Josh
    New commands in Turbo Engine will add better support for multiple monitors. The new Display class lets you iterate through all your monitors:
    for (int n = 0; n < CountDisplays(); ++n)
    {
        auto display = GetDisplay(n);
        Print(display->GetPosition()); //monitor XY coordinates
        Print(display->GetSize()); //monitor size
        Print(display->GetScale()); //DPI scaling
    }

    The CreateWindow() function now takes a parameter for the monitor to create the window on / relative to.
    auto display = GetDisplay(0);
    Vec2 scale = display->GetScale();
    auto window = CreateWindow(display, "My Game", 0, 0, 1280.0 * scale.x, 720.0 * scale.y, WINDOW_TITLEBAR | WINDOW_RESIZABLE);

    The WINDOW_CENTER style can be used to center the window on one display.
    You can use GetDisplay(DISPLAY_PRIMARY) to retrieve the main display. This will be the same as GetDisplay(0) on systems with only one monitor.
  10. Josh
    A huge update is available for Turbo Engine Beta.
    Hardware tessellation adds geometric detail to your models and smooths out sharp corners.
    The new terrain system is live, with fast performance, displacement, and support for up to 255 material layers.
    Plugins are now working, with sample code for loading MD3 models and VTF textures.
    Shader families eliminate the need to specify a lot of different shaders in a material file.
    Support for multiple monitors and better control of DPI scaling.

    Notes:
    Terrain currently has cracks between LOD stages, as I have not yet decided how I want to handle this.
    Tessellation has some "shimmering" effects at some resolutions.
    Terrain may display a wire grid on parts.
    Directional lights are supported but cast no shadows.
    Tested on Nvidia and AMD; did not test on Intel.

    Subscribers can get the latest beta in the private forum here.

     
     
  11. Josh
    I wanted to work on something a bit easier before going back into voxel ray tracing, which is another difficult problem. "Something easier" was terrain, and it ended up consuming the entire month of August, but I think you will agree it was worthwhile.
    In Leadwerks Game Engine, I used clipmaps to pre-render the terrain around the camera to a series of cascading textures. You can read about the implementation here:
    This worked very well with the hardware we had available at the time, but did result in some blurriness in the terrain surface at far distances. At the time this was invented, we had some really severe hardware restrictions, so this was the best solution then. I also did some experiments with tessellation, but a finished version was never released.
    New Terrain System
    Vulkan gives us a lot more freedom to follow our dreams. When designing a new system, I find it useful to come up with a list of attributes I care about, and then look for the engineering solution that best meets those needs.
    Here's what we want:
    Unlimited number of texture layers
    Pixel-perfect resolution at any distance
    Support for tessellation, including physics that match the tessellated surface
    Fast performance independent of the number of texture layers (more layers should not slow down the renderer)

    Hardware tessellation is easy to make a basic demo for, but it is hard to turn it into a usable feature, so I decided to attack this first. You can read my articles about the implementation below. Once I got the system worked out for models, it was pretty easy to carry that over to terrain.
    So then I turned my attention to the basic terrain system. In the new engine, terrain is a regular old entity. This means you can move it, rotate it, and even flip it upside down to make a cave level. Ever wonder what a rotated terrain looks like?

    Now you know.
    You can create multiple terrains, instead of just having one terrain per world like in Leadwerks. If you just need a little patch of terrain in a mostly indoor scene, you can create one with exactly the dimensions you want and place it wherever you like. And because terrain is running through the exact same rendering path as models, shadows work exactly the same.
    Here is some of the terrain API, which will be documented in the new engine:
    shared_ptr<Terrain> CreateTerrain(shared_ptr<World> world, const int tilesx, const int tiles, const int patchsize = 32, const int LODLevels = 4)
    shared_ptr<Material> Terrain::GetMaterial(const int x, const int y, const int index = 0)
    float Terrain::GetHeight(const int x, const int y, const bool global = true)
    void Terrain::SetHeight(const int x, const int y, const float height)
    void Terrain::SetSlopeConstraints(const float minimum, const float maximum, const float range, const int layer)
    void Terrain::SetHeightConstraints(const float minimum, const float maximum, const float range, const int layer)
    int Terrain::AddLayer(shared_ptr<Material> material)
    void Terrain::SetMaterial(const int index, const int x, const int y, const float strength = 1.0, const int threadindex = 0)
    Vec3 Terrain::GetNormal(const int x, const int y)
    float Terrain::GetSlope(const int x, const int y)
    void Terrain::UpdateNormals()
    void Terrain::UpdateNormals(const int x, const int y, const int width, const int height)
    float Terrain::GetMaterialStrength(const int x, const int y, const int index)
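    As a usage sketch based on the calls above (the material path and the hill-shaping loop are my own illustration):

    //Create a 64x64 terrain, add a material layer, and raise a hill in the middle
    auto terrain = CreateTerrain(world, 64, 64);

    auto rock = LoadMaterial("Materials/Terrain/rock.mat"); //hypothetical material path
    int layer = terrain->AddLayer(rock);

    for (int x = 0; x < 64; ++x)
    {
        for (int y = 0; y < 64; ++y)
        {
            float dx = (x - 32) / 32.0f;
            float dy = (y - 32) / 32.0f;
            float h = 1.0f - (dx * dx + dy * dy);
            if (h < 0.0f) h = 0.0f;
            terrain->SetHeight(x, y, h * 10.0f);
        }
    }

    //Recalculate normals once after all height changes
    terrain->UpdateNormals();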
    What I came up with is flexible; it can be used in three ways:

    Create one big terrain split up into segments (like Leadwerks Engine does, except non-square terrains are now supported).
    Create small patches of terrain to fit in a specific area.
    Create many terrains and tile them to simulate very large areas.

    Updating Normals
    I spent almost a full day trying to calculate terrain normals in local space. When they were scaled up with a non-uniform scale, the PN Quads started to produce waves. I finally realized that normals cannot really be scaled. The scaled vector, even if normalized, is not the correct normal. I searched for some information on this issue, but the only thing I could find was a few mentions of an article called "Abnormal Normals" by someone named Eric Haines, but it seems the original article has gone down the memory hole. In retrospect it makes sense if I picture the normal vectors rotating instead of shifting each axis. So the bottom line is that normals for any surface have to be recalculated if a non-uniform scale is used.
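    For the record, the standard remedy is to transform normals by the inverse transpose of the model matrix; for a pure scale that just means dividing by the scale instead of multiplying. A small sketch with numbers, assuming the engine's Vec3 type has a Normalize method:

    //A surface tilted at 45 degrees has normal (0.707, 0.707, 0).
    //Scale the geometry by (2, 1, 1): the surface becomes shallower, so the
    //correct normal should tilt MORE toward Y, not less.
    //
    //Scaling the normal directly:  (2 * 0.707, 0.707, 0) -> normalized (0.894, 0.447, 0)  [wrong]
    //Dividing by the scale:        (0.707 / 2, 0.707, 0) -> normalized (0.447, 0.894, 0)  [correct]
    Vec3 TransformNormal(const Vec3& n, const Vec3& scale)
    {
        //For a pure scale matrix, the inverse transpose is just the reciprocal scale
        Vec3 result(n.x / scale.x, n.y / scale.y, n.z / scale.z);
        return result.Normalize();
    }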
    I'm doing more things on the CPU in this design because the terrain system is more complex, and because it's a lot harder to get Vulkan to do anything. I might move it over to the GPU in the future but for right now I will stick with the CPU. I used multithreading to improve performance by a lot:
    Physics
    Newton Dynamics provides a way to dynamically calculate triangles for collision. This will be used to calculate a high-resolution collision mesh on the fly for physics. (For future development.) Something similar could probably be done for the picking system, but that might not be a great idea.
    Materials
    At first I thought I would implement a system where one terrain vertex just has one material, but it quickly became apparent that this would result in very "square" patterns, and that per-vertex blending between multiple materials would be needed. You can see below the transitions between materials form a blocky pattern.

    So I came up with a more advanced system that gives nice smooth transitions between multiple materials, but is still very fast:

    The new terrain system supports up to 256 different materials per terrain. I've worked out a system that runs fast no matter how many material layers you use, so you don't have to be concerned at all about using too many layers. You will run out of video memory before you run out of options.
    Each layer uses a PBR material with full support for metalness, roughness, and reflections. This allows a wider range of materials, like slick shiny obsidian rocks and reflective ice. When combined with tessellation, it is possible to make snow that actually looks like snow.

    Instancing
    Like any other entity, terrain can be copied or instantiated. If you make an instance of a terrain, it will use the same height, material, normal, and alpha data as the original. When the new editor arrives, I expect that will allow you to modify one terrain and see the results appear on the other instance immediately. A lot of "capture the flag" maps have two identical sides facing each other, so this could be good for that.

    Final Shots
    Loading up "The Zone" with a single displacement map added to one material produced some very nice results.



    The new terrain system will be very flexible, it looks great, and it runs fast. (Tessellation requires a high-end GPU, but can be disabled.) I think this is one of the features that will make people very excited about using the new Turbo Game Engine when it comes out.
  12. Josh
    Texture loading plugins allow us to move some big libraries like FreeImage into separate optional plugins, and it also allows you to write plugins to load new texture and image formats.
    To demonstrate the feature, I have added a working VTF plugin to the GMF2 SDK. This plugin will load Valve texture files used in Source Engine games like Half-Life 2 and Portal. Here are the results, showing a texture loaded directly from VTF format and also displayed in Nem’s nifty VTFEdit tool.

    Just like we saw with the MD3 model loader, loading a texture with a plugin is done by first loading the plugin, and then the texture. (Make sure you keep a handle to the plugin or it will be automatically unloaded.)
    auto vtfloader = LoadPlugin("Plugins/VTF.dll");
    auto tex = LoadTexture("Materials/HL2/gordon.vtf");

    This will be included in the next update to the Turbo Game Engine, and at that point there is nothing stopping you from creating your own plugins. I hope someone smart will write a plugin for Crunch .crn files, which could cut the size of your game's distribution files down by half.
    Unlike the GMF2 model format, which provides fast loading of large model files, my internal texture format ("GTF2") has no advantage over other texture formats, and I do not plan to make standalone files in this format. I recommend using DDS files for textures, using BC7 compression for most images and BC5 for normal maps.
  13. Josh
    Multithreading is very useful for processes that can be split into a lot of parallel parts, like image and video processing. I wanted to speed up normal updating for the new terrain system, so I added a new thread creation function that accepts any function as input. This lets me use std::bind with it, the same way I have been using it to send instructions between threads:
    shared_ptr<Thread> CreateThread(std::function<void()> instruction);

    The terrain normal update function has two overloads. One accepts parameters for the exact area to update; if no parameters are supplied, the entire terrain is updated:
    virtual void UpdateNormals(const int x, const int y, const int width, const int height);
    virtual void UpdateNormals();

    This is what the second overloaded function looked like before:
    void Terrain::UpdateNormals()
    {
        UpdateNormals(0, 0, resolution.x, resolution.y);
    }

    And this is what it looks like now:
    void Terrain::UpdateNormals()
    {
        const int MAX_THREADS_X = 4;
        const int MAX_THREADS_Y = 4;
        std::array<shared_ptr<Thread>, MAX_THREADS_X * MAX_THREADS_Y> threads;

        //The terrain resolution must divide evenly between the threads
        Assert((resolution.x / MAX_THREADS_X) * MAX_THREADS_X == resolution.x);
        Assert((resolution.y / MAX_THREADS_Y) * MAX_THREADS_Y == resolution.y);

        //Create one thread per section of the terrain
        for (int y = 0; y < MAX_THREADS_Y; ++y)
        {
            for (int x = 0; x < MAX_THREADS_X; ++x)
            {
                threads[y * MAX_THREADS_X + x] = CreateThread(std::bind((void(Terrain::*)(int, int, int, int))&Terrain::UpdateNormals, this, x * resolution.x / MAX_THREADS_X, y * resolution.y / MAX_THREADS_Y, resolution.x / MAX_THREADS_X, resolution.y / MAX_THREADS_Y));
            }
        }

        //Run all threads, then wait for them to finish
        for (auto thread : threads)
        {
            thread->Resume();
        }
        for (auto thread : threads)
        {
            thread->Wait();
        }
    }

    Here are the results, using a 2048x2048 terrain. You can see that multithreading dramatically reduced the update time. Interestingly, four threads run more than four times faster than a single thread. It looks like 16 threads is the sweet spot, at least on this machine, with a 10x improvement in performance.

  14. Josh
    The GMF2 data format gives us fine control to enable fast load times. Vertex and index data is stored in the exact same format the GPU uses, so everything is read straight into video memory. There are some other optimizations I have wanted to make for a long time, and our use of big CAD models makes this a perfect time to implement these improvements.
    Precomputed BSP Trees
    For raycast (pick) operations, a BSP structure needs to be constructed from the mesh data. In Leadwerks Engine this structure is computed after a model is loaded. The "Bober Station" model from The Zone is a 46,000-poly model. Here are the build times for its BSP structure:
    Release: 500 milliseconds
    Debug: 6143 milliseconds

    This is not our biggest mesh. Let's assume a two-million-poly mesh, which is actually something we come across with CAD data. With this size of model our load times increase 40x:
    Release: 20 seconds
    Debug: 4 minutes

    Those numbers are not good. To reduce load times I have embedded precomputed BSP structures into the GMF2 format so that they can be loaded from the file. I don't have any numbers yet because there is currently no GMF1-to-GMF2 pipeline, but I believe this will significantly reduce load times, especially for large models.
    I am interested in the idea of replacing the BSP float vertex values with shorts to reduce memory, but I'm not going to worry about it right now.
    Static Meshes
    I am also making the engine dump vertex and index data from memory by default when a mesh is sent to the GPU. This will reduce memory usage, since you normally don't need a copy of the vertex data in memory for any reason.
    void Mesh::Finalize(const bool makestatic)
    {
        GameEngine::Get()->renderingthreadmanager->AddInstruction(std::bind(&RenderMesh::Modify, rendermesh, vertices, indices));
        if (makestatic)
        {
            if (collider == nullptr) UpdateCollider();
            vertices.clear();
            vertices.shrink_to_fit();
            indices.clear();
            indices.shrink_to_fit();
        }
    }

    This change reduced our system memory usage in "The Zone" by 40 MB.
  15. Josh
    My work with the MD3 format and its funny short vertex positions made me think about vertex array sizes. (Interestingly, the Quake 2 MD2 format uses a single char for vertex positions!)
    Smaller vertex formats run faster, so it made sense to look into this. Here was our vertex format before, weighing in at 72 bytes:
    struct Vertex
    {
        Vec3 position;
        float displacement;
        Vec3 normal;
        Vec2 texcoords[2];
        Vec4 tangent;
        unsigned char color[4];
        unsigned char boneweights[4];
        unsigned char boneindices[4];
    };

    According to the OpenGL wiki, it is okay to replace the normals with a signed char. And if this works for normals, it will also work for tangents, since they are just another vector.
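    As a sketch of the packing step (the clamping and rounding choices here are mine; the GPU unpacks signed normalized bytes back to floats automatically):

    #include <cmath>

    //Pack a normalized float in [-1, 1] into a signed 8-bit value
    signed char PackSnorm8(float value)
    {
        if (value > 1.0f) value = 1.0f;
        if (value < -1.0f) value = -1.0f;
        return (signed char)std::lround(value * 127.0f);
    }

    //Usage: vertex.normal[0] = PackSnorm8(normal.x); and so on for each component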
    I also implemented half floats for the texture coordinates.
    Here is the vertex structure now at a mere 40 bytes, about half the size:
    struct Vertex
    {
        Vec3 position;
        signed char normal[3];
        signed char displacement;
        unsigned short texcoords[4];
        signed char tangent[4];
        unsigned char color[4];
        unsigned char boneweights[4];
        unsigned char boneindices[4];
    };

    Everything works with no visible loss of quality. Half-floats for the position would reduce the size by an additional 6 bytes, but would likely incur two bytes of padding since it would no longer be aligned to four bytes, like most GPUs prefer. So it would really only save four bytes, which is not worth it for half the precision.
    Another interesting thing I might work into the GMF format is Draco mesh compression by Google. This is only for shrinking file sizes, and does not lead to any increased performance. The bulk of your game data is going to be in texture files, so this isn't too important but might be a nice extra.
  16. Josh
    The GMF2 SDK has been updated with tangents and bounds calculation, object colors, and save-to-file functionality.
    The GMF2 SDK is a lightweight engine-agnostic open-source toolkit for loading and saving of game models. The GMF2 format can work as a standalone file format or simply as a data format for import and export plugins. This gives us a protocol we can pull model data into and export model data from, and a format that loads large data much faster than alternative file formats.
    Here is an example showing how to construct a GMF2 model and save it to a file:
    //Create a GMF file
    GMFFile* file = new GMFFile;

    //Create a model
    GMFNode* node = new GMFNode(file, GMF_TYPE_MODEL);

    //Set the orientation
    node->SetMatrix(1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1);

    //Set the object color
    node->SetColor(0,0,1,1);

    //Add one LOD level
    GMFLOD* lod = node->AddLOD();

    //Create a triangle mesh and add it to the LOD. (Meshes can be shared.)
    GMFMesh* mesh = new GMFMesh(file, 3);
    lod->AddMesh(mesh);

    //Add vertices
    mesh->AddVertex(-0.5,0.5,0, 0,0,1, 0,0);
    mesh->AddVertex(0.5,0.5,0, 0,0,1, 1,0);
    mesh->AddVertex(0.5,-0.5,0, 0,0,1, 1,1);
    mesh->AddVertex(-0.5,-0.5,0, 0,0,1, 0,1);

    //Add triangles
    mesh->AddPolygon(0,1,2);
    mesh->AddPolygon(0,2,3);

    //Build vertex tangents (optional)
    mesh->UpdateTangents();

    //Save data to file
    file->Save("out.gmf");

    //Cleanup - this will get rid of all created objects
    delete file;

    You can get the GMF2 SDK on GitHub.
  17. Josh
    The Turbo Game Engine beta is updated! This will allow you to load your own maps in the new engine and see the speed difference the new renderer makes.

    LoadScene() has replaced the LoadMap() function, but it still loads your existing map files.
    To create a PBR material, insert a line into the material file that says "lightingmodel=1". Blinn-Phong is the default lighting model.
    The red and green channels on texture unit 2 represent metalness and roughness.
    You generally don't need to assign shaders to your materials. The engine will automatically select one based on what textures you have.
    Point and spot lights work. Directional lights do not.
    Setting the world skybox only affects PBR reflections and Blinn-Phong ambient lighting. No sky will be visible.
    Physics, scripting, particles, and terrain do not work.
    Variance shadow maps are in use. There are currently some problems with lines appearing at cubemap seams and some flickering pixels. Objects should always cast a shadow or they won't appear correctly with VSMs.
    I had to include glew.c in the project because the functions weren't being detected from the static lib. I'm not sure why.
    The static libraries are huge. The release build is nearly one gigabyte. But when compiled, your executable is small.

    #include "Leadwerks.h"

    using namespace Leadwerks;

    int main(int argc, const char *argv[])
    {
        //Create a window
        auto window = CreateWindow("MyGame", 0, 0, 1280, 720);

        //Create a rendering context
        auto context = CreateContext(window);

        //Create the world
        auto world = CreateWorld();

        //This only affects reflections at this time
        world->SetSkybox("Models/Damaged Helmet/papermill.tex");

        shared_ptr<Camera> camera;
        auto scene = LoadScene(world, "Maps/turbotest.map");
        for (auto entity : scene->entities)
        {
            if (dynamic_pointer_cast<Camera>(entity))
            {
                camera = dynamic_pointer_cast<Camera>(entity);
            }
        }

        auto model = LoadModel(world, "Models/Damaged Helmet/DamagedHelmet.mdl");
        model->Move(0, 1, 0);
        model->SetShadowMode(LIGHT_DYNAMIC, true);

        //Create a camera if one was not found
        if (camera == nullptr)
        {
            camera = CreateCamera(world);
            camera->Move(0, 1, -3);
        }

        //Set background color
        camera->SetClearColor(0.15);

        //Enable camera free look and hide mouse
        camera->SetFreeLookMode(true);
        window->HideMouse();

        //Create a light
        auto light = CreateLight(world, LIGHT_POINT);
        light->SetShadowMode(LIGHT_DYNAMIC | LIGHT_STATIC | LIGHT_CACHED);
        light->SetPosition(0, 4, -4);
        light->SetRange(15);

        while (window->KeyHit(KEY_ESCAPE) == false and window->Closed() == false)
        {
            //Rotate model
            model->Turn(0, 0.5, 0);

            //Camera movement
            if (window->KeyDown(Key::A)) camera->Move(-0.1, 0, 0);
            if (window->KeyDown(Key::D)) camera->Move(0.1, 0, 0);
            if (window->KeyDown(Key::W)) camera->Move(0, 0, 0.1);
            if (window->KeyDown(Key::S)) camera->Move(0, 0, -0.1);

            //Update the world
            world->Update();

            //Render the world
            world->Render(context);
        }

        Shutdown();
        return 0;
    }
  18. Josh
    Not a lot of people know about this, but back in 2001 Discreet (before the company was purchased by Autodesk) released a free version of 3ds max for modding games. Back then game file formats and tools were much more highly specialized than today, so each game required a "game pack" to customize the gmax interface to support that game. I think the idea was to charge the game developer money to add support for their game. Gmax supported several titles including Quake 3 Arena and Microsoft Flight Simulator, but was later discontinued.
    I personally love the program because it includes only the features you need from 3ds max for hard-surface modeling. It's basically a stripped-down version of 3ds max.

    Gmax still survives today, apparently in the custody of Turbosquid. You can download it here. You need the gmax 1.2 installer and the Tempest game pack for Quake 3. You will also need to request a free registration key from Turbosquid. After installing gmax, you can simply find the MD3 export plugin ("md3exp.dle") in the Tempest game pack download and copy that to your "C:\gmax\plugins" directory to enable export. There is also an optional MD3 import script by Chris Cookson, which is uploaded here for safekeeping:
    q3-md3.ms
    With the plugin system in our new engine I was able to add support for loading Quake 3 MD3 models, so you can export your gmax models and load them up in the new engine. However, there are some restrictions. The MD3 file format uses compressed vertex positions, so your vertex positions have a limited range and resolution. Additionally, there are restrictions on what you can do with the gmax program, so take a look at the licensing terms before you do anything. Still, it's a fun program to have and this is a nice feature to play around with.
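    For reference, here is roughly how MD3's compressed vertices decode, which shows where the range and resolution limits come from. Positions are signed 16-bit values at 1/64-unit precision, and the normal is packed into two spherical-angle bytes; treat the exact angle conventions here as illustrative:

    #include <cmath>
    #include <cstdint>

    //One vertex as stored in an MD3 file
    struct MD3Vertex
    {
        int16_t x, y, z;  //position * 64, so the range is roughly +/-512 units
        uint8_t lat, lng; //normal packed as two spherical angles
    };

    void DecodeMD3Vertex(const MD3Vertex& v, float position[3], float normal[3])
    {
        const float MD3_SCALE = 1.0f / 64.0f;
        position[0] = v.x * MD3_SCALE;
        position[1] = v.y * MD3_SCALE;
        position[2] = v.z * MD3_SCALE;

        //Unpack the spherical-coordinate normal
        const float PI = 3.14159265f;
        float lat = v.lat * (2.0f * PI) / 256.0f;
        float lng = v.lng * (2.0f * PI) / 256.0f;
        normal[0] = std::cos(lat) * std::sin(lng);
        normal[1] = std::sin(lat) * std::sin(lng);
        normal[2] = std::cos(lng);
    }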
     
  19. Josh
    I wanted to get a working example of the plugin system together. One thing led to another, and...well, this is going to be a very different blog.
    GMF2
    The first thing I had to do is figure out a way for plugins to communicate with the main program. I considered using a structure they both share in memory, but those always inevitably get out of sync when either the structure changes or there is a small difference between the compilers used for the program and DLL. This scared me so I went with a standard file format to feed the data from the plugin to the main program.
    GLTF is a pretty good format but has some problems that makes me look for something else to use in our model loader plugins.
    It's text-based and loads slower.
    It's extremely complicated. There are 3-4 different versions of the file format and many, many options to split it across multiple files, binary, text, or binary-to-text encoding.
    It's meant for web content, not PC games.
    Tons of missing functionality is added through a weird plugin system. For example, DDS is supported through a plugin, but a backup PNG has to be included in case the loader doesn't support the extension.

    The GMF file format was used in Leadwerks. It's a custom fast-loading chunk-based binary file format. GMF2 is a simpler flat binary format updated with some extra features:
    Vertices are stored in a single array ready to load straight into GPU memory.
    Vertex displacement is supported.
    Compressed bitangents.
    Quad and line meshes are supported.
    LOD.
    Materials and textures can be packed into the format or loaded from external files.
    PBR and Blinn-Phong materials.
    Mesh bounding boxes are supported so they don't have to be calculated at load time. This means the vertex array never has to be iterated through, making large meshes load faster.

    I am not sure yet if GMF2 will be used for actual files or if it is just going to be an internal data format for plugins to use. GLTF will continue to be supported, but the format is too much of a mess to use for plugin data.
    Here's a cool five-minute logo:

    The format looks something like this:
    char[4] ID "GMF2" int version 200 int root //ID of the root entity int texture_count //number of textures int textures_pos //file position for texture array int materials_count //number of materials int materials_pos //file position for materials int nodes_count //number of nodes int nodes_pos //file position for nodes As you can see, it is really easy to read and really easy to write. There's enough complexity in this already. I'm not bored. I don't need to introduce unnecessary complexity into the design just so I can show off. There are real problems that need to be solved and making a "tricky" file format is not one of them.
    In Leadwerks 2, we had a bit of code called the "GMF SDK". This was written in BlitzMax, and allowed construction of a GMF file with easy commands. I've created new C++ code to create GMF2 files:
    //Create the file
    GMFFile* file = new GMFFile;

    //Create a model
    node = new GMFNode(file, GMF_TYPE_MODEL);

    //Add an LOD level
    GMFLOD* lod = node->AddLOD();

    //Add a mesh
    GMFMesh* mesh = lod->AddMesh(3); //triangle mesh

    //Add vertices
    mesh->AddVertex(0,0,0, 0,0,1, 0,0, 0,0, 255,255,255,255);
    mesh->AddVertex(0,0,1, 0,0,1, 0,0, 0,0, 255,255,255,255);
    mesh->AddVertex(0,1,1, 0,0,1, 0,0, 0,0, 255,255,255,255);

    //Add a triangle
    mesh->AddIndice(0);
    mesh->AddIndice(1);
    mesh->AddIndice(2);

    Once your GMFFile is constructed you can save it into memory with one command. The Turbo Plugin SDK is a little more low-level than the engine, so it includes a MemWriter class to help with this, since the engine Stream class is not present.
    As a test I am writing a Quake 3 MD3 import plugin and will provide the project and source as an example of how to use the Turbo Plugin SDK.

    Packages
    The ZIP virtual file system from Leadwerks is being expanded and formalized. You can load a Package object to add new virtual files to your project. These will also be loadable from the editor, so you can add new packages to a project, and the files will appear in the asset browser and file dialogs. (Package files are read-only.) Plugins will allow packages to be loaded from file formats besides ZIP, like Quake WADs or Half-Life GCF files. Notice we keep all our loaded items in variables or arrays because we don't want them to get auto-deleted.
    //Load Quake 3 plugins
    auto pk3reader = LoadPlugin("Plugins/PK3.dll");
    auto md3loader = LoadPlugin("Plugins/MD3.dll");
    auto bsploader = LoadPlugin("Plugins/Q3BSP.dll");

    Next we load the game package files straight from the Quake game directory. This is just like the package system from Leadwerks.
    //Load Quake 3 game packages
    std::wstring q3apath = L"C:/Program Files (x86)/Steam/steamapps/common/Quake 3 Arena/baseq3";
    auto dir = LoadDir(q3apath);
    std::vector<shared_ptr<Package> > q3apackages;
    for (auto file : dir)
    {
        if (Lower(ExtractExt(file)) == L"pk3")
        {
            auto pak = LoadPackage(q3apath + L"/" + file);
            if (pak) q3apackages.push_back(pak);
        }
    }

    Now we can start loading content directly from the game.
    //Load up some game content from Quake!
    auto head = LoadModel(world, "models/players/bitterman/head.md3");
    auto upper = LoadModel(world, "models/players/bitterman/upper.md3");
    auto lower = LoadModel(world, "models/players/bitterman/lower.md3");
    auto scene = LoadScene(world, "maps/q3ctf2.bsp");

    Modding
    I have a soft spot for modding because that is what originally got me into computer programming and game development. I was part of the team that made "Checkered Flag: Gold Cup" which was a spin-off on the wonderful Quake Rally mod:
    I expect in the new editor you will be able to browse through game files just as if they were uncompressed in your project file, so the new editor can act as a modding tool, for those who are so inclined. It's going to be interesting to see what people do with this. We can configure the new editor to run a launch script that handles map compiling and then launches the game. All the pieces are there to make the new editor a tool for modding games, like Discreet's old Gmax experiment.

    I am going to provide official support for Quake 3 because all the file formats are pretty easy to load, and because gmax can export to MD3 and it would be fun to load Gmax models. Other games can be supported by adding plugins.
    So here are some of the things the new plugin system will allow:
    Load content directly from other games and use it in your own game. I don't recommend using copyrighted game assets for commercial projects, but you could make a "mod" that replaces the engine and requires the player to own the original game. You could probably safely use content from any Valve games and release a commercial game on Steam.
    Use the editor as a tool to make Quake or other maps.
    Add plugin support for new file formats.

    This might all be a silly waste of time, but it's a way to get the plugin system working, and if we can make something flexible enough to build Quake maps, well, that is a good test of the robustness of the system.
     
  20. Josh
    During development of Leadwerks Game Engine, there was some debate on whether we should allow multiple scripts per entity or just associate a single script with an entity. My first iteration of the scripting system actually used multiple scripts, but after using it to develop the Darkness Awaits example I saw a lot of problems with this. Each script used a different classname to store its variables and functions in, so you ended up with code like this:
    function Script:HurtEnemy(amount)
        if self.enemy ~= nil then
            if self.enemy.healthmanager ~= nil then
                if type(self.enemy.healthmanager.TakeDamage) == "function" then
                    self.enemy.healthmanager.TakeDamage(amount)
                end
            end
        end
    end

    I felt this hurt script interoperability because you had to have a bunch of prefixes like healthmanager, ammomanager, etc. I settled on using a single script, which I still feel was the better choice between these two options:
    function Script:HurtEnemy(amount)
        if self.enemy ~= nil then
            if type(self.enemy.TakeDamage) == "function" then
                self.enemy.TakeDamage(amount)
            end
        end
    end

    Scripting in Turbo Game Engine is a bit different. First of all, all values and functions are attached to the entity itself, so there is no "script" table. When you access the "self" variable in a script function you are using the entity object itself. Here is a simple script that makes an entity spin around its Y axis:
    function Entity:Update()
        self:Turn(0,0.1,0)
    end

    Through some magic that is only possible due to the extreme flexibility of Lua, I have managed to devise a system for multiple script attachments that makes sense. There are no "component" or "script" objects; adding a script to an entity just executes some code that attaches values and functions to the entity. Adding a script to an entity can be done in C++ as follows:
    model->AttachScript("Scripts/Objects/spin.lua");

    Or in Lua itself:
    model:AttachScript("Scripts/Objects/spin.lua")

    Note there is no concept of "removing" a script, because a script just executes a bit of code that adds values and functions to the entity.
    Let's say we have two scripts named "MakeHealth100" and "MakeHealth75".
    MakeHealth100.lua
    Entity.health = 100

    MakeHealth75.lua
    Entity.health = 75

    Now if you were to run the code below, which attaches the two scripts, the health value would first be set to 100, and then the second script would set the same value to 75, resulting in the number 75 being printed out:
    model->AttachScript("Scripts/Objects/MakeHealth100.lua");
    model->AttachScript("Scripts/Objects/MakeHealth75.lua");
    Print(model->GetNumber("health"));

    Simple enough, right? The key point here is that with multiple scripts, variables are shared between scripts. If one script sets a variable to a value that conflicts with another script, the two scripts won't work as expected. However, it also means that two scripts can easily share values to work together and create new functionality, like this health regeneration script that could be added to work with any other script that treats the value "health" as a number.
    HealthRegen.lua
    Entity.healthregendelay = 1000

    function Entity:Start()
        self.healthregenupdatetime = CurrentTime()
    end

    function Entity:Update()
        if self.health > 0 then
            if CurrentTime() - self.healthregenupdatetime > self.healthregendelay then
                self.healthregenupdatetime = CurrentTime()
                self.health = self.health + 1
                self.health = Min(self.health, 100)
            end
        end
    end

    What about functions? Won't adding a script to an entity overwrite any functions it already has attached to it? If I treated functions the same way, then each entity could only have one function for each name, and there would be very little point in having multiple scripts! That's why I implemented a special system that copies any added functions into an internal table. If two functions with the same name are declared in two different scripts, they will both be copied into an internal table and executed. For example, you can add both scripts below to an entity to make it both spin and pulse its color:
    Spin.lua
function Entity:Update()
    self:Turn(0,0.1,0)
end

Pulse.lua
function Entity:Update()
    local i = Sin(CurrentTime()) * 0.5 + 0.5
    self:SetColor(i,i,i)
end

When the engine calls the Update() function, both copies of the function will be called, in the order they were added.
    But wait, there's more.
    The engine will add each function into an internal table, but it also creates a dummy function that iterates through the table and executes each copy of the function. This means when you call functions in Lua, the same multi-execution feature will be available. Let's consider a theoretical bullet script that causes damage when the bullet collides with something:
function Entity:Collision(entity, position, normal, speed)
    if type(entity.TakeDamage) == "function" then
        entity:TakeDamage(20)
    end
end

If you have two (or more) different TakeDamage functions on different scripts attached to that entity, all of them would get called, in order.
What if a function returns a value, like below?
function Entity:Update()
    if self.target ~= nil then
        if self.target:GetHealth() <= 0 then
            self.target = nil -- stop chasing if dead
        end
    end
end

If multiple functions are attached that return values, then all the return values are returned.

    To grab multiple returned values, you can set up multiple variables like this:
function foo()
    return 1,2,3
end

a, b, c = foo()
print(a) --1
print(b) --2
print(c) --3

But a more practical usage would be to create a table from the returned values like so:
function foo()
    return 1,2,3
end

t = { foo() }
print(t[1]) --1
print(t[2]) --2
print(t[3]) --3

How could this be used? Let's say you had a script that was used to visually debug AI scripts. It did this by calling a GetTarget() function to check what an entity's target enemy was, then creating a sprite and aligning it to make a line going from the AI entity to the target it was attacking:
function Entity:UpdateDisplay()
    local target = self:GetTarget()
    self.sprite = CreateSprite()
    local p1 = self:GetPosition()
    local p2 = target:GetPosition()
    self.sprite:SetPosition((p1 + p2) * 0.5)
    self.sprite:AlignToVector(p2 - p1)
    self.sprite:SetSize(0.1, (p2 - p1):Length())
end

Now let's imagine we had a tank with a main gun as well as an anti-aircraft gun that would ward off attacks from above, like this beauty I found on Turbosquid:

Let's imagine we have two different scripts we attach to the tank. One handles AI for driving and shooting the main turret, while the other just manages the little machine gun. Both scripts have a GetTarget() function, as the tank may be attacking two different enemies at once.
    We can easily modify our AI debugging script to handle multiple returned values as follows:
function Entity:UpdateDisplay()
    local targets = { self:GetTarget() } -- all returned values get put into a table
    for n, target in ipairs(targets) do
        local sprite = CreateSprite()
        table.insert(self.sprites, sprite)
        local p1 = self:GetPosition()
        local p2 = target:GetPosition()
        sprite:SetPosition((p1 + p2) * 0.5)
        sprite:AlignToVector(p2 - p1)
        sprite:SetSize(0.1, (p2 - p1):Length())
    end
end

However, any scripts that are not set up to account for multiple returned values from a function will simply use the first returned value, and proceed as normal.
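Under the hood, the dispatch mechanism does not have to be complicated. Here is a minimal sketch in plain Lua (Lua 5.2 or later; hypothetical names, not the engine's actual internals) showing how same-named functions can be collected into a table and executed in order, with all their return values concatenated:

-- Collect each added function under its name instead of overwriting,
-- and install a dispatcher that calls every copy in the order added.
local function AttachFunction(entity, name, func)
    entity.functions = entity.functions or {}
    entity.functions[name] = entity.functions[name] or {}
    table.insert(entity.functions[name], func)
    entity[name] = function(self, ...)
        local results = {}
        for _, f in ipairs(self.functions[name]) do
            for _, v in ipairs({ f(self, ...) }) do
                table.insert(results, v) -- concatenate return values from each copy
            end
        end
        return table.unpack(results)
    end
end

-- Example: two GetTarget functions attached to the same entity
local tank = {}
AttachFunction(tank, "GetTarget", function(self) return self.maintarget end)
AttachFunction(tank, "GetTarget", function(self) return self.aatarget end)
tank.maintarget, tank.aatarget = "enemy tank", "helicopter"
print(tank:GetTarget()) -- enemy tank    helicopter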
This system supports easy mix-and-match behavior with multiple scripts, while keeping the script code simple and easy to use. Scripts have easy interoperability by default, but if you want to make your function and variable names unique to a script, it is easy to do so.
    Let me know if you have any other ideas for scripting in Turbo Game Engine.
  21. Josh
    With tessellation now fully implemented, I was very curious to see how it would perform when applied to arbitrary models. With tessellation, vertices act like control points for a Bezier mesh that is subdivided dynamically in screen space. Could tessellation be used to add new details to any low-poly model?
    Here is a low-res character model with a pointy head and obvious sharp edges all around his silhouette:


    When tessellation is enabled, the sharp edges go away and the mesh magically appears more detailed:


    Let's take a look at the "Doom Seeker" enemy model from Doom 3. The original model is 640 triangles:


    Tessellation gets rid of the blocky outline.

    And displacement adds new details to the model.


    What about other shapes? Can this technique be used to turn any low-poly model into a high-res version? Here is a tire model from "The Zone" DLC. Even with displacement set to zero on edge vertices, cracks still appear.

So my conclusion is that tessellation is fantastic for closed organic shapes. In fact, I think it totally replaces separate LOD meshes for rocks, characters, and other organic shapes. However, it is not something you should apply everywhere indiscriminately.
  22. Josh
Previously I talked about the technical details of hardware tessellation and what it took to make it truly useful. In this article I will talk about some of the implications of this feature and the more advanced ramifications of baking tessellation into Turbo Game Engine as a first-class feature.
    Although hardware tessellation has been around for a few years, we don't see it used in games that often. There are two big problems that need to be overcome.
1. We need a way to prevent cracks from appearing along edges.
2. We need to display a consistent density of triangles on the screen. Too many polygons is a big problem.

I think these issues are the reason you don't really see much use of tessellation in games, even today. However, I think my research this week has created new technology that will allow us to make use of tessellation as an everyday feature in our new Vulkan renderer.
    Per-Vertex Displacement Scale
    Because tessellation displaces vertices, any discrepancy in the distance or direction of the displacement, or any difference in the way neighboring polygons are subdivided, will result in cracks appearing in the mesh.

    To prevent unwanted cracks in mesh geometry I added a per-vertex displacement scale value. I packed this value into the w component of the vertex position, which was not being used. When the displacement strength is set to zero along the edges the cracks disappear:

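To sketch the idea in code (plain Lua with a hypothetical data layout, not the engine's actual shader code): the displacement height is simply multiplied by the per-vertex scale stored in the position's w component, so a scale of zero pins a vertex in place.

-- Displace a vertex along its normal, scaled by the per-vertex value
-- packed into the position's w component (0 along seam edges).
local function Displace(px, py, pz, pw, nx, ny, nz, height)
    local amount = height * pw
    return px + nx * amount, py + ny * amount, pz + nz * amount
end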
    Segmented Primitives
    With the ability to control displacement on a per-vertex level, I set about implementing more advanced model primitives. The basic idea is to split up faces so that the edge vertices can have their displacement scale set to zero to eliminate cracks. I started with a segmented plane. This is a patch of triangles with a user-defined size and resolution. The outer-most vertices have a displacement value of 0 and the inner vertices have a displacement of 1. When tessellation is applied to the plane the effect fades out as it reaches the edges of the primitive:

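As a sketch (plain Lua, hypothetical grid layout), assigning the per-vertex scale for a segmented plane is just a border test on the grid coordinates:

-- Outermost vertices of an (xsegs+1) x (ysegs+1) vertex grid get a
-- displacement scale of 0; all interior vertices get 1.
local function DisplacementScale(x, y, xsegs, ysegs)
    if x == 0 or y == 0 or x == xsegs or y == ysegs then
        return 0
    end
    return 1
end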
    I then used this formula to create a more advanced box primitive. Along the seam where the edges of each face meet, the displacement smoothly fades out to prevent cracks from appearing.

    The same idea was applied to make segmented cylinders and cones, with displacement disabled along the seams.


    Finally, a new QuadSphere primitive was created using the box formula, and then normalizing each vertex position. This warps the vertices into a round shape, creating a sphere without the texture warping that spherical mapping creates.

It's remarkable how good tessellation and displacement can make these simple shapes look. Here is the full list of available commands:
shared_ptr<Model> CreateBox(shared_ptr<World> world, const float width = 1.0);
shared_ptr<Model> CreateBox(shared_ptr<World> world, const float width, const float height, const float depth, const int xsegs = 1, const int ysegs = 1);
shared_ptr<Model> CreateSphere(shared_ptr<World> world, const float radius = 0.5, const int segments = 16);
shared_ptr<Model> CreateCone(shared_ptr<World> world, const float radius = 0.5, const float height = 1.0, const int segments = 16, const int heightsegs = 1, const int capsegs = 1);
shared_ptr<Model> CreateCylinder(shared_ptr<World> world, const float radius = 0.5, const float height = 1.0, const int sides = 16, const int heightsegs = 1, const int capsegs = 1);
shared_ptr<Model> CreatePlane(shared_ptr<World> world, const float width = 1, const float height = 1, const int xsegs = 1, const int ysegs = 1);
shared_ptr<Model> CreateQuadSphere(shared_ptr<World> world, const float radius = 0.5, const int segments = 8);

Edge Normals
    I experimented a bit with edges and got some interesting results. If you round the corner by setting the vertex normal to point diagonally, a rounded edge appears.

    If you extend the displacement scale beyond 1.0 you can get a harder extended edge.

    This is something I will experiment with more. I think CSG brush smooth groups could be used to make some really nice level geometry.
    Screen-space Tessellation LOD
    I created an LOD calculation formula that attempts to segment polygons into a target size in screen space. This provides a more uniform distribution of tessellated polygons, regardless of the original geometry. Below are two cylinders created with different segmentation settings, with tessellation disabled:

And now here are the same meshes with tessellation applied. Although the less-segmented cylinder has more stretched triangles, both are made up of triangles of about the same size.

    Because the calculation works with screen-space coordinates, objects will automatically adjust resolution with distance. Here are two identical cylinders at different distances.

    You can see they have roughly the same distribution of polygons, which is what we want. The same amount of detail will be used to show off displaced edges at any distance.

    We can even set a threshold for the minimum vertex displacement in screen space and use that to eliminate tessellation inside an object and only display extra triangles along the edges.

This allows you to simply set a target polygon size in screen space without adjusting any per-mesh properties. This method could have prevented the problems Crysis 2 had with polygon density. This also solves the problem that prevented me from using tessellation for terrain. The per-mesh tessellation settings I worked on a couple of days ago will be removed, since they are no longer needed.
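The core of the calculation can be sketched in a few lines (plain Lua here, with hypothetical names; the real work happens in the tessellation control shader): project an edge's endpoints to screen space and subdivide until the pieces approach a target pixel size.

-- Choose a tessellation level so subdivided edges approach a target
-- size in pixels, independent of mesh density or distance.
local function EdgeTessLevel(x1, y1, x2, y2, targetsize)
    local dx, dy = x2 - x1, y2 - y1
    local pixels = math.sqrt(dx * dx + dy * dy) -- projected edge length in pixels
    return math.max(1, math.min(64, pixels / targetsize)) -- clamp to hardware limits
end

print(EdgeTessLevel(0, 0, 320, 0, 16)) -- 20: a long edge gets split into ~16-pixel pieces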
    Parallax Mapping Fallback
Finally, I added a simple parallax mapping fallback that gets used when tessellation is disabled. This provides an inexpensive option for low-end machines that still conveys displacement.

    Next I am going to try processing some models that were not designed for tessellation and see if I can use tessellation to add geometric detail to low-poly models without any cracks or artifacts.
  23. Josh
Now that I have all the Vulkan knowledge I need, and most work is being done with GLSL shader code, development is moving faster. Before starting voxel ray tracing, another hard problem, I decided to work on some *relatively* easier things for a few days. I want tessellation to be an everyday feature in the new engine, so I decided to work out a useful implementation of it.
While there are a ton of examples out there showing how to split a triangle up into smaller triangles, useful discussion of in-game tessellation techniques is much rarer. I think this is because there are several problems to solve before this feature can really be made practical.
    Tessellation Level
The first problem is deciding how much to tessellate an object. Tessellation should use a single detail level per set of primitives being drawn, because cracks will appear when you apply displacement if you use a different tessellation level for each polygon. I solved this with per-mesh settings for the tessellation parameters.
Note: In Leadwerks Game Engine, a model is an entity with one or more surfaces. Each surface has a vertex array, indice array, and a material. In Turbo Game Engine, a model contains one or more LODs, and each LOD can have one or more meshes. A mesh object in Turbo is like a surface object in Leadwerks.
We are not used to per-mesh settings. In fact, the material was previously the only parameter a mesh contained other than vertex and indice data. But for tessellation parameters, that is exactly what we need, because the density of the mesh polygons gives us an idea of how detailed the tessellation should be. This command has been added:
void Mesh::SetTessellation(const float detail, const float nearrange, const float farrange)

Here is what the parameters do:
detail: the higher the detail, the more the polygons are split up. Default is 16.
nearrange: the distance below which tessellation stops increasing. Default is 1.0 meters.
farrange: the distance below which tessellation starts increasing. Default is 20.0 meters.

This command gives you the ability to set properties that will give a roughly equal distribution of polygons in screen space. For convenience, a similar command has been added to the model class, which will apply the settings to all meshes in the model.
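From script, usage might look like this (a sketch only; it assumes the command is exposed to Lua with the same signature as the C++ method):

-- Hypothetical Lua binding: detail 16, tessellation capped below
-- 1 meter and fading out beyond 20 meters.
model:SetTessellation(16.0, 1.0, 20.0)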
    Surface Displacement
Another problem is culling offscreen polygons so the GPU doesn't have to process millions of extra vertices. I solved this by testing whether all control vertices lie outside one plane of the camera frustum. (This is not the same as testing if any control point is inside the camera frustum, as I have seen suggested elsewhere. The latter will cause problems because it is still possible for a triangle to be visible even if all its corners are outside the view.) To account for displacement, I also tested the same vertices with maximum displacement applied.
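Here is a rough sketch of that test in plain Lua (hypothetical data layout: planes as {nx, ny, nz, d}, points as {x, y, z}); a patch may only be culled when all of its control points, including their maximum displaced positions, fall outside the same plane:

local function Outside(plane, p)
    -- signed distance from point to plane is negative on the outside
    return plane[1] * p[1] + plane[2] * p[2] + plane[3] * p[3] + plane[4] < 0
end

local function CullPatch(frustumplanes, points)
    for _, plane in ipairs(frustumplanes) do
        local alloutside = true
        for _, p in ipairs(points) do
            if not Outside(plane, p) then
                alloutside = false
                break
            end
        end
        if alloutside then return true end -- safe to cull
    end
    return false -- possibly visible
end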
    To control the amount of displacement, a scale property has been added to the displacementTexture object scheme:
    "displacementTexture": { "file": "./harshbricks-height5-16.png", "scale": 0.05 } A Boolean value called "tessellation" has also been added to the JSON material scheme. This will tell the engine to choose a shader that uses tessellation, so you don't need to explicitly specify a shader file. Shadows will also be rendered with tessellation, unless you explicitly choose a different shadow shader.
    Here is the result:

    Surface displacement takes the object scale into account, so if you scale up an object the displacement will increase accordingly.
    Surface Curvature
    I also added an implementation of the PN Triangles technique. This treats a triangle as control points for a Bezier curve and projects tessellated curved surfaces outwards.
     
You can see below that the PN Triangles technique eliminates the pointy edges of the sphere.


The effect is good, although it is more computationally expensive, and if a strong displacement map is in use you can't really see a difference. Since vertex positions are changed but texture coordinates still use the same interpolation function, texture coordinates can appear distorted. To counter this, texture coordinates would need to be recalculated from the new vertex positions.
    EDIT:
    I found a better algorithm that doesn't produce the texcoord errors seen above.


    Finally, a global tessellation factor has been added that lets you scale the effect for different hardware levels:
void SetTessellationDetail(const float detail)

The default setting is 1.0, so you can use this to scale the detail up or down any way you like.
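This maps naturally to a graphics quality setting. For example (assuming the command is exposed to script under the same name):

-- Hypothetical quality preset: halve the global tessellation density on
-- low-end hardware, leave the default on everything else.
if lowendgpu then
    SetTessellationDetail(0.5)
else
    SetTessellationDetail(1.0)
end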
  24. Josh
It is now possible to compile shaders into a single self-contained file that can be loaded by any Vulkan program, but it's not obvious how this is done. After poking around for a while I found all the pieces I needed to put this together.
    Compiling
    First, you need to compile each shader stage from a source code file into a precompiled SPIR-V file. There are several tools available to do this, but I prefer GLSlangValidator because it supports the Google #include extension. Put your vertex shader code in a text file named "shader.vert" and your pixel shader code in a file called "shader.frag". Create a .bat file in the same directory with the following contents:
glslangValidator.exe "shader.vert" -V -o "vert.spv"
glslangValidator.exe "shader.frag" -V -o "frag.spv"

Run the bat file and two .spv files will be saved.
    Linking
Now we want to combine our two files representing different shader stages into a single file. This is done with the link tool from Khronos. Add the following lines to your .bat file to link the two .spv files into one. It will also delete the intermediate files to clean things up a little:
spirv-link "vert.spv" "frag.spv" -o "shader.spv"
del "vert.spv"
del "frag.spv"

This will save a single file named "shader.spv" that you can load as one shader module and use for different stages in Vulkan.
    Here are the required executables and a .bat file:
    BuildShader.zip
    Parsing
    If you always use vertex and fragment stages then there is no problem, but what if the combined .spv file contains other stages, or is missing a fragment stage? We can easily account for this with a minimal SPIR-V file parser. We're not going to include any big bloated libraries to do this because we only need some basic information about what stages are contained in the shader. Fortunately, the SPIR-V specification is pretty simple and it doesn't take much code to extract the information we want:
std::string entrypointname[6];

auto stream = ReadFile(L"Shaders/shader.spv");

// Parse SPIR-V header
Assert(stream->ReadInt() == 0x07230203); // magic number
int version = stream->ReadInt();
int genmagnum = stream->ReadInt(); // generator magic number
int bound = stream->ReadInt();
int reserved = stream->ReadInt();

bool stages[6] = {false, false, false, false, false, false};

// Instruction stream
while (stream->Ended() == false)
{
    int pos = stream->GetPos();
    unsigned int bytes = stream->ReadUInt();
    int opcode = LOWORD(bytes);
    int wordcount = HIWORD(bytes);
    if (opcode == 15) // OpEntryPoint
    {
        int executionmodel = stream->ReadInt();
        Assert(executionmodel >= 0);
        if (executionmodel < 6)
        {
            stream->ReadInt(); // entry point id
            stages[executionmodel] = true;
            entrypointname[executionmodel] = stream->ReadString();
        }
    }
    stream->Seek(pos + wordcount * 4); // skip to the next instruction
}

This code even retrieves the entry point name for each stage, so you can be sure you are loading the shader correctly.
    Here are the different shader stages from the SPIR-V specification:
0: Vertex
1: TessellationControl
2: TessellationEvaluation
3: Geometry
4: Fragment
5: GLCompute

That's it! We now have a standard single-file shader format for Vulkan programs. Your code for creating these will look something like this:
VkShaderModule shadermodule;

// Create shader module
VkShaderModuleCreateInfo shaderCreateInfo = {};
shaderCreateInfo.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
shaderCreateInfo.codeSize = bank->GetSize();
shaderCreateInfo.pCode = reinterpret_cast<const uint32_t*>(bank->buf);
VkAssert(vkCreateShaderModule(device->device, &shaderCreateInfo, nullptr, &shadermodule));

// Create vertex stage info
VkPipelineShaderStageCreateInfo vertShaderStageInfo = {};
vertShaderStageInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
vertShaderStageInfo.stage = VK_SHADER_STAGE_VERTEX_BIT;
vertShaderStageInfo.module = shadermodule;
vertShaderStageInfo.pName = entrypointname[0].c_str();

VkPipelineShaderStageCreateInfo fragShaderStageInfo = {};
if (stages[4])
{
    // Create fragment stage info
    fragShaderStageInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    fragShaderStageInfo.stage = VK_SHADER_STAGE_FRAGMENT_BIT;
    fragShaderStageInfo.module = shadermodule;
    fragShaderStageInfo.pName = entrypointname[4].c_str();
}

// Create your graphics pipeline...
  25. Josh
    The Leadwerks 2 terrain system was expansive and very fast, which allowed rendering of huge landscapes. However, it had some limitations. Texture splatting was done in real-time in the pixel shader. Because of the limitations of hardware texture units, only four texture units per terrain were supported. This limited the ability of the artist to make terrains with a lot of variation. The landscapes were beautiful, but somewhat monotonous.
    With the Leadwerks 3 terrain system, I wanted to retain the advantages of terrain in Leadwerks 2, but overcome some of the limitations. There were three different approaches we could use to increase the number of terrain textures.
1. Increase the number of textures used in the shader.
2. Allow up to four textures per terrain chunk. These would be determined either programmatically based on which texture layers were in use on that section, or defined by the artist.
3. Implement a virtual texture system like id Software used in the game "Rage".

Since Leadwerks 3 runs on mobile devices as well as PC and Mac, we couldn't use any more texture units than we had before, so the first option was out. The second option is how Crysis handles terrain layers. If you start painting layers in the Crysis editor, you will see "old" layers disappear as you paint new ones on. This struck me as a bad approach because it would either involve the engine "guessing" which layers should have priority, or involve a tedious process of user-defined layers for each terrain chunk.
A virtual texturing approach seemed like the ideal choice. Basically, this would render near sections of the terrain at a high resolution, and far sections of the terrain at low resolutions, with a shader that chose between them. If done correctly, the result should be the same as using one impossibly huge texture (like 1,048,576 x 1,048,576 pixels) at a much lower memory cost. However, there were some serious challenges to be overcome, so much so that I added a disclaimer in our Kickstarter campaign basically saying "this might not work".
    Previous Work
    id Software pioneered this technique with the game Rage (a previous implementation was in Quake Wars). However, id's "megatexture" technique had some serious downsides. First, the data size requirements of storing completely unique textures for the entire world were prohibitive. "Rage" takes about 20 gigs of hard drive space, with terrains much smaller than the size I wanted to be able to use. The second problem with id's approach is that both games using this technique have some pretty blurry textures in the foreground, although the unique texturing looks beautiful from a distance.

I decided to overcome the data size problem by dynamically generating the megatexture data, rather than storing it on the hard drive. This involves a pre-rendering step where layers are rendered to the terrain virtual textures, and then the virtual textures are applied to the terrain. Since id's art pipeline was basically just conventional texture splatting combined with "stamps" (decals), I didn't see any reason to permanently store that data. I did not have a simple solution to the blurry texture problem, so I just went ahead and started implementing my idea, with the understanding that the texture resolution issue could kill it.
    I had two prior examples to work from. One was a blog from a developer at Frictional Games (Amnesia: The Dark Descent and Penumbra). The other was a presentation describing the technique's use in the game Halo Wars. In both of these games, a fixed camera distance could be relied on, which made the job of adjusting texture resolution much easier. Leadwerks, on the other hand, is a general-purpose game engine for making any kind of game. Would it be possible to write an implementation that would provide acceptable texture resolution for everything from flight sims to first-person shooters? I had no idea if it would work, but I went forward anyway.
    Implementation
    Because both Frictional Games and id had split the terrain into "cells" and used a texture for each section, I tried that approach first. Our terrain already gets split up and rendered in identical chunks, but I needed smaller pieces for the near sections. I adjusted the algorithm to render the nearest chunks in smaller pieces. I then allocated a 2048x2048 texture for each inner section, and used a 1024x1024 texture for each outer section:

    The memory requirements of this approach can be calculated as follows:
    1024 * 1024 * 4 * 12 = 50331648 bytes
2048 * 2048 * 4 * 8 = 134217728 bytes
    Total = 184549376 bytes = 176 megabytes
    176 megs is a lot of texture data. In addition, the texture resolution wasn't even that good at near distances. You can see my attempt with this approach in the image below. The red area is beyond the virtual texture range, and only uses a single low-res baked texture. The draw distance was low, the memory consumption high, and the resolution was too low.

This was a failure, and I thought maybe this technique was just impractical for anything but very controlled cases in certain games. Still, I wasn't ready to give up without trying one last approach. Instead of allocating textures for a grid section, I tried creating a radiating series of textures extending away from the camera:

    The resulting resolution wasn't great, but the memory consumption was a lot lower, and terrain texturing was now completely decoupled from the terrain geometry. I found by adjusting the distances at which the texture switches, I could get a pretty good resolution in the foreground. I was using only three texture stages, so I increased the number to six and found I could get a good resolution at all distances, using just six 1024x1024 textures. The memory consumption for this was just 24 megabytes, a very reasonable number. Since the texturing is independent from terrain geometry, the user can fine-tune the texture distances to accommodate flight sims, RPGs, or whatever kind of game they are making.
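For reference, both memory figures can be checked with a few lines of arithmetic (assuming 4 bytes per pixel for RGBA):

-- Grid approach: twelve 1024x1024 textures plus eight 2048x2048 textures
local grid = 12 * 1024 * 1024 * 4 + 8 * 2048 * 2048 * 4
-- Radial approach: six 1024x1024 textures
local radial = 6 * 1024 * 1024 * 4
print(grid / 2^20, radial / 2^20) -- 176    24 (megabytes)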

The last step was to add some padding around each virtual texture, so the virtual textures do not have to be completely redrawn each time the camera moves. I used a value of 0.25 times the size of the texture range, so the various virtual textures only get redrawn once in a while.
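A sketch of the redraw test (plain Lua, hypothetical names) might look like this: each texture ring remembers the camera position it was last rendered from, and is only redrawn when the camera strays beyond a quarter of the ring's range:

-- Only redraw a virtual texture ring when the camera has moved more
-- than 0.25 of the ring's range from the last render position.
local function NeedsRedraw(ring, camx, camz)
    local dx, dz = camx - ring.centerx, camz - ring.centerz
    return math.sqrt(dx * dx + dz * dz) > ring.range * 0.25
end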
    Advantages of Virtual Textures
First, because the terrain shader only has to perform a few texture lookups per pixel with almost no math, the new terrain shader runs much faster than the old one. Since the bottleneck for most games is pixel fillrate, this will make Leadwerks 3 games faster. Second, this allows us to use any number of texture layers on a terrain, with virtually no difference in rendering speed. This gives artists the flexibility to paint anything they want on the terrain, without worrying about budgets and constraints. A third advantage is that it allows the addition of "stamps", which are decals rendered directly into the virtual texture. This lets you add craters, clumps of rocks, and other details directly onto the terrain. The cost of rendering them in is negligible, and the resulting virtual texture runs at the exact same speed, no matter how much detail you pack into it. Below you can see a simple example. The smiley face is baked into the virtual texture, not rendered on top of the terrain:

    Conclusion
    The texture resolution problem I feared might make this approach impossible was solved by using a graduated series of six textures radiating out around the camera. I plan to implement some reasonable default settings, and it's only a minor hassle for the user to adjust the virtual texture draw distances beyond that.
    Rendering the virtual textures dynamically eliminates the high space requirements of id's megatexture technique, and also gets rid of the problems of paging texture data dynamically from the hard drive. At the same time, most of the flexibility of the megatexture technique is retained.
    Having the ability to paint terrain with any number of texture layers, plus the added stamping feature gives the artist a lot more flexibility than our old technique offered, and it even runs faster than the old terrain. This removes a major source of uncertainty from the development of Leadwerks 3.1 and turned out to be one of my favorite features in the new engine.