Blog Entries posted by Josh

  1. Josh
    I've moved the GI calculation over to the GPU and our Vulkan renderer in Leadwerks Game Engine 5 beta now supports volume textures. After a lot of trial and error I believe I am closing in on our final techniques. Voxel GI always involves a degree of light leakage, but this can be mitigated by setting a range for the ambient GI. I also implemented a hard reflection which was pretty easy to do. It would not be much more difficult to store the triangles in a lookup table for each voxel in order to trace a finer polygon-based ray for results that would look the same as Nvidia RTX but perform much faster.
    The video below is only performing a single GI bounce at this time, and it is displaying the lighting on the scene voxels, not on the original polygons. I am pretty pleased with this progress and I think the final results will look great and run fast. In addition, the need for environment probes placed in the scene will soon forever be a thing of the past.

    (Video: Voxel GI raytracing progress)

    There is still a lot of work to do on this, but I would say that this feature just went from something I was very overwhelmed and intimidated by to something that is definitely under control and feasible.
    Also, we might not use cascaded shadow maps (for directional lights) at all but instead rely on a voxel raytrace for directional light shadows. If it works, that would be my preference because CSMs waste so much space and drawing a big outdoors scene 3-4 times can be taxing.
  2. Josh
    I am happy to show you a preview of the new documentation system I am working on:

    Let's take a look at what is going on here:
    • It's dark, so you can stare lovingly at it for hours without going blind.
    • You can switch between languages with the links in the header.
    • Lots of internal cross-linking for easy access to relevant information.
    • Extensive, all-inclusive documentation, including Enums, file formats, constructors, and public members.
    • Data is fetched from a Github repository and allows user contributions.

    I am actually having a lot of fun creating this. It is very fulfilling to be able to go in and build something with total attention to detail.
  3. Josh
    All this month I have been working on a sort of academic paper for a conference I will be speaking at towards the end of the year. This paper covers details of my work for the last three years, and includes benchmarks that demonstrate the performance gains I was able to get as a result of the new design, based on an analysis of modern graphics hardware.
    I feel like my time spent has not been very efficient. I have not written any code in a while, but it's not like I was working that whole time. I had to just let the ideas settle for a bit.
    Activity doesn't always mean progress.
    Anyways, I am wrapping up now, and am very pleased with the results. It all turned out much much better than I was expecting.
  4. Josh
    TLDR: I made a long-term bet on VR and it's paying off. I haven't been able to talk much about the details until now.
    Here's what happened:
    Leadwerks 3.0 was released during GDC 2013. I gave a talk on graphics optimization and also had a booth at the expo. Something else significant happened that week.  After the expo closed I walked over to the Oculus booth and they let me try out the first Rift prototype.
    This was a pivotal time both for us and for the entire game industry. Mobile was on the downswing but there were new technologies emerging that I wanted to take advantage of. Our Kickstarter campaign for Linux support was very successful, reaching over 200% of its goal. This coincided with a successful Greenlight campaign to bring Leadwerks Game Engine to Steam in the newly-launched software section. The following month Valve announced the development of SteamOS, a Linux-based operating system for the Steam Machine game consoles. Because of our work in Linux and our placement in Steam, I was fortunate enough to be in close contact with much of the staff at Valve Software.
    The Early Days of VR
    It was during one of my visits to Valve HQ that I was able to try out a prototype of the technology that would go on to become the HTC Vive. In September of 2014 I bought an Oculus Rift DK2 and first started working with VR in Leadwerks. So VR has been something I have worked on in the background for a long time, but I was looking for the right opportunity to really put it to work. In 2016 I felt it was time for a technology refresh, so I wrote a blog about the general direction I wanted to take Leadwerks in. A lot of it centered around VR and performance. I didn't really know exactly how things would work out, but I knew I wanted to do a lot of work with VR.
    A month later I received a message on this forum that went something like this (as I recall):
    I thought "Okay, some stupid teenager, where is my ban button?", but when I started getting emails with nasa.gov return addresses I took notice.
    Now, Leadwerks Software has a long history of use in the defense and simulation industries, with orders for software from Northrop Grumman, Lockheed Martin, the British Royal Navy, and probably some others I don't know about. So NASA making an inquiry about software isn't too strange. What was strange was that they were very interested in meeting in person.
    Mr. Josh Goes to Washington
    I took my first trip to Goddard Space Center in January 2017 where I got a tour of the facility. I saw robots, giant satellites, rockets, and some crazy laser rooms that looked like a Half-Life level. It was my eleven-year-old self's dream come true. I was also shown some of the virtual reality work they are using Leadwerks Game Engine for. Basically, they were taking high-poly engineering models from CAD software and putting them into a real-time visualization in VR. There are some good reasons for this. VR gives you a stereoscopic view of objects that is far superior to a flat 2D screen. This makes a huge difference when you are viewing complex mechanical objects and planning robotic movements. You just can't see things on a flat screen the same way you can see them in VR. It's like the difference between looking at a photograph of an object versus holding it in your hands.

    What is even going on here???
    CAD models are procedural, meaning they have a precise mathematical formula that describes their shape. In order to render them in real-time, they have to be converted to polygonal models, but these objects can be tens of millions of polygons, with details down to threading on individual screws, and they were trying to view them in VR at 90 frames per second! Now with virtual reality, if there is a discrepancy between what your visual system and your vestibular system perceive, you will get sick to your stomach. That's why it's critical to maintain a steady 90 Hz frame rate. The engineers at NASA told me they first tried to use Unity3D but it was too slow, which is why they came to me. Leadwerks was giving them better performance, but it still was not fast enough for what they wanted to do next. I thought "these guys are crazy, it cannot be done".
    Then I remembered something else people said could never be done.

    So I started to think "if it were possible, how would I do it?" They had also expressed interest in an inverse kinematics simulation, so I put together this robotic arm control demo in a few days, just to show what could easily be done with our physics system.
     
    A New Game Engine is Born
    With the extreme performance demands of VR and my experience writing optimized rendering systems, I saw an opportunity to focus our development on something people can't live without: speed. I started building a new renderer designed specifically around the way modern PC hardware works. At first I expected to see performance increases of 2-3x. Instead what we are seeing are 10-40x performance increases under heavy loads. During this time I stayed in contact with people at NASA and kept them up to date on the capabilities of the new technology.
    At this point there was still nothing concrete to show for my efforts. NASA purchased some licenses for the Enterprise edition of Leadwerks Game Engine, but the demos I made were free of charge and I was paying my own travel expenses. The cost of plane tickets and hotels adds up quickly, and there was no guarantee any of this would work out. I did not want to talk about what I was doing on this site because it would be embarrassing if I made a lot of big plans and nothing came of it. But I saw a need for the technology I created and I figured something would work out, so I kept working away at it.
    Call to Duty
    Today I am pleased to announce I have signed a contract to put our virtual reality expertise to work for NASA. As we speak, I am preparing to travel to Washington D.C. to begin the project. In the future I plan to provide support for aerospace, defense, manufacturing, and serious games, using our new technology to help users deliver VR simulations with performance and realism beyond anything that has been possible until now.
    My software company and relationship with my customers (you) is unaffected. Development of the new engine will continue, with a strong emphasis on hyper-realism and performance. I think this is a direction everyone here will be happy with. I am going to continue to invest in the development of groundbreaking new features that will help in the aerospace and defense industries (now you understand why I have been talking about 64-bit worlds) and I think a great many people will be happy to come along for the ride in this direction.
    Leadwerks is still a game company, but our core focus is on enabling and creating hyper-realistic VR simulations. Thank you for your support and all the suggestions and ideas you have provided over the years that have helped me create great software for you. Things are about to get very interesting. I can't wait to see what you all create with the new technology we are building.
     
  5. Josh
    After working out a thread manager class that stores a stack of C++ command buffers, I've got a pretty nice proof of concept working. I can call functions in the game thread and the appropriate actions are pushed onto a command buffer that is then passed to the rendering thread when World::Render is called. The rendering thread is where all the (currently) OpenGL code is executed. When you create a context or load a shader, all it does is create the appropriate structure and send a request over to the rendering thread to finish the job:
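    The game-thread/render-thread hand-off described above can be sketched with standard C++. This is purely illustrative; CommandBuffer and CommandQueue are stand-in names, not the actual Leadwerks classes:

```cpp
#include <functional>
#include <mutex>
#include <utility>
#include <vector>

// A recorded list of deferred rendering commands.
class CommandBuffer {
public:
    void Add(std::function<void()> cmd) { commands.push_back(std::move(cmd)); }
    // Run on the rendering thread: execute everything that was recorded.
    void Execute() {
        for (auto& cmd : commands) cmd();
        commands.clear();
    }
private:
    std::vector<std::function<void()>> commands;
};

// The game thread records into this, and the equivalent of World::Render
// swaps the finished buffer over to the rendering thread.
class CommandQueue {
public:
    void Push(std::function<void()> cmd) {
        std::lock_guard<std::mutex> lock(mtx);
        recording.Add(std::move(cmd));
    }
    CommandBuffer Swap() {
        std::lock_guard<std::mutex> lock(mtx);
        CommandBuffer out = std::move(recording);
        recording = CommandBuffer();
        return out;
    }
private:
    std::mutex mtx;
    CommandBuffer recording;
};
```

    Creating a context or loading a shader would then just push a closure that performs the OpenGL work, and the rendering thread calls Execute on whatever buffer it receives.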

    Consequently, there is currently no way of detecting if OpenGL initialization fails(!) and in fact the game will still run along just fine without any graphics rendering! We obviously need a mechanism to detect this, but it is interesting that you can now load a map and run your game without ever creating a window or graphics context. The following code is perfectly legitimate in Leadwerks 5:
        #include "Leadwerks.h"

        using namespace Leadwerks;

        int main(int argc, const char *argv[])
        {
            auto world = CreateWorld();
            auto map = LoadMap(world, "Maps/start.map");
            while (true)
            {
                world->Update();
            }
            return 0;
        }

    The rendering thread is able to run at its own frame rate independently from the game thread, and I have tested under some pretty extreme circumstances to make sure the threads never lock up. By default, I think the game loop will probably self-limit its speed to a maximum of 30 updates per second, giving you a whopping 33 milliseconds for your game logic, but this frequency can be changed to any value, or completely removed by setting it to zero (not recommended, as this can easily lock up the rendering thread with an infinite command buffer stack!). No matter the game frequency, the rendering thread runs at its own speed, which is either limited by the window refresh rate, an internal clock, or it can just be let free to run as fast as possible for those triple-digit frame rates.
    Shaders are now loaded from multiple files instead of being packed into a single .shader file. When you load a shader, the file extension will be stripped off (if it is present) and the engine will look for .vert, .frag, .geom, .eval, and .ctrl files for the different shader stages:
        auto shader = LoadShader("Shaders/Model/diffuse");

    The asynchronous shader compiling in the engine could make our shader editor a little bit more difficult to handle, except that I don't plan on making any built-in shader editor in the new editor! Instead I plan to rely on Visual Studio Code as the official IDE, and maybe add a plugin that tests to see if shaders compile and link on your current hardware. I found that a pragma statement can be used to indicate include files (not implemented yet) and it won't trigger any errors in the VSCode intellisense:

    Although restructuring the engine to work in this manner is a big task, I am making good progress. Smart pointers make this system really easy to work with. When the owning object in the game thread goes out of scope, its associated rendering object is also collected...unless it is still stored in a command buffer or otherwise in use! The relationships I have worked out work perfectly and I have not run into any problems deciding what the ownership hierarchy should be. For example, a context has a shared pointer to the window it belongs to, but the window only has a weak pointer to the context. If the context handle is lost it is deleted, but if the window handle is lost the context prevents it from being deleted. The capabilities of modern C++ and modern hardware are making this development process a dream come true.
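    The window/context relationship described in this paragraph can be sketched with standard smart pointers. Window, GraphicsContext, and CreateContext here are illustrative stand-ins, not the actual engine classes:

```cpp
#include <memory>

struct GraphicsContext;

struct Window {
    // Weak: the window does not extend the context's lifetime.
    std::weak_ptr<GraphicsContext> context;
};

struct GraphicsContext {
    // Shared: the context keeps its window alive.
    std::shared_ptr<Window> window;
};

std::shared_ptr<GraphicsContext> CreateContext(std::shared_ptr<Window> window) {
    auto context = std::make_shared<GraphicsContext>();
    context->window = window;
    window->context = context;
    return context;
}
```

    Dropping the context handle lets the context be deleted, while dropping the window handle alone does nothing, because the context's shared pointer keeps the window alive.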
    Of course with forward rendering I am getting about 2000 FPS with a blank screen and Intel graphics, but the real test will be to see what happens when we start adding lots of lights into the scene. The only reason it might be possible to write a good forward renderer now is because graphics hardware has gotten a lot more flexible. Using a variable-length for loop and using the results of a texture lookup for the coordinates of another lookup  were a big no-no when we first switched to deferred rendering, but it looks like that situation has improved.
    The increased restrictions on the renderer and the total separation of internal and user-exposed classes are actually making it a lot easier to write efficient code. Here is my code for the indice array buffer object that lives in the rendering thread:
        #include "../../Leadwerks.h"

        namespace Leadwerks
        {
            OpenGLIndiceArray::OpenGLIndiceArray() : buffer(0), buffersize(0), lockmode(GL_STATIC_DRAW) {}

            OpenGLIndiceArray::~OpenGLIndiceArray()
            {
                if (buffer != 0)
                {
        #ifdef DEBUG
                    Assert(glIsBuffer(buffer), "Invalid indice buffer.");
        #endif
                    glDeleteBuffers(1, &buffer);
                    buffer = 0;
                }
            }

            bool OpenGLIndiceArray::Modify(shared_ptr<Bank> data)
            {
                //Error checks
                if (data == nullptr) return false;
                if (data->GetSize() == 0) return false;

                //Generate buffer
                if (buffer == 0) glGenBuffers(1, &buffer);
                if (buffer == 0) return false; //shouldn't ever happen

                //Bind buffer
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffer);

                //Set data
                if (buffersize == data->GetSize() and lockmode == GL_DYNAMIC_DRAW)
                {
                    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, data->GetSize(), data->buf);
                }
                else
                {
                    if (buffersize == data->GetSize()) lockmode = GL_DYNAMIC_DRAW;
                    glBufferData(GL_ELEMENT_ARRAY_BUFFER, data->GetSize(), data->buf, lockmode);
                }
                buffersize = data->GetSize();
                return true;
            }

            bool OpenGLIndiceArray::Enable()
            {
                if (buffer == 0) return false;
                if (buffersize == 0) return false;
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffer);
                return true;
            }

            void OpenGLIndiceArray::Disable()
            {
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
            }
        }

    From everything I have seen, my gut feeling tells me that the new engine is going to be ridiculously fast.
    If you would like to be notified when Leadwerks 5 becomes available, be sure to sign up for the mailing list here.
  6. Josh
    I have been spending most of my time on something else this month in preparation for the release of the Leadwerks 5 SDK. However, I did add one small feature today that has very big implications for the way the engine works. You can load a file from a web URL:
        local tex = LoadTexture("https://www.github.com/Leadwerks/Documentation/raw/master/Assets/brickwall01.dds")

    Why is this a big deal? Well, it means you can post code snippets that can be copied and pasted without requiring download of any extra files. That means the documentation can include examples that use files that aren't required to be in the user's project directory:
    https://github.com/Leadwerks/Documentation/blob/master/LUA_LoadTexture.md
    The documentation doesn't have to have any awkward zip files you are instructed to download like here, because any files that are needed in any examples can simply be linked directly to by the URL. So basically the default blank template can really be blank and doesn't need to include any "sample" files at all. If you have something like a model that has separate material and texture files, it should be possible to just link to the model file's URL and the rest of the associated files will be grabbed automatically.
  7. Josh
    I implemented light bounces and can now run the GI routine as many times as I want. When I use 25 rays per voxel and run the GI routine three times, here is the result. (The dark area in the middle of the floor is actually correct. That area should be lit by the sky color, but I have not yet implemented that, so it appears darker.)
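    To illustrate why repeating the routine adds a bounce per run, here is a toy one-dimensional version of the gather pass. This is purely illustrative (the real routine traces 25 rays per voxel through a 3D grid): each pass reads only the radiance computed by the previous pass, so light advances one bounce each time the routine runs.

```cpp
#include <cstddef>
#include <vector>

// Toy 1D gather: each cell receives its direct light plus half the average
// of its neighbors' radiance from the previous pass (a stand-in for the
// 25 rays traced per voxel in the real system).
std::vector<float> GatherBounces(const std::vector<float>& direct, int passes) {
    std::vector<float> radiance = direct;
    for (int pass = 0; pass < passes; ++pass) {
        std::vector<float> next(direct.size());
        for (std::size_t v = 0; v < direct.size(); ++v) {
            float incoming = 0.0f;
            int count = 0;
            if (v > 0) { incoming += radiance[v - 1]; ++count; }
            if (v + 1 < direct.size()) { incoming += radiance[v + 1]; ++count; }
            next[v] = direct[v] + 0.5f * (count ? incoming / count : 0.0f);
        }
        radiance = next;
    }
    return radiance;
}
```

    After one pass, cells adjacent to a lit cell pick up some light; after a second pass, their neighbors do too, which is exactly the behavior of running the GI routine three times.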


    It's sort of working but obviously these results aren't usable yet. Making matters more difficult is the fact that people love to show their best screenshots and love to hide the problems their code has, so it is hard to find something reliable to compare my results to.
    I also found that the GI pass, unlike all previous passes, is very slow. Each pass takes about 30 seconds in release mode! I could try to optimize the C++ code, but something tells me that even optimized C++ code would not be fast enough. So it seems the GI passes will probably need to be performed in a shader. First, though, I am going to experiment a bit with some ideas I have to provide better quality GI results.
     
  8. Josh
    I have resumed work on voxel-based global illumination using voxel cone step tracing in Leadwerks Game Engine 5 beta with our Vulkan renderer. I previously put about three months of work into this with some promising results, but it is a very difficult system and I wanted to focus on Vulkan. Some of the features we have gained since then, like Pixmaps and DXT decompression, make the voxel GI system easier to finish.
    I previously considered implementing Nvidia's raytracing techniques for Vulkan but the performance is terrible, even on the highest-end graphics cards. Voxel-based GI looks great and runs fast with basically no performance penalty.
    Below we have a section of the scene voxelized and lit with direct lighting. Loading the Sponza scene from GLTF format made it easy to display all materials and textures correctly.

    I found that the fastest way to manage voxel data was by storing the data in one big STL vector, and storing an STL set of occupied cells. (An STL set is like a map with only keys.) I found the fastest way to perform voxel raycasting was actually just to walk through the voxel data with optimized C++ code. This was much faster than my previous attempts to use octrees, and much simpler too! The above scene took about 100 milliseconds to calculate direct lighting on a single CPU core, which is three times faster than my previous attempts. This definitely means that CPU-based GI lighting may be possible, which is my preferred approach. It's easier to implement, easy to parallelize, more flexible, more reliable, uses less video memory, transfers less data to the GPU, and doesn't draw any GPU power away from rendering the rest of the scene.
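    A minimal sketch of that storage scheme (VoxelGrid is an illustrative type, not the engine's actual class): voxel colors live in one flat std::vector, occupancy lives in a std::set, and raycasting is a plain walk through the cells, restricted here to an axis-aligned ray for brevity.

```cpp
#include <cstdint>
#include <set>
#include <vector>

struct VoxelGrid {
    int size;                    // the grid is size*size*size cells
    std::vector<uint32_t> color; // flat RGBA storage, one entry per cell
    std::set<int> occupied;      // indices of non-empty cells

    explicit VoxelGrid(int s) : size(s), color(s * s * s, 0) {}

    int Index(int x, int y, int z) const { return (z * size + y) * size + x; }

    void Set(int x, int y, int z, uint32_t rgba) {
        int i = Index(x, y, z);
        color[i] = rgba;
        occupied.insert(i);
    }

    // Walk cell by cell along a +X ray; the real renderer steps through the
    // data the same way, just along arbitrary directions.
    bool RaycastX(int y, int z, int& hitx) const {
        for (int x = 0; x < size; ++x) {
            if (occupied.count(Index(x, y, z))) { hitx = x; return true; }
        }
        return false;
    }
};
```

    Because the occupancy set is separate from the color array, empty regions cost almost nothing to test, while the flat vector keeps color lookups cache-friendly.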
    The challenge will be in minimizing the delay between when an object moves, GI is recalculated, and the data is uploaded to the GPU and appears onscreen. I am guessing a delay somewhere around 200 milliseconds will be acceptable. It should also be considered that only an onscreen object will have a perceived delay if its reflection is slow to appear. An offscreen object will have no perceived delay, because you only ever see its reflection. Using screen-space reflections on pixels that can use them is one way to mitigate that problem, but if possible I would prefer to use one uniform system instead of mixing two rendering techniques.
    If this does not work then I will upload a DXT compressed texture containing the voxel data to the GPU. There are several stages at which the data can be handed off, so the question is which one works best?

    My design has changed a bit, but this is a pretty graphic.
    Using the pixmap class I will be able to load low-resolution versions of textures into system memory, decompress them to a readable format, and use that data to colorize the voxels according to the textures and UV coordinates of the vertices that are fed into the voxelization process.
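    The colorization step might look something like this (LowResPixmap and its Sample method are assumptions for illustration, not the engine's actual Pixmap API): a nearest-neighbor lookup into a decompressed RGBA image at a vertex UV coordinate.

```cpp
#include <cstdint>
#include <vector>

// A small decompressed RGBA image kept in system memory.
struct LowResPixmap {
    int width = 0, height = 0;
    std::vector<uint32_t> pixels; // row-major RGBA

    // Nearest-neighbor sample at normalized UV coordinates, with wrapping.
    uint32_t Sample(float u, float v) const {
        int x = static_cast<int>(u * width) % width;
        int y = static_cast<int>(v * height) % height;
        if (x < 0) x += width;
        if (y < 0) y += height;
        return pixels[y * width + x];
    }
};
```

    Each voxel would then take its color from Sample(u, v), using the interpolated UVs of the triangle that produced it during voxelization.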
  9. Josh
    I've had some more time to work with the Lua debugger in Leadwerks Game Engine 5 beta, and it's really amazing.  Adding the engine classes into the debug information has been pretty simple. All it takes is a class function that adds members into a table and returns it to Lua.
        sol::table Texture::debug(sol::this_state ts) const
        {
            auto t = Object::debug(ts);
            t["size"] = size;
            t["format"] = format;
            t["type"] = type;
            t["flags"] = flags;
            t["samples"] = samples;
            t["faces"] = faces;
            return t;
        }

    The base Object::debug function will add all the custom properties that you attach to the object:
        sol::table Object::debug(sol::this_state ts) const
        {
            sol::table t(ts, sol::create);
            for (auto& pair : entries)
            {
                if (pair.second.get_type() == sol::type::function) continue;
                if (Left(pair.first, 1) == "_") continue;
                t[pair.first] = pair.second;
            }
            return t;
        }

    This allows you to access both the built-in class members and your own values you attach to an object. You can view all these variables in the side panel while debugging, in alphabetical order:

    You can even hover over a variable to see its contents!

    The Lua debugger in Leadwerks 4 just sends a static stack of data to the IDE that is a few levels deep, but the new Lua debugger in VS Code will actually allow you to traverse the code and look all around your program. You can drill down as deep as you want, even viewing the positions of individual vertices in a model:

    This gives us a degree of power we've never had before with Lua game coding. Programming games with Lua will be easier than ever in our new game engine, and it's easy to add your own C++ classes to the environment.
  10. Josh
    Leadwerks Game Engine 5 Beta now supports debugging Lua in Visual Studio Code. To get started, install the Lua Debugger extension by DevCat.
    Open the project folder in VSCode and press F5. Choose the Lua debugger if you are prompted to select an environment.
    You can set breakpoints and step through Lua code, viewing variables and the callstack. All printed output from your game will be visible in the Debug Console within the VS Code interface.

    Having first-class support for Lua code in a professional IDE is a dream come true. This will make development with Lua in Leadwerks Game Engine 5 a great experience.
  11. Josh
    In Leadwerks Game Engine 4, bones are a type of entity. This is nice because all the regular entity commands work just the same on them, and there is not much to think about. However, for ultimate performance in Leadwerks 5 we treat bones differently. Each model can have a skeleton made up of bones. Skeletons can be unique for each model, or shared between models. Animation occurs on a skeleton, not a model. When a skeleton is animated, every model that uses that skeleton will display the same motion. Skeleton animations are performed on one or more separate threads, and the skeleton bones are able to skip a lot of overhead that would be required if they were entities, like updating the scene octree. This system allows for thousands of animated characters to be displayed with basically no impact on framerate:
    This is not some special case I created that isn't practical for real use. This is the default animation system, and you can always rely on it working this fast.
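    The shared-skeleton design described above boils down to a couple of lines (Skeleton and Model here are illustrative stand-ins, not the real engine classes): animate the skeleton once, and every model that points at it sees the new pose for free.

```cpp
#include <memory>
#include <vector>

// One pose, animated once, shared by any number of models.
struct Skeleton {
    std::vector<float> pose; // one transform per bone, reduced to a float here
    void Animate(float time) {
        // Stand-in for real keyframe evaluation on the animation thread.
        for (auto& p : pose) p = time;
    }
};

struct Model {
    std::shared_ptr<Skeleton> skeleton; // many models can share one skeleton
};
```

    Because the per-bone work happens once per skeleton rather than once per model, a crowd of identical characters costs the same to animate as a single one.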
    There is one small issue that this design neglects, however, and that is the simple situation where we want to attach a weapon or other item to an animated character when that item isn't built into the model. In Leadwerks 4 we would just parent the model to the bone, but we said bones are no longer entities, so this won't work. So how do we do this? The answer is bone attachments:
        auto gun = LoadModel(world, "Models/gun.gltf");
        gun->Attach(player, "R_Hand");

    Every time the model animation is updated, the gun orientation will be forced to match the bone we specified. If we want to adjust the orientation of the gun relative to the bone, then we can use a pivot:
        auto gun = LoadModel(world, "Models/gun.gltf");
        auto gunholder = CreatePivot(world);
        gunholder->Attach(player, "R_Hand");
        gun->SetParent(gunholder);
        gun->SetRotation(0, 180, 0);

    Attaching an entity to a bone does not take ownership of the entity, so you need to maintain a handle to it until you want it to be deleted. One way to do this is to just parent the entity to the model you are attaching to:
        auto gun = LoadModel(world, "Models/gun.gltf");
        gun->Attach(player, "R_Hand");
        gun->SetParent(player);
        gun = nullptr; //player model keeps the gun in scope

    In this example I created some spheres to indicate where the model bones are, and attached them to the model bones:
    Bone attachments involve three separate objects: the entity, the model, and the bone. This would be totally impossible to do without weak pointers in C++.
  12. Josh
    A new update is available for beta testers. This adds navmesh pathfinding, bone attachments, and the beginning of the Lua debugging capabilities. New commands for creating navigation meshes for AI pathfinding are included.
    NavMesh Pathfinding
    In Leadwerks Game Engine 5 you can create your own navmeshes and AI agents, with all your own parameters for player height, step height, walkable slope, etc.:
        shared_ptr<NavMesh> CreateNavMesh(shared_ptr<World> world, const float width, const float height,
            const float depth, const int tilesx, const int tilesz, const float agentradius = 0.4,
            const float agentheight = 1.8, const float agentstepheight = 0.501, const float maxslope = 45.01f);

    You can create AI agents yourself now:
        shared_ptr<NavAgent> CreateNavAgent(shared_ptr<NavMesh> navmesh, const float radius, const float height,
            const UpdateFlags updateflags)

    Here are some of the NavAgent methods you can use:
        void NavAgent::SetPosition(const float x, const float y, const float z);
        void NavAgent::SetRotation(const float angle);
        bool NavAgent::Navigate(const float x, const float y, const float z);

    New capabilities let you find a random point on the navmesh, or test to see if a point lies on a navmesh. As @reepblue pointed out, in addition to AI this feature could be used to test if a player is able to teleport to a position with VR locomotion:
        bool NavMesh::FindRandomPoint(Vec3& point)
        bool NavMesh::IntersectsPoint(const Vec3& point)

    You can call Entity::Attach(agent) to attach an entity to an agent so it follows it around.
    You can even create multiple navmeshes for different sized characters. In the video below, I created one navmesh for the little goblins, and another one with different parameters for the big guy. I created agents for each character on the appropriate sized navmesh, and then I created a big AI agent on both navmeshes. On the navmesh with big parameters, I use the regular navigation system, but the big agent on the little navmesh gets manually repositioned each frame. This results in an agent that the little goblins walk around, and the end result is a natural mixing of the two character sizes.
    The current implementation is static-only and will be built at the time of creation. You need to call Entity::SetNavigationMode(true) on any entities you want to contribute to the navmesh. Physics shape geometry will be used for navigation, not visible mesh geometry. I plan to add support for dynamic navmeshes that rebuild as the level changes next.
    Note that by default there is no interaction between physics and navigation. If you want AI agents to be collidable with the physics system, you need to create a physics object and position that as the agents move. This gives you complete control over the whole system.
    Bone Attachments
    To improve speed, bones are a special type of object in Leadwerks Game Engine 5, and are not entities. Bone attachments allow you to "parent" an entity to a bone, so you can do things like place a sword in a character's hand. You can read more about bone attachments here:
    Lua Debugging in Visual Studio Code
    The beginnings of our Lua debugging system are taking shape. You can now launch your game directly from VSCode. To do this, open the project folder in VSCode. Install the "Lua Debugger" extension from DevCat. Press F5 and the game should start running. See the .vscode/launch.json file for more details.
  13. Josh
    An update is available for beta testers.
    What's new:
    • GLTF animations now work! New example included. Any models from Sketchfab should work.
    • Added Camera::SetGamma, GetGamma. Gamma is 1.0 by default, use 2.2 for dark scenes.
    • Fixed bug that was creating extra bones. This is why the animation example was running slow in previous versions.
    • Fixed bug where metalness was being read from wrong channel in metal-roughness map. Metal = R, roughness = G.
    • Texture definitions in JSON materials are changed, but the old scheme is left in for compatibility. Textures are now an array:
        {
            "material":
            {
                "color": [ 1, 1, 1, 1 ],
                "emission": [ 0, 0, 0 ],
                "metallic": 0.75,
                "roughness": 0.5,
                "textures":
                [
                    { "slot": "BASE", "file": "./wall_01_D.tex" },
                    { "slot": "NORMAL", "file": "./wall_01_N.tex" }
                ]
            }
        }

    The slot value can be an integer from 0-31 or one of these strings:
    • BASE
    • NORMAL
    • METALLIC_ROUGHNESS
    • DISPLACEMENT
    • EMISSION
    • BRDF

    Bugs:
    • FPS example menu freezes. Close window to exit instead.
    • Looping animations are not randomized, so the animation example will show characters that appear identical even though they are separate skeletons animating independently.
    • Unskinned GLTF animation is not yet supported (requires bone attachments feature).
  14. Josh
    A new beta update to Leadwerks Game Engine 5 is available now.
    New stuff:
    • Streaming terrain
    • CopyRect and Texture::SetSubPixels
    • Texture saving

    Note that the "SharedObject" class has been renamed to "Object" and that math classes (Vec3, Vec4, Plane, Mat3, etc.) no longer derive from anything.
  15. Josh
    Last year Google and Binomial LLC partnered to release the Basis Universal library as open-source. This library is the successor to Crunch. Both these libraries are like OGG compression for textures. They compress data very well into small file sizes, but once loaded the data takes the same space in memory as it normally does. The benefit is that it can reduce the size of your game files. Crunch only supports DXT compression, but the newer Basis library supports modern compression formats like BC5, which gives artifact-free compressed normal maps, and BC7, which uses the same amount of space as DXT5 textures but pretty much eliminates blocky DXT artifacts on color images.
    I've added a Basis plugin to the Plugin SDK on Github so you can now load and save .basis texture files in the Leadwerks Game Engine 5 beta. Testing with a large 4096x2048 image in a variety of formats, here are my results.
TGA (uncompressed): 24 MB
PNG (lossless compression): 6.7 MB
DDS (DXT1 compression): 5.33 MB
Zipped DDS: 2.84 MB
JPEG: 1.53 MB
BASIS: 573 KB
    The zipped DXT option is the most like what you are using now in Leadwerks Game Engine 4. Since the overwhelming majority of your game size comes from texture files, we can extrapolate that Basis textures can reduce your game's data size to as little as 20% what it is now.
    With all the new image IO features, I wanted to write a script to convert texture files out of the Leadwerks 4 .tex file format and into something more transparent. If you drop the script below into the "Scripts/Start" folder it will automatically detect and convert .tex files into .dds:
function BatchConvertTextures(path, in_ext, out_ext, recursive)
    local dir = LoadDir(path)
    for n = 1, #dir do
        local ftype = FileType(path.."/"..dir[n])
        if ftype == 1 then
            if ExtractExt(dir[n]) == in_ext then
                local savefile = path.."/"..StripExt(dir[n]).."."..out_ext
                if FileTime(path.."/"..dir[n]) > FileTime(savefile) then
                    local pixmap = LoadPixmap(path.."/"..dir[n])
                    if pixmap ~= nil then pixmap:Save(savefile) end
                end
            end
        elseif ftype == 2 and recursive == true then
            BatchConvertTextures(path.."/"..dir[n], in_ext, out_ext, true)
        end
    end
end

BatchConvertTextures(".", "tex", "dds", true)

Here are my freightyard materials, out of the opaque .tex format and back into something I can easily view in Windows Explorer:

These files all use the same raw pixels as the .tex files, which internally are very similar to DDS. In order to convert them to .basis format, we need to get them into RGBA format, which means we need a DXT decompression routine. I found one and quickly integrated it, then revised my script to convert the loaded pixmaps to uncompressed RGBA format and save them as PNG files.
--Make sure FreeImage plugin is loaded
if Plugin_FITextureLoader == nil then
    Plugin_FITextureLoader = LoadPlugin("Plugins/FITextureLoader.dll")
end

function BatchConvertTextures(path, in_ext, out_ext, recursive, format)
    local dir = LoadDir(path)
    for n = 1, #dir do
        local ftype = FileType(path.."/"..dir[n])
        if ftype == 1 then
            if ExtractExt(dir[n]) == in_ext then
                local savefile = path.."/"..StripExt(dir[n]).."."..out_ext
                if FileTime(path.."/"..dir[n]) > FileTime(savefile) then
                    --Load pixmap
                    local pixmap = LoadPixmap(path.."/"..dir[n])
                    --Convert to desired format, if specified
                    if format ~= nil and pixmap ~= nil then
                        if pixmap.format ~= format then
                            pixmap = pixmap:Convert(format)
                        end
                    end
                    --Save
                    if pixmap ~= nil then pixmap:Save(savefile) end
                end
            end
        elseif ftype == 2 and recursive == true then
            --Pass the format through so recursion does not lose it
            BatchConvertTextures(path.."/"..dir[n], in_ext, out_ext, true, format)
        end
    end
end

BatchConvertTextures("Materials/Freightyard", "tex", "png", true, TEXTURE_RGBA)

Cool! Now I have all my .tex files back out as PNGs I can open and edit in any paint program. If you have .tex files that you have lost the source images for, there is now a way to convert them back.

    Now I am going to try converting this folder of .tex files into super-compressed .basis files. First I will add a line of code to make sure the Basis plugin has been loaded:
--Make sure Basis plugin is loaded
if Plugin_Basis == nil then
    Plugin_Basis = LoadPlugin("Plugins/Basis.dll")
end

And then I can simply change the save file extension to "basis" and it should work:
    BatchConvertTextures("Materials/Freightyard", "tex", "basis", true, TEXTURE_RGBA) And here it is! Notice the file size of the selected image.

    If you want to view basis textures in Windows explorer there is a handy-dandy thumbnail previewer available.
    Now let's see what the final file sizes are for this texture set.
Uncompressed TEX files (DXT only): 35.2 MB
Zipped TEX files: 25.8 MB
BASIS: 7.87 MB

When converted to Basis files the same textures take just 30% of the size of the zipped package, and 22% of the size of the extracted TEX files. That makes Basis Universal definitely worth using!
  16. Josh
    The terrain system in Leadwerks Game Engine 4 allows terrains up to 64 square kilometers in size. This is big enough for any game where you walk and most driving games, but is not sufficient for flight simulators or space simulations. For truly massive terrain, we need to be able to dynamically stream data in and out of memory, at multiple resolutions, so we can support terrains bigger than what would otherwise fit in memory all at once.
    The next update of Leadwerks Game Engine 5 beta supports streaming terrain, using the following command:
    shared_ptr<StreamingTerrain> CreateStreamingTerrain(shared_ptr<World> world, const int resolution, const int patchsize, const std::wstring& datapath, const int atlassize = 1024, void FetchPatchInfo(TerrainPatchInfo*) = NULL) Let's looks at the parameters:
resolution: Number of terrain points along one edge of the terrain, should be power-of-two.
patchsize: The number of tiles along one edge of a terrain piece, should be a power-of-two number, probably 64 or 128.
datapath: By default this indicates a file path and name but can be customized.
atlassize: Width and height of the texture atlas texture data is copied into. 1024 is usually fine.
FetchPatchInfo: Optional user-defined callback to override the default data handler.

A new Lua sample is included that creates a streaming terrain:
local terrain = CreateStreamingTerrain(world, 32768, 64, "Terrain/32768/32768")
terrain:SetScale(1,1000,1)

The default fetch patch function can be used to make your own data handler. Here is the default function, which is probably more complex than what you need for streaming GIS data. The key parts to note are:
The TerrainPatchInfo structure contains the patch X and Y position and the level of detail.
The member patch->heightmap should be set to a pixmap with format TEXTURE_R16.
The member patch->normalmap should be set to a pixmap with format TEXTURE_RGBA (for now). You can generate this from the heightmap using MakeNormalMap().
The scale value input into MakeNormalMap() should be the terrain vertical scale you intend to use, times two, divided by the mipmap level plus one. This ensures normals are calculated correctly at each LOD.
For height and normal data, which is all that is currently supported, you should use the dimensions patchsize + 1, because a 64x64 patch, for example, uses 65x65 vertices.
Don't forget to call patch->UpdateBounds() to calculate the AABB for this patch.
The function must be thread-safe, as it will be called from many different threads, simultaneously.

void StreamingTerrain::FetchPatchInfo(TerrainPatchInfo* patch)
{
    //User-defined callback
    if (FP_FETCH_PATCH_INFO != nullptr)
    {
        FP_FETCH_PATCH_INFO(patch);
        return;
    }
    auto stream = this->stream[TEXTURE_DISPLACEMENT];
    if (stream == nullptr) return;
    int countmips = 1;
    int mw = this->resolution.x;
    while (mw > this->patchsize)
    {
        countmips++;
        mw /= 2;
    }
    int miplevel = countmips - 1 - patch->level;
    Assert(miplevel >= 0);
    uint64_t mipmapsize = Round(this->resolution.x * 1.0f / pow(2.0f, miplevel));
    auto pos = mipmappos[TEXTURE_DISPLACEMENT][miplevel];
    uint64_t rowpos;
    patch->heightmap = CreatePixmap(patchsize + 1, patchsize + 1, TEXTURE_R16);
    uint64_t px = patch->x * patchsize;
    uint64_t py = patch->y * patchsize;
    int rowwidth = patchsize + 1;
    for (int ty = 0; ty < patchsize + 1; ++ty)
    {
        if (py + ty >= mipmapsize)
        {
            patch->heightmap->CopyRect(0, patch->heightmap->size.y - 2, patch->heightmap->size.x, 1, patch->heightmap, 0, patch->heightmap->size.y - 1);
            continue;
        }
        if (px + rowwidth > mipmapsize) rowwidth = mipmapsize - px;
        rowpos = pos + ((py + ty) * mipmapsize + px) * 2;
        streammutex[TEXTURE_DISPLACEMENT]->Lock();
        stream->Seek(rowpos);
        stream->Read(patch->heightmap->pixels->data() + (ty * (patchsize + 1) * 2), rowwidth * 2);
        streammutex[TEXTURE_DISPLACEMENT]->Unlock();
        if (rowwidth < patchsize + 1)
        {
            patch->heightmap->WritePixel(patch->heightmap->size.x - 1, ty, patch->heightmap->ReadPixel(patch->heightmap->size.x - 2, ty));
        }
    }
    patch->UpdateBounds();
    stream = this->stream[TEXTURE_NORMAL];
    if (stream == nullptr)
    {
        patch->normalmap = patch->heightmap->MakeNormalMap(scale.y * 2.0f / float(1 + miplevel), TEXTURE_RGBA);
    }
    else
    {
        pos = mipmappos[TEXTURE_NORMAL][miplevel];
        Assert(pos < stream->GetSize());
        patch->normalmap = CreatePixmap(patchsize + 1, patchsize + 1, TEXTURE_RGBA);
        rowwidth = patchsize + 1;
        for (int ty = 0; ty < patchsize + 1; ++ty)
        {
            if (py + ty >= mipmapsize)
            {
                patch->normalmap->CopyRect(0, patch->normalmap->size.y - 2, patch->normalmap->size.x, 1, patch->normalmap, 0, patch->normalmap->size.y - 1);
                continue;
            }
            if (px + rowwidth > mipmapsize) rowwidth = mipmapsize - px;
            rowpos = pos + ((py + ty) * mipmapsize + px) * 4;
            Assert(rowpos < stream->GetSize());
            streammutex[TEXTURE_NORMAL]->Lock();
            stream->Seek(rowpos);
            stream->Read(patch->normalmap->pixels->data() + uint64_t(ty * (patchsize + 1) * 4), rowwidth * 4);
            streammutex[TEXTURE_NORMAL]->Unlock();
            if (rowwidth < patchsize + 1)
            {
                patch->normalmap->WritePixel(patch->normalmap->size.x - 1, ty, patch->normalmap->ReadPixel(patch->normalmap->size.x - 2, ty));
            }
        }
    }
}

There are some really nice behaviors that came about naturally as a consequence of the design.
Because the culling algorithm works its way down the quad tree with only known patches of data, the lower-resolution sections of terrain will display first and then be replaced with higher-resolution patches as they are loaded in.
If the cache gets filled up, low-resolution patches will be displayed until the cache clears up and more detailed patches are loaded in.
If all the patches in one draw call have not yet been loaded, the previous draw call's contents will be rendered instead.

As a result, the streaming terrain is turning out to be far more robust than I was initially expecting. I can fly close to the ground at 650 MPH (the speed of some fighter jets) with no problems at all.
    There are also some issues to note:
Terrain still has cracks in it.
Seams in the normal maps will appear along the edges, for now.
Streaming terrain does not display material layers at all, just height and normal.

But there is enough to start working with it and piping data into the system.
     
  17. Josh
    The new engine features advanced image and texture manipulation commands that allow a much deeper level of control than the mostly automated pipeline in Leadwerks Game Engine 4. This article is a deep dive into the new image and texture system, showing how to load, modify, and save textures in a variety of file formats and compression modes.
    Texture creation has been finalized. Here is the command:
    shared_ptr<Texture> CreateTexture(const TextureType type, const int width, const int height, const TextureFormat format = TEXTURE_RGBA, const std::vector<shared_ptr<Pixmap> > mipchain = {}, const int layers = 1, const TextureFlags = TEXTURE_DEFAULT, const int samples = 0); It seems like once you get to about 6-7 function parameters, it starts to make more sense to fill in a structure and pass that to a function the way Vulkan and DirectX do. Still, I don't want to go down that route unless I have to, and there is little reason to introduce that inconsistency into the API just for a handful of long function syntaxes.
    The type parameter can be TEXTURE_2D, TEXTURE_3D, or TEXTURE_CUBE. The mipchain parameter contains an array of images for all miplevels of the texture.
    We also have a new SaveTexture command which takes texture data and saves it into a file. The engine has built-in support for saving DDS files, and plugins can also provide support for additional file formats. (In Lua we provide a table of pixmaps rather than an STL vector.)
    bool SaveTexture(const std::wstring& filename, const TextureType type, const std::vector<shared_ptr<Pixmap> > mipchain, const int layers = 1, const SaveFlags = SAVE_DEFAULT); This allows us to load a pixmap from any supported file format and resave it as a texture, or as another image file, very easily:
--Load image
local pixmap = LoadPixmap("Materials/77684-blocks18c_1.jpg")

--Convert to RGBA if not already
if pixmap.format ~= TEXTURE_RGBA then pixmap = pixmap:Convert(TEXTURE_RGBA) end

--Save pixmap to texture file
SaveTexture("OUTPUT.dds", TEXTURE_2D, {pixmap}, 1, SAVE_DEFAULT)

If we open the saved DDS file in Visual Studio 2019 we see it looks exactly like the original, and is still in uncompressed RGBA format. Notice there are no mipmaps shown for this texture because we only saved the top-level image.

    Adding Mipmaps
To add mipmaps to the texture file, we can specify the SAVE_BUILD_MIPMAPS flag in the flags parameter of the SaveTexture function.
    SaveTexture("OUTPUT.dds", TEXTURE_2D, {pixmap}, 1, SAVE_BUILD_MIPMAPS) When we do this we can see the different mipmap levels displayed in Visual Studio, and we can verify that they look correct.

    Compressed Textures
If we want to save a compressed texture, there is a problem. The SAVE_BUILD_MIPMAPS flag won't work with compressed texture formats, because we cannot perform a bilinear sample on compressed image data. (We would have to decompress, interpolate, and then recompress each block, which could lead to slow processing times and visible artifacts.) To save a compressed DDS texture with mipmaps we will need to build the mipmap chain ourselves and compress each pixmap before saving.
    This script will load a JPEG image as a pixmap, generate mipmaps by resizing the image, convert each mipmap to BC1 compression format, and save the entire mip chain as a single DDS texture file:
--Load image
local pixmap = LoadPixmap("Materials/77684-blocks18c_1.jpg")
local mipchain = {}
table.insert(mipchain, pixmap)

--Generate mipmaps down to 1x1, halving each axis until both reach 1
local w = pixmap.size.x
local h = pixmap.size.y
local mipmap = pixmap
while w > 1 or h > 1 do
    w = math.max(1, w / 2)
    h = math.max(1, h / 2)
    mipmap = mipmap:Resize(w, h)
    table.insert(mipchain, mipmap)
end

--Convert each image to BC1 (DXT1) compression
for n = 1, #mipchain do
    mipchain[n] = mipchain[n]:Convert(TEXTURE_BC1)
end

--Save mipchain to texture file
SaveTexture("OUTPUT.dds", TEXTURE_2D, mipchain, 1, SAVE_DEFAULT)

If we open this file in Visual Studio 2019 we can inspect the individual mipmap levels and verify they are being saved into the file. Also note that the correct texture format is displayed.

    This system gives us fine control over every aspect of texture files. For example, if you wanted to write a mipmap filter that blurred the image a bit with each resize, with this system you could easily do that.
    Building Cubemaps
    We can also save cubemaps into a single DDS or Basis file by providing additional images. In this example we will load a skybox strip that consists of six images side-by-side, copy sections of the sky to different images, and then save them all as a single DDS file.
    Here is the image we will be loading. It's already laid out in the order +X, -X, +Y, -Y, +Z, -Z, which is what the DDS format uses internally.

    First we will load the image as a pixmap and check to make sure it is six times as wide as it is high:
--Load skybox strip
local pixmap = LoadPixmap("Materials/zonesunset.png")
if pixmap.size.x ~= pixmap.size.y * 6 then
    Print("Error: Wrong image aspect.")
    return
end

Next we will create a series of six pixmaps and copy a section of the image to each one, using the CopyRect method.
--Copy each face to a different pixmap
local faces = {}
for n = 1, 6 do
    faces[n] = CreatePixmap(pixmap.size.y, pixmap.size.y, pixmap.format)
    pixmap:CopyRect((n-1) * pixmap.size.y, 0, pixmap.size.y, pixmap.size.y, faces[n], 0, 0)
end

To save a cubemap you must set the type to TEXTURE_CUBE and the layers value to 6:
    --Save as cube map SaveTexture("CUBEMAP.dds", TEXTURE_CUBE, faces, 6, SAVE_DEFAULT) And just like that, you've got your very own cubemap packed into a single DDS file. You can switch through all the faces of the cubemap by changing the frames value on the right:

For uncompressed cubemaps, we can just specify the SAVE_BUILD_MIPMAPS flag and mipmaps will automatically be created and saved in the file:
    --Save as cube map, with mipmaps SaveTexture("CUBEMAP+mipmaps.dds", TEXTURE_CUBE, faces, 6, SAVE_BUILD_MIPMAPS) Opening this DDS file in Visual Studio 2019, we can view all cubemap faces and verify that the mipmap levels are being generated and saved correctly:

    Now in Leadwerks Game Engine 4 we store skyboxes in large uncompressed images because DXT compression does not handle gradients very well and causes bad artifacts. The new BC7 compression mode, however, is good enough to handle skyboxes and takes the same space as DXT5 in memory. We already learned how to save compressed textures. The only difference with a skybox is that you store mipmaps for each cube face, in the following order:
for f = 1, faces do
    for m = 1, miplevels do

The easy way, though, is to just save the RGBA faces as a Basis file, with mipmaps. The Basis compressor will handle everything for us and give us a smaller file:
    --Save as cube map, with mipmaps SaveTexture("CUBEMAP+mipmaps.basis", TEXTURE_CUBE, faces, 6, SAVE_BUILD_MIPMAPS) The outputted .basis file works correctly when loaded in the engine. Here are the sizes of this image in different formats, with mipmaps included:
Uncompressed RGBA: 32 MB
BC7 compressed DDS: 8 MB
Zipped uncompressed RGBA: 4.58 MB
Zipped BC7 compressed DDS: 3.16 MB
Basis file: 357 KB

Note that the Basis file still takes the same amount of memory as the BC7 file, once it is loaded onto the GPU. Also note that a skybox in Leadwerks Game Engine 5 consumes less than 1% of the hard drive space of a skybox in Leadwerks 4.
    Another option is to save the cubemap faces as individual images and then assemble them in a tool like ATI's CubemapGen. This was actually the first approach I tried while writing this article. I loaded the original .tex file and saved out a bunch of PNG images, as shown below.

    Modifying Images
    The CopyRect method allows us to copy sections of images from one to another as long as the images use the same format. We can also copy from one section of an image to another area on itself. This code will load a pixmap, copy a rectangle from one section of the image to another, and resave it as a simple uncompressed DDS file with no mipmaps:
--Load image
local pixmap = LoadPixmap("Materials/77684-blocks18c_1.jpg")

--Convert to RGBA if not already
if pixmap.format ~= TEXTURE_RGBA then pixmap = pixmap:Convert(TEXTURE_RGBA) end

--Let's make some changes :D
pixmap:CopyRect(0,0,256,256,pixmap,256,256)

--Save uncompressed DDS file
SaveTexture("MODIFIED.dds", TEXTURE_2D, {pixmap}, 1, SAVE_DEFAULT)

When you open the DDS file you can see the copy operation worked:

We can even modify compressed images that use any of the FourCC-type compression modes. This includes BC1-BC5. The only difference is that instead of copying individual pixels, we are now working with 4x4 blocks of pixels. The easiest way to handle this is to just divide all your numbers by four:
--Convert to compressed format
pixmap = pixmap:Convert(TEXTURE_BC1)

--We specify blocks, not pixels. Blocks are 4x4 squares.
pixmap:CopyRect(0,0,64,64,pixmap,64,64)

--Save compressed DDS file
SaveTexture("COMPRESSED+MODIFIED.dds", TEXTURE_2D, {pixmap}, 1, SAVE_DEFAULT)

When we open the image in Visual Studio the results appear identical, but note that the format is compressed BC1. This means we can perform modifications on compressed images without ever decompressing them:

    The Pixmap class has a GetSize() method which returns the image size in pixels, and it also has a GetBlocks() function. With uncompressed images, these two methods return the same iVec2 value. With compressed formats, the GetBlocks() value will be the image width and height in pixels, divided by four. This will help you make sure you are not drawing outside the bounds of the image. Note that this technique will work just fine with BC1-BC5 format, but will not work with BC6h and BC7, because these use more complex data formats.
Texture SetSubPixels
    We have a new texture method that functions like CopyRect but works on textures that are already loaded on the GPU. As we saw with pixmaps, it is perfectly safe to modify even compressed data as long as we remember that we are working with blocks, not pixels. This is the command syntax:
    void SetSubPixels(shared_ptr<Pixmap> pixmap, int x, int y, int width, int height, int dstx, int dsty, const int miplevel = 0, const int layer = 0); To do this we will load a texture, and then load a pixmap. Remember, pixmaps are image data stored in system memory and textures are stored on the GPU. If the loaded pixmap's format does not match the texture pixel format, then we will convert the pixmap to match the texture:
--Load a texture
local texture = LoadTexture("Materials/77684-blocks18c_1.jpg")

--Load the pixmap
local stamp = LoadPixmap("Materials/stamp.png")
if stamp.format ~= texture.format then stamp = stamp:Convert(texture.format) end

To choose a position to apply the pixmap to, I used the camera pick function and used the picked texture coordinate to calculate an integer offset for the texture:
local mpos = window:GetMousePosition()
if window:MouseDown(MOUSE_LEFT) == true and mpos:DistanceToPoint(mouseposition) > 50 then
    mouseposition = mpos
    local pick = camera:Pick(framebuffer, mpos.x, mpos.y, 0, true, 0)
    if pick ~= nil then
        local texcoords = pick:GetTexCoords()
        texture:SetSubPixels(stamp, 0, 0, stamp.size.x, stamp.size.y, texcoords.x * texture.size.x - stamp.size.x / 2, texcoords.y * texture.size.y - stamp.size.x / 2, 0, 0)
    end
end

And here it is in action, in a Lua script example to be included in the next beta release. Note that this command does not perform any type of blending; it only sets raw texture data.

What you see above is not a bunch of decals. It's just one single texture that has had its pixels modified in a destructive way. The stamps we applied cannot be removed except by drawing over them with something else, or reloading the texture.
    The above textures are uncompressed images, but if we want to make this work with FourCC-type compressed images (everything except BC6h and BC7) we need to take into account the block size. This is easy because with uncompressed images the block size is 1 and with compressed images it is 4:
texture:SetSubPixels(stamp, 0, 0, stamp.size.x / stamp.blocksize, stamp.size.y / stamp.blocksize, (texcoords.x * texture.size.x - stamp.size.x / 2) / stamp.blocksize, (texcoords.y * texture.size.y - stamp.size.x / 2) / stamp.blocksize, 0, 0)

Now our routine will work with uncompressed or compressed images. I have it working in the image below, but I don't know if it will look much different from the uncompressed version.
     
    I'm not 100% sure on the block / pixel part yet. Maybe I will just make a rule that says "units must be divisible by four for compressed images".
    Anyways, there you have it. This goes much deeper than the texture commands in Leadwerks Game Engine 4 and will allow a variety of user-created tools and extensions to be written for the new level editor. It all starts with the code API, and this is a fantastic set of features, if I do say so myself. Which I do.
  18. Josh
    A new beta is available in the beta forum. This adds new texture and pixmap features, Basis texture support, and support for customized project workflows. Use of Basis textures brought the download size down to less than 300 megabytes. New Lua examples are included:
    Build Texture Build Cubemap SetSubPixels
  19. Josh
    It's funny how all of the various features in the new engine are interconnected and development just flows from one to another. I was working on terrain, and I needed to save out some texture data so I implemented Pixmaps, and I wanted to add Basis support and DXT decompression, and then I started converting texture formats, and now I need a way to manage this all. This is an idea I have had for several years and I finally got to try it out.
Leadwerks Game Engine 4 has a strictly defined workflow and it works well. If an image file is encountered it is converted to our own TEX texture file format. If an FBX file is encountered it is converted to our own MDL model file format. If a file change is detected by the editor, the source file is reconverted into the new file format, which is then loaded by your game. It works well and has been very reliable, but is somewhat limited.
The new engine is all about options. We support import and export plugins, and I plan to support modding for various games as well as regular game development. We have new formats like Basis that no existing model format stores textures in. I don't want to hard-code a bunch of file type overrides, so instead each project can have its own customized workflow. This is loaded from a JSON file, but a visual tool will be included in the new editor. The file data looks like this:
{
    "workflow":
    {
        "pipelines":
        [
            {
                "type": "TEXTURE",
                "includeFile": ["jpg","jpeg","bmp","tga","png","psd"],
                "preferFile": ["basis","dds"],
                "comments": [ "Prefers BASIS and DDS files over source image files, in that order, if they are newer than the original image." ]
            },
            {
                "type": "TEXTURE",
                "includeFile": ["r16"],
                "preferFile": ["dds"],
                "comments": [ "Prefers DDS files over R16 heightmap files, if they are newer than the original image." ]
            },
            {
                "type": "MODEL",
                "includeFile": ["*"],
                "preferFile": ["glb","gltf"],
                "comments": [ "Prefers binary and text GLTF files over all other model formats, in that order, if they are newer than the original model." ]
            },
            {
                "type": "SOUND",
                "includeFile": ["wav"],
                "preferFile": ["ogg","mp3"],
                "comments": [ "Prefers OGG and MP3 files over WAV, in that order, if they are newer than the original sound." ]
            }
        ]
    }
}

I created a script in the "Scripts/Start" folder with one line:
    LoadWorkflow("Config/workflow.json") According to the rules set out in the workflow scheme, any time LoadTexture() is called with a .tex file extension, if there is a .basis or .dds file in the same folder, and if that file is newer than the .tex file, or if the .tex file is missing, the engine will load that file instead. Under these conditions, if your code says this:
    local tex = LoadTexture("Materials/Brick/brick01.tex") You will see this printed output in the console:
    Loading texture "Materials/Brick/brick01.basis" Because this works internally in the texture load routine, the same goes for all material files and models that reference a .tex file. This will help bring all your projects forward to make use of new file formats like BASIS and GLTF, as well as new formats you might someday want to use.
    In the future I plan to support converters in the workflow scheme as well, that the editor will detect and act upon. So for example if you were working on a map for one of the Source Engine games, the editor could be configured to automatically convert all texture files into Valve's VTF texture format. If you want to do something else I haven't accounted for yet, well you can do that too.
    The packaging step will be configurable as well, so you can pack your game files into a ZIP archive or any other package file format you have a plugin for. For example, all your textures used in a scene could be packed into a WAD file for use with the game Quake so you can use modern tools for classic game mapping and mods.
    With this setup, I was able to convert most of our sample textures to DDS and Basis, and all the example demos seamlessly loaded the preferred texture files and displayed perfectly.
  20. Josh
    In Leadwerks Game Engine 4, terrain was a static object that could only be modified in the editor. Developers requested access to the terrain API but it was so complex I felt it was not a good idea to expose it. The new terrain system is better thought out and more flexible, but still fairly complicated because you can do so much with it. This article is a deep dive into the inner workings of the new terrain system.
    Creating Terrain
Terrain can be treated as an editable object, which requires storing more data in memory, or as a static object, which loads faster and consumes less memory. There aren't two different types of terrain; you can simply skip loading some information if you don't plan on deforming the terrain once it is loaded, and the system will only allocate memory as it is needed, based on your usage. The code below will create a terrain consisting of 2048 x 2048 points, divided into patches of 32 x 32 points.
local terrain = CreateTerrain(world, 2048, 32)

This will scale the terrain so there is one point every meter, with a maximum height of 100 meters and a minimum height of -100 meters. The width and depth of the terrain will both be a little over two kilometers:
terrain:SetScale(1,200,1)

Loading Heightmaps
    Let's look at how to load a heightmap and apply it to a terrain. Because RAW files do not contain any information on size or formats, we are going to first load the pixel data into a memory buffer and then create a pixmap from that with the correct parameters:
--We have to specify the width, height, and format, then create the pixmap from the raw pixel data
local buffer = LoadBuffer("Terrain/2048/2048.r16")
local heightmap = CreatePixmap(2048, 2048, TEXTURE_R16, buffer)

--Apply the heightmap to the terrain
terrain:SetHeightMap(heightmap)

Because we can now export image data we have some options. If we wanted we could save the loaded heightmap in a different format. I like R16 DDS files for these because unlike RAW/R16 heightmap data these images can be viewed in a DDS viewer like the one included in Visual Studio:
    heightmap:Save("Terrain/2048/2048_H.dds") Here is what it looks like if I open the saved file with Visual Studio 2019:

    After we have saved that file, we can then just load it directly and skip the RAW/R16 file:
--Don't need this anymore!
--local buffer = LoadBuffer("Terrain/2048/2048.r16")
--local heightmap = CreatePixmap(2048, 2048, TEXTURE_R16, buffer)

--Instead we can do this:
local heightmap = LoadPixmap("Terrain/2048/2048_H.dds")

--Apply the heightmap to the terrain
terrain:SetHeightMap(heightmap)

This is what is going on under the hood when you set the terrain heightmap:
bool Terrain::SetHeightMap(shared_ptr<Pixmap> heightmap)
{
    if (heightmap->size != resolution)
    {
        Print("Error: Pixmap size is incorrect.");
        return false;
    }
    VkFormat fmt = VK_FORMAT_R16_UNORM;
    if (heightmap->format != fmt) heightmap = heightmap->Convert(fmt);
    if (heightmap == nullptr) return false;
    Assert(heightmap->pixels->GetSize() == sizeof(terraindata->heightfielddata[0]) * terraindata->heightfielddata.size());
    memcpy(&terraindata->heightfielddata[0], heightmap->pixels->buf, heightmap->pixels->GetSize());
    ModifyHeight(0, 0, resolution.x, resolution.y);
    return true;
}

There is something important to take note of here. There are two copies of the height data. One is stored in system memory and is used for physics, raycasting, pathfinding, and other functions. The other copy of the height data is stored in video memory and is used to adjust the vertex heights when the terrain is drawn. In this case, the data is stored in the same format, just a single unsigned 16-bit integer, but other types of terrain data may be stored in different formats in system memory (RAM) and video memory (VRAM).
    Building Normals
Now let's give the terrain some normals for nice lighting. The simple way to do this is to just recalculate all normals across the terrain. The new normals will be copied into the terrain normal texture automatically:
    terrain:BuildNormals() However, updating all the normals across the terrain is a somewhat time-consuming process. How time consuming is it? Let's find out:
local tm = Millisecs()
terrain:BuildNormals()
Print(Millisecs() - tm)

The printed output in the console says the process takes 1600 milliseconds (1.6 seconds) in debug mode and 141 milliseconds in release mode. That is quite good, but the task is distributed across 8 threads on this machine. What if someone with a slower machine was working with a bigger terrain? If I disable multithreading, the time it takes is 7872 milliseconds in debug mode and 640 milliseconds in release mode. A 4096 x 4096 terrain would take four times as long, creating a 30 second delay before the game started, every single time it was run in debug mode. (In release mode it is so fast it could be generated dynamically all the time.) Admittedly, a developer using a single-core processor to debug a game with a 4096 x 4096 terrain is a sort of extreme case, but the whole design approach for Leadwerks 5 has been to target the extreme cases, like the ones I see while working on virtual reality projects at NASA.
    What can we do to eliminate this delay? The answer is caching. We can retrieve a pixmap from the terrain after building the normals, save it, and then load the normals straight from that file next time the game is run.
--Build normals for the entire terrain
terrain:BuildNormals()

--Retrieve a pixmap containing the normals in R8G8 format
normalmap = terrain:GetNormalMap()

--Save the pixmap as an uncompressed R8G8 DDS file, which will be loaded next time as a texture
normalmap:Save("Terrain/2048/2048_N.dds")
There is one catch. If you ran the code above there would be no DDS file saved. The reason for this is that internally, the terrain system stores each point's normal as two bytes representing two axes of the vector. Whenever the third axis is needed, it is calculated from the other two with this formula:
    normal.z = sqrt(max(0.0f, 1.0f - (normal.x * normal.x + normal.y * normal.y))); The pixmap returned from the GetNormalMap() method therefore uses the format TEXTURE_RG, but the DDS file format does not support two-channel uncompressed images. In order to save this pixmap into a DDS file we have to convert it to a supported format. We will use TEXTURE_RGBA. The empty blue and alpha channels double the file size but we won't worry about that right now.
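The two-byte storage scheme can be illustrated with a small round-trip sketch. This is standalone code, not the engine's; the helper names are hypothetical, but the decode math matches the formula above: X is stored signed in [-1, 1], Y unsigned in [0, 1] (terrain normals never point downward), and Z is reconstructed on unpack.

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical sketch of packing a terrain normal into two bytes.
// X maps [-1,1] to [0,255]; Y maps [0,1] to [0,255].
void PackNormal(float x, float y, uint8_t& bx, uint8_t& by)
{
    bx = (uint8_t)std::lround((x * 0.5f + 0.5f) * 255.0f);
    by = (uint8_t)std::lround(y * 255.0f);
}

// Unpack the two bytes and reconstruct Z from the unit-length constraint.
void UnpackNormal(uint8_t bx, uint8_t by, float& x, float& y, float& z)
{
    x = (float(bx) / 255.0f - 0.5f) * 2.0f;
    y = float(by) / 255.0f;
    z = std::sqrt(std::fmax(0.0f, 1.0f - (x * x + y * y)));
}
```

A straight-up normal (0, 1, 0) survives the round trip with only tiny quantization error, which is why two channels are enough.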
--Build normals for the entire terrain
terrain:BuildNormals()

--Retrieve a pixmap containing the normals in R8G8 format
normalmap = terrain:GetNormalMap()

--Convert to a format that can be saved as an image
normalmap = normalmap:Convert(TEXTURE_RGBA)

--Save the pixmap as an uncompressed RGBA DDS file, which will be loaded next time as a texture
normalmap:Save("Terrain/2048/2048_N.dds")
When we open the resulting file in Visual Studio 2019 we see a funny-looking normal map. This is just because the blue channel is pure black, for the reasons explained above.

    In my initial implementation I was storing the X and Z components of the normal, but I switched to X and Y. The reason for this is that I can use a lookup table with the Y component, since it is an unsigned byte, and use that to quickly retrieve the slope at any terrain point:
float Terrain::GetSlope(const int x, const int y)
{
    if (terraindata->normalfielddata.empty()) return 0.0f;
    return asintable[terraindata->normalfielddata[(y * resolution.x + x) * 2 + 1]];
}
This is much faster than performing the full calculation, as shown below:
float Terrain::GetSlope(const int x, const int y)
{
    int offset = (y * resolution.x + x) * 2;
    int nx = terraindata->normalfielddata[offset + 0];
    int ny = terraindata->normalfielddata[offset + 1];
    Vec3 normal;
    normal.x = (float(nx) / 255.0f - 0.5f) * 2.0f;
    normal.y = float(ny) / 255.0f;
    normal.z = sqrt(Max(0.0f, 1.0f - (normal.x * normal.x + normal.y * normal.y)));
    normal /= normal.Length();
    return 90.0f - ASin(normal.y);
}
Since the slope is used in expensive layering operations and may be called millions of times, it makes sense to optimize it.
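The lookup table itself is not shown above, but building it is simple. Here is a hypothetical sketch (the table name and the degrees convention are inferred from the two GetSlope() versions): for each possible Y byte, precompute the slope once so the expensive asin() never runs per-point.

```cpp
#include <cmath>

// Hypothetical sketch of the slope lookup table used by GetSlope().
// Index = the Y byte of a packed normal; value = slope in degrees,
// where 0 is flat ground (normal straight up) and 90 is a vertical cliff.
float asintable[256];

void BuildAsinTable()
{
    const float radtodeg = 180.0f / 3.14159265358979f;
    for (int i = 0; i < 256; ++i)
    {
        float y = float(i) / 255.0f;
        asintable[i] = 90.0f - std::asin(y) * radtodeg;
    }
}
```

The table costs one kilobyte and turns the slope query into a single array read.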
    Now we can structure our code so it first looks for the cached normals image and loads that before performing the time-consuming task of building normals from scratch:
--Load the saved normal data as a pixmap
local normalmap = LoadPixmap("Terrain/2048/2048_N.dds")

if normalmap == nil then
    --Build normals for the entire terrain
    terrain:BuildNormals()

    --Retrieve a pixmap containing the normals in R8G8 format
    normalmap = terrain:GetNormalMap()

    --Convert to a format that can be saved as an image
    normalmap = normalmap:Convert(TEXTURE_RGBA)

    --Save the pixmap as an uncompressed RGBA DDS file, which will be loaded next time as a texture
    normalmap:Save("Terrain/2048/2048_N.dds")
else
    --Apply the texture to the terrain. (The engine will automatically create a more optimal BC5 compressed texture.)
    terrain:SetNormalMap(normalmap)
end
The time it takes to load normals from a file is pretty much zero, so in the worst-case scenario described we just eliminated a huge delay when the game starts up. This is just one example of how the new game engine is being designed with extreme scalability in mind.
    Off on a Tangent...
    Tangents are calculated in the BuildNormals() routine at the same time as normals, because they both involve a lot of shared calculations. We could use the Terrain:GetTangentMap() method to retrieve another RG image, convert it to RGBA, and save it as a second DDS file, but instead let's just combine normals and tangents with the Terrain:GetNormalTangentMap() method you did not know existed until just now. Since that returns an RGBA image with all four channels filled with data, there is no need to convert the format. Our code above can be replaced with the following:
--Load the saved normal and tangent data as a pixmap
local normaltangentmap = LoadPixmap("Terrain/2048/2048_NT.dds")

if normaltangentmap == nil then
    --Build normals for the entire terrain
    terrain:BuildNormals()

    --Retrieve a pixmap containing the normals and tangents in RGBA format
    normaltangentmap = terrain:GetNormalTangentMap()

    --Save the pixmap as an uncompressed RGBA DDS file, which will be loaded next time as a texture
    normaltangentmap:Save("Terrain/2048/2048_NT.dds")
else
    --Apply the texture to the terrain. (The engine will automatically create a more optimal BC5 compressed texture.)
    terrain:SetNormalTangentMap(normaltangentmap)
end
This will save both normals and tangents into a single RGBA image that looks very strange:

    Why do we even have options for separate normal and tangent maps? This allows us to save both as optimized BC5 textures, which actually do use two channels of data. This is the same format the engine uses internally, so it will give us the fastest possible loading speed and lowest memory usage, but it's really only useful for static terrain because getting the data back into a format for system memory would require decompression of the texture data:
--Retrieve pixmaps containing the normals and tangents in R8G8 format
normalmap = terrain:GetNormalMap()
tangentmap = terrain:GetTangentMap()

--Convert to optimized BC5 format
normalmap = normalmap:Convert(TEXTURE_BC5)
tangentmap = tangentmap:Convert(TEXTURE_BC5)

--Save the pixmaps as compressed BC5 DDS files, which will be loaded next time as textures
normalmap:Save("Terrain/2048/2048_N.dds")
tangentmap:Save("Terrain/2048/2048_T.dds")
When saved, these two images combined will use 50% as much space as the uncompressed RGBA8 image, but again don't worry about storage space for now. The saved normal map looks just the same as the uncompressed RGBA version, and the tangent map looks like this:

    Material Layers
    Terrain material layers to make patches of terrain look like rocks, dirt, or snow work in a similar manner but are still under development and will be discussed in detail later. For now I will just show how I am adding three layers to the terrain, setting some constraints for slope and height, and then painting the material across the entire terrain.
--Add base layer
local mtl = LoadMaterial("Materials/Dirt/dirt01.mat")
local layerID = terrain:AddLayer(mtl)

--Add rock layer
mtl = LoadMaterial("Materials/Rough-rockface1.json")
local rockLayerID = terrain:AddLayer(mtl)
terrain:SetLayerSlopeConstraints(rockLayerID, 35, 90, 25)

--Add snow layer
mtl = LoadMaterial("Materials/Snow/snow01.mat")
local snowLayerID = terrain:AddLayer(mtl)
terrain:SetLayerHeightConstraints(snowLayerID, 50, 1000, 8)
terrain:SetLayerSlopeConstraints(snowLayerID, 0, 35, 10)

--Apply layers
terrain:SetLayer(rockLayerID, 1.0)
terrain:SetLayer(snowLayerID, 1.0)
Material layers can take a significant time to process, at least in debug mode, as we will see later. Fortunately all this data can be cached in a manner similar to what we saw with normals and tangents. This also produces some very cool images:

    Optimizing Load Time
    The way we approach terrain building depends on the needs of each game or application. Is the terrain static or dynamic? Do we want changes in the application to be saved back out to the hard drive to be retrieved later? We already have a good idea of how to manage dynamic terrain data, now let's look at static terrains, which will provide faster load times and a little bit lower memory usage.
    Terrain creation is no different than before:
local terrain = CreateTerrain(world, 2048, 32)
terrain:SetScale(1,200,1)
Loading the heightmap works the same as before. I am using the R16 DDS file here but it makes absolutely no difference in terms of loading speed, performance, or memory usage.
--Load heightmap
local heightmap = LoadPixmap("Terrain/2048/2048_H.dds")

--Apply the heightmap to the terrain
terrain:SetHeightmap(heightmap)
Now here is where things get interesting. Remember how I talked about the terrain data existing in both system and video memory? Well, I am going to let you in on a little secret: we don't actually need the normal and tangent data in system memory if we aren't editing the terrain. We can load the optimized BC5 textures and apply them directly to the terrain's material, and it won't even realize what happened:
--Load the saved normal data as a texture
local normaltexture = LoadTexture("Terrain/2048/2048_N.dds")

--Apply the normal texture to the terrain material
terrain.material:SetTexture(normaltexture, TEXTURE_NORMAL)

--Load the saved tangent data as a texture
local tangenttexture = LoadTexture("Terrain/2048/2048_T.dds")

--Apply the tangent texture to the terrain material
terrain.material:SetTexture(tangenttexture, TEXTURE_TANGENT)
Because we never fed the terrain any normal or tangent data, that memory will never get initialized, saving us 16 megabytes of system memory on a 2048 x 2048 terrain. We also save the time of compressing two big images into BC5 format at runtime. In the material layer system, which will be discussed at a later time, this approach will save 32 megabytes of memory and some small processing time. Keep in mind all those numbers increase four times with the next biggest terrain size.
    In debug mode the static and cached dynamic tests are not bad, but the first time the dynamic test is run there is a long delay of 15 seconds (down from an original 60 seconds; the optimization responsible is explained at the end of this section). We definitely don't want that happening every time you debug your game. Load times are in milliseconds.
    Dynamic Terrain (Debug, First Run)
    Loading time: 15497

    Dynamic Terrain (Debug, Cached Data)
    Loading time: 1606

    Static Terrain (Debug)
    Loading time: 1078

    When the application is run in release mode the load times are all very reasonable, although the static mode loads about five times faster than building all the data at runtime. Memory usage does not vary very significantly. Memory shown is in megabytes.
    Dynamic Terrain (Release, First Run)
    Loading time (milliseconds): 1834
    Memory usage (MB): 396

    Dynamic Terrain (Release, Cached Data)
    Loading time: 386
    Memory usage: 317

    Static Terrain (Release)
    Loading time: 346
    Memory usage: 311

    The conclusion is that making use of cached textures and only using dynamic terrains when you need them can significantly improve your load times when running in debug mode, which is what you will be doing for the majority of development. If you don't care about any of these details, it will all be handled automatically for you when you save your terrain in the new editor, but if you are creating terrains programmatically this is important to understand. If you are loading terrain data from the hard drive dynamically as the game runs (streaming terrain) then these optimizations could be very important.
    While writing this article I found that I could greatly decrease the loading time in debug mode by replacing STL with my own sorting routines in some high-performance code. STL usually runs very fast but in debug mode can be onerous. It's scary stuff, but I actually remember writing this same routine back when I was using Blitz3D, which if I remember correctly did not have any sorting functions. I found this ran slightly faster than STL in release mode and much faster in debug mode. I was able to bring one computationally expensive routine down from 20 seconds to 4 seconds (in debug mode only; it runs fine in release either way).
//Scary homemade sorting
firstitem = 0;
lastitem = mtlcount - 1;
sortcount = 0;
while (true)
{
    minalpha = 0;
    minindex = -1;
    for (n = firstitem; n <= lastitem; ++n)
    {
        if (listedmaterials[n].y == -1) continue;
        if (minindex == -1)
        {
            minalpha = listedmaterials[n].x;
            minindex = n;
        }
        else if (listedmaterials[n].x < minalpha)
        {
            minalpha = listedmaterials[n].x;
            minindex = n;
        }
    }
    if (minindex == -1) break;
    if (minindex == firstitem) ++firstitem;
    if (minindex == lastitem) --lastitem;
    sortedmaterials[sortcount] = listedmaterials[minindex];
    listedmaterials[minindex].y = -1;
    ++sortcount;
}
There may be some opportunities for further performance increase in some of the high-performance terrain code. It's just a matter of how much time I want to put into this particular aspect of the engine right now.
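For comparison, the STL call this loop replaces boils down to a single std::sort with a comparator. This is a sketch; the Vec2-style struct with the sort key in x is an assumption based on the loop above.

```cpp
#include <algorithm>
#include <vector>

// Minimal stand-in for the engine's 2D vector type (assumption):
// x holds the alpha value used as the sort key.
struct Vec2 { float x, y; };

// Sort materials by alpha, ascending, using the standard library.
void SortMaterials(std::vector<Vec2>& listedmaterials)
{
    std::sort(listedmaterials.begin(), listedmaterials.end(),
        [](const Vec2& a, const Vec2& b) { return a.x < b.x; });
}
```

The handwritten version mainly sidesteps the iterator checking that makes STL containers slow in Visual Studio debug builds; in release configurations the two approaches perform similarly.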
    Optimizing File Size
    Basis Universal is the successor to the Crunch library. The main improvement it makes is support for modern compression formats (BC5 for normals and BC7 to replace DXT5). BasisU is similar to OGG/MP3 compression in that it doesn't reduce the size of the data in memory, but it can significantly reduce the size when it is saved to a file. This can reduce the size of your game's data files. It's also good for data that is downloaded dynamically, like large GIS data sets. I have seen people claim this can improve load times but I have never seen any proof of this and I don't believe it is correct.
    Although we do not yet support BasisU files, I wanted to run the compressible files through it and see how much hard drive space we could save. I am including only the images needed for the static terrain method, since that is how large data sets would most likely be used.
    Uncompressed (R16 / RGBA): 105 megabytes
    Standard Texture Compression (DXT5 + BC5): 48 megabytes
    Standard Texture Compression + Zip Compression: 18.7 megabytes
    BasisU + Standard Texture Compression: 26 megabytes
    BasisU + Standard Texture Compression + Zip Compression: 10.1 megabytes

    If we just look at one single 4096x4096 BC3 (DXT5) DDS file, when compressed in a zip file it is 4.38 megabytes. When compressed in a BasisU file, it is only 1.24 megabytes.
    4096x4096 uncompressed RGBA: 64 megabytes
    4096x4096 DXT5 / BC3: 16 megabytes
    4096x4096 DXT5 / BC3 + zip compression: 4.38 megabytes
    4096x4096 BasisU: 1.24 megabytes

    It looks like we can save a fair amount of data by incorporating BasisU into our pipeline. However, the compression times are longer than we would want to use for terrain that is being frequently saved in the editor, and it should be performed in a separate step before final packaging of the game. With the open-source plugin SDK anyone could add a plugin to support this right now. There is also some texture data that should not be compressed, so our savings with BasisU is less than what we would see for normal usage. In general, it appears that BasisU can cut the size of your game files down to about a third of what they would be in a zip file.
    A new update with these changes will be available in the beta tester forum later today.
  21. Josh
    Textures in Leadwerks don't actually store any pixel data in system memory. Instead the data is sent straight from the hard drive to the GPU and dumped from memory, because there is no reason to have all that data sitting around in RAM. However, I needed to implement texture saving for our terrain system so I implemented a simple "Pixmap" class for handling image data:
class Pixmap : public SharedObject
{
    VkFormat m_format;
    iVec2 m_size;
    shared_ptr<Buffer> m_pixels;
    int bpp;
public:
    Pixmap();
    const VkFormat& format;
    const iVec2& size;
    const shared_ptr<Buffer>& pixels;

    virtual shared_ptr<Pixmap> Copy();
    virtual shared_ptr<Pixmap> Convert(const VkFormat format);
    virtual bool Save(const std::string& filename, const SaveFlags flags = SAVE_DEFAULT);
    virtual bool Save(shared_ptr<Stream>, const std::string& mimetype = "image/vnd-ms.dds", const SaveFlags flags = SAVE_DEFAULT);

    friend shared_ptr<Pixmap> CreatePixmap(const int, const int, const VkFormat, shared_ptr<Buffer> data);
    friend shared_ptr<Pixmap> LoadPixmap(const std::wstring&, const LoadFlags);
};

shared_ptr<Pixmap> CreatePixmap(const int width, const int height, const VkFormat format = VK_FORMAT_R8G8B8A8_UNORM, shared_ptr<Buffer> data = nullptr);
shared_ptr<Pixmap> LoadPixmap(const std::wstring& path, const LoadFlags flags = LOAD_DEFAULT);
You can convert a pixmap from one format to another in order to compress raw RGBA pixels into BCn compressed data. The supported conversion formats are very limited and are only being implemented as they are needed. Pixmaps can be saved as DDS files, and the same rules apply. Support for the most common formats is being added.
    As a result, the terrain system can now save out all processed images as DDS files. The modern DDS format supports a lot of pixel formats, so even heightmaps can be saved. All of these files can be easily viewed in Visual Studio itself. It's by far the most reliable DDS viewer, as even the built-in Windows preview function is missing support for DX10 formats. Unfortunately there's really no modern DDS viewer application like the old Windows Texture Viewer.

    Storing terrain data in an easy-to-open standard texture format will make development easier for you. I intend to eliminate all "black box" file formats so all your game data is always easily viewable in a variety of tools, right up until the final publish step.
  22. Josh
    I wanted to see if any of the terrain data can be compressed down, mostly to reduce GPU memory usage. I implemented some fast texture compression algorithms for BC1, BC3, BC4, BC5, and BC7 compression. BC6 and BC7 are not terribly useful in this situation because they involve a complex lookup table, so data from different textures can't be mixed and matched. I found two areas where texture compression could be used, in alpha layers and normal maps. I implemented BC3 compression for terrain alpha and could not see any artifacts. The compression is very fast, always less than one second even with the biggest textures I would care to use (4096 x 4096).
    For normals, BC1 (DXT1) and BC3 (DXT5) produce artifacts (I accidentally left tessellation turned on high in these shots, which is why the framerate is low):

    BC5 gives a better appearance on this bumpy area and closely matches the original uncompressed normals. BC5 takes 1 byte per pixel, one quarter the size of uncompressed RGBA. However, it only supports two channels, so we need one texture for normals and another for tangents, leaving us with a total 50% reduced size.

    Here are the results:
    2048 x 2048 Uncompressed Terrain:
    Heightmap = 2048 * 2048 * 2 = 8388608
    Normal / tangents map = 16777216
    Secret sauce = 67108864
    Secret sauce 2 = 16777216
    Total = 104 MB

    2048 x 2048 Compressed Terrain:
    Heightmap = 2048 * 2048 * 2 = 8388608
    Normal map = 4194304
    Tangents = 4194304
    Secret sauce = 16777216
    Secret sauce 2 = 16777216
    Total = 48 MB

    Additionally, for editable terrain an extra 32 MB of data needs to be stored, but this can be dumped once the terrain is made static. There are other things you can do to reduce the file size but it would not change the memory usage, and processing time is very high for "super-compression" techniques. I investigated this thoroughly and found the best compression methods for this situation that are pretty much instantaneous with no noticeable loss of quality, so I am satisfied.
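The arithmetic behind these totals can be sanity-checked with a few lines. This is a standalone sketch, not engine code: bytes per pixel are 2 for the R16 heightmap, 4 for the RGBA8 normal/tangent map, 1 per BC5 map, and the two "secret sauce" buffers are simply taken at their stated per-pixel sizes.

```cpp
#include <cstdint>

// Compute terrain memory usage in bytes for a square terrain of the given
// resolution, matching the figures listed above.
uint64_t TerrainMemory(int res, bool compressed)
{
    uint64_t pixels = uint64_t(res) * res;
    uint64_t heightmap = pixels * 2;                         // R16, same either way
    uint64_t normals = compressed ? pixels * 2 : pixels * 4; // two BC5 maps vs one RGBA8 map
    uint64_t sauce = compressed ? pixels * 4 : pixels * 16;  // stated sizes: 16 MB vs 64 MB at 2048
    uint64_t sauce2 = pixels * 4;                            // stated size: 16 MB at 2048
    return heightmap + normals + sauce + sauce2;
}
```

At a resolution of 2048 this reproduces the 104 MB uncompressed and 48 MB compressed totals, and quadrupling to 4096 scales everything by four.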
  23. Josh
    A new update is available for beta testers. This adds a new LOD system to the terrain system, fixes the terrain normals, and adds some new features. The terrain example has been updated and shows how to apply multiple material layers and save the data.

    Terrain in LE4 uses a system of tiles. The tiles are rendered at a different resolution based on distance. This works great for medium-sized terrains, but problems arise when we have very large view distances. This is why it is okay to use a 4096x4096 terrain in LE4, but your camera range should only show a portion of the terrain at once. A terrain that size will use 1024 separate tiles, and having them all onscreen at once can cause slowdown just because of the number of objects that have to be culled and drawn.

    Another approach is to progressively divide the terrain up into quadrants starting from the top and working down to the lowest level. When a box is created that is a certain distance from the camera, the algorithm stops subdividing it and draws a tile. The same resolution tile is drawn over and over, but it is stretched to cover different sized areas.
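The subdivision described above can be sketched as a simple recursive function. This is a standalone illustration, not the engine's actual code; the split threshold and patch representation are assumptions.

```cpp
#include <cmath>
#include <vector>

// One drawn tile: its corner position and the area it is stretched over.
struct Patch { float x, y, size; };

// Standalone sketch of distance-based quadtree subdivision: each quadrant
// either splits into four children or emits one fixed-resolution patch.
void Subdivide(float x, float y, float size, float camx, float camy,
               float minsize, std::vector<Patch>& patches)
{
    float cx = x + size * 0.5f, cy = y + size * 0.5f;
    float dist = std::sqrt((cx - camx) * (cx - camx) + (cy - camy) * (cy - camy));
    // Stop splitting when the node is already at the smallest tile size,
    // or far enough away that one stretched patch is good enough.
    if (size <= minsize || dist > size * 2.0f)
    {
        patches.push_back({x, y, size}); // same 32x32 patch, stretched over this area
        return;
    }
    float h = size * 0.5f;
    Subdivide(x,     y,     h, camx, camy, minsize, patches);
    Subdivide(x + h, y,     h, camx, camy, minsize, patches);
    Subdivide(x,     y + h, h, camx, camy, minsize, patches);
    Subdivide(x + h, y + h, h, camx, camy, minsize, patches);
}
```

A camera sitting at one corner of a 2048 terrain produces many small patches nearby and a handful of large stretched ones in the distance, while the patches always tile the terrain exactly.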

    This approach is much better suited to cover very large areas. At the furthest distance, the entire terrain will be drawn with just one single 32x32 patch. Here it is in action with a 2048x2048 terrain, the same size as The Zone:
    This example shows how to load a heightmap, add some layers to it, and save the data, or load the data we previously saved:
--Create terrain
local terrain = CreateTerrain(world,2048,32)
terrain:SetScale(1,150,1)

--Load heightmap
terrain:LoadHeightmap("Terrain/2048/2048.r16", VK_FORMAT_R16_UNORM)

--Add base layer
local mtl = LoadMaterial("Materials/Dirt/dirt01.mat")
local layerID = terrain:AddLayer(mtl)

--Add rock layer
mtl = LoadMaterial("Materials/Rough-rockface1.json")
rockLayerID = terrain:AddLayer(mtl)
terrain:SetLayerSlopeConstraints(rockLayerID, 35, 90, 25)

--Add snow layer
mtl = LoadMaterial("Materials/Snow/snow01.mat")
snowLayerID = terrain:AddLayer(mtl)
terrain:SetLayerHeightConstraints(snowLayerID, 50, 1000, 8)
terrain:SetLayerSlopeConstraints(snowLayerID, 0, 35, 10)

--Normals
if FileType("Terrain/2048/2048_N.raw") == 0 then
    terrain:UpdateNormals()
    terrain:SaveNormals("Terrain/2048/2048_N.raw")
else
    terrain:LoadNormals("Terrain/2048/2048_N.raw")
end

--Layers
if FileType("Terrain/2048/2048_L.raw") == 0 or FileType("Terrain/2048/2048_A.raw") == 0 then
    terrain:SetLayer(rockLayerID, 1)
    terrain:SetLayer(snowLayerID, 1)
    terrain:SaveLayers(WriteFile("Terrain/2048/2048_L.raw"))
    terrain:SaveAlpha(WriteFile("Terrain/2048/2048_A.raw"))
else
    terrain:LoadLayers("Terrain/2048/2048_L.raw")
    terrain:LoadAlpha("Terrain/2048/2048_A.raw")
end
The x86 build configurations have also been removed from the game template project. This is available now in the beta tester forum.
  24. Josh
    Documentation in Leadwerks 5 will start in the header files, where functions descriptions are being added directly like this:
/// <summary>
/// Sets the height of one terrain point.
/// </summary>
/// <param name="x">Horizontal position of the point to modify.</param>
/// <param name="y">Vertical position of the point to modify.</param>
/// <param name="height">Height to set, in the range -1.0 to +1.0.</param>
virtual void SetHeight(const int x, const int y, const float height);
This will make function descriptions appear automatically in Visual Studio, to help you write code faster and more easily:

    Visual Studio can also generate an XML file containing all of the project's function descriptions as part of the build process. The generated XML file will serve as the basis for the online documentation and Visual Studio Code extension for Lua. This is how I see it working:

    I am also moving all things private to private members. I found a cool trick that allows me to create read-only members. In the example below, you can access the "position" member to get an entity's local position, but you cannot modify it without using the SetPosition() method. This is important because modifying values often involves updating lots of things in the engine under the hood and syncing data with other threads. This also means that any method Visual Studio displays as you are typing is okay to use, and there won't be any undocumented / use-at-your-own risk types of commands like we had in Leadwerks 4.
class Entity
{
private:
    Vec3 m_position;
public:
    const Vec3& position;
};

Entity::Entity() : position(m_position) {}
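Here is how the pattern behaves in practice, as a self-contained sketch (the Vec3 struct and SetPosition signature are simplified stand-ins):

```cpp
// Standalone sketch of the read-only public member trick.
struct Vec3 { float x = 0, y = 0, z = 0; };

class Entity
{
    Vec3 m_position; // private, mutable only through methods
public:
    const Vec3& position; // public, but read-only: a const reference to the private member
    Entity() : position(m_position) {}
    void SetPosition(float x, float y, float z)
    {
        m_position = {x, y, z}; // engine bookkeeping (thread sync etc.) would happen here
    }
};
```

Reading `entity.position` works anywhere, but `entity.position = Vec3{}` fails to compile because position is a const reference, so all writes are forced through SetPosition().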
class Terrain
{
private:
    Terrain();
public:
    friend shared_ptr<Terrain> CreateTerrain(shared_ptr<World>, int, int, int);
};
The only difference is that inside the CreateTerrain function I have to do this:
auto terrain = shared_ptr<Terrain>(new Terrain);
instead of this, because make_shared() doesn't have access to the Terrain constructor. (If it did, you would be able to create a shared pointer to a new terrain, so we don't want that!)
auto terrain = make_shared<Terrain>();
I have big expectations for Leadwerks 5, so it makes sense to pay a lot of attention to the coding experience you will have while using this. I hope you like it!
  25. Josh
    A new update is available for beta testers.
    Terrain
    The terrain building API is now available and you can begin working with it. This allows you to construct and modify terrains in pure code. Terrain supports up to 256 materials, each with its own albedo, normal, and displacement maps. Collision and raycasting are currently not supported.
    Fast C++ Builds
    Precompiled headers have been integrated into the example project. The Debug build will compile in about 20 seconds the first run, and compile in just 2-3 seconds thereafter. An example class is included which shows how to add files to your game project for optimum compile times. Even if you edit one of your header files, your game will still compile in just a few seconds in debug mode! Integrating precompiled headers into the engine actually brought the size of the static libraries down significantly, so the download is only about 350 MB now.
    Enums Everywhere
    Integer arguments have been replaced with enum values for window styles, entity bounds, and load flags. This is nice because the C++ compiler has some error checking so you don't do something like this:
    LoadTexture("grass.dds", WINDOW_FULLSCREEN); Operators have been added to allow combining enum values as bitwise flags.
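The operator overloads for combining flags presumably look something like this sketch. The WindowStyle values here are hypothetical stand-ins (only WINDOW_FULLSCREEN appears above; the real engine names and values may differ):

```cpp
// Sketch: bitwise operators for an enum so flags can be combined while the
// compiler still rejects passing an unrelated enum type by mistake.
enum WindowStyle
{
    WINDOW_TITLEBAR = 1,
    WINDOW_RESIZABLE = 2,
    WINDOW_FULLSCREEN = 4
};

// Combine two flags and get a WindowStyle back, not a plain int.
inline WindowStyle operator|(WindowStyle a, WindowStyle b)
{
    return WindowStyle(int(a) | int(b));
}

// Test whether a combined value contains a given flag.
inline bool operator&(WindowStyle a, WindowStyle b)
{
    return (int(a) & int(b)) != 0;
}
```

With these in place, `CreateWindow(..., WINDOW_TITLEBAR | WINDOW_RESIZABLE)` keeps its strong type, while mixing in a LoadFlags value would be a compile error.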
    A new LOAD_DUMP_INFO LoadFlags value has been added which will print out information about loaded files (I need this to debug the GLTF loader!).
    Early Spring Cleaning
    Almost all the pre-processor macros have been removed from the Visual Studio project, with just a couple ones left. Overall the headers and project structure have been massively cleaned up.