Blog Entries posted by Josh

  1. Josh
Leadwerks GUI is now functional on the beta branch (Windows only, Lua interpreter only).
     

     
    GUI Class
    static GUI* Create(Context* context)
    Creates a new GUI
     
    Widget Class
    static Widget* Create(const int x, const int y, const int width, const int height, Widget* parent, const int style=0)
    Creates a new Widget. Widgets can be made into buttons, dropdown boxes, or anything else by attaching a script.
     
    virtual bool SetScript(const std::string& path, const bool start = true)
    Sets a widget's script for drawing and logic
     
    virtual void SetText(const std::string& text)
    Sets a text string for the widget that can be retrieved with GetText().
     
    The widget script will call GUI drawing commands:
    virtual void DrawImage(Image* image, const int x, const int y);//lua
    virtual void DrawImage(Image* image, const int x, const int y, const int width, const int height);//lua
    virtual void SetColor(const float r, const float g, const float b);//lua
    virtual void SetColor(const float r, const float g, const float b, const float a);//lua
    virtual void DrawRect(const int x, const int y, const int width, const int height, const int fillmode = 0, const int radius = 0);//lua
    virtual void DrawText(std::string& text, const int x, const int y, const int width, const int height, const int style = 0);//lua
    virtual void DrawLine(const int x0, const int y0, const int x1, const int y1);//lua
     
    Images are also supported (these are loaded from a .tex file):
    static Image* Load(std::string& path, GUI* gui);
     
    Images are a little funny. If the GUI is context-based it will internally use a texture. If the GUI is window-based the image will store a bitmap that varies with the operating system. The point is, the GUI drawing commands will work the same on either.
     
    The button script looks like this and provides a fully functional button:

    Script.pushed=false
    Script.hovered=false

    function Script:Draw()
        --System:Print("Paint Button")
        local pos = self.widget:GetPosition(true)
        local gui = self.widget:GetGUI()
        gui:SetColor(1,1,1,1)
        if self.pushed then
            gui:SetColor(0.2,0.2,0.2)
        else
            if self.hovered then
                gui:SetColor(0.3,0.3,0.3)
            else
                gui:SetColor(0.25,0.25,0.25)
            end
        end
        gui:DrawRect(pos.x,pos.y,self.widget.size.width,self.widget.size.height,0,3)
        gui:SetColor(0.9,0.9,0.9)
        local text = self.widget:GetText()
        if text~="" then
            if self.pushed then
                gui:DrawText(text,pos.x+1,pos.y+1,self.widget.size.width,self.widget.size.height,Text.Center+Text.VCenter)
            else
                gui:DrawText(text,pos.x,pos.y,self.widget.size.width,self.widget.size.height,Text.Center+Text.VCenter)
            end
        end
        gui:DrawRect(pos.x,pos.y,self.widget.size.width,self.widget.size.height,1,3)
    end

    function Script:MouseEnter(x,y)
        self.hovered = true
        self.widget:Redraw()
    end

    function Script:MouseLeave(x,y)
        self.hovered = false
        self.widget:Redraw()
    end

    function Script:MouseMove(x,y)
        --System:Print("MouseMove")
    end

    function Script:MouseDown(button,x,y)
        --System:Print("MouseDown")
        self.pushed=true
        self.widget:Redraw()
    end

    function Script:MouseUp(button,x,y)
        --System:Print("MouseUp")
        local gui = self.widget:GetGUI()
        self.pushed=false
        if self.hovered then
            EventQueue:Emit(Event.WidgetAction,self.widget)
        end
        self.widget:Redraw()
    end

    function Script:KeyDown(button,x,y)
        --System:Print("KeyDown")
    end

    function Script:KeyUp(button,x,y)
        --System:Print("KeyUp")
    end
     
    The button uses the new EventQueue class to emit an event:

    EventQueue:Emit(Event.WidgetAction,self.widget)
     
    The main script can then poll events and find out when the button is pushed. This code is inserted into the main loop to do that:

    while EventQueue:Peek() do
        local event = EventQueue:Wait()
        if event.id == Event.WidgetAction then
            if event.source == button then
                System:Print("The button was pressed!")
            end
        end
    end
     
    Support for event callbacks will also be added.
     
    The full main script to create a GUI and handle events looks like this:

    --Initialize Steamworks (optional)
    Steamworks:Initialize()

    --Set the application title
    title="$PROJECT_TITLE"

    --Create a window
    local windowstyle = window.Titlebar + window.Resizable-- + window.Hidden
    if System:GetProperty("fullscreen")=="1" then windowstyle=windowstyle+window.FullScreen end
    window=Window:Create(title,0,0,System:GetProperty("screenwidth","1024"),System:GetProperty("screenheight","768"),windowstyle)
    --window:HideMouse()

    --Create the graphics context
    context=Context:Create(window)
    if context==nil then return end

    --Create a GUI
    local gui = GUI:Create(context)

    --Create a new widget
    local button = Widget:Create(20,20,300,50,gui:GetBase())

    --Set the widget's script to make it a button
    button:SetScript("Scripts/GUI/Button.lua")

    --Set the button text
    button:SetText("Button")

    --Create a world
    world=World:Create()
    world:SetLightQuality((System:GetProperty("lightquality","1")))

    --Load a map
    local mapfile = System:GetProperty("map","Maps/start.map")
    if Map:Load(mapfile)==false then return end

    --window:Show()

    while window:KeyDown(Key.Escape)==false do

        --Process events
        while EventQueue:Peek() do
            local event = EventQueue:Wait()
            if event.id == Event.WidgetAction then
                if event.source == button then
                    System:Print("The button was pressed!")
                end
            end
        end

        --If window has been closed, end the program
        if window:Closed() then break end

        --Handle map change
        if changemapname~=nil then
            --Clear all entities
            world:Clear()

            --Load the next map
            Time:Pause()
            if Map:Load("Maps/"..changemapname..".map")==false then return end
            Time:Resume()
            changemapname = nil
        end

        --Update the app timing
        Time:Update()

        --Update the world
        world:Update()

        --Render the world
        world:Render()

        --Render statistics
        context:SetBlendMode(Blend.Alpha)
        if DEBUG then
            context:SetColor(1,0,0,1)
            context:DrawText("Debug Mode",2,2)
            context:SetColor(1,1,1,1)
            context:DrawStats(2,22)
            context:SetBlendMode(Blend.Solid)
        else
            --Toggle statistics on and off
            if (window:KeyHit(Key.F11)) then showstats = not showstats end
            if showstats then
                context:SetColor(1,1,1,1)
                context:DrawText("FPS: "..Math:Round(Time:UPS()),2,2)
            end
        end

        --Refresh the screen
        context:Sync(true)
    end
     
    At this point the system contains everything you need to begin writing your own widget scripts. Small changes may occur in the API before the feature is finalized, but this is pretty close to the final product.
  2. Josh

    Articles
    The upcoming conference in December 2020, where my paper was accepted and I was planning to have a booth, has been cancelled. I withdrew the paper, because it's no fun without doing a Steve Jobs style presentation. My rollout plans revolved around this event, so with that eliminated the timeline is now going to be optimized for what I think will give the best results.
    I was getting ready for a very stressful few months in the run-up to this event, so now going back to quietly coding away is a little bit anticlimactic. Since it appears I will be doing this for the rest of the year at least, I need to find a way to make this way of life more sustainable. So this is probably a good time to start my surf/punk rock band I have always planned. I'm hoping to hit something that is like a combination of Weezer and Social Distortion, that hopefully takes on a life of its own.
  3. Josh

    Articles
    The terrain streaming / planet rendering stuff was the last of the feature creep. That finishes out the features I have planned for the first release of the new engine. My approach for development has been to go very broad so I could get a handle on how all the features work together, solve the hard problems, and then fill in the details when convenient.
    The hard problems are all solved, so now it's just a matter of finishing things. Consequently, I don't think my blogs are going to make any more groundbreaking feature announcements, but rather are going to show steady improvement of each subsystem as we progress towards a finished product.
    The GUI is something I wanted to spend some more cycles on. The initial release of the new engine will be a pure programming SDK with GUI support, but the GUI I am implementing is also going to be the basis of the new editor, when that time comes. I decided that using Lua scripts to control widgets was a bad idea because when operating at scale I think this will cause some small slowdown in the UI. My goals for the new editor are for it to load fast and be very snappy and responsive, and that is my highest priority. It is nice to have overarching design goals because then you know what you must do.
    I've started the process of converting our Lua widget scripts into C++ code. The API now has functions like CreatePanel(), CreateButton(), etc. and is much more formalized than the flexible-but-open-ended GUI system in Leadwerks 4. For customization, I am implementing a color system. We have a bunch of color constants like this:
    enum WidgetColor
    {
        WIDGET_COLOR_BACKGROUND,
        WIDGET_COLOR_BORDER,
        WIDGET_COLOR_FOREGROUND,
        WIDGET_COLOR_SELECTION,
        WIDGET_COLOR_HIGHLIGHT,
        WIDGET_COLOR_AUX0,
        WIDGET_COLOR_AUX1,
        WIDGET_COLOR_AUX2,
        WIDGET_COLOR_AUX3
    };

    There is a Widget::SetColor() command that lets you set any of the above values. Now, this is not a complete set of colors; the GUI system uses a lot more colors than that. The additional shades are generated by multiplying a defined color by some value to make it a little darker or a little lighter.
    This means I am making a decision to reduce the flexibility of the system in favor of more formalized feature support, better documentation, and better performance.
    I think we will be able to load a color scheme from a JSON file and that will allow enough customization that most things people want to do will be possible. For custom widget behavior, I think either an actor or a DLL plugin could be used. There are enough options for future extensibility that I feel like we will be okay deferring that decision for now, and I am not coding myself into a corner.
    Here's a shot of the current state of things:

    I probably have enough GUI code ahead of me I could just go silent for a month and stay busy with this. I don't really want to think about that for the rest of today. Goodnight.
  4. Josh
    In games we think of terrain as a flat plane subdivided into patches, but did you know the Earth is actually round? Scientists say that as you travel across the surface of the planet, a gradual slope can be detected, eventually wrapping all the way around to form a spherical shape! At small scales we can afford to ignore the curvature of the Earth but as we start simulating bigger and bigger terrains this must be accounted for. This is a big challenge. How do you turn a flat square shape into a sphere? One way is to make a "quad sphere", which is a subdivided cube with each vertex set to the same distance from the center:

    I wanted to be able to load in GIS datasets so we could visualize real Earth data. The problem is these datasets are stored using a variety of projection methods. Mercator projections are able to display the entire planet on a flat surface, but they suffer from severe distortion near the north and south poles. This problem is so bad that most datasets using Mercator projections cut off the data above and below 75 degrees or so:

    Cubic projections are my preferred method. This matches the quad sphere geometry and allows us to cover an entire planet with minimal distortion. However, few datasets are stored this way:

    It's not really feasible to re-map data into one preferred projection method. These datasets are enormous. They are so big that if I started processing images now on one computer, it might take 50 years to finish. We're talking thousands of terabytes of data that can be streamed in, most of which the user will never see even if they spend hours flying around the planet.
    There are many other projection methods:

    How can I make our terrain system handle a variety of projection methods to display data from multiple sources? This was a difficult problem I struggled with for some time before the answer came to me.
    The solution is to use a user-defined callback function that transforms a flat terrain into a variety of shapes. The callback function is used for culling, physics, raycasting, pathfinding, and any other system in which the CPU uses the terrain geometry:
    #ifdef DOUBLE_FLOAT
    void Terrain::Transform(void TransformCallback(const dMat4& matrix, dVec3& position, dVec3& normal, dVec3& tangent, const std::array<double, 16>& userparams), std::array<double, 16> userparams)
    #else
    void Terrain::Transform(void TransformCallback(const Mat4& matrix, Vec3& position, Vec3& normal, Vec3& tangent, const std::array<float, 16>& userparams), std::array<float, 16> userparams)
    #endif

    An identical function is used in the terrain vertex shader to warp the visible terrain into a matching shape. This idea is similar to the vegetation system in Leadwerks 4, which simultaneously calculates vegetation geometry in the vertex shader and on the CPU, without actually passing any data back and forth.
    void TransformTerrain(in mat4 matrix, inout vec3 position, inout vec3 normal, inout vec3 tangent, in mat4 userparams)

    The following callback can be used to handle quad sphere projection. The position of the planet is stored in the first three user parameters, and the planet radius is stored in the fourth parameter. It's important to note that the position supplied to the callback is the terrain point's position in world space before the heightmap displacement is applied. The normal is just the default terrain normal in world space. If the terrain is not rotated, then the normal will always be (0,1,0), pointing straight up. After the callback is run the heightmap displacement will be applied to the point, in the direction of the new normal. We also need to calculate a tangent vector for normal mapping. This can be done most easily by taking the original position, adding the original tangent vector, transforming that point, and normalizing the vector between that and our other transformed position.
    #ifdef DOUBLE_FLOAT
    void TransformTerrainPoint(const dMat4& matrix, dVec3& position, dVec3& normal, dVec3& tangent, const std::array<double, 16>& userparams)
    #else
    void TransformTerrainPoint(const Mat4& matrix, Vec3& position, Vec3& normal, Vec3& tangent, const std::array<float, 16>& userparams)
    #endif
    {
        //Get the position and radius of the sphere
    #ifdef DOUBLE_FLOAT
        dVec3 center = dVec3(userparams[0], userparams[1], userparams[2]);
    #else
        Vec3 center = Vec3(userparams[0], userparams[1], userparams[2]);
    #endif
        auto radius = userparams[3];

        //Get the tangent position before any modification
        auto tangentposition = position + tangent;

        //Calculate the ground normal
        normal = (position - center).Normalize();

        //Calculate the transformed position
        position = center + normal * radius;

        //Calculate transformed tangent
        auto tangentposnormal = (tangentposition - center).Normalize();
        tangentposition = center + tangentposnormal * radius;
        tangent = (tangentposition - position).Normalize();
    }

    And we have a custom terrain shader with the same calculation defined below:
    #ifdef DOUBLE_FLOAT
    void TransformTerrain(in dmat4 matrix, inout dvec3 position, inout dvec3 normal, inout dvec3 tangent, in dmat4 userparams)
    #else
    void TransformTerrain(in mat4 matrix, inout vec3 position, inout vec3 normal, inout vec3 tangent, in mat4 userparams)
    #endif
    {
    #ifdef DOUBLE_FLOAT
        dvec3 tangentpos = position + tangent;
        dvec3 tangentnormal;
        dvec3 center = userparams[0].xyz;
        double radius = userparams[0].w;
    #else
        vec3 tangentpos = position + tangent;
        vec3 tangentnormal;
        vec3 center = userparams[0].xyz;
        float radius = userparams[0].w;
    #endif

        //Transform normal
        normal = normalize(position - center);

        //Transform position
        position = center + normal * radius;

        //Transform tangent
        tangentnormal = normalize(tangentpos - center);
        tangentpos = center + tangentnormal * radius;
        tangent = normalize(tangentpos - position);
    }

    Here is how we apply a transform callback to a terrain:
    #ifdef DOUBLE_FLOAT
    std::array<double, 16> params = {};
    #else
    std::array<float, 16> params = {};
    #endif
    params[0] = position.x;
    params[1] = position.y;
    params[2] = position.z;
    params[3] = radius;
    terrain->Transform(TransformTerrainPoint, params);

    We also need to apply a custom shader family to the terrain material, so our special vertex transform code will be used:
    auto family = LoadShaderFamily("Shaders/CustomTerrain.json");
    terrain->material->SetShaderFamily(family);

    When we do this, something amazing happens to our terrain:

    If we create six terrains and position and rotate them around the center of the planet, we can merge them into a single spherical planet. The edges where the terrains meet don't line up on this planet because we are just using a single heightmap that doesn't wrap. You would want to use a data set split up into six faces.
    All our terrain features like texture splatting, LOD, tessellation, and streaming data are retained with this system. Terrain can be warped into any shape to support any projection method or other weird and wonderful ideas you might have.
  5. Josh

    Articles
    An update is available for Leadwerks 5 beta on Steam that adds a World::SetSkyColor() command. This allows you to set a gradient for PBR reflections when no skybox is in use.
    I learned with Leadwerks 4 that default settings are important. The vast majority of screenshots people show off are going to use whatever default rendering settings I program in. We need a good balance between quality and performance for the engine to use as defaults. Therefore, the engine will use SSAO and bloom effects by default, a gentle gradient will be applied to PBR reflections, and the metal / roughness values of new materials will each be 0.5. Here is the result when a simple box is created with a single directional light:

    And here is what a more complex model looks like, without any lights in the scene:

    You can use World::SetSkyColor() to change the intensity of the reflections:

    Or you can change the colors to get an entirely different look:

    A Lua example using this command is available in the "Scripts/Examples" folder.
    These features will help you to get better graphics out of the new engine with minimal effort.
  6. Josh
    A new update is available for Leadwerks 5 beta. This adds the ability to use post-processing effects together with render-to-texture. The SpriteLayer class has been renamed to Canvas and the Camera::AddSpriteLayer method has been renamed to Camera::AddCanvas.
    The beta has been moved to Steam and updates will be distributed there from now on. Beta testers were sent keys to install the program on their Steam accounts.
  7. Josh
    A new update is available to beta testers. This makes some pretty big changes so I wanted to release this before doing any additional work on the post-processing effects system.
    Terrain Fixed
    The terrain system is working again, with an example for Lua and C++.
    New Configuration Options
    New settings have been added in the "Config/settings.json" file:
    "MultipassCubemap": false,
    "MaxTextures": 512,
    "MaxCubemaps": 16,
    "MaxShadowmaps": 64,
    "MaxIntegerTextures": 32,
    "MaxUIntegerTextures": 32,
    "MaxCubeShadowmaps": 64,
    "MaxVolumeTextures": 16,
    "LuaErrorCommand": "code",
    "LuaErrorCommandArguments": "-g \"$(CurrentFile)\":$(LineNumber) \"$(AppDir)\""

    The max texture values will allow you to reduce the array size the engine requires for textures. If you have gotten an error message about "not enough texture units" this setting can be used to bring your application down under the limit your hardware has.
    The Lua settings define the command that is run when a Lua error occurs. By default this will open Visual Studio Code and display the file and line number the error occurs on.
    String Classes
    I've implemented two string classes for better string handling. The String and WString classes are derived from both the std::string / wstring AND the Object class, which means they can be used in a variable that accepts an object (like the Event.source member). 8-bit character strings will automatically convert to wide strings, but not the other way. All the Load commands used to have two overloads, one for narrow and one for wide strings. That has been replaced with a single command that accepts a WString, so you can call LoadTexture("brick.dds") without having to specify a wide string like this: L"brick.dds".
    The global string functions like Trim, Right, Mid, etc. have been added as methods on the two string classes. Eventually the global functions will be phased out.
    Lua Integration in Visual Studio Code
    Lua integration in Visual Studio Code is just about finished and it's amazing! Errors are displayed, debugging works great, and console output is displayed, just like any serious modern programming language. Developing with Lua in Leadwerks 5 is going to be a blast!

    Lua launch options are now available for Debug, Release, Debug 64f, and Release 64f.
    I feel the Lua support is good enough now that the .bat files are not needed. It's easier just to open VSCode and copy the example you want to run into Main.lua. These are currently located in "Scripts/Examples" but they will be moved into the documentation system in time.
    The black console window is going away and all executables are by default compiled as a windowed application, not a console app. The console output is still available in Visual Studio in the debug output, or it can be piped to a file with a .bat launcher.
    See the notes here on how to get started with VSCode and Lua.
  8. Josh
    A beta update is available.
    The ray tracing system is now using a smaller 128x128x128 grid. There is still only one single grid that does not move. Direct lighting calculation has been moved to the GPU. The GI will appear darker and won't look very good. Additional shader work is needed to make the data look right, and I probably need to implement a compute shader for parts of it. The system is now dynamic, although it currently has a lot of latency. GI renders only get triggered when something moves, so if everything is still the GI data will not be updated. There is a lot of work left to do, but I wanted to get the structure of the program in place first and then refine everything.
    I tested the TEX loader plugin and it appeared to work fine with bluegrid.tex, so I did not investigate any further.
    I started to implement a more sophisticated custom pixel shader function but I realized I didn't really know how to do it. Should the normal map lookup take place before this function, or be skipped entirely? What if the user modified the texture coordinates? The whole thing is not as simple as I thought and I need to think about it more.
    With those stipulations stated, this is a good intermediate update.
  9. Josh
    Previously, I showed how to create a terrain data set from a single 32768x32768 heightmap. The files have been uploaded to our Github account here. We will load data directly from the Github repository with our load-from-URL feature because this makes it very easy to share code examples. Also, even if you fly around the terrain for a long time, you are unlikely to ever need to download the complete data set. Think about Google Earth. How long would it take you to view the entire planet at full resolution? It's more ground than you can cover, so there is no need to download the whole set.
    Creating a streaming terrain is similar to a regular terrain. We set the terrain resolution to 32768, which is the maximum resolution we have height data for. The terrain is split into patches of 64x64 tiles. We also supply a URL and a callback function to load sections of terrain data.
    auto terrain = CreateTerrain(world, 32768, 64, "https://github.com/Leadwerks/Documentation/raw/master/Assets/Terrain/32768", FetchPatchInfo);

    Let's take a look at the FetchPatchInfo callback function. The function receives the terrain and a structure that contains information about the section we want to grab.
    void FetchPatchInfo(shared_ptr<StreamingTerrain> terrain, TerrainPatchInfo& patchinfo)

    The TerrainPatchInfo structure looks like this:
    struct TerrainPatchInfo
    {
        iVec2 position;
        iVec2 size;
        int level;
        shared_ptr<Pixmap> heightmap;
        shared_ptr<Pixmap> normalmap;
    };

    The most important parts of the TerrainPatchInfo structure are the position (iVec2), size (iVec2), and level (int) members. The level indicates the resolution level we are grabbing info for. Like model LODs, zero is the highest-resolution level, and as the level gets higher, the resolution gets lower. At ground level we are likely to be viewing level 0 data. If we are looking down on the terrain from space we are viewing the highest level, with the lowest resolution. Since our terrain is 32768x32768 and our patch size is 64, we can go up nine levels of detail before the terrain data fits into a single 64x64 heightmap. If you need to, take a look back at how we generated LOD data in my earlier article.
    The position parameter tells us where on the terrain the patch lies. The meaning of this value changes with the level we are at. Let's consider a small 4x4 terrain. At level 0, the maximum resolution, the patch positions are laid out like this:

    If we go up one level (1) we have a terrain made up of just four patches. Notice that the patch with position (1,1) is now in the lower-right hand corner of the terrain, even though in the previous image the tile with position (1,1) is in the upper left quadrant.

    At the highest LOD level (2) there is just a single tile with position (0,0):

    As you can see, the tile position by itself doesn't give us an accurate picture of where the tile is located. We need the LOD level to know what this value actually means.
    The size parameter tells us the size of the pixel data the patch expects to use. This will be the terrain patch size plus one. Why is it bigger than the terrain patch size we indicated in the CreateTerrain command? This is because an NxN patch of tiles uses (N+1)x(N+1) vertices. The 4x4 patch of tiles below uses 5x5 vertices. Since height and normal data is read per-vertex we need our texture data to match this layout. (This is not optimal for textures, which work best at power-of-two resolutions, but don't worry. The engine will copy these patches into a big texture atlas which is a power-of-two size.)

    To load height data, we just convert our level, x, and y values into a string and load the appropriate heightmap from our online repository. We have control over this because we previously saved all our heightmaps with the same naming convention.
    First we will load a 64x64 pixmap and copy that to the patch height data. This will cover most of the pixels except a one-pixel line along the right and lower edge:
    //Create path to heightmap file
    WString heightmappath = terrain->datapath + L"/LOD" + WString(patchinfo.level) + L"/" + WString(patchinfo.position.x) + L"_" + WString(patchinfo.position.y) + L".dds";

    //Load heightmap
    patchinfo.heightmap = CreatePixmap(patchinfo.size.x + 1, patchinfo.size.y + 1, TEXTURE_RED16);

    //Load most of the patch
    auto pixmap = LoadPixmap(heightmappath, 0, 0, LOAD_QUIET);
    if (pixmap)
    {
        Assert(pixmap->size.x + 1 == patchinfo.heightmap->size.x);
        Assert(pixmap->size.y + 1 == patchinfo.heightmap->size.y);
        pixmap->CopyRect(0, 0, pixmap->size.x, pixmap->size.y, patchinfo.heightmap, 0, 0);
    }

    Next we need to fill in the right edge of the height data. If we have not reached the edge of the terrain, we can load the next tile to the right and copy its left edge into the right edge of our height data. The CountPatches() method will tell us how many patches the terrain has at this resolution. If we have reached the edge of the terrain, we just copy the column of pixels that is one pixel from the right edge:
    iVec2 patches = terrain->CountPatches(patchinfo.level);
    if (patchinfo.position.x < patches.x - 1)
    {
        //Copy left edge of the tile to the right of this one to the right edge of the patch
        WString path = terrain->datapath + L"/LOD" + WString(patchinfo.level) + L"/" + WString(patchinfo.position.x + 1) + L"_" + WString(patchinfo.position.y) + L".dds";
        auto pixmap = LoadPixmap(path, 0, 0, LOAD_QUIET);
        if (pixmap) pixmap->CopyRect(0, 0, 1, pixmap->size.y, patchinfo.heightmap, patchinfo.heightmap->size.x - 1, 0);
    }
    else
    {
        //Edge of terrain reached, so copy the pixels second to last from the edge to the edge
        for (int y = 0; y < patchinfo.heightmap->size.y; ++y)
        {
            patchinfo.heightmap->WritePixel(patchinfo.heightmap->size.x - 1, y, patchinfo.heightmap->ReadPixel(patchinfo.heightmap->size.x - 2, y));
        }
    }

    We will do basically the same thing to fill in the bottom edge:
    if (patchinfo.position.y < patches.y - 1)
    {
        //Copy top edge of the tile beneath this one to the bottom edge of the patch
        WString path = terrain->datapath + L"/LOD" + WString(patchinfo.level) + L"/" + WString(patchinfo.position.x) + L"_" + WString(patchinfo.position.y + 1) + L".dds";
        auto pixmap = LoadPixmap(path, 0, 0, LOAD_QUIET);
        if (pixmap) pixmap->CopyRect(0, 0, pixmap->size.x, 1, patchinfo.heightmap, 0, patchinfo.heightmap->size.y - 1);
    }
    else
    {
        //Edge of terrain reached, so copy the pixels second to last from the edge to the edge
        for (int x = 0; x < patchinfo.heightmap->size.x; ++x)
        {
            patchinfo.heightmap->WritePixel(x, patchinfo.heightmap->size.y - 1, patchinfo.heightmap->ReadPixel(x, patchinfo.heightmap->size.y - 2));
        }
    }

    We also have to fill in the very lower-right pixel:
    if (patchinfo.position.x < patches.x - 1 and patchinfo.position.y < patches.y - 1)
    {
        //Copy the upper-left pixel of the tile diagonal to this one into the lower-right corner of the patch
        WString path = terrain->datapath + L"/LOD" + WString(patchinfo.level) + L"/" + WString(patchinfo.position.x + 1) + L"_" + WString(patchinfo.position.y + 1) + L".dds";
        auto pixmap = LoadPixmap(path, 0, 0, LOAD_QUIET);
        if (pixmap) pixmap->CopyRect(0, 0, 1, 1, patchinfo.heightmap, patchinfo.heightmap->size.x - 1, patchinfo.heightmap->size.y - 1);
    }
    else
    {
        //Write the lower-right pixel
        patchinfo.heightmap->WritePixel(patchinfo.heightmap->size.x - 1, patchinfo.heightmap->size.y - 1, patchinfo.heightmap->ReadPixel(patchinfo.heightmap->size.x - 2, patchinfo.heightmap->size.y - 2));
    }

    We have our height data completely loaded now. Next we're going to generate the normal map ourselves. Your data set might have normal maps already stored, but I prefer to generate these myself because normals need to be very precise and they must be generated for the exact height the terrain is being scaled to, in order for tessellated vertices to appear correctly. (Normals actually can't be scaled because it's a rotation problem, not a vector problem.)
    //Calculate the normal map - I'm not 100% sure on the height factor
    patchinfo.normalmap = patchinfo.heightmap->MakeNormalMap(TerrainHeight / pow(2, patchinfo.level), TEXTURE_RGBA);

    One thing to be very careful of is that the FetchPatchInfo callback is called on its own thread, and may be called several times at once on different threads to load data for different tiles. Any code executed in this function must be thread-safe! The nice thing about this is the engine does not stall if a patch of terrain is taking a long time to load.
    That's all it takes to get a streaming terrain up and running. You can replace the FetchPatchInfo callback with your own function and load data from any source. Texture layers / colors are something I am still working out, but this gives you all you need to display flat terrains with streaming data for big games and multi-domain simulations. Here are the results:
    Next we will begin warping the terrain geometry into arbitrary shapes in order to display planetary data with various projection methods.
  10. Josh
    Three years ago I realized we could safely distribute Lua script-based games on Steam Workshop without the need for a binary executable.  At the time this was quite extraordinary.
    http://www.develop-online.net/news/update-leadwerks-workshop-suggests-devs-can-circumvent-greenlight-and-publish-games-straight-to-steam/0194370
    Leadwerks Game Launcher was born.  My idea was that we could get increased exposure for your games by putting free demos and works in progress on Steam.  At the same time, I thought gamers would enjoy being able to try free indie games without the possibility of getting viruses.  Since then there have been some changes in the market.
    Anyone can publish a game to Steam for $100. Services like itch.io and GameJolt have become very popular, despite the dangers of malware. Most importantly, the numbers we see on the Game Launcher just aren't very high.  My own little game Asteroids3D is set up so the user automatically subscribes to it when the launcher starts.  Since March 2015 it has only gained 12,000 subscribers, and numbers of players for other games are much lower.  On the other hand, a simple game that was hosted on our own website a few years back called "Furious Frank" got 22,000 downloads.  That number could be much higher today if we had left it up.
    So it appears that Steam is good for selling products, but it is a lousy way to distribute free games.  In fact, I regularly sell more copies of Leadwerks Game Engine than I can give away free copies of Leadwerks Game Launcher.
    This isn't to say Game Launcher was a failure.  In many cases, developers reported getting download counts as high or higher than IndieDB, GameJolt, and itch.io.  This shows that the Leadwerks brand can be used to drive traffic to your games.
    On a technical level, the stability of Leadwerks Game Engine 4 means that I have been able to upgrade the executable and for the most part games seamlessly work with newer versions of the engine.  However, there are occasional problems and it is a shame to see a good game stop working.  The Game Launcher UI could stand to see some improvement, but I'm not sure it's worth putting a lot of effort into it when the number of installs is relatively low.
    Of course not all Leadwerks games are written in Lua.  Evayr has some amazing free C++ games he created, and we have several commercial products that are live right now, but our website isn't doing much to promote them.  Focusing on distribution through the Game Launcher left out some important titles and split the community.
    Finally, technological advancements have been made that make it easier for me to host large amounts of data on our site.  We are now hooking into Amazon S3 for user-uploaded file storage.  My bill last month was less than $4.00.
    A New Direction
    It is for these reasons I have decided to focus on refreshing our games database and hosting games on our own website.  You can see my work in progress here.
    https://www.leadwerks.com/games
    The system is being redesigned with some obvious inspiration from itch.io and the following values in mind:
    First and foremost, it needs to look good.
    Highly customizable game page.
    Clear call to action.
    There are two possible reasons to post your game on our site: either you want to drive traffic to your website or store page, or you want to get more downloads of your game.  Therefore each page has very prominent buttons on the top right to do exactly this.
    Each game page is skinnable with many options.  The default appearance is sleek and dark.

    You can get pretty fancy with your customizations.

    Next Steps
    The templates still need a lot of work, but it is 80% done.  You can begin playing around with the options and editing your page to your liking.  Comments are not shown on the page yet, as the default skin has to be overridden to match your page style, but they will be.
    You can also post your Game Launcher games here by following these steps:
    Find your game's file ID in the workshop.  For example, if the URL is "http://steamcommunity.com/sharedfiles/filedetails/?id=405800821" then the file ID is "405800821".
    Subscribe to your item, start Steam, and navigate to the folder where Game Launcher Workshop items are stored:
    C:\Program Files (x86)\Steam\steamapps\workshop\content\355500
    If your item is downloaded there will be a subfolder with the file ID:
    C:\Program Files (x86)\Steam\steamapps\workshop\content\355500\405800821
    Copy whatever file is found in that folder into a new folder on your desktop.  The file might be named "data.zip" or it could be named something like "713031292550146077_legacy.bin".  Rename the file "data.zip" if it is not already named that.
    Copy the Game Launcher game files located here into the same folder on your desktop:
    C:\Program Files (x86)\Steam\steamapps\common\Leadwerks Game Launcher\Game
    When you double-click "game.exe" (or just "game" on Linux) your game should now run.  Rename the executable to your game's name, including the Linux executable if you want to support Linux.
    Now zip up the entire contents of that folder and upload it on the site here.
    You can also select older versions of Game Launcher in the Steam app properties if you want to distribute your game with an older executable.
    Save the Games
    There are some really great little games that have resulted from the game tournaments over the years, but unfortunately many of the download links in the database point to dead DropBox and Google Drive accounts.  It is my hope that the community can work together to preserve all these fantastic gems and get them permanently uploaded to our S3 storage system, where they will be saved forever for future players to enjoy.
    If you have an existing game, please take a look at your page and make sure it looks right.
    Make any customizations you want for the page appearance.
    Clean up formatting errors like double line breaks, missing images, or dead links.
    Screenshots should go in the screenshot field, videos should go in the video field, and downloads should go in the downloads field.
    Some of the really old stuff can still be grabbed off our Google drive here.
    I appreciate the community's patience in working with me to try the idea of Game Launcher, but our results clearly indicate that a zip download directly from our website will get the most installs and is easiest for everyone.
  11. Josh
    Being able to support huge worlds is great, but how do you fill them up with content? Loading an entire planet into memory all at once isn't possible, so we need a system that allows us to stream terrain data in and out of memory dynamically. I wanted a system that could load data from any source, including local files on the hard drive or online GIS sources. Fortunately, I developed most of this system last spring and I am ready to finish it up now.
    Preparing Terrain Data
    The first step is to create a set of data to test with. I generated a 32768x32768 terrain using L3DT. This produces a 2.0 GB heightmap. The total terrain data with normals, terrain layers, and other data would probably exceed 10 GB, so we need to split this up into smaller pieces.
    Loading a 2 GB file into memory might be okay, but we have some special functionality in the new engine that can help with this. First, some terminology: A Stream is an open file that can be read from or written to. A Buffer is a block of memory that can have values poked / peeked at a specific offset. (This is called a "Bank" in Leadwerks 4.) A BufferStream is a block of memory with an internal position value that allows reading and writing with Stream commands. We also have the new StreamBuffer class, which allows you to use Buffer commands on a file on the hard drive! The advantage here is you can treat a StreamBuffer like it's a big block of memory without actually loading the entire file into memory at once.
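    To make the StreamBuffer idea concrete, here is a minimal stand-alone sketch of the concept: Buffer-style random access backed by a file on disk, so nothing is loaded into memory up front. The class and method names here are hypothetical illustrations, not the engine's actual API:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

// Sketch of the StreamBuffer concept: peek values at arbitrary offsets
// in a file without ever reading the whole file into memory.
class FileBackedBuffer
{
public:
    explicit FileBackedBuffer(const char* path) { file = std::fopen(path, "rb"); }
    ~FileBackedBuffer() { if (file) std::fclose(file); }

    // Read one 16-bit height sample at a byte offset, Buffer-peek style.
    uint16_t PeekUShort(long offset)
    {
        if (!file) return 0;
        std::fseek(file, offset, SEEK_SET);
        uint16_t value = 0;
        std::fread(&value, sizeof(value), 1, file);
        return value;
    }

private:
    std::FILE* file = nullptr;
};
```

    A 2 GB .r16 heightmap accessed through a wrapper like this costs only a file handle, which is why a 32768x32768 pixmap can sit on top of it without any real memory usage.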
    Our Pixmap class allows easy manipulation, copying, and conversion of pixel data. The CreatePixmap() function can accept a Buffer as the source of the pixel data. The StreamBuffer class is derived from the Buffer class, so we can create a StreamBuffer from a file and then create a 32768x32768 pixmap without actually loading the data into memory like so:
    auto stream = ReadFile("Terrain/32768/32768.r16");
    auto buffer = CreateStreamBuffer(stream, 0, stream->GetSize());
    auto pixmap = CreatePixmap(32768, 32768, TEXTURE_R16, buffer);
    So at this point we have a 32768x32768 heightmap that can be manipulated without actually using any memory.
    Next we are going to split the pixmap up into a bunch of smaller pixmaps and save each one as a separate file. To do this, we will create a single 1024x1024 pixmap:
    auto dest = CreatePixmap(1024, 1024, TEXTURE_R16);
    Then we simply walk through the original heightmap, copy a 1024x1024 patch of data to our small heightmap, and save each patch as a separate file in .dds format:
    CreateDir("Terrain/32768/LOD0");
    for (int x = 0; x < pixmap->size.x / 1024; ++x)
    {
        for (int y = 0; y < pixmap->size.y / 1024; ++y)
        {
            pixmap->CopyRect(x * 1024, y * 1024, 1024, 1024, dest, 0, 0);
            dest->Save("Terrain/32768/LOD0/" + String(x) + "_" + String(y) + ".dds");
        }
    }
    We end up with a set of 1024 smaller heightmap files. (I took this screenshot while the program was still processing, so at the time there were only 411 files saved.)

    Creating LODs
    When you are working with large terrains it is necessary to store data at multiple resolutions. The difference between looking at the Earth from orbit and at human-scale height is basically like the difference between macroscopic and microscopic viewing. (Google Earth demonstrates this pretty well.) We need to take our full-resolution data and resample it into a series of lower-resolution data sets. We can do that all in one go with the following code:
    int num = 32; // = 32768 / 1024
    int lod = 0;
    while (num > 1)
    {
        CreateDir("Terrain/32768/LOD" + String(lod + 1));
        for (int x = 0; x < num / 2; ++x)
        {
            for (int y = 0; y < num / 2; ++y)
            {
                auto pm00 = LoadPixmap("Terrain/32768/LOD" + String(lod) + "/" + String(x * 2 + 0) + "_" + String(y * 2 + 0) + ".dds");
                auto pm10 = LoadPixmap("Terrain/32768/LOD" + String(lod) + "/" + String(x * 2 + 1) + "_" + String(y * 2 + 0) + ".dds");
                auto pm01 = LoadPixmap("Terrain/32768/LOD" + String(lod) + "/" + String(x * 2 + 0) + "_" + String(y * 2 + 1) + ".dds");
                auto pm11 = LoadPixmap("Terrain/32768/LOD" + String(lod) + "/" + String(x * 2 + 1) + "_" + String(y * 2 + 1) + ".dds");
                pm00 = pm00->Resize(512, 512);
                pm10 = pm10->Resize(512, 512);
                pm01 = pm01->Resize(512, 512);
                pm11 = pm11->Resize(512, 512);
                pm00->CopyRect(0, 0, 512, 512, dest, 0, 0);
                pm10->CopyRect(0, 0, 512, 512, dest, 512, 0);
                pm01->CopyRect(0, 0, 512, 512, dest, 0, 512);
                pm11->CopyRect(0, 0, 512, 512, dest, 512, 512);
                dest->Save("Terrain/32768/LOD" + String(lod + 1) + "/" + String(x) + "_" + String(y) + ".dds");
            }
        }
        num /= 2;
        lod++;
    }
    The LOD1 folder then contains 256 1024x1024 heightmaps. The LOD2 folder contains 64, and so on, all the way to LOD5, which contains the entire terrain downsampled into a single 1024x1024 heightmap:
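    As a sanity check, the tile counts at each level follow directly from halving the number of tiles per axis. Here is a small stand-alone sketch of that arithmetic (the helper function is hypothetical, not engine code):

```cpp
#include <cassert>

// Tiles per axis at a given LOD for a terrain split into fixed-size patches.
// LOD0 of a 32768-pixel terrain with 1024-pixel patches is 32x32 tiles,
// and each successive LOD halves the tile count per axis.
int TilesPerAxis(int terrainsize, int patchsize, int lod)
{
    return (terrainsize / patchsize) >> lod;
}
```

    Squaring the result gives the file counts mentioned above: 16 x 16 = 256 tiles at LOD1, 8 x 8 = 64 at LOD2, down to a single tile at LOD5.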

    Now we have a multi-resolution data set that can be dynamically loaded into the engine. (If we were loading data from an online GIS data set it would probably already be set up like this.) The next step will be to set up a custom callback function that handles the data loading.
  12. Josh
    In my work with NASA we visualize many detailed CAD models in VR. These models may consist of tens of millions of polygons and thousands of articulated sub-objects. This often results in rendering performance that is bottlenecked by the vertex rather than the fragment pipeline. I recently performed some research to determine how to maximize our rendering speed in these situations.
    Leadwerks 4 used separate vertex buffers, but in Leadwerks 5 I have been working exclusively with interleaved vertex buffers. Data is interleaved and packed tightly. I always knew this could make a small improvement in speed, but I underestimated how important this is. Each byte in the data makes a huge impact. Now vertex colors and the second texture coordinate set are two vertex attributes that are almost never used. I decided to eliminate these. If required, this data can be packed into a 1D texture, applied to a material, and then read in a custom vertex shader, but I don't think the cost of keeping this data in the default vertex structure is justified. By reducing the size of the vertex structure I was able to make rendering speed in vertex-heavy scenarios about four times faster.
    Our vertex structure has been cut down to a convenient 32 bytes:
    struct Vertex
    {
        Vec3 position;
        short texcoords[2];
        signed char normal[3];
        signed char displacement;
        signed char tangent[4];
        unsigned char boneweights[4];
        unsigned char boneindices[4];
    };
    I created a separate vertex buffer for rendering shadow maps, which only require position data. I decided to copy the position data into this and store it separately. This requires about 15% more vertex memory usage, but results in a much more compact vertex structure for faster shadow rendering. I may pack the vertex texture coordinates in there, since that would result in a 16-byte-aligned structure. I did not see any difference in performance on my Nvidia card and I suspect this is the same cost as a 12-byte structure on most hardware.
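    Because every member of this layout is naturally aligned, the compiler inserts no padding and the 32-byte size can be verified at compile time. Here is a stand-alone sketch of that check; the Vec3 definition is assumed to be three packed floats:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// The vertex layout from above, with per-member sizes annotated.
// 12 + 4 + 3 + 1 + 4 + 4 + 4 = 32 bytes, with no padding needed.
struct Vertex
{
    Vec3 position;                 // 12 bytes
    short texcoords[2];            //  4 bytes
    signed char normal[3];         //  3 bytes
    signed char displacement;      //  1 byte
    signed char tangent[4];        //  4 bytes
    unsigned char boneweights[4];  //  4 bytes
    unsigned char boneindices[4];  //  4 bytes
};

// Fails to compile if any member change introduces padding or growth.
static_assert(sizeof(Vertex) == 32, "vertex structure should pack to 32 bytes");
```

    A check like this is cheap insurance: adding or reordering a member that breaks the tight packing will be caught at compile time rather than showing up as a silent performance regression.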
    Using unsigned shorts instead of unsigned integers for mesh indices increases performance by 11%.
    A vertex-limited scene is one in which our default setting of using an early Z-pass can be a disadvantage, so I added an option to disable this on a per-camera basis.
    Finally, I found that vertex cache optimization tools can produce a significant performance increase. I implemented two different libraries. In order to do this, I added a new plugin function for filtering a mesh:
    int FilterMesh(char* filtername, char* params, GMFSDK::GMFVertex*& vertices, uint32_t& vertex_count, uint32_t*& indices, uint32_t& indice_count, int polygonpoints);
    This allows you to add new mesh processing routines such as flipping the indices of a mesh, calculating normals, or performing mesh modifications like bending, twisting, distorting, etc. Both libraries resulted in an additional 100% increase in framerate in vertex-limited scenes.
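    As an example of the kind of routine such a plugin might implement, here is a stand-alone sketch of flipping triangle winding by swapping two indices per triangle. This is an illustration of the concept only, not actual plugin SDK code, and the function name is hypothetical:

```cpp
#include <cassert>
#include <cstdint>

// Reverse the winding order of every triangle in an index array by
// swapping its first two indices. Assumes a triangle list (3 indices
// per polygon, i.e. polygonpoints == 3 in the FilterMesh signature).
void FlipTriangleWinding(uint32_t* indices, uint32_t indice_count)
{
    for (uint32_t i = 0; i + 2 < indice_count; i += 3)
    {
        uint32_t tmp = indices[i];
        indices[i] = indices[i + 1];
        indices[i + 1] = tmp;
    }
}
```

    A real filter would receive the vertex array as well, so it could also recalculate normals or apply deformations in the same pass.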
    What will this help with? These optimizations will make a real difference when rendering CAD models and point cloud data.
  13. Josh
    So far the new Voxel ray tracing system I am working out is producing amazing results. I expect the end result will look like Minecraft RTX, but without the enormous performance penalty of RTX ray tracing.
    I spent the last several days getting the voxel update speed fast enough to handle dynamic reflections, but the more I dig into this the more complicated it becomes. Things like a door sliding open are fine, but small objects moving quickly can be a problem. The worst case scenario is when the player is carrying an object in front of them. In the video below, the update speed is fast, but the limited resolution of the voxel grid makes the reflections flash quite a lot. This is due to the reflection of the barrel itself. The gun does not contribute to the voxel data, and it looks perfectly fine as it moves around the scene, aside from the choppy reflection of the barrel in motion.
    The voxel resolution in the above video is set to about 6 centimeters. I don't see increasing the resolution as an option that will go very far. I think what is needed is a separation of dynamic and static objects. A sparse voxel octree will hold all static objects. This needs to be precompiled and it cannot change, but it will handle a large amount of geometry with low memory usage. For dynamic objects, I think a per-object voxel grid should be used. The voxel grid will move with the object, so reflections of moving objects will update instantaneously, eliminating the problem we see above.
    We are close to having a very good 1.0 version of this system, and I may wrap this up soon, with the current limitations. You can disable GI reflections on a per-object basis, which is what I would recommend doing with dynamic objects like the barrels above. The GI and reflections are still dynamic and will adjust to changes in the environment, like doors opening and closing, elevators moving, and lights moving and turning on and off. (If those barrels above weren't moving, showing their reflections would be absolutely no problem, as I have demonstrated in previous videos.)
    In general, I think ray tracing is going to be a feature you can take advantage of to make your games look incredible, but it is something you have to tune. The whole "Hey Josh I created this one weird situation just to cause problems and now I expect you to account for this scenario AAA developers would purposefully avoid" approach will not work with ray tracing. At least not in the 1.0 release. You're going to want to avoid the bad situations that can arise, but they are pretty easy to prevent. Perhaps I can combine screen-space reflections with voxels for reflections of dynamic objects before the first release.
    If you are smart about it, I expect your games will look like this:
    I had some luck with real-time compression of the voxel data into BC3 (DXT5) format. It adds some delay to the updating, but if we are not trying to show moving reflections much then that might be a good tradeoff. Having only 25% of the data being sent to the GPU each frame is good for performance.
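    The 25% figure follows from the BC3 block layout: each 4x4 block of RGBA8 pixels compresses from 64 bytes down to a fixed 16 bytes, a 4:1 ratio. A quick sketch of the arithmetic (hypothetical helper functions, assuming dimensions that are multiples of 4):

```cpp
#include <cassert>

// BC3 (DXT5) stores each 4x4 pixel block in 16 bytes.
int BC3Size(int width, int height)
{
    return (width / 4) * (height / 4) * 16;
}

// Uncompressed RGBA8 uses 4 bytes per pixel (64 bytes per 4x4 block).
int RGBA8Size(int width, int height)
{
    return width * height * 4;
}
```

    So a compressed voxel texture uploads exactly one quarter of the bytes of the uncompressed version, regardless of resolution.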
    Another change I am going to make is a system that triggers voxel refreshes, instead of constantly updating the data no matter what. If you sit still and nothing is moving, then the voxel data won't get recalculated and processed, which will make the performance even faster. This makes sense if we expect most of the data not to change each frame.
    I haven't run any performance benchmarks yet, but from what I am seeing I think the performance penalty for using this system will be basically zero, even on integrated graphics. Considering what a dramatic upgrade in visuals this provides, that is very impressive.
    In the future, I think I will be able to account for motion in voxel ray tracing, as well as high-definition polygon raytracing for sharp reflections, but it's not worth delaying the release of the engine. Hopefully in this article I showed there are many factors, and many approaches we can use to optimize for different aspects of the effect. For the 1.0 release of our new engine, I think we want to emphasize performance above all else.
  14. Josh
    Crowdfunding campaigns are a great way to kick off marketing for a game or product, with several benefits.
    Free promotion to your target audience. Early validation of an idea before you create the product. A successful crowdfunding campaign demonstrates organic consumer interest, which makes bloggers and journalists much more willing to give your project coverage. Oh yeah, there's also the financial aspect, but that's actually the least important part. If you make $10,000 in crowdfunding, you can leverage that campaign to make far more than that amount in sales of your final product. I did over a million dollars in sales on Steam starting with a $40,000 Kickstarter project.
    There are two types of crowdfunding projects. The first is something you don't really want to do unless you get paid enough to make it worthwhile. For this type of project you should set a goal for the minimum amount of money you would be able to finish the project for. There is more uncertainty with this type of campaign, but if you don't meet your goal you don't have to deliver anything. Failing early can be a good thing, because there's nothing worse than building a product and having nobody buy it. With the successful Leadwerks for Linux Kickstarter campaign, people were asking for Linux support and I said "Okay, put your money where your mouth is" and they did.
    The second type of project is something you would probably do anyways, and a crowdfunding campaign just gives you a way to test demand and make some extra cash early. For this type of project you should set a relatively low goal, something you think you can earn quickly. If your campaign fails, that puts you in an awkward position because then you have to either cancel the project or admit you didn't actually need the money. A successful campaign does put you on the hook with a delivery date and a firm description of the product, so make sure your goals are realistic and attainable within your planned time frame.
    For a campaign to be successful you need to prepare. Don't just kick off a campaign without having an existing fanbase. You need to build an email list of people interested in your project before the campaign starts. But if you haven't done that yet, there is another way...
    With my crowdfunding campaign for the new engine coming up in October, there is an opportunity for others to latch on to the success of the upcoming campaign. I have an extensive email list I don't use very often, and my more formal blog articles regularly get 20,000+ views. Plus I now have some reach on Steam, and a lot more customers than back in 2013. I expect my campaign will hit its target goal within the first few days. Once my goal is reached, it would be easy for me to post an announcement saying "Oh hey, check out these other projects built with my technology" and add links on my project page. Your project could link back to mine and to others, and we can create a network of projects utilizing the new game engine technology. I think my new campaign will be very successful, and jumping onto that will probably give you a better result than you would get otherwise.
    Another thing to consider is that with the new ray-tracing technology, even simple scenes look incredible. I think there is a temporary window of opportunity where games that utilize this type of technology will stand out and automatically get more attention because the graphics look so dramatically better. My final results will make your game look like the shot from Minecraft RTX below, but the voxel method I am using will run fast on all hardware:

    So if you have a game project made with the new engine, or something that would look good in the new engine, there is an opportunity to piggyback your crowdfunding campaign off of mine. What makes a good game pitch? Demonstrating gameplay, having a playable demo, a track record of past published games, and gameplay videos all make a much better case than pages of bullet points. (I like animated GIFs because they show a lot more than a static screenshot but they are dead simple and fun.) You need to inspire the audience to believe in your concept, and for them to believe in your ability to deliver. So put your best foot forward!
  15. Josh
    I've been working to make my previously demonstrated voxel ray tracing system fully dynamic. Getting the voxel data to update fast enough was a major challenge, and it forced me to rethink the design. In the video below you can see the voxel data being updated at a sufficient speed. Lighting has been removed, as I need to change the way this runs.
    I plan to keep two copies of the data in memory and let the GPU interpolate smoothly in between them, in order to smooth out the motion. Next I need to add the direct lighting and GI passes back in, which will add an additional small delay but hopefully be within a tolerable threshold.
  16. Josh
    A new beta update is available. The raytracing implementation has been sped up significantly. The same limitations of the current implementation still apply, but the performance will be around 10x faster, as the most expensive part of the raytrace shader has been precomputed and cached.
    The Material::SetRefraction method has also been exposed to Lua. The Camera::SetRefraction method is now called "SetRefractionMode".
    The results are so good, I don't have any plans to use any kind of screen-space reflection effect.
     
  17. Josh
    An update is available for beta testers.
    All Lua errors should now display the error message and open the script file and go to the correct line the error occurs on.
    The voxel raytracing system is now accessible. To enable it, just call Camera:SetGIMode(true).
    At this time, only a single voxel grid with dimensions of 32 meters, centered at the origin, is in use.
    The voxel grid will only be generated once, at the time the SetGIMode() method is called. Only the models that have already been loaded will be included when the voxel grid is built.
    Building takes several seconds in debug mode but less than one second in release.
    Raytraced GI and reflections do not take into account material properties yet, so there is no need to adjust PBR material settings at this time.
    Skyboxes and voxels are not currently combined; only one or the other is shown.
    Performance is much faster than Nvidia RTX but still has a lot of room for improvement. If it is too slow for you right now, use a smaller window resolution. It will get faster as I work on it more.
    The raytracing stuff makes such a huge difference that I wanted to get a first draft out to the testers as quickly as possible. I am very curious to see what you are able to do with it.
  18. Josh
    PBR materials look nice, but their reflections are only as good as the reflection data you have. Typically this is done with hand-placed environment probes that take a long time to lay out and display a lot of visual artifacts. Nvidia's RTX raytracing technology is interesting, but it struggles to run old games on a super-expensive GPU. My goal in Leadwerks 5 is to have automatic reflections and global illumination that don't require any manual setup, with fast performance.
    I'm on the final step of integrating our voxel raytracing data into the standard lighting shader and the results are fantastic. I found I could compress the 3D textures in BC3 format in real-time and save a ton of memory that way. However, I discovered that only about 1% of the 3D voxel texture actually has any data in it! That means there could be a lot of room for improvement with a sparse voxel octree texture of some sort, which could allow greater resolution. In any case, the remaining implementation of this feature will be very interesting. (I believe the green area on the back wall is an artifact caused by the BC3 compression.)
    I think I can probably render the raytracing component of the scene in a separate, smaller buffer and then denoise it like I did with SSAO, to make the performance hit negligible. Another interesting thing is that the raytracing automatically creates its own ambient occlusion effect.
    Here is the current state, showing the raytraced component only. It works great with our glass refraction effects.
    Next I will start blending it into the PBR material lighting calculation a little better.
    Here's an updated video that shows it worked into the lighting more:
     
  19. Josh
    An update is available that adds the new refraction effect. It's very easy to create a refractive transparent material:
    auto mtl = CreateMaterial();
    mtl->SetTransparent(true);
    mtl->SetRefraction(0.02);
    The default FPS example shows some nice refraction, with two overlapping layers of glass, with lighting on all layers. It looks great with some of @TWahl's PBR materials.

    If you want to control the strength of the refraction effect on a per-pixel basis add an alpha channel to your normal map.
    I've configured the launch.json for Visual Studio Code so that the currently selected file is passed to the program on the command line. By default, the game executable will run the "Scripts/Main.lua" file. If, however, the currently selected Lua file in the VSCode IDE is a file located in "Scripts/Examples", the executable will launch that one instead. This design allows you to quickly run a different script without overwriting Main.lua, but won't accidentally run a different script if you are working on something else.

    The whole integration with Visual Studio Code has gotten really nice.

    A new option "frameBufferColorFormat" has been added to the Config/settings.json file to control the default color format for texture buffers. I have it set to 37 (VK_FORMAT_R8G8B8A8_UNORM), but you can set it to 91 (VK_FORMAT_R16G16B16A16_UNORM) for high-def color, though you probably won't see any difference without an additional tone mapping post-processing effect.
    Slow performance in the example game has been fixed. There are a few things going on here. Physics weren't actually the problem; it was the Lua debugger. The biggest problem was an empty Update() function that all the barrels had in their script. Now, this should not really be a problem, but I suspect the routine in vscode-debugger.lua that finds the matching chunk name is slow and can be optimized quite a lot. I did not want to make any additional changes to it right now, but in the future I think this can be further improved. But anyways, the FPS example will be nice and snappy now and runs normally.
    Application shut down will be much faster now, as I did some work to clean up the way the engine cleans itself up upon termination.
  20. Josh
    Heat haze is a difficult problem. A particle emitter is created with a transparent material, and each particle warps the background a bit. The combined effect of lots of particles gives the whole background a nice shimmering wavy appearance. The problem is that when two particles overlap one another they don't blend together, because the last particle drawn is using the background of the solid world for the refracted image. This can result in a "popping" effect when particles disappear, as well as apparent seams on the edges of polygons.

    In order to do transparency with refraction the right way, we are going to render all our transparent objects into a separate color texture and then draw that texture on top of the solid scene. We do this in order to accommodate multiple layers of transparency and refraction. Now, the correct way to handle multiple layers would be to render the solid world, render the first transparency object, then switch to another framebuffer and use the previous framebuffer color attachment for the source of your refraction image. This could be done per-object, although it could get very expensive, flipping back and forth between two framebuffers, but that still wouldn't be enough.
    If we render all the transparent surfaces into a single image, we can blend their normals, refractive index, and other properties, and come up with a single refraction vector that combined the underlying surfaces in the best way possible.
    To do this, the transparent surface color is rendered into the first color attachment. Unlike deferred lighting, the pixels at this point are fully lit.

    The screen normals are stored in an additional color attachment. I am using world normals in this shot but later below I switched to screen normals:

    These images are drawn on top of the solid scene to render all transparent objects at once. Here we see the green box in the foreground is appearing in the refraction behind the glass dragon.

    To prevent this from happening, we need to add another color texture to the framebuffer and render the pixel Z position into it. I am using the R32_SFLOAT format. I use the separate blend mode feature in Vulkan, and set the blend mode to minimum so that the smallest value always gets saved in the texture. The Z position is divided by the camera far range in the fragment shader, so that the saved values are always between 0 and 1. The clear color for this attachment is set to 1,1,1,1, so any value written into the buffer will replace the background. Note this is the depth of the transparent pixels, not the whole scene, so the area in the center where the dragon is occluded by the box is pure white, since those pixels were not drawn.
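    The minimum-blend trick can be emulated on the CPU to show why clearing to 1.0 works: any fragment that writes a normalized Z value replaces the background, and where fragments overlap, only the closest one survives. This is just a sketch of the idea, not the actual Vulkan setup:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// CPU model of the minimum-blend depth attachment. The buffer is cleared
// to 1.0 (the far range after dividing Z by the camera far distance), and
// each transparent fragment keeps only the smallest value per pixel.
struct MinBlendDepthBuffer
{
    std::vector<float> pixels;

    explicit MinBlendDepthBuffer(std::size_t count) : pixels(count, 1.0f) {}

    void Write(std::size_t index, float normalizedZ)
    {
        // VK_BLEND_OP_MIN-style behavior: keep the closer fragment
        pixels[index] = std::min(pixels[index], normalizedZ);
    }
};
```

    Pixels that no transparent surface touches stay at 1.0, which is why the occluded region behind the box reads as pure white in the attachment above.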

    In the transparency pass, the Z position of the transparent pixel is compared to the Z position at the refracted texcoords. If the refracted position is closer to the camera than the transparent surface, the refraction is disabled for that pixel and the background directly behind the pixel is shown instead. There is some very slight red visible in the refraction, but no green.
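    In pseudocode, the per-pixel test looks roughly like this (names are illustrative, not engine shader code):

    ```lua
    -- Sketch: disable refraction when the refracted sample would show
    -- something in front of the transparent surface.
    function SampleBackground(zBuffer, scene, surfaceZ, texcoord, refractedTexcoord)
        local refractedZ = zBuffer[refractedTexcoord.y][refractedTexcoord.x]
        if refractedZ < surfaceZ then
            -- The refracted sample is closer to the camera than this surface,
            -- so fall back to the background directly behind the pixel.
            return scene[texcoord.y][texcoord.x]
        end
        return scene[refractedTexcoord.y][refractedTexcoord.x]
    end
    ```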

    Now let's see how well this handles heat haze / distortion. We want to prevent artifacts where two particles overlap. Here is what a particle emitter looks like when rendered to the transparency framebuffer, this time using screen-space normals. The particles aren't rotating, so there are visible repetitions in the pattern, but that's okay for now.

    And finally here is the result of the full render. As you can see, the seams and popping are gone, and we have a heavy but smooth distortion effect. Particles can safely overlap without causing any artifacts, as their normals are simply blended together and combined to create a single refraction angle.

     
  21. Josh
    A new update is available that improves Lua integration in Visual Studio Code and fixes Vulkan validation errors.
    The SSAO effect has been improved with a denoise filter. Similar to Nvidia's RTX raytracing technology, this technique smooths the results of the SSAO pass, resulting in a better appearance.

    It also requires far fewer samples, and the SSAO pass can be run at a lower resolution. I lowered the number of SSAO samples from 64 to 8 and decreased the area of the image to 25%, and it looks better than the SSAO in Leadwerks 4, which could appear somewhat grainy. With the default SSAO and bloom effects enabled, I see no difference in framerate compared to the performance when no post-processing effects are in use.
    I upgraded my install of the Vulkan SDK to 1.2 and a lot of validation errors were raised. They are all fixed now. The image layout transition stuff is ridiculously complicated, and I can see no reason why it is even exposed as a feature: the driver could easily handle this by storing the current state and switching whenever needed, which is exactly what I ended up doing in my own wrapper class. In theory, everything should work perfectly on all supported hardware now, since the validation layers say it is correct.
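    The wrapper idea is simple: remember the current layout of each image and only record a transition when the requested layout differs. A sketch of the logic in Lua-style pseudocode (the real class is C++ wrapping vkCmdPipelineBarrier):

    ```lua
    -- Sketch: tracking image layouts so barriers are only recorded when needed.
    ImageTracker = { layouts = {} }

    function ImageTracker:Transition(image, newLayout)
        local current = self.layouts[image] or "VK_IMAGE_LAYOUT_UNDEFINED"
        if current == newLayout then return end -- already correct, no barrier needed
        -- record a pipeline barrier from 'current' to 'newLayout' here...
        self.layouts[image] = newLayout
    end
    ```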
    You can now explicitly state the validation layers you want loaded, in settings.json, although there isn't really any reason to do this:
    "vkValidationLayers": {
        "debug": [
            "VK_LAYER_LUNARG_standard_validation",
            "VK_LAYER_KHRONOS_validation"
        ]
    }
    Debugging Lua in Visual Studio Code is improved. The object type will now be shown so you can more easily navigate debug information.

    That's all for now!
  22. Josh
    A new update is available for beta testers.
    The dCustomJoints and dContainers DLLs are now optional if your game is not using any joints (even if you are using physics).
    The following methods have been added to the collider class. These let you perform low-level collision tests yourself:
    Collider::ClosestPoint
    Collider::Collide
    Collider::GetBounds
    Collider::IntersectsPoint
    Collider::Pick
    The PluginSDK now supports model saving, and an OBJ save plugin is provided. It's very easy to convert models this way using the new Model::Save() method:
    auto plugin = LoadPlugin("Plugins/OBJ.dll");
    auto model = LoadModel(world,"Models/Vehicles/car.mdl");
    model->Save("car.obj");
    Or create models from scratch and save them:
    auto box = CreateBox(world,10,2,10);
    box->Save("box.obj");
    I have used this to recover some of my old models from Leadwerks 2 and convert them into GLTF format:

    There is additional documentation now on the details of the plugin system and all the features and options.
    Thread handling is improved so you can run a simple application that handles 3D objects and exits out without ever initializing graphics.
    Increased strictness of headers for private and public members and methods.
    Fixed a bug where directional lights couldn't be hidden. (Check out the example for the CreateLight command in the new docs.)
    All the Lua scripts in the "Scripts\Start" folder are now executed when the engine initializes, instead of when the first script is run. These will be executed for all programs automatically, so it is useful for automatically loading plugins or workflows. If you don't want to use Lua at all, you can delete the "Scripts" folder and the Lua DLL, but you will need to load any required plugins yourself with the LoadPlugin command.
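    A startup script can be as simple as this (a hypothetical file; the OBJ plugin path is taken from the example above):

    ```lua
    -- Scripts/Start/LoadPlugins.lua
    -- Any script in the "Scripts/Start" folder runs automatically when the
    -- engine initializes, making it a good place to load plugins.
    local plugin = LoadPlugin("Plugins/OBJ.dll")
    ```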
    Shadow settings are simplified. In Leadwerks 4, entities could be set to static or dynamic shadows, and lights could use a combination of static, dynamic, and buffered modes. You can read the full explanation of this feature in the documentation here. In Leadwerks 5, I have distilled that down to two commands. Entity::SetShadows accepts a boolean, true to cast shadows and false not to. Additionally, there is a new Entity::MakeStatic method. Once this is called on an entity it cannot be moved or changed in any way until it is deleted. If MakeStatic() is called on a light, the light will store an intermediate cached shadowmap of all static objects. When a dynamic object moves and triggers a shadow redraw, the light will copy the static shadow buffer to the shadow map and then draw any dynamic objects in its range. For example, if a character walks across a room with a single point light, the character model has to be drawn six times but the static scene geometry doesn't have to be redrawn at all. This can result in an enormous reduction of rendered polygons. (This is something id Software's Doom engine does, although I implemented it first.)
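    In script, the simplified API boils down to something like this. This is a hedged sketch assuming the Lua bindings mirror the C++ names, and the model paths are placeholders:

    ```lua
    -- Sketch: static shadow caching with the two simplified shadow commands.
    local light = CreatePointLight(world) -- assumed constructor name
    light:MakeStatic()        -- light caches a shadow map of all static geometry

    local room = LoadModel(world, "Models/room.mdl") -- hypothetical path
    room:MakeStatic()         -- static: drawn once into the cached shadow buffer

    local character = LoadModel(world, "Models/character.mdl") -- hypothetical path
    character:SetShadows(true) -- dynamic: redrawn on top of the cached buffer when it moves
    ```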
    In the documentation example the shadow polygon count is 27,000 until I hit the space key to make the light static. The light then renders the static scene (everything except the fan blade) into an image, and thereafter that cached image is copied to the shadow map before the dynamic scene objects are drawn. This causes the number of shadow polygons rendered to drop dramatically, since the whole scene does not have to be redrawn each frame.

    I've started using animated GIFs in some of the documentation pages and I really like it. For some reason GIFs feel so much more "solid" and stable. I always think of web videos as some iframe thing that loads separately, lags and doesn't work half the time, and is embedded "behind" the page, but a GIF feels like it is a natural part of the page.

    My plan is to put 100% of my effort into the documentation and make that as good as possible. Well, if there is an increased emphasis on one thing, that necessarily means a decreased emphasis on something else. What am I reducing? I am not going to create a bunch of web pages explaining what great features we have, because the documentation already does that. I also am not going to attempt to make "how to make a game" tutorials. I will leave that to third parties, or defer it into the future. My job is to make attractive and informative primary reference material for people who need real usable information, not to teach non-developers to be developers. That is my goal with the new docs.
  23. Josh
    A new update is available to beta testers.
    I updated the project to the latest Visual Studio 16.6.2 and adjusted some settings. Build speeds are massively improved. A full rebuild of your game in release mode will now take less than ten seconds. A normal debug build, where just your game code changes, will take about two seconds. (I found that "Whole program optimization" completely does not work in the latest VS and when I disabled it everything was much faster. Plus there's the precompiled header I added a while back.)
    Delayed DLL loading is enabled. This makes it so the engine only loads DLLs when they are needed. If they aren't used by your application, they don't have to be included. If you are not making a VR game, you do not need to include the OpenVR DLL. You can create a small utility application that requires no DLLs in as little as 500 kilobytes. It was also found that the dContainers lib from Newton Dynamics is not actually needed, although the corresponding DLLs are (if your application uses physics).
    A bug in Visual Studio was found that requires all release builds to add the setting "/OPT:NOREF,NOICF,NOLBR" in the linker options:
    https://github.com/ThePhD/sol2/issues/900
    A new StringObject class derived from both the WString and Object classes is added. This allows the FileSystemWatcher to store the file path in the event source member when an event occurs. A file rename event will store the old file name in the event.extra member.
    The Entity::Pick syntax has changed slightly, removing the X and Y components of the vector projected in front of the entity. See the new documentation for details.
    The API is being finalized and the new docs system has a lot of finished C++ pages. There's a lot of new stuff documented in there like message dialogs, file and folder request dialogs, world statistics, etc. The Buffer class (which replaces the LE4 "Bank" class) is official and documented. The GUI class has been renamed to "Interface".
    Documentation has been organized by area of functionality instead of class hierarchy. It feels more intuitive to me this way.

    I've also made progress using SWIG to make a wrapper for the C# programming language, with the help of @klepto2 and @carlb. It's not ready to use yet, but the feature has gone from "unknown" to "okay, this can be done". (Although SWIG also supports Lua, I think Sol2 is better suited for this purpose.)
  24. Josh
    The Leadwerks 5 beta has been updated.
    A new FileSystemWatcher class has been added. This can be used to monitor a directory and emit events when a file is created, deleted, renamed, or overwritten. See the documentation for details and an example. Texture reloading now works correctly. I have only tested reloading textures, but other assets might work as well.
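    Usage looks roughly like this. The constructor and event names here are assumptions for illustration; the documentation has the real example:

    ```lua
    -- Sketch: reacting to changes in a watched folder (names are assumptions).
    local watcher = CreateFileSystemWatcher("Materials")
    while PeekEvent() do
        local event = WaitEvent()
        if event.source == watcher then
            -- On a rename event, the old file name is stored in event.extra.
            Print("File changed: " .. tostring(event.extra))
        end
    end
    ```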
    CopyFile() will now work with URLs as the source file path, turning it into a download command.
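    For example, this turns a file copy into a download (the URL is a placeholder):

    ```lua
    -- CopyFile with a URL source downloads the file to the destination path.
    CopyFile("https://www.example.com/logo.png", "logo.png")
    ```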
    Undocumented class methods and members not meant for end users are now made private. The goal is for 100% of public methods and members to be documented so there is nothing that appears in intellisense that you aren't allowed to use.
    Tags, key bindings, and some other experimental features are removed. I want to develop a more cohesive design for this type of stuff, not just add random ways to do things differently.
    Other miscellaneous small fixes.
  25. Josh
    An often-requested feature for terrain building commands in Leadwerks 5 is being implemented. Here is my script to create a terrain. This creates a 256 x 256 terrain with one terrain point every meter, and a maximum height of +/- 50 meters:
    --Create terrain
    local terrain = CreateTerrain(world,256,256)
    terrain:SetScale(256,100,256)
    Here is what it looks like:

    A single material layer is then added to the terrain.
    --Add a material layer
    local mtl = LoadMaterial("Materials/Dirt/dirt01.mat")
    local layerID = terrain:AddLayer(mtl)
    We don't have to do anything else to make the material appear, because by default the entire terrain is set to use the first layer, if a material is available there:

    Next we will raise a few terrain points.
    --Modify terrain height
    for x=-5,5 do
        for y=-5,5 do
            h = (1 - (math.sqrt(x*x + y*y)) / 5) * 20
            terrain:SetElevation(127 + x, 127 + y, h)
        end
    end
    And then we will update the normals for that whole section, all at once. Notice that we specify a larger grid for the normals update, because the terrain points next to the ones we modified will have their normals affected by the change in height of the neighboring points.
    --Update normals of modified and neighboring points
    terrain:UpdateNormals(127 - 6, 127 - 6, 13, 13)
    Now we have a small hill.

    Next let's add another layer and apply it to terrain points that are on the side of the hill we just created:
    --Add another layer
    mtl = LoadMaterial("Materials/Rough-rockface1.json")
    rockLayerID = terrain:AddLayer(mtl)

    --Apply layer to sides of hill
    for x=-5,5 do
        for y=-5,5 do
            slope = terrain:GetSlope(127 + x, 127 + y)
            alpha = math.min(slope / 15, 1.0)
            terrain:SetMaterial(rockLayerID, 127 + x, 127 + y, alpha)
        end
    end
    We could improve the appearance by giving it a more gradual change in the rock layer alpha, but it's okay for now.
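    One way to get that more gradual change would be to ease the alpha between two slope angles with a smoothstep instead of a hard linear ramp. This is my own sketch, not part of the example above, and the slope thresholds are arbitrary:

    ```lua
    -- Sketch: smoother rock layer blending between a minimum and maximum slope.
    function SlopeAlpha(slope, minSlope, maxSlope)
        local t = math.min(math.max((slope - minSlope) / (maxSlope - minSlope), 0.0), 1.0)
        return t * t * (3.0 - 2.0 * t) -- smoothstep easing
    end

    -- Then inside the loop:
    -- terrain:SetMaterial(rockLayerID, 127 + x, 127 + y, SlopeAlpha(slope, 10, 25))
    ```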

    This gives you an idea of the basic terrain building API in Leadwerks 5, and it will serve as the foundation for more advanced terrain features. This will be included in the next beta.