Blogs

Clustered Forward Rendering

I decided I want the voxel GI system to render direct lighting on the graphics card, so in order to make that happen I need working lights and shadows in the new renderer. Tomorrow I am going to start my implementation of clustered forward rendering to replace the deferred renderer in the next game engine. This works by dividing the camera frustum up into sectors, as shown below. A list of visible lights for each cell is sent to the GPU.

If you think about it, this is really another voxel algorithm. The whole idea of voxels is that it costs too much processing power to calculate something expensive for each pixel, so let's calculate it for a 3D grid of volumes and then grab those settings for each pixel inside the volume. In the case of real-time global illumination, we also do a linear blend between the values based on the pixel position.

Here's a diagram of a spherical point light overlapping the frustum. But if we skew the frustum so that the lines are all perpendicular, we can see this is actually a voxel problem, and it's the light that is warped in a funny way, not the frustum. I couldn't figure out how to warp the sphere exactly right, but it's something like this. For each pixel that is rendered, you transform it to the perpendicular grid above and perform lighting using only the lights that are present in that cell.

This technique seems like a no-brainer, but it would not have been possible to do when our deferred renderer first came to be. GPUs were not nearly as flexible back then as they are now, and things like a variable-length for loop would be a big no-no.

Well, something else interesting occurred to me while I was going over this. The new engine is an ambitious project, with a brand new editor to be built from scratch. That's going to take a lot of time. There's a lot of interest in the features I am working on now, and I would like to get them out sooner rather than later. It might be possible to incorporate the clustered forward renderer and voxel GI into Leadwerks Game Engine 4 (at which point I would probably call it 5) but keep the old engine architecture. This would give Leadwerks a big performance boost (not as big as the new architecture, but still probably 2-3x in some situations). The visuals would also make a giant leap forward into the future. And it might even be possible to release in time for Christmas. All the shaders would have to be modified, but if you just updated your project, everything would run in the new Leadwerks Game Engine 5 without any problem. This would need to be a paid update, probably with a new app ID on Steam. The current Workshop contents would not be accessible from the new app ID, but we have the Marketplace for that. This would also have the benefit of bringing the editor up to date with the new rendering methods, which would mean the existing editor could be used seamlessly with the new engine. We presently can't do this because the new engine and Leadwerks 4 use completely different shaders.

This could solve a lot of problems and give us a much smoother transition from here to where we want to go in the future:

- Leadwerks Game Engine 4 (deferred rendering, existing editor) [Now]
- Leadwerks Game Engine 5 (clustered forward rendering, real-time GI, PBR materials, existing architecture, existing editor) [Christmas 2018]
- Turbo Game Engine (clustered forward rendering, new architecture, new editor) [April 2020]

I just thought of this a couple hours ago, so I can't say right now for sure if we will go this route, but we will see.
No matter what, I want to get a version 4.6 out first with a few features and fixes. You can read more about clustered forward rendering in this article.
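To make the cell/light-list idea concrete, here is a minimal CPU-side sketch of binning point lights into frustum clusters. This is only an illustration under my own assumptions (the Cluster bounds are assumed to already be computed in view space, and the names are hypothetical, not the Leadwerks API):

// Minimal sketch of CPU-side light binning for clustered forward rendering.
// Assumes view-space cluster bounds have already been computed.
#include <vector>
#include <cstdint>
#include <algorithm>

struct Vec3f { float x, y, z; };
struct AABB { Vec3f min, max; };                       // view-space bounds of one cluster
struct PointLight { Vec3f position; float radius; };  // view-space light

// Squared distance from a point to an axis-aligned box.
static float DistanceSquared(const AABB& box, const Vec3f& p) {
    float d = 0.0f, v;
    v = std::max({box.min.x - p.x, 0.0f, p.x - box.max.x}); d += v * v;
    v = std::max({box.min.y - p.y, 0.0f, p.y - box.max.y}); d += v * v;
    v = std::max({box.min.z - p.z, 0.0f, p.z - box.max.z}); d += v * v;
    return d;
}

// For each cluster, collect the indices of lights whose sphere touches it.
// These per-cluster lists are what get uploaded to the GPU, where the
// fragment shader loops over only the lights in its own cluster.
std::vector<std::vector<uint32_t>> BinLights(const std::vector<AABB>& clusters,
                                             const std::vector<PointLight>& lights) {
    std::vector<std::vector<uint32_t>> lists(clusters.size());
    for (size_t c = 0; c < clusters.size(); ++c) {
        for (uint32_t i = 0; i < (uint32_t)lights.size(); ++i) {
            float r = lights[i].radius;
            if (DistanceSquared(clusters[c], lights[i].position) <= r * r)
                lists[c].push_back(i);
        }
    }
    return lists;
}

A real renderer would flatten these lists into a single index buffer plus per-cluster offsets before uploading, but the core test is just a sphere-versus-box check per cell.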

Josh


Introducing Leadwerks Marketplace

Steam Workshop was a compelling idea to allow game asset authors to sell their items for use with Leadwerks Game Engine. However, the system has turned out to have some fundamental problems, despite my best efforts to work around it. Free items are not curated, causing the store to fill with low-quality content. Some people have reported trouble downloading items. The publishing process is pretty tedious. The check-out process requires adding funds to Steam Wallet, and is just not very streamlined.

At the same time, three new technologies have emerged that make it possible to deliver a better customer and seller experience through our website. Amazon S3 offers cheap and reliable storage of massive amounts of data. PayPal credit card tokens allow us to safely store a token on our server that can't be used anywhere else, instead of a credit card number; this eliminates the danger of a potential website hack revealing your information. Invision Power Board has integrated both of these technologies into our commerce system. It would not have been possible to build a web store a few years ago because the cost of server space would have been prohibitive, and storing hundreds of gigs of data would have made my website backup process unsustainable. So at the time, the unlimited storage of Steam and their payment processing system was very appealing. That is no longer the case.

To solve the problems of Steam Workshop, and give you easy access to a large library of ready-to-use game assets, I am happy to introduce Leadwerks Marketplace. The main page shows featured assets, new content, and the most popular items, with big thumbnails everywhere. When you view an item, screenshots, videos, and detailed technical specifications are shown.

How does Leadwerks Marketplace improve things?

- Easy download of zip files that are ready to use with Leadwerks. You can use the File > Import menu item to extract them to the current project, or just unzip them yourself.
- All content is curated. Items are checked for compatibility and quality.
- Clear technical specifications for every file, so you know exactly what you are getting.
- Cheap and reliable storage forever with Amazon S3.
- Any DLCs or Workshop Store items you purchased can be downloaded from Leadwerks Marketplace by linking your Steam account to your profile.
- Easy publishing of your items with our web-based uploader.

We're launching with over 50 gigabytes of game assets, and more will be added continuously. To kick off the launch we're offering some items at major discounts during the Summer Game Tournament. Here are a few assets to check out: get "The Zone" for just $4.99, or download our Mercenary character for free! Just create your free Leadwerks account to gain access. Other items will be featured on sale during the Summer Game Tournament.

After purchasing an item, you can download it immediately. All your purchased items will be shown in the Purchases area, which you can access from the user menu in the top-right of the website header. There, all your purchased items will be available to download, forever.

If you are interested in selling your game assets to over 20,000 developers, you can upload your items now. Sellers receive a 70% royalty for each sale, with a minimum payout of just $20. See the content guidelines for details and contact me if you need any help. If you have a lot of good content, we can even convert your assets for you and make them game-ready for Leadwerks, so there's really no risk to you. Browse game assets now.

Admin


Skeletonized character!

The character has been rigged with a skeleton. With this we can start working on the animations for walking, running, and bending down.

Yue


Player Mesh

This is the player's mesh. I'm preparing it to be rigged with a skeleton, the process of assigning bones and joints for later movements such as walking, running, bending, etc.
Along the way I have used Blender to lower the polygon count of the mesh and gain some degree of optimization in the engine.

Yue


Voxel Cone Tracing Part 5 - Hardware Acceleration

I was having trouble with cone tracing and decided to first try a basic GI algorithm based on a pattern of raycasts. Here is the result: You can see this is pretty noisy, even with 25 raycasts per voxel. Cone tracing uses an average sample, which eliminates the noise problem, but it does introduce more inaccuracy into the lighting.

Next I wanted to try a more complex scene and get an estimate of performance. You may recognize the voxelized scene below as the "Sponza" scene frequently used in radiosity testing: Direct lighting takes 368 milliseconds to calculate, with a voxel size of 0.25 meters. If I cut the voxel grid down to a 64x64x64 grid then lighting takes just 75 milliseconds. These speeds are good enough for soft GI that gradually adjusts as lighting changes, but I am not sure if this will be sufficient for our purposes. I'd like to do real-time screen-independent reflections.

I thought about it, and I thought about it some more, and then when I was done with that I kept thinking about it. Here's the design I came up with: The final output is a 3D texture containing light data for all six possible directions. (So a 256x256x256 grid of voxels would actually be 1536x256x256 RGB, equal to 288 megabytes.) The lit voxel array would also be six times as big. When a pixel is rendered, three texture lookups are performed on the 3D texture and multiplied by the normal of the pixel. If the voxel is empty, there is no GI information for that volume, so maybe a higher mipmap level could be used (if mipmaps are generated in the last step).

The important thing is we only store the full-resolution voxel grid once. The downsampled voxel grid uses an alpha channel for coverage. For example, a pixel with 0.75 alpha would have six out of eight solid child voxels.

I do think voxelization is best performed on the CPU due to flexibility and the ability to cache static objects. Direct lighting, in this case, would be calculated from shadowmaps. So I have to implement the clustered forward renderer before going forward with this.
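As a rough illustration of the coverage idea (my own sketch with hypothetical types, not the engine's actual data structures), downsampling one level could look like this, with alpha storing the fraction of solid children:

// Sketch: downsample a 2x2x2 block of child voxels into one parent voxel.
// Color is the average of the solid children; alpha is the fraction of
// solid children (0.75 = six of eight solid).
struct Voxel { float r = 0, g = 0, b = 0, a = 0; bool solid = false; };

Voxel Downsample(const Voxel children[2][2][2]) {
    Voxel parent;
    int solidcount = 0;
    for (int x = 0; x < 2; ++x)
        for (int y = 0; y < 2; ++y)
            for (int z = 0; z < 2; ++z) {
                const Voxel& c = children[x][y][z];
                if (!c.solid) continue;
                parent.r += c.r; parent.g += c.g; parent.b += c.b;
                ++solidcount;
            }
    if (solidcount > 0) {
        parent.r /= solidcount; parent.g /= solidcount; parent.b /= solidcount;
        parent.solid = true;
    }
    parent.a = solidcount / 8.0f;   // coverage of this parent cell
    return parent;
}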

Josh


Free camera Chapter III

What's new with me? Well, there is always something new to learn. The thing is that programming is my hobby, and I've always compared it to filling out a crossword puzzle: it's about understanding how things work, and most of the time I find myself like an archaeologist deciphering the hieroglyphics of a strange lost civilization. The point is that I always make the mistake of assuming it's an engine bug. I guess it's the ego that blooms, rather than the humility to acknowledge that I don't understand something, starting with the fact that my mother tongue is not English, and asking for help in the forums is sometimes a bit confusing.

Perseverance has paid off, though, and I finally understand how picks (raycasts) on entities work in the engine. From the center of the character's head, a ray is cast toward the camera, and with this I can detect where it collides with the scenery. From that point I can keep the camera from passing through walls. The thing that had me struggling for days has been solved, and as always, after one challenge comes another new one.

So, here's the script that hasn't let me sleep these days. I guess I deserve a medal for this; although this is very easy for others, for me it's a great victory. Translated with www.DeepL.com/Translator

Script.Camara = nil --entity "Camara"
Script.Yue = nil --entity "Yue"
Script.Escenario = nil --entity "Escenario"

posCamara = Vec3()
posPivote = Vec3()

function Script:Start()
	self.Escenario:SetPickMode( Entity.PolygonPick )
	self.entity:SetPickMode(0)
	self.Yue:SetPickMode( 0 )
end

function Script:UpdateWorld()
	-- Camera rotation
	GiroCamara(self)

	-- Camera collision
	posCamara = self.Camara:GetPosition(true)
	posPivote = self.entity:GetPosition(true)
	if world:Pick( posPivote.x, posPivote.y, posPivote.z, posCamara.x, posCamara.y, posCamara.z, PickInfo(), 0, true ) then
		System:Print("Col OK")
	else
		System:Print("Col Not")
	end
end

pivote = Vec3()

-- Free camera rotation
function GiroCamara(self)
	--Get the mouse movement
	local sx = Math:Round(context:GetWidth()/2)
	local sy = Math:Round(context:GetHeight()/2)
	local mouseposition = window:GetMousePosition()
	local dx = mouseposition.x - sx
	local dy = mouseposition.y - sy

	--Adjust and set the camera rotation
	pivote.x = pivote.x + dy / 10.0
	pivote.y = pivote.y + dx / 10.0
	self.entity:SetRotation(pivote)

	--Move the mouse to the center of the screen
	window:SetMousePosition(sx,sy)

	--Clamp the pitch
	if pivote.x >= 45 then
		pivote.x = 45
	elseif pivote.x <= -45 then
		pivote.x = -45
	end
end

Now I hope to make a simple summer game, a forklift simulator, although I doubt I can do it, as Leadwerks' vehicle system is out of service in version 4.5.

Yue


Summer Game Tournament

Summer is here, and you know what that means! Yes, it is time for another LEGENDARY game tournament. This year the theme is "Retro Gaming". Create a modern take on an arcade game hearkening back to the days of NES, Neo Geo, Sega, or just make anything you want. Either way, you get this totally radical poster as a prize!

How does it work? For one month, the Leadwerks community builds small playable games. Some people work alone and some team up with others. At the end of the month we release our projects to the public and play each other's games. The point is to release something short and sweet with a constrained timeline, which has resulted in many odd and wonderful mini games for the community to play.

WHEN: The tournament begins Thursday, June 21, and ends on July 31st at the stroke of midnight.

HOW TO PARTICIPATE:
- Publish your retro-or-other-themed game to the Games Showcase before the deadline. You can work as a team or individually.
- Use blogs to share your work and get feedback as you build your game.
- Games must have a preview image, title, and contain some minimal amount of gameplay (there has to be some way to win the game) to be considered entries. It is expected that most entries will be simple, given the time constraints.
- This is the perfect time to try making a VR game or finish that idea you've been waiting to make!

PRIZES: All participants will receive a limited-edition 11x17" poster commemorating the event. To receive your prize you need to fill in your name, mailing address, and phone number (for customs) in your account info. At the end of the tournament we will post a roundup blog featuring your entries. Let's go!

Admin


Voxel Cone Tracing Part 4 - Direct Lighting

Now that we can voxelize models, enter them into a scene voxel tree structure, and perform raycasts, we can finally start calculating direct lighting. I implemented support for directional and point lights, and I will come back and add spotlights later. Here we see a shadow cast from a single directional light: And here are two point lights, one red and one green. Notice the distance falloff creates a color gradient across the floor:

The idea here is to first calculate direct lighting using raycasts between the light position and each voxel: Then once you have the direct lighting, you can calculate approximate global illumination by gathering a cone of samples for each voxel, which illuminates voxels not directly visible to the light source: And if we repeat this process we can simulate a second bounce, which really fills in all the hidden surfaces:

When we convert model geometry to voxels, one of the important pieces of information we lose is the normal. Without normals it is difficult to calculate damping for the direct illumination calculation. It is easy to check surrounding voxels and determine that a voxel is embedded in a floor or something, but what do we do in the situation below? The thin wall of three voxels is illuminated, which will leak light into the enclosed room. This is not good:

My solution is to calculate and store lighting for each face of each voxel.

Vec3 normal[6] = { Vec3(-1, 0, 0), Vec3(1, 0, 0), Vec3(0, -1, 0), Vec3(0, 1, 0), Vec3(0, 0, -1), Vec3(0, 0, 1) };

for (int i = 0; i < 6; ++i)
{
	float damping = max(0.0f, normal[i].Dot(lightdir)); //normal damping
	if (!isdirlight) damping *= 1.0f - min(p0.DistanceToPoint(lightpos) / light->range[1], 1.0f); //distance damping
	voxel->directlighting[i] += light->color[0] * damping;
}

This gives us lighting that looks more like the diagram below: When light samples are read, the appropriate face will be chosen and read from. In the final scene lighting on the GPU, I expect to be able to use the triangle normal to determine how much influence each sample should have. I think it will look something like this in the shader:

vec4 lighting = vec4(0.0f);
lighting += max(0.0f, dot(trinormal, vec3(-1.0f, 0.0f, 0.0f))) * texture(gimap, texcoord + vec2(0.0 / texwidth, 0.0));
lighting += max(0.0f, dot(trinormal, vec3(1.0f, 0.0f, 0.0f))) * texture(gimap, texcoord + vec2(1.0 / texwidth, 0.0));
lighting += max(0.0f, dot(trinormal, vec3(0.0f, -1.0f, 0.0f))) * texture(gimap, texcoord + vec2(2.0 / texwidth, 0.0));
lighting += max(0.0f, dot(trinormal, vec3(0.0f, 1.0f, 0.0f))) * texture(gimap, texcoord + vec2(3.0 / texwidth, 0.0));
lighting += max(0.0f, dot(trinormal, vec3(0.0f, 0.0f, -1.0f))) * texture(gimap, texcoord + vec2(4.0 / texwidth, 0.0));
lighting += max(0.0f, dot(trinormal, vec3(0.0f, 0.0f, 1.0f))) * texture(gimap, texcoord + vec2(5.0 / texwidth, 0.0));

This means that to store a 256 x 256 x 256 grid of voxels we actually need a 3D RGB texture with dimensions of 256 x 256 x 1536. This is 288 megabytes. However, with DXT1 compression I estimate that number will drop to about 64 megabytes, meaning we could have eight voxel maps cascading out around the player and still only use about 512 megabytes of video memory. This is where those new 16-core CPUs will really come in handy!

I added the lighting calculation for the normal Vec3(0,1,0) into the visual representation of our voxels and lowered the resolution.
Although this is still just direct lighting, it is starting to look interesting: The last step is to downsample the direct lighting to create what is basically a mipmap. We do this by taking the average values of each voxel node's children:

void VoxelTree::BuildMipmaps()
{
	if (level == 0) return;

	int contribs[6] = { 0 };

	for (int i = 0; i < 6; ++i)
	{
		directlighting[i] = Vec4(0);
	}

	for (int ix = 0; ix < 2; ++ix)
	{
		for (int iy = 0; iy < 2; ++iy)
		{
			for (int iz = 0; iz < 2; ++iz)
			{
				if (kids[ix][iy][iz] != nullptr)
				{
					kids[ix][iy][iz]->BuildMipmaps();
					for (int n = 0; n < 6; ++n)
					{
						directlighting[n] += kids[ix][iy][iz]->directlighting[n];
						contribs[n]++;
					}
				}
			}
		}
	}

	for (int i = 0; i < 6; ++i)
	{
		if (contribs[i] > 0) directlighting[i] /= float(contribs[i]);
	}
}

If we start with direct lighting that looks like the image below: When we downsample it one level, the result will look something like this (not exactly, but you get the idea): Next we will begin experimenting with light bounces and global illumination using a technique called cone tracing.

Josh


Voxel Cone Tracing Part 3 - Raycasting

I added a raycast function to the voxel tree class and now I can perform raycasts between any two positions. This is perfect for calculating direct lighting. Shadows are calculated by performing a raycast between the voxel position and the light position, as shown in the screenshot below. Fortunately the algorithm seems to work great and there are no gaps or cracks in the shadow: Here is the same scene using a voxel size of 10 centimeters:

If we move the light a little lower, you can see a shadow appearing near two edges of the floor: Why is this happening? Well, the problem is that at those angles, the raycast is hitting the neighboring voxel on the floor next to the voxel we are testing: You might think that if we just move one end of the ray up to the top of the voxel it will work fine, and you'd be right, in this situation. But with slightly different geometry, we have a new problem.

So how do we solve this? At any given time, a voxel can have up to three faces that face the light (but it might have as few as one). In the image below I have highlighted the two voxel faces on the right-most voxel that face the light: If we check the neighboring voxels we can see that the voxel to the left is occupied, and therefore the left face does not make a good position to test from: But the top voxel is clear, so we will test from there: If we apply the same logic to the other geometry configuration I showed, we also get a correct result. Of course, if both neighboring voxels were solid then we would not need to perform a raycast at all, because we know the light would be completely blocked at this position.

The code to do this just checks which side of a voxel the light position is on. As it is written now, up to three raycasts may be performed per voxel:

if (lightpos.x < voxel->bounds.min.x)
{
	if (GetSolid(ix - 1, iy, iz) == false)
	{
		result = IntersectsRay(p0 - Vec3(voxelsize * 0.5f, 0.0f, 0.0f), lightpos);
	}
}
if (lightpos.x > voxel->bounds.max.x and result == false)
{
	if (GetSolid(ix + 1, iy, iz) == false)
	{
		result = IntersectsRay(p0 + Vec3(voxelsize * 0.5f, 0.0f, 0.0f), lightpos);
	}
}
if (lightpos.y < voxel->bounds.min.y and result == false)
{
	if (GetSolid(ix, iy - 1, iz) == false)
	{
		result = IntersectsRay(p0 - Vec3(0.0f, voxelsize * 0.5f, 0.0f), lightpos);
	}
}
if (lightpos.y > voxel->bounds.max.y and result == false)
{
	if (GetSolid(ix, iy + 1, iz) == false)
	{
		result = IntersectsRay(p0 + Vec3(0.0f, voxelsize * 0.5f, 0.0f), lightpos);
	}
}
if (lightpos.z < voxel->bounds.min.z and result == false)
{
	if (GetSolid(ix, iy, iz - 1) == false)
	{
		result = IntersectsRay(p0 - Vec3(0.0f, 0.0f, voxelsize * 0.5f), lightpos);
	}
}
if (lightpos.z > voxel->bounds.max.z and result == false)
{
	if (GetSolid(ix, iy, iz + 1) == false)
	{
		result = IntersectsRay(p0 + Vec3(0.0f, 0.0f, voxelsize * 0.5f), lightpos);
	}
}

With this correction, the artifact disappears: It even works correctly at a lower resolution: Now our voxel raycast algorithm is complete. The next step will be to calculate direct lighting on the voxelized scene using the lights that are present.
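For readers wondering what the raycast itself might look like, a brute-force version can simply step along the segment and test occupancy at each sample. This is only a sketch under my own assumptions (fixed-step marching, a Vec3 type with basic arithmetic, and GetSolid() as a stand-in); the engine's actual IntersectsRay likely walks the octree more cleverly:

// Sketch: march from 'start' to 'end' in half-voxel steps through the grid
// and report whether any solid voxel is hit along the way.
#include <cmath>

bool SegmentHitsSolidVoxel(const Vec3& start, const Vec3& end,
                           float voxelsize, const VoxelTree& tree) {
    Vec3 delta = end - start;
    float length = delta.Length();
    int steps = int(std::ceil(length / (voxelsize * 0.5f)));
    for (int i = 1; i < steps; ++i) {                     // skip the endpoints themselves
        Vec3 p = start + delta * (float(i) / float(steps));
        int ix = int(std::floor(p.x / voxelsize));
        int iy = int(std::floor(p.y / voxelsize));
        int iz = int(std::floor(p.z / voxelsize));
        if (tree.GetSolid(ix, iy, iz)) return true;       // occluded
    }
    return false;                                         // clear line of sight
}

A production implementation would use a grid traversal (3D DDA) or descend the sparse octree instead of fixed sampling, but the idea is the same.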

Josh


Working in Free Camera Chapter II

We continue with the development of the free camera. I don't really know how I fixed it, but it was all a bit of trial and error. The character walks forward and backward, always moving in the direction the camera is facing. In the end, I had to try different methods to relate the camera pivot and camera objects.

Lua scripts:

-- Controller script
Script.Yue = nil --entity "Malla Jugador"
Script.Pivote = nil --entity "Pivote"

-- Character controller variables
velocidad = 0.0

function Script:Start()
	self.Pivote:SetPosition( self.Yue:GetPosition().x, self.Yue:GetPosition().y+1.5, self.Yue:GetPosition().z )
end

function Script:UpdatePhysics()
	self.Yue:SetPosition( self.entity:GetPosition() )
	self.Pivote:SetPosition( self.Yue:GetPosition().x, self.Yue:GetPosition().y+1.5, self.Yue:GetPosition().z )

	-- Character movement
	if window:KeyDown(Key.W) then
		velocidad = 1
		self.Yue:SetRotation(0, self.Pivote:GetRotation(true).y, 0)
	elseif window:KeyDown(Key.S) then
		velocidad = -1
	else
		velocidad = 0.0
	end

	self.entity:SetInput( self.Pivote:GetRotation(true).y, velocidad, 0 )
end

function Script:PostRender(context)
	context:SetBlendMode(Blend.Alpha)
	context:SetColor(1,0,0,1)
	context:DrawText(self.Yue:GetRotation(true).y, 2, 2)
	context:SetBlendMode(Blend.Solid)
	context:SetColor(1,1,1,1)
	context:DrawStats(2,22)
end

-- Script PivoteCamara.lua
function Script:PostRender(context)
	--Get the mouse movement
	local sx = Math:Round(context:GetWidth()/2)
	local sy = Math:Round(context:GetHeight()/2)
	local mouseposition = window:GetMousePosition()
	local dx = mouseposition.x - sx
	local dy = mouseposition.y - sy

	--Adjust and set the camera rotation
	pivote.x = pivote.x + dy / 10.0
	pivote.y = pivote.y + dx / 10.0
	self.entity:SetRotation(pivote)

	--Move the mouse to the center of the screen
	window:SetMousePosition(sx,sy)
end
 

Yue


Voxel Cone Tracing Part 2 - Sparse Octree

At this point I have successfully created a sparse octree class and can insert voxelized meshes into it. An octree is a way of subdividing space into eight blocks at each level of the tree: A sparse octree doesn't create the subnodes until they are used. For voxel data, this can save a lot of memory. It was difficult to get the rounding and all the math completely perfect (and it has to be completely perfect!), but now I have a nice voxel tree that can follow the camera around and is aligned correctly to the world axis and units.

The code that inserts a voxel is pretty interesting. A voxel tree is created with a number of levels, and the size of the structure is equal to pow(2, levels). For example, an octree with 8 levels creates a 3D grid of 256x256x256 voxels. Individual voxels are then inserted into the top-level tree node, which recursively calls the SetSolid() function until the last level is reached. A voxel is marked as "solid" simply by having a voxel node at the last level (0). (GetChild() has the effect of finding the specified child and creating it if it doesn't exist yet.) A bitwise flag is used to test which subnode should be called at this level. I didn't really work out the math, I just intuitively went with this solution and it worked as I expected:

void VoxelTree::SetSolid(const int x, const int y, const int z, const bool solid)
{
	int flag = pow(2, level);

	if (x < 0 or y < 0 or z < 0) return;
	if (x >= flag * 2 or y >= flag * 2 or z >= flag * 2) return;

	flag = pow(2, level - 1);

	int cx = 0;
	int cy = 0;
	int cz = 0;

	if ((flag & x) != 0) cx = 1;
	if ((flag & y) != 0) cy = 1;
	if ((flag & z) != 0) cz = 1;

	if (solid)
	{
		if (level > 0)
		{
			GetChild(cx, cy, cz)->SetSolid(x & ~flag, y & ~flag, z & ~flag, true);
		}
	}
	else
	{
		if (level > 0)
		{
			if (kids[cx][cy][cz] != nullptr)
			{
				kids[cx][cy][cz]->SetSolid(x & ~flag, y & ~flag, z & ~flag, false);
			}
		}
		else
		{
			//Remove self
			auto parent = this->parent.lock();
			Assert(parent->kids[position.x][position.y][position.z] == Self());
			parent->kids[position.x][position.y][position.z] = nullptr;
		}
	}
}

The voxel tree is built by adding all scene entities into the tree. From there it was easy to implement a simple raycast to see if anything was above each voxel, and color it black if another voxel is hit: And here is the same program using a higher resolution voxel tree. You can see it's not much of a stretch to implement ambient occlusion from here: At a voxel size of 0.01 meters (the first picture) the voxelization step took 19 milliseconds, so it looks like we're doing well on speed. I suspect the rest of this project will be more art than science. Stay tuned!
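As a quick usage sketch of the bit math (the constructor taking a level count is my own assumption, not the engine's actual API):

// Sketch: an 8-level tree spans pow(2, 8) = 256 voxels per axis.
// Each call peels one coordinate bit off per level. For x = 200 at the root,
// the child-selection flag is 128, so (200 & 128) != 0 selects the upper
// child (cx = 1) and the recursive call receives x & ~128 = 72.
#include <memory>

auto tree = std::make_shared<VoxelTree>(8);   // hypothetical constructor
tree->SetSolid(200, 15, 73, true);            // creates child nodes down to level 0
tree->SetSolid(200, 15, 73, false);           // removes the leaf voxel again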

Josh


Working on the free camera Chapter 1

Implementing the free camera that will follow the character. Currently the camera rotates around the character in any direction through 360 degrees and stays at a fixed distance, always looking at the character. To do this, I had to nest inside the character controller a very small mesh ("cube") that is the parent of the camera, and the movements of the mouse turn the cube and therefore the camera itself. Translated with www.DeepL.com/Translator


The script for rotating the camera is as follows:

pivote = Vec3()

function Script:Start()
	self.entity:SetScale(0.01, 0.01, 0.01)
	self.entity:SetRotation(0, 0, 90)
end

function Script:PostRender(context)
	--Get the mouse movement
	local sx = Math:Round(context:GetWidth()/2)
	local sy = Math:Round(context:GetHeight()/2)
	local mouseposition = window:GetMousePosition()
	local dx = mouseposition.x - sx
	local dy = mouseposition.y - sy

	--Adjust and set the camera rotation
	pivote.x = pivote.x + dy / 10.0
	pivote.y = pivote.y + dx / 10.0
	self.entity:SetRotation(pivote)

	--Move the mouse to the center of the screen
	window:SetMousePosition(sx,sy)
end

This script is attached to a small cube in the center of the character; the cube is what turns, and the camera, as a child of this cube, makes 360-degree turns in all directions around the character's controller.
Eventually I will have to limit the rotation on the Y-axis so it only reaches about 45 degrees in both the positive and negative directions.
That's all for today.

Yue


Bladequest - Is it the best FREE Leadwerks Game?

Hi guys, it has been very quiet around Phodex Framework and my projects, but that's because I really focused on game development and nothing else. Posting stuff and creating content costs time, and I was also very busy with studying. However, as promised in my last entry, I was working on an actual game. I tried to create a small but quality experience including decent design, nice graphics and, at least for a one-man indie game, complex game mechanics such as first-person melee and ranged combat. Whether it is the best free Leadwerks game yet is up to you. You can try it out for FREE!

About Bladequest
Bladequest is my welcome gift for you, and I really would like to use your feedback and input to improve my knowledge and skill at creating great games. To do so, I need your help! So that I can make a game you like to play, let me know what your preferences are by participating in this survey! Bladequest is meant to show you what I am able to do and what could be possible. It is currently lacking some features and content, but we can build upon it, whatever we like! Join my email list to stay in touch, share ideas and craft an epic game together. Let's do it! Before working on Bladequest I worked on a project called Dreamdale, which I decided to cancel for the moment, as I want to go towards smaller games and Dreamdale turned out to be a pretty huge project. I most likely will use some of the ideas in one way or another. If you are interested in the project, check it out here!

What next?
So after I have received enough feedback about Bladequest and about your preferences, I can sit down and come up with some ideas for cool games. I will then present them to you and we will step by step create an awesome game together. Learning from my former mistakes, I would also like to start small and slowly grow bigger, so expect a smaller and simpler but well-thought-out game. I have even thought about mobile games, although I prefer developing for PC, but I will do anything I can to deliver just the best experience.

Get Bladequest - TFC NOW!

Find out more about who I am and why I do what I do! Click here! As always, stay cool! Markus from Phodex

Development elements Game

Well, with more free time on vacation from college work, I'm looking forward to doing what I like: trying to make a video game. Although that has never really been the goal, the idea is to improve my skills given the little I know about the design and development of projects, in this case computer games. Now, as I'm just an amateur, it's worth noting that I'm only here repeating the same things over and over again, but at the end of all this repetition you acquire an air of expertise, since you could say I understand some things.
So far the models are taken from the internet, especially from the Yobi3D site, so my job with those models is to adapt them to work properly with the engine, give them a skeleton, and animate the character. The same goes for textures; all credit to the third parties who put their work on the Internet and in a way make our lives much easier.
That said, I love Leadwerks; it makes life a lot easier. As I mentioned, I'm just an amateur, and a few years ago I created a prototype of the forklift in Blitz3D, and back then it was a real pain in the ***, but with Leadwerks everything is much, much easier. I guess what I learned with that old engine has helped me a lot, no? The test or development scene has the objective of scaling the character and elements such as the forklift, pallets, boxes, etc. to the correct height of the character controller. An image can say a lot more than what I'm talking about, but the main idea is, in the development scene, to implement the character in 3D, the physics of the vehicle ("someday, in Leadwerks 4.9..."), and to create a menu, a HUD system, a loading bar, a scoring system, etc. Translated with www.DeepL.com/Translator


     

Yue


Development Scenario

A simple development scene, where the forklift vehicle system, a third-person character, and test elements for the game will be implemented.

Yue


PROJECT LAUNCH

The project is officially going to start soon... 15.06.2018. JOIN US NOW! ©2018 Gamehawk. Visit us: www.gamehawk.de

Gamehawk


What Makes a Good Brand Name?

In evaluating possible company names I have come up with the following criteria, which I used to choose a name for our new game engine.

Spelling and Pronunciation
The name should be unambiguous in spelling. This helps word-of-mouth promotion because when someone hears the name for the first time, they can easily find it online. Similarly, the name when read should be unambiguous in pronunciation. This helps the name travel from written to spoken word and back. Can you imagine telling someone else the name of this...establishment...and having them successfully type the name into a web browser?

Shorter is Better
Everything else aside, fewer letters is generally better. Here is a very long company name: And here is perhaps the shortest software company name in history. Which do you think is better?

The Name Should "Pop"
A good company or product name will use hard consonants like B, T, K, and X, and avoid soft-sounding letters like S and F. The way a name sounds can actually influence perception of the brand, aside from the name's meaning. The name "Elysium", besides being hard to pronounce and spell, is full of soft consonants that sound weak. "Blade Runner", on the other hand, starts with a hard B sound and it just sounds good.

Communicate Meaning
The name should communicate the nature of the product or company. The name "Uber" doesn't mean anything except "better", which is why the company Uber originally launched as UberCab. Once they got to a certain size it was okay to drop the "cab" suffix, but do you remember the first time you heard of them? You probably thought "what the heck is an Uber?"

The Leadwerks Brand
So according to our criteria above, the name Leadwerks satisfies the following conditions:
- The name "pops" and sounds cool.
- It's not too long.
But here's where it falls short:
- Ambiguity in spelling (Leadworks?).
- Ambiguity in pronunciation. Leadwerks is pronounced like Led Zeppelin, but many people read it as "Leed-works".
- The name doesn't mean anything, even if it sounds cool. It's just a made-up word.
These are the reasons I started thinking about naming the new engine something different.

New Engine, New Name
So with this in mind, I set out to find a new name for the coming engine. I was stumped until I realized that there are only so many words in the English language, and any good name you come up with will invariably have been used previously in some other context, hopefully in another industry or product type. Realizing this gave me more leeway, as I did not have to come up with something completely unique that the world has never heard before. Our early benchmarks indicate the new engine is a performance monster, with incredible results I did not even dream were possible. Together with the rapid development pipeline of Leadwerks, I knew I wanted to focus on speed. Finally, there was one name I kept coming back to for weeks on end. I was able to obtain a suitable domain name. I am now filing a trademark for use of this name, which requires that I begin using it commercially, which is why I am now revealing the name for the first time...

Keep scrolling.

How does this name stack up?
- Unambiguous spelling and pronunciation.
- It's short.
- The name "pops".
- It communicates the defining feature of the product.

Now think about our goals for the new engine's name. Will people have any trouble remembering this name? Is there any ambiguity about what the product stands for, and the promise it makes? If two developers are at a Meetup group and one of them says "I made this with Turbo", is there any doubt what the promise of this product is, i.e. massive performance? The name even works on a subconscious level. Anyone having trouble with their game performance (in other slow engines that aren't Turbo) will naturally wonder how fast it could be running in ours. The fact that the name has a positive emotional response for many people and a strong connection to the game industry is a plus. Turbo Game Engine is an unambiguous brand name that takes a stand and makes a clear promise of one thing: speed, which is incredibly important in the days of VR and 240 Hz screens.

Josh


Voxel Cone Tracing

I've begun working on an implementation of voxel cone tracing for global illumination. This technique could potentially offer a way to perform real-time indirect lighting on the entire scene, as well as real-time reflections that don't depend on having the reflected surface onscreen, as screen-space reflection does. I plan to perform the GI calculations all on a background CPU thread, compress the resulting textures using DXTC, and upload them to the GPU as they are completed. This means the cost of GI should be quite low, although there is going to be some latency in the time it takes for the indirect lighting to match changes to the scene. We might continue to use SSR for detailed reflections and only use GI for semi-static light bounces, or it might be fast enough for moving real-time reflections. The GPU-based implementations I have seen of this technique are technically impressive but suffer from terrible performance, and we want something fast enough to run in VR.

The first step is to be able to voxelize models. The result of the voxelization operation is a bunch of points. These can be fed into a geometry shader that generates a box around each one:

void main()
{
	vec4 points[8];
	points[0] = projectioncameramatrix[0] * (geometry_position[0] + vec4(-0.5f * voxelsize.x, -0.5f * voxelsize.y, -0.5f * voxelsize.z, 0.0f));
	points[1] = projectioncameramatrix[0] * (geometry_position[0] + vec4(0.5f * voxelsize.x, -0.5f * voxelsize.y, -0.5f * voxelsize.z, 0.0f));
	points[2] = projectioncameramatrix[0] * (geometry_position[0] + vec4(0.5f * voxelsize.x, 0.5f * voxelsize.y, -0.5f * voxelsize.z, 0.0f));
	points[3] = projectioncameramatrix[0] * (geometry_position[0] + vec4(-0.5f * voxelsize.x, 0.5f * voxelsize.y, -0.5f * voxelsize.z, 0.0f));
	points[4] = projectioncameramatrix[0] * (geometry_position[0] + vec4(-0.5f * voxelsize.x, -0.5f * voxelsize.y, 0.5f * voxelsize.z, 0.0f));
	points[5] = projectioncameramatrix[0] * (geometry_position[0] + vec4(0.5f * voxelsize.x, -0.5f * voxelsize.y, 0.5f * voxelsize.z, 0.0f));
	points[6] = projectioncameramatrix[0] * (geometry_position[0] + vec4(0.5f * voxelsize.x, 0.5f * voxelsize.y, 0.5f * voxelsize.z, 0.0f));
	points[7] = projectioncameramatrix[0] * (geometry_position[0] + vec4(-0.5f * voxelsize.x, 0.5f * voxelsize.y, 0.5f * voxelsize.z, 0.0f));

	vec3 normals[6];
	normals[0] = (vec3(-1,0,0));
	normals[1] = (vec3(1,0,0));
	normals[2] = (vec3(0,-1,0));
	normals[3] = (vec3(0,1,0));
	normals[4] = (vec3(0,0,-1));
	normals[5] = (vec3(0,0,1));

	//Left
	geometry_normal = normals[0];
	gl_Position = points[0]; EmitVertex();
	gl_Position = points[4]; EmitVertex();
	gl_Position = points[3]; EmitVertex();
	gl_Position = points[7]; EmitVertex();
	EndPrimitive();

	//Right
	geometry_normal = normals[1];
	gl_Position = points[1]; EmitVertex();
	gl_Position = points[2]; EmitVertex();
	...
}

Here's a goblin whose polygons have been turned into Lego blocks. Now the thing most folks nowadays don't realize is that if you can voxelize a goblin, well then you can voxelize darn near anything. Global illumination will then be calculated on the voxels and fed to the GPU as a 3D texture. It's pretty complicated stuff but I am very excited to be working on this right now. If this works, then I think environment probes are going to completely go away forever. SSR might continue to be used as a low-latency high-resolution first choice when those pixels are available onscreen. We will see.
It is also interesting that the whole second-pass reflective water technique will probably go away as well, since this technique should be able to handle water reflections just like any other material.
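For anyone curious how the voxelization step itself might work, here is a rough CPU-side sketch that marks cells by sampling points on each triangle's surface. It is purely illustrative under my own assumptions (standalone Vec3 and VoxelSet types, a grid aligned to the world origin), not the engine's actual voxelizer:

// Sketch: triangle voxelization by surface sampling. Each triangle is
// sampled at roughly half-voxel spacing using barycentric coordinates,
// and every sample marks its containing cell as solid.
#include <cmath>
#include <set>
#include <tuple>
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float Length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

using VoxelSet = std::set<std::tuple<int, int, int>>;

void VoxelizeTriangle(const Vec3& a, const Vec3& b, const Vec3& c,
                      float voxelsize, VoxelSet& solid) {
    // Choose a sample count from the longest edge, at ~2 samples per voxel.
    float longest = std::max({Length(Sub(b, a)), Length(Sub(c, a)), Length(Sub(c, b))});
    int n = std::max(1, int(std::ceil(longest / voxelsize)) * 2);

    for (int i = 0; i <= n; ++i) {
        for (int j = 0; j <= n - i; ++j) {
            // Barycentric sample point on the triangle surface.
            float u = float(i) / n, v = float(j) / n, w = 1.0f - u - v;
            Vec3 p = {a.x * w + b.x * u + c.x * v,
                      a.y * w + b.y * u + c.y * v,
                      a.z * w + b.z * u + c.z * v};
            solid.insert({int(std::floor(p.x / voxelsize)),
                          int(std::floor(p.y / voxelsize)),
                          int(std::floor(p.z / voxelsize))});
        }
    }
}

The resulting set of occupied cells is the "bunch of points" that would then be fed to a geometry shader like the one above.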

Josh


Three improvements I made to Leadwerks Game Engine 5 today

First, I was experiencing some crashes due to race conditions. These are very, very bad, and very hard to track down. The problems were being caused by reuse of objects returned from threads. Basically, a thread performs some tasks, returns an object with all the processed data, and then once the parent thread is done with that data it is returned to a pool of objects available for the thread to use. This is pretty complicated, and I found that when I switched to just creating a new return object each time the thread runs, the speed was the same as before. So the system is nice and stable now. I tend to be very careful about sharing data between threads and only doing it in a prescribed manner (through a command buffer and using separate objects), and I will continue to use this approach.

Second, I added a built-in mouselook mode for cameras. You can call Camera::SetFreeLook(true) and get automatic mouse controls that make the camera look around. I am not doing this to make things easier, I am doing it because it allows fast, snappy mouse looking even if your game is running at a lower frequency. So you can run your game at 30 hz, giving you 33 milliseconds for all your game code to complete, but it will feel like 60+ hz because the mouse will update in the rendering thread, which is running at a faster speed. The same idea will be used to eliminate head movement latency in VR.

Finally, I switched the instance indexes that are uploaded to the GPU from integers to 16-bit unsigned shorts. You can still have up to 131072 instances of a single object, because the engine will store instances above and below 65536 in two separate batches, and then send an integer to the shader to add to the instance index. Again, this is an example of a hard limit I am putting in place in order to make a more structured and faster-performing engine, but it seems like the constraints I am setting so far are unlikely to even be noticed.

Animation is working great, and performance is just as fast as before I started adding it, so things are looking good. Here's a funny picture of me trying to add things to the renderer to slow it down and failing: I'm not sure what I will tackle next. I could work on threading the physics and AI, spend some time exploring new graphics options, or implement lighting so that we have a basic usable version of Leadwerks 5 for indoor games. What would you like to see next in the Leadwerks Game Engine 5 Alpha?
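As a rough sketch of the 16-bit instance index idea (hypothetical names and layout, not the engine's actual API), instances can be split into batches of 65536, with a per-batch base offset added back on the GPU:

// Sketch: split instance indices into batches so each index fits in a
// 16-bit unsigned short, with a per-batch base offset sent as an integer.
#include <cstdint>
#include <vector>
#include <algorithm>

struct InstanceBatch {
    uint32_t baseindex = 0;              // uploaded as an integer uniform
    std::vector<uint16_t> indices;       // uploaded as 16-bit data
};

std::vector<InstanceBatch> BuildBatches(uint32_t instancecount) {
    std::vector<InstanceBatch> batches;
    for (uint32_t first = 0; first < instancecount; first += 65536) {
        InstanceBatch batch;
        batch.baseindex = first;
        uint32_t last = std::min(first + 65536u, instancecount);
        for (uint32_t i = first; i < last; ++i)
            batch.indices.push_back(uint16_t(i - first));   // 0..65535 within the batch
        batches.push_back(std::move(batch));
    }
    return batches;   // the shader would use baseindex + index to recover the full ID
}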

Josh


Threaded Animation

The animation update routine has been moved into its own thread, where it runs in the background as you perform your game logic. We can see in the screenshot below that animation updates for 1025 characters take about 20 milliseconds on average. (Intel graphics below, otherwise it would be 1000 FPS, lol.) In Leadwerks 4 this would automatically mean that your max framerate would be 50 FPS, assuming nothing else in the game loop took any time at all. Because of the asynchronous threaded design of Leadwerks 5, this otherwise expensive operation has no impact whatsoever on framerate! The GPU is being utilized by a good amount (96%) while the CPU usage is actually quite low at 10%, even though there are four threads running.

Although the performance here is within an acceptable limit for a game running with a 30 hz loop (it's under 33 milliseconds), it would be too slow for a 60 hz game. (Note that game frequency and framerate are two different things.) In order to get the animation time under the 16.667 milliseconds that a 60 hz game allows, we can split the animation job up onto several different threads. This job is very easily parallelized, so the animation time is just the single-threaded time divided by the number of threads. We can't make our game run faster by adding more threads unnecessarily; all we have to do is make sure the job is completed within the allocated amount of time so the engine keeps running at the correct speed. When I split the task into two threads, the average update time is about 10 milliseconds, and CPU usage only goes up 2%. Splitting the task into 16 threads brings the average time down to 1-2 milliseconds, and CPU usage is still only at 15%.

What does this mean? Well, it seems each thread is spending a lot of time paused (intentionally) and we haven't begun to scratch the surface of CPU utilization. So I will do my best to keep the CPU clear for all your game code, and at the same time Leadwerks Game Engine 5 will be using A LOT of threads in the background for animation, physics, navigation, and rendering. The performance we're seeing with this system is absolutely incredible, beyond anything I imagined it would be when I started building an engine specifically designed for modern PC hardware. We're seeing results that are 10 times faster than anything else out there. In fact, here are 10,000 animated characters running at 200+ FPS on a GeForce 1070 with no LOD or any special optimization tricks. (Thanks to @AggrorJorn for the screen capture.) It remains to be seen how performance is when lights, physics, and AI are added, but so far it looks extremely good. In fact, for anyone making an RTS game with lots of characters, Leadwerks 5 may be the only reasonable choice due to the insane performance it gets!
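A minimal sketch of splitting a per-character update across worker threads; std::thread and the Character/UpdateAnimation names are purely illustrative stand-ins, and the engine's real job system may work differently:

// Sketch: divide N independent character updates across T worker threads
// and wait for completion. Wall-clock time is roughly the single-threaded
// time divided by the thread count, since there is no shared state.
#include <thread>
#include <vector>

void UpdateAnimations(std::vector<Character*>& characters, int threadcount) {
    std::vector<std::thread> workers;
    size_t count = characters.size();
    for (int t = 0; t < threadcount; ++t) {
        size_t first = count * t / threadcount;
        size_t last = count * (t + 1) / threadcount;
        workers.emplace_back([&characters, first, last]() {
            for (size_t i = first; i < last; ++i)
                characters[i]->UpdateAnimation();   // hypothetical per-character routine
        });
    }
    for (auto& w : workers) w.join();
}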

Josh


Three Types of Optimization

In designing the new engine, I have found that there are three distinct types of optimization.

Streamlining
This is refinement. You make small changes and try to gain a small amount of performance. Typically, this is done as a last step before releasing code. The process can be ongoing, but suffers from diminishing returns after a while. When you eliminate unnecessary math based on guaranteed assumptions you are streamlining code. For example, a 4x4 matrix multiplication can skip the calculations that fill the right-most column if the matrices are guaranteed to be orthogonal (non-sheared); a minimal sketch of this follows the section.

Quality Degradation
This is when you downgrade the quality of your results within a certain tolerable level where it won't be noticed much. An example of this is using a low-resolution copy of a model when it is far away from the camera. Quality degradation can be pretty arbitrary, and can mask your true performance, so it's best to keep an option to disable this.

Architectural
By designing algorithms in a way that makes maximum use of hardware and produces the most optimal results, we can greatly increase performance. Architectural optimization produces groundbreaking changes that can be ten or 100 times faster than the old architecture. An example of this is GPU hardware, which produces a massive performance increase over software rendering. We're seeing a lot of these types of improvements in Leadwerks Game Engine 5 because the entire system is being designed to make maximum use of modern graphics hardware.
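Here is the kind of thing the streamlining example refers to; a minimal sketch assuming a hypothetical row-major Mat4 type whose right-most column is always (0, 0, 0, 1) for non-sheared (affine) transforms:

// Sketch: multiply two 4x4 transforms while skipping the right-most column,
// which is known to be (0, 0, 0, 1) for affine matrices in this convention.
struct Mat4 { float m[4][4]; };

Mat4 MultiplyAffine(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 3; ++j) {          // only columns 0..2 are computed
            r.m[i][j] = a.m[i][0] * b.m[0][j]
                      + a.m[i][1] * b.m[1][j]
                      + a.m[i][2] * b.m[2][j]
                      + a.m[i][3] * b.m[3][j];
        }
        r.m[i][3] = (i == 3) ? 1.0f : 0.0f;    // right-most column is known in advance
    }
    return r;                                  // 12 dot products instead of 16
}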

Josh

