
Blogs

This is what YOUR feedback reveals!

Hi guys! Just wanted to share the first video of the DevTalk series with you, where I keep you up to date about development, upcoming features, etc. It's also a very cool way to communicate with you, so make sure to subscribe and comment to tell me your ideas. I am working very, very hard and there is a lot of awesome stuff coming soon, so stay tuned! Markus from Phodex Games! Give Feedback for Bladequest!
My Youtube Channel!
Your Games Wishlist so I can craft a game to your needs!

Phodex Games


Simple Car

Here is a way of making a simple car based on hinges. This is not a tutorial, just a starting point for those who want to play around with physics and hinges. I included the entire project and the distribution executable to test the scripts, so you have, as I say, a starting point and just that. Hope it helps someone.

This is the editor view and the initial placement of the parts needed. Basically I made 3 scripts:

physicsProperties.lua
wheel.lua
steer.lua

First you have to create and place the car body or chassis, 4 wheels, and 6 auxiliary pivots (or any brush you like; a cube is fine) for the hinges. 4 of the auxiliary entities are for the wheel hinges and 2 for the wheel steering. Place the wheel hinge and steer centered with the wheel. After that you may set some script parameters.

Wheel script: Basically, the position of the entity holding the script is used to create a hinge between the "parent" and the "child" you choose (in the picture above: between the auxiliary entity SteerFL and the WheelFL). If "velControl" is checked, a motor is enabled for that hinge and the Up/Down keys are used to increase/decrease speed. If "velControl" is not checked, no motor is enabled and the wheel is free to spin. Be careful with the Hinge Pin, which dictates the axis the wheel will rotate around. In this case I used the X axis, but if your pieces use a different direction/alignment you should adjust these values.

Steer script: The steer hinge is used to turn the wheel to control the car's heading, so the pin is the Y axis. Limits and Motor are needed to control the steering. Limits determine how far the steering will turn right/left using the default left/right arrow keys. When you press the right key, the right limit will be set as the hinge angle and the hinge will try to reach this angle at the "steer speed"; the same happens with the left limit if you press the left key.
physicsProperties just lets you adjust the friction of the wheels and/or the floor:

Script.sfrict=0--float "static friction"
Script.kfrict=0--float "kinetic friction"

function Script:Start()
	System:Print("phy properties start")
	self.entity:SetFriction(self.sfrict, self.kfrict)
end

So simple, and in the editor it looks like this. Here is a hand drawing of how the scripts, objects, and parent/child relationships are connected.

Here is the wheel script:

Script.currspeed = 0
Script.parent = nil--Entity "Parent"
Script.child = nil--Entity "Child"
Script.pin = Vec3(0,0,1) --Vec3 "Hinge Pin"
Script.motorspeed=500--float "Motor speed"
Script.velcontrolled=false--bool "velControl"

function Script:Start()
	System:Print("wheel start")
	self.entity:Hide()
	local pos = self.entity:GetPosition(false) --true for global
	if self.child ~= nil then
		if self.parent ~= nil then
			self.joint = Joint:Hinge(pos.x, pos.y, pos.z, self.pin.x, self.pin.y, self.pin.z, self.child, self.parent)
		else
			Debug:Error("no parent assigned")
		end
	else
		Debug:Error("no child assigned")
	end
end

function Script:setMotorSpeed(speed)
	if self.velcontrolled then
		System:Print("setMotorSpeed: "..speed)
		self.currspeed = speed
		if speed~=0 then self.joint:EnableMotor() end
		self.joint:SetMotorSpeed(self.currspeed)
	end
end

function Script:UpdateWorld()
	if self.motorspeed>0 then
		self.joint:SetAngle(self.joint:GetAngle()+100)
	else
		self.joint:SetAngle(self.joint:GetAngle()-100)
	end
	if App.window:KeyDown(Key.Space) then self:setMotorSpeed(0) end
	if self.velcontrolled then
		if App.window:KeyDown(Key.Up) then
			self.currspeed = self.currspeed + 10
			if self.currspeed>self.motorspeed then self.currspeed=self.motorspeed end
			if self.currspeed == 10 then self.joint:EnableMotor() end
			self.joint:SetMotorSpeed(self.currspeed)
		end
		if App.window:KeyDown(Key.Down) then
			self.currspeed = self.currspeed - 10
			if self.currspeed<-self.motorspeed then self.currspeed=-self.motorspeed end
			self.joint:SetMotorSpeed(self.currspeed)
		end
	end
end

Here is the steer script:

Script.parent = nil--Entity "Parent"
Script.child = nil--Entity "Child"
Script.pin = Vec3(0,1,0) --Vec3 "Hinge Pin"
Script.useLimits=false--bool "use limits"
Script.limits = Vec2(-45,45) --Vec2 "Limits"
Script.useMotor=false--bool "use motor"
Script.motorspeed=50--float "Steer speed"

function Script:Start()
	System:Print("steer start")
	if self.child == nil then Debug:Error("No child assigned.") end
	if self.parent == nil then Debug:Error("No parent assigned.") end
	self.entity:Hide()
	local pos = self.entity:GetPosition()
	self.joint = Joint:Hinge(pos.x, pos.y, pos.z, self.pin.x, self.pin.y, self.pin.z, self.child, self.parent)
	if self.useLimits then
		self.joint:EnableLimits()
		self.joint:SetLimits(self.limits.x,self.limits.y)
	end
	if self.useMotor then
		self.joint:SetMotorSpeed(self.motorspeed)
		self.joint:EnableMotor()
	end
end

function Script:UpdateWorld()
	local direction=0
	if App.window:KeyDown(Key.Left) then direction=self.limits.x end
	if App.window:KeyDown(Key.Right) then direction=self.limits.y end
	self.joint:SetAngle(direction)
end

Here is the distro: simpleCarDistro.zip
Here is the project: simpleCar.zip
And here is a video...

Enjoy,
Juan

Charrua


Clustered Forward Rendering - Fun with Light Types

By modifying the spotlight cone attenuation equation, I created an area light with shadows. And here is a working box light. The difference is that the box light uses orthographic projection and doesn't have any fading on the edges, since these are only meant to shine into windows. If I scale the box light up and place it up in the sky, it kind of looks like a directional light. And it kind of is, except a directional light would either use 3-4 different box lights set at radiating distances from the camera position (cascaded shadow maps) or maybe something different. We now have a system that can handle a large number of different lights, so I can arrange a bunch of box lights any way I want to cover the ground and make good use of the available texels. Here I have created three box lights which are lighting the entire courtyard with good resolution. My idea is to create something like the image on the right. It may not look more efficient, but in reality the majority of pixels in cascaded shadow maps are wasted space, because the FOV is typically between 70-90 degrees and the stages have to be square. This would also allow the directional light to act more like a point or spot light. Only areas of the scene that move have to be updated, instead of drawing the whole scene three extra times every frame. This would also allow the engine to skip areas that don't have any shadow casters in them, like a big empty terrain (when terrain shadows are disabled, at least). Spot and area lights are just the same basic formula: a 2D shadow map rendered from a point in space with some direction. I am trying to make a generic texture coordinate calculation by multiplying the global pixel position by the shadow map projection matrix times the inverse light matrix, but so far everything I have tried is failing.
If I can get that working, then the light calculation in the shader will only have two possible light types, one for pointlights which use a cube shadowmap lookup, and another branch for lights that use a 2D shadowmap.

Josh


How to Request a Payout from Leadwerks Marketplace

Some of you are earning money selling your game assets in Leadwerks Marketplace. This quick article will show you how to request a payout from the store for money you have earned. First, you need to be signed into your Leadwerks account. Click the drop-down user menu in the upper right corner of the website header and click on the link that says "Account Balance". On the next page you can see your account balance. As long as it is $20 or more you can withdraw the balance into your PayPal account by hitting the "Withdraw Funds" button. Now just enter your PayPal email address and press the "Withdraw" button. After that the withdrawal will be deducted from your balance and the withdrawal request will show in your account history. Shortly after that you will receive the funds in your PayPal account. You can sell your game assets in Leadwerks Marketplace and earn a 70% commission on each transaction.

Josh


Clustered Forward Rendering - Multiple Light Types

I added spotlights to the forward clustered renderer. It's nothing too special, but it does demonstrate multiple light types working within a single pass. I've got all the cluster data and the light index list packed into one texture buffer now. GPU data needs to be aligned to 16 bytes because everything is built around vec4 data. Consequently, some of the code that handles this stuff is really complicated. Here's a sample of some of the code that packs all this data into an array:

for (auto it = occupiedcells.begin(); it != occupiedcells.end(); it++)
{
	pos = it->first;
	visibilityset->lightgrid[pos.z + pos.y * visibilityset->lightgridsize.x + pos.x * visibilityset->lightgridsize.y * visibilityset->lightgridsize.x] = visibilityset->lightgrid.size() / 4 + 1;
	Assert((visibilityset->lightgrid.size() % 4) == 0);
	for (int n = 0; n < 4; ++n)
	{
		visibilityset->lightgrid.push_back(it->second.lights[n].size());
	}
	for (int n = 0; n < 4; ++n)
	{
		if (!it->second.lights[n].empty())
		{
			visibilityset->lightgrid.insert(visibilityset->lightgrid.end(), it->second.lights[n].begin(), it->second.lights[n].end());
			//Add padding to make data aligned to 16 bytes
			int remainder = 4 - (it->second.lights[n].size() % 4);
			for (int i = 0; i < remainder; ++i)
			{
				visibilityset->lightgrid.push_back(0);
			}
			Assert((visibilityset->lightgrid.size() % 4) == 0);
		}
	}
}

And the shader is just as tricky:

//------------------------------------------------------------------------------------------
// Point Lights
//------------------------------------------------------------------------------------------

countlights = lightcount[0];
int lightgroups = countlights / 4;
if (lightgroups * 4 < countlights) lightgroups++;
int renderedlights = 0;
for (n = 0; n < lightgroups; ++n)
{
	lightindices = texelFetch(texture11, lightlistpos + n);
	for (i = 0; i < 4; ++i)
	{
		if (renderedlights == countlights) break;
		renderedlights++;
		lightindex = lightindices[i];
		...

I plan to add boxlights next.
These use orthographic projection (unlike spotlights, which use perspective) and they have a boundary defined by a bounding box, with no edge softening. They have one purpose, and one purpose only: you can place them over windows for indoor scenes, so you can have light coming in a straight line, without using an expensive directional light. (The developer who made the screenshot below used spotlights, which is why the sunlight is spreading out slightly.) I am considering doing away with cascaded shadow maps entirely and using an array of box lights that automatically rearrange around the camera, or a combination of static and per-object shadows. I hope to find another breakthrough with the directional lights and do something really special. For some reason I keep thinking about the outdoor scenery in the game RAGE, and while I don't think id's M-M-MEGATEXTURES!!! are the answer, CSMs seem like an incredibly inefficient way to distribute texels and I hope to come up with something better.

Other stuff I am considering:

Colored shadows (that are easy to use).
Volumetric lights, either using a light mesh, similar to the way lights work in the deferred renderer, or maybe a full-screen post-processing effect that traces a ray out per pixel and calculates lighting at each step.
Area lights (easy to add, but there are a lot of possibilities to decide on). These might be totally unnecessary if the GI system is able to do this, so I'm not sure.
IES lighting profiles.

I really want to find a way to render realistic light refraction, but I can't think of any way to do it other than ray-tracing. It is possible the voxel GI system might be able to handle something of this nature, but I think the resolution will be pretty low. We'll see.
So I think what I will do is add the boxlights, shader includes, diffuse and normal maps, bug test everything, make sure map loading works, and then upload a new build so that subscribers can try out their own maps in the beta and see what the speed difference is.

Josh


Multiple Shadows

Texture arrays are a feature that allows you to pack multiple textures into a single one, as long as they all use the same format and size. Under the hood, this is just a convenience feature that packs all the textures into a single 3D texture. It allows things like cubemap lookups with a 3D texture, but the implementation is sort of inconsistent; honestly, it would be much better if we were just given 1000 texture units to use. However, texture arrays can be used to pack all scene shadow maps into a single texture so that they can be rendered in a single pass with the clustered forward renderer. The results are great and the speed is very fast. However, there are some limitations. I said early on that my top priority with the design of the new renderer is speed. That means I will make decisions that favor speed over flexibility, and here is a situation where we see that in action. All scene shadow maps need to be packed into a single array texture of fixed size, which means there is a hard upper limit on the total number of shadow-casting lights in the world. I've also discovered that my beautiful variance shadow maps use a ton of memory. At maximum quality they use an RGBA 32-bit floating-point format, which means a single 1024x1024 cubemap consumes 96 megabytes! (A standard shadow map at the same resolution uses 24 megabytes of VRAM.) Because all shadows are packed into a single texture, the driver can't even page the data in and out of video memory; if you don't have enough VRAM, you will get an OUT_OF_MEMORY error. So anticipating and handling this issue will be important. Hopefully I can just use appropriate defaults. I think I can cut the size of the VSMs down to 25%, but without the beautiful shadow scattering effect. Because the textures all have to be the same size, it is also impossible to set just one light to use higher resolution settings. If you want speed, I have to build more constraints into the engine. This is the kind of thing I was talking about.
I want great graphics and the absolute fastest performance, so that is what I am doing. Okay, so with all that information and disclaimers out of the way, I give you the first shot showing multiple lights being rendered with shadows in a single pass in our new forward renderer. Here are three lights: And here I lowered the shadow map resolution and added 50 randomly placed lights. There are some artifacts and glitches, but it's still a pretty cool shot. All running in real-time, in a single pass: Keep in mind this is all before any indirect lighting has been added. The future looks bright!

Josh


Multisampled Shadowmaps

Because variance shadow maps store pre-blurred shadow data, they also allow us to take advantage of multisampled textures. MSAA is a technique that renders extra samples around the target pixel and averages the results. This can help bring out fine lines that are smaller than a pixel onscreen, and it also greatly reduces jagged edges. I wanted to see how well this would work for rendering shadow maps, and whether I could reduce the ragged edge appearance that shadow maps are sometimes prone to. Below is the shadow rendered at 1024x1024 with no multisampling and a 3x3 blur: Using a 4X MSAA texture eliminates the appearance of jagged edges in the shadow: Here they are side by side: This is very exciting stuff, because we are challenging some of the long-held limitations of real-time graphics.

Josh


Entry progress

Making a multiplayer game about territory conquest, like a board game. The winner will be the player with the most territory. Something like Risk, but with no dice rolls and simpler.

So far I have:

A socket server (written in Go). This has the territories stored in a DB.
A text communication protocol (using JSON). For example, this is a register packet: {"t":0, "p":{"name":"test"}}
A game client that can select territories (using the C++ POCO network libraries).
SQLite3 for game data. I really like this wrapper (makes things less verbose): https://github.com/SqliteModernCpp/sqlite_modern_cpp

Working on implementing the gameplay server-side. Here is the main menu, trying to stay in line with the retro theme.

aiaf


Realistic Penumbras

Shadows with a constant softness along their edges have always bugged me. Real shadows look like this. Notice the shadow becomes softer the further away it gets from the door frame. Here is a mockup of roughly what that shadow looks like with a constant softness around it. It looks so fake! How does this effect happen? There's not really any such thing as a light that emits entirely from a single point. The closest thing would be a very small bulb, but even that has volume. Because of this, shadows have a soft edge that gets less sharp the further it is from the occluding object. I think some of this also has to do with photons hitting the edge of the object and scattering a bit as they go past it; the edge of the object catches the photon and knocks it off course. We have some customers who need very realistic renderings, ideally as close to a photo as possible, and I wanted to see if I could create this behavior with our variance shadow maps. Here are the results: The shadows are sharp when they start being cast and become more blurry as the light is scattered. Here's another shot. The shadows actually look real instead of just being blobby silhouettes. This is really turning out great!

Josh


Variance Shadow Maps

After a couple days of work I got point light shadows working in the new clustered forward renderer. This time around I wanted to see if I could get a more natural look for shadow edges, as well as reduce or eliminate shadow acne. Shadow acne is an effect that occurs when the resolution of the shadow map is too low and incorrect depth comparisons start being made with the lit pixels: By default, any shadow mapping algorithm will look like this, because not every pixel onscreen has an exact match in the shadow map when the depth comparison is made: We can add an offset to the shadow depth value to eliminate this artifact: However, this can push the shadow back too far, and it's hard to come up with values that cover all cases. This is especially problematic with point lights that are placed very close to a wall. This is why the editor allows you to adjust the light range of each light on an individual basis. I came across a technique called variance shadow mapping. I'd seen this paper years ago, but never took the time to implement it because it just wasn't a big priority. It works by writing the depth and depth-squared values into a GL_RG texture (I use 32-bit floating points). The resulting image is then blurred, and the variance of the values can be calculated from the average squared depth stored in the green channel. Then we use Chebyshev's inequality to get an average shadow value. So it turns out statistics is actually good for something useful after all. Here are the results: The shadow edges are actually soft, without any graininess or pixelation. There is a black border on the edge of the cubemap faces, but I think this is caused by my calculated cubemap face not matching the one the hardware uses to perform the texture lookup, so I think it can be fixed. As an added bonus, this eliminates the need for a shadow offset. Shadow acne is completely gone, even in the scene below with a light that is extremely close to the floor.
The banding you are seeing is added by the JPEG compression and is not visible in the original render. Finally, because the texture filtering is so smooth, shadow maps look much higher resolution than with PCF filtering. By increasing the light range, I can light the entire scene, and it looks great using just a 1024x1024 cube shadow map. VSMs are also quite fast because they only require a single texture lookup in the final pass. So we get better image quality, and probably slightly faster speed. Taking extra time to pay attention to small details like this is going to make your games look great soon!

Josh


Celebrating the success of Bladequest!

I decided to throw a small party, eating some delicious food, celebrating the success of Bladequest - The First Chapter. It had over 450 downloads within only 7 days and counting, I had the first few sales, and Josh is supporting me in bringing the game to Steam. This is really awesome. Thank you so much! You make it possible for me to live my dream, and you make me feel the fire deep in my heart, which gives me the power and motivation to bring my upcoming games to a new, exciting quality level! I will soon start a devlog kind of series on YouTube, where I introduce cool new features and content waiting for you in my next project, so you can give direct feedback and together we can make it awesome :D! Stay cool! Markus from Phodex Games! Grab the Game here! Give Feedback for Bladequest!
My Youtube Channel!
Your Games Wishlist so I can craft a game to your needs!

Phodex Games


About the day night cycle

Hey, I know you can use Shadmar's nice shader for this, but if you want to try something else, this is roughly what you can obtain by using a big light sphere revolving around the player. I sped up the cycle for the video, so you can see the whole day/night. I used the directional light's rotation for the shadows, the color of the sky sphere for the colorful effect, and the color of the directional light for the luminosity, which gives the shadowless night light. Clouds/rain have been hidden for better visibility. Here is a demo:

Marcousik


Rainy weather in forest - performance test LE 4.3

I'm happy with this, because I spent a long time testing how to make rain without slowing the game down too much. I wanted the rain to have a "splash" effect on the ground and to stop correctly when the player stands under a roof or hides under a big stone, etc. I had big problems with collision testing and very low performance when using the shaped vegetation tool. Now it's fine, because I no longer use the collision, which saves a lot of performance. Here is a little demo: Here is a bit longer one with 2 rainfalls: Hope you enjoy!

Marcousik


Clustered Forward Rendering Victory

I got the remaining glitches worked out, and the verdict is that clustered forward rendering works great. It has more flexibility than deferred rendering and it performs a lot faster. This means we can use a better materials and lighting system and at the same time have faster performance, which is especially great for VR. The video below shows a scene with 50 lights working with fast forward rendering. One of the last things I added was switching from a fixed grid size of 16x16x16 to an arbitrary layout that can be set at any time. Right now I have it set to 16x8x64, but I will have to experiment to see what the optimum dimensions are. There are a lot of things left to add (like shadows!), but I have zero concern about everything else working. The hard part is done, and I can see that this technique works great.

Josh


Awesome Redesign & Update & Steam!

Doesn't the new design of Bladequest look awesome? To celebrate it, I just released Bladequest - TFC 1.4.0, including the new design and a voice-over for the intro and outro screens, spoken by myself, introducing the project. I think this adds a more personal touch to the game. What do you think about it? The game is now also available on Game Jolt, and it runs very well there! Bladequest was downloaded over 100 times within 3 days. That's awesome! To celebrate that, I started a sale for the GOLD Edition. Get it 67% OFF! Thanks to Josh, Bladequest is also coming to Steam soon! I am very excited to continue development and deliver more content and better quality. Enjoy the update and have a great time! Markus from Phodex Games! Give Feedback for Bladequest!
My Youtube Channel!
Your Games Wishlist so I can craft a game to your needs!

Phodex Games


Clustered Forward Rendering Progress

In order to get the camera frustum space dividing up correctly, I first implemented a tiled forward renderer, which just divides the screen up into a 2D grid. After working out the math with this, I was then able to add the third dimension and make an actual volumetric data structure to hold the lighting information. It took a lot of trial and error, but I finally got it working. This screenshot shows the way the camera frustum is divided up into a cubic grid of 16x16x16 cells. Red and green show the XY position, while the blue component displays the depth: And here you can see the depth by itself, enhanced for visibility: I also added dithering to help hide light banding that can appear in gradients. Click on the image below to view it properly: I still have some bugs to resolve, but the technique basically works. I have no complete performance benchmarks yet to share but I think this approach is a lot faster than deferred rendering. It also allows much more flexible lighting, so it will work well with the advanced lighting system I have planned.

Josh


Clustered Forward Rendering - First Performance Metrics

I was able to partially implement clustered forward rendering. At this time, I have not divided the camera frustum up into cells; I am just handing a single point light to the fragment shader. But instead of a naive implementation that would just upload the values in a shader uniform, I am going the route of sending light IDs in a buffer. I first tried texture buffers because they have a large maximum size and I already have a GPUMemBlock class that makes them easy to work with. Because the GPU likes things to be aligned to 16 bytes, I am treating the buffer as an array of ivec4s, which makes the code a little trickier; thus we have a loop within a loop with some conditional breaks:

vec4 CalculateLighting(in vec3 position, in vec3 normal)
{
	vec4 lighting = vec4(0.0f);
	int n,i,lightindex,countlights[3];
	vec4 lightcolor;
	ivec4 lightindices;
	mat4 lightmatrix;
	vec2 lightrange;
	vec3 lightdir;
	float l,falloff;

	//Get light list offset
	int lightlistpos = 0;//texelFetch(texture12, sampleCoord, 0).x;

	//Point Lights
	countlights[0] = texelFetch(texture11, lightlistpos).x;
	for (n = 0; n <= countlights[0] / 4; ++n)
	{
		lightindices = texelFetch(texture11, lightlistpos + n);
		for (i = 0; i < 4; ++i)
		{
			if (n == 0 && i == 0) continue; //skip first slot since that contains the light count
			if (n * 4 + i > countlights[0]) break; //break if we go out of bounds of the light list
			lightindex = lightindices[i];
			lightmatrix[3] = texelFetch(texture15, lightindex * 4 + 3);
			vec3 lightdir = position - lightmatrix[3].xyz;
			float l = length(lightdir);
			falloff = max(0.0f,-dot(normal,lightdir/l));
			if (falloff <= 0.0f) continue;
			lightrange = texelFetch(texture15, lightindex * 4 + 4).xy;
			falloff *= max(0.0f, 1.0f - l / lightrange.y);
			if (falloff <= 0.0f) continue;
			lightmatrix[0] = texelFetch(texture15, lightindex * 4);
			lightmatrix[1] = texelFetch(texture15, lightindex * 4 + 1);
			lightmatrix[2] = texelFetch(texture15, lightindex * 4 + 2);
			lightcolor = vec4(lightmatrix[0].w,lightmatrix[1].w,lightmatrix[2].w,1.0f);
			lighting += lightcolor * falloff;
		}
	}
	return lighting;
}

I am testing with Intel graphics in order to get a better idea of where the bottlenecks are. My GeForce 1080 just chews through this without blinking an eye, so the slower hardware is actually helpful in tuning performance. I was dismayed at first when I saw my framerate drop from 700 to 200+. Then I created a simple scene in Leadwerks 4 with one point light and no shadows, and the performance was quite a bit worse on this hardware, so it looks like I am actually doing well. Here are the numbers:

Turbo (uniform buffer): 220 FPS
Turbo (texture buffer): 290 FPS
Leadwerks 4: 90 FPS

Of course a discrete card will run much better. The depth pre-pass has a very slight beneficial effect in this scene, and as more lights and geometry are added, I expect the performance savings will become much greater. Post-processing effects like bloom require a texture with the scene rendered to it, so this system will still need to render to a single color texture when those effects are in use. The low quality settings, however, will render straight to the back buffer and thus provide a much better fallback for low-end hardware. Here we can see the same shader working with lots of lights. To get good performance out of this, the camera frustum needs to be divided up into cells with a list of relevant lights for each cell. There are two more benefits to this approach. First, multisample antialiasing can be used when rendering straight to the back buffer. Of course, we can do the same with deferred rendering and multisample textures now, so that is not that big of a deal. What IS a big deal is the fact that transparency with shadows will work 100%, no problems. All the weird tricks and hacks we have tried to use to achieve this all go away. (The image below shows one such hack that uses dithering combined with MSAA to provide 50% transparency... sort of.)
Everything else aside, our first tests reveal more than a 3X increase in performance over the lighting approach that Leadwerks 4 uses. Things look fantastic!

Josh


Taking Care of Business

This is about financial stuff, and it's not really your job to care about that, but I still think this is cool and wanted to share it with you. People are buying stuff on our website, and although the level of sales is much lower than on Steam, it has been growing. Unlike Steam, sales through our website are not dependent on a third party and cannot be endangered by flooded marketplaces, strange decisions, and other random events. Every customer I checked who used a credit card has kept it on file for further purchases. The credit card number isn't actually stored on our server, and I never see it. Instead, PayPal stores the number and we store a token that can only be used for purchases through this domain, so it is impossible for a hacker to steal your credit card number from our site. I feel better doing things this way because it's much safer for everyone. Anyway, having a customer who can buy anything they want on your site at a moment's notice is a very good thing. You give them a good product and they can easily buy it, if it is something they want. This is what I hoped to see by introducing these features to the site, so it is nice to see it working. Thank you for your support! I will work hard to bring you more great software.

Josh


Clustered Forward Rendering

I decided I want the voxel GI system to render direct lighting on the graphics card, so in order to make that happen I need working lights and shadows in the new renderer. Tomorrow I am going to start my implementation of clustered forward rendering to replace the deferred renderer in the next game engine. This works by dividing the camera frustum up into sectors, as shown below. A list of visible lights for each cell is sent to the GPU. If you think about it, this is really another voxel algorithm. The whole idea of voxels is that it costs too much processing power to calculate something expensive for each pixel, so let's calculate it for a 3D grid of volumes and then grab those settings for each pixel inside the volume. In the case of real-time global illumination, we also do a linear blend between the values based on the pixel position. Here's a diagram of a spherical point light lying on the frustum. But if we skew the frustum so that the lines are all perpendicular, we can see this is actually a voxel problem, and it's the light that is warped in a funny way, not the frustum. I couldn't figure out how to warp the sphere exactly right, but it's something like this. For each pixel that is rendered, you transform it to the perpendicular grid above and perform lighting using only the lights that are present in that cell. This technique seems like a no-brainer, but it would not have been possible when our deferred renderer first came to be. GPUs were not nearly as flexible back then as they are now, and things like a variable-length for loop would have been a big no-no. Well, something else interesting occurred to me while I was going over this. The new engine is an ambitious project, with a brand new editor to be built from scratch. That's going to take a lot of time. There's a lot of interest in the features I am working on now, and I would like to get them out sooner rather than later.
It might be possible to incorporate the clustered forward renderer and voxel GI into Leadwerks Game Engine 4 (at which point I would probably call it 5) while keeping the old engine architecture. This would give Leadwerks a big performance boost (not as big as the new architecture, but still probably 2-3x in some situations). The visuals would also make a giant leap forward into the future. And it might even be possible to release in time for Christmas. All the shaders would have to be modified, but if you just updated your project, everything would run in the new Leadwerks Game Engine 5 without any problem.

This would need to be a paid update, probably with a new app ID on Steam. The current Workshop contents would not be accessible from the new app ID, but we have the Marketplace for that. This would also have the benefit of bringing the editor up to date with the new rendering methods, which would mean the existing editor could be used seamlessly with the new engine. We presently can't do this because the new engine and Leadwerks 4 use completely different shaders.

This could solve a lot of problems and give us a much smoother transition from here to where we want to go in the future:

Leadwerks Game Engine 4 (deferred rendering, existing editor) [Now]
Leadwerks Game Engine 5 (clustered forward rendering, real-time GI, PBR materials, existing architecture, existing editor) [Christmas 2018]
Turbo Game Engine (clustered forward rendering, new architecture, new editor) [April 2020]

I just thought of this a couple hours ago, so I can't say right now for sure if we will go this route, but we will see. No matter what, I want to get a version 4.6 out first with a few features and fixes. You can read more about clustered forward rendering in this article.

Josh


Introducing Leadwerks Marketplace

Steam Workshop was a compelling idea to allow game asset authors to sell their items for use with Leadwerks Game Engine. However, the system has turned out to have some fundamental problems, despite my best efforts to work around it. Free items are not curated, causing the store to fill with low quality content. Some people have reported trouble downloading items. The publishing process is pretty tedious. The check-out process requires adding funds to Steam Wallet, and is just not very streamlined.

At the same time, three new technologies have emerged that make it possible to deliver a better customer and seller experience through our website. Amazon S3 offers cheap and reliable storage of massive amounts of data. PayPal credit card tokens allow us to safely store a token on our server that can't be used anywhere else, instead of a credit card number. This eliminates the danger of a potential website hack revealing your information. Invision Power Board has integrated both these technologies into our commerce system.

It would not have been possible to build a web store a few years ago because the cost of server space would have been prohibitive, and storing hundreds of gigs of data would have made my website backup process unsustainable. So at the time, the unlimited storage of Steam and their payment processing system was very appealing. That is no longer the case. To solve the problems of Steam Workshop, and give you easy access to a large library of ready-to-use game assets, I am happy to introduce Leadwerks Marketplace.

The main page shows featured assets, new content, and the most popular items, with big thumbnails everywhere. When you view an item, screenshots, videos, and detailed technical specifications are shown.

How does Leadwerks Marketplace improve things?

Easy download of zip files that are ready to use with Leadwerks. You can use the File > Import menu item to extract them to the current project, or just unzip them yourself.
All content is curated. Items are checked for compatibility and quality.
Clear technical specifications for every file, so you know exactly what you are getting.
Cheap and reliable storage forever with Amazon S3.
Any DLCs or Workshop Store items you purchased can be downloaded from Leadwerks Marketplace by linking your Steam account to your profile.
Easy publishing of your items with our web-based uploader.

We're launching with over 50 gigabytes of game assets, and more will be added continuously. To kick off the launch we're offering some items at major discounts during the Summer Game Tournament. Here are a few assets to check out: Get "The Zone" for just $4.99, or download our Mercenary character for free! Just create your free Leadwerks account to gain access. Other items will be featured on sale during the Summer Game Tournament.

After purchasing an item, you can download it immediately. All your purchased items will be shown in the Purchases area, which you can access from the user menu in the top-right of the website header. Here all your purchased items will be available to download, forever.

If you are interested in selling your game assets to over 20,000 developers, you can upload your items now. Sellers receive a 70% royalty for each sale, with a minimum payout of just $20. See the content guidelines for details and contact me if you need any help. If you have a lot of good content, we can even convert your assets for you and make them game-ready for Leadwerks, so there's really no risk to you. Browse game assets now.

Admin


Voxel Cone Tracing Part 5 - Hardware Acceleration

I was having trouble with cone tracing and decided to first try a basic GI algorithm based on a pattern of raycasts. Here is the result: You can see this is pretty noisy, even with 25 raycasts per voxel. Cone tracing uses an average sample, which eliminates the noise problem, but it does introduce more inaccuracy into the lighting.

Next I wanted to try a more complex scene and get an estimate of performance. You may recognize the voxelized scene below as the "Sponza" scene frequently used in radiosity testing: Direct lighting takes 368 milliseconds to calculate, with a voxel size of 0.25 meters. If I cut the voxel grid down to a 64x64x64 grid then lighting takes just 75 milliseconds. These speeds are good enough for soft GI that gradually adjusts as lighting changes, but I am not sure if this will be sufficient for our purposes. I'd like to do real-time screen-independent reflections.

I thought about it, and I thought about it some more, and then when I was done with that I kept thinking about it. Here's the design I came up with: The final output is a 3D texture containing light data for all six possible directions. (So a 256x256x256 grid of voxels would actually be 1536x256x256 RGB, equal to 288 megabytes.) The lit voxel array would also be six times as big. When a pixel is rendered, three texture lookups are performed on the 3D texture and multiplied by the normal of the pixel. If the voxel is empty, there is no GI information for that volume, so maybe a higher mipmap level could be used (if mipmaps are generated in the last step).

The important thing is we only store the full-resolution voxel grid once. The downsampled voxel grids use an alpha channel for coverage. For example, a voxel with 0.75 alpha would have six out of eight solid child voxels. I do think voxelization is best performed on the CPU due to flexibility and the ability to cache static objects. Direct lighting, in this case, would be calculated from shadowmaps.
So I have to implement the clustered forward renderer before going forward with this.
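The coverage idea mentioned above (a downsampled voxel with 0.75 alpha has six of eight solid children) boils down to averaging the child alphas. A trivial sketch with hypothetical names, not engine code:

```cpp
// Hypothetical sketch: the coverage alpha of a downsampled voxel is the
// average solidity of its eight children, so six solid children out of
// eight yields an alpha of 0.75.
float DownsampleCoverage(const float childAlpha[8]) {
    float sum = 0.0f;
    for (int i = 0; i < 8; ++i) sum += childAlpha[i];
    return sum / 8.0f;
}
```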

Josh


Summer Game Tournament

Summer is here, and you know what that means! Yes, it is time for another LEGENDARY game tournament. This year the theme is "Retro Gaming". Create a modern take on an arcade game hearkening back to the days of NES, Neo Geo, and Sega, or just make anything you want. Either way, you get this totally radical poster as a prize!

How does it work? For one month, the Leadwerks community builds small playable games. Some people work alone and some team up with others. At the end of the month we release our projects to the public and play each other's games. The point is to release something short and sweet with a constrained timeline, which has resulted in many odd and wonderful mini games for the community to play.

WHEN: The tournament begins Thursday, June 21, and ends on July 31st at the stroke of midnight.

HOW TO PARTICIPATE: Publish your retro-or-other-themed game to the Games Showcase before the deadline. You can work as a team or individually. Use blogs to share your work and get feedback as you build your game. Games must have a preview image, title, and contain some minimal amount of gameplay (there has to be some way to win the game) to be considered entries. It is expected that most entries will be simple, given the time constraints. This is the perfect time to try making a VR game or finish that idea you've been waiting to make!

PRIZES: All participants will receive a limited-edition 11x17" poster commemorating the event. To receive your prize you need to fill in your name, mailing address, and phone number (for customs) in your account info. At the end of the tournament we will post a roundup blog featuring your entries. Let's go!

Admin


Voxel Cone Tracing Part 4 - Direct Lighting

Now that we can voxelize models, enter them into a scene voxel tree structure, and perform raycasts, we can finally start calculating direct lighting. I implemented support for directional and point lights, and I will come back and add spotlights later. Here we see a shadow cast from a single directional light: And here are two point lights, one red and one green. Notice the distance falloff creates a color gradient across the floor:

The idea here is to first calculate direct lighting using raycasts between the light position and each voxel: Then once you have the direct lighting, you can calculate approximate global illumination by gathering a cone of samples for each voxel, which illuminates voxels not directly visible to the light source: And if we repeat this process we can simulate a second bounce, which really fills in all the hidden surfaces:

When we convert model geometry to voxels, one of the important pieces of information we lose is the normal. Without normals it is difficult to calculate damping for the direct illumination calculation. It is easy to check surrounding voxels and determine that a voxel is embedded in a floor or something, but what do we do in the situation below? The thin wall of three voxels is illuminated, which will leak light into the enclosed room. This is not good: My solution is to calculate and store lighting for each face of each voxel.

Vec3 normal[6] = {
	Vec3(-1, 0, 0), Vec3(1, 0, 0),
	Vec3(0, -1, 0), Vec3(0, 1, 0),
	Vec3(0, 0, -1), Vec3(0, 0, 1)
};

for (int i = 0; i < 6; ++i)
{
	float damping = max(0.0f, normal[i].Dot(lightdir)); //normal damping
	if (!isdirlight) damping *= 1.0f - min(p0.DistanceToPoint(lightpos) / light->range[1], 1.0f); //distance damping
	voxel->directlighting[i] += light->color[0] * damping;
}

This gives us lighting that looks more like the diagram below: When light samples are read, the appropriate face will be chosen and read from.
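Choosing "the appropriate face" could work by picking the face whose axis dominates the sampling direction, using the same face ordering as the normal[6] array above (-X, +X, -Y, +Y, -Z, +Z). This is a hypothetical helper for illustration, not engine code:

```cpp
#include <cmath>

// Hypothetical helper: map a direction to one of six face indices in the
// same order as the normal[6] array (-X, +X, -Y, +Y, -Z, +Z) by picking
// the axis with the largest absolute component.
int DominantFace(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x < 0.0f ? 0 : 1;
    if (ay >= az) return y < 0.0f ? 2 : 3;
    return z < 0.0f ? 4 : 5;
}
```

A smoother alternative, which the shader snippet below actually uses, is to blend all six faces weighted by the dot product with the surface normal rather than snapping to one face.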
In the final scene lighting on the GPU, I expect to be able to use the triangle normal to determine how much influence each sample should have. I think it will look something like this in the shader:

vec4 lighting = vec4(0.0f);
lighting += max(0.0f, dot(trinormal, vec3(-1.0f, 0.0f, 0.0f))) * texture(gimap, texcoord + vec2(0.0 / texwidth, 0.0));
lighting += max(0.0f, dot(trinormal, vec3(1.0f, 0.0f, 0.0f))) * texture(gimap, texcoord + vec2(1.0 / texwidth, 0.0));
lighting += max(0.0f, dot(trinormal, vec3(0.0f, -1.0f, 0.0f))) * texture(gimap, texcoord + vec2(2.0 / texwidth, 0.0));
lighting += max(0.0f, dot(trinormal, vec3(0.0f, 1.0f, 0.0f))) * texture(gimap, texcoord + vec2(3.0 / texwidth, 0.0));
lighting += max(0.0f, dot(trinormal, vec3(0.0f, 0.0f, -1.0f))) * texture(gimap, texcoord + vec2(4.0 / texwidth, 0.0));
lighting += max(0.0f, dot(trinormal, vec3(0.0f, 0.0f, 1.0f))) * texture(gimap, texcoord + vec2(5.0 / texwidth, 0.0));

This means that to store a 256 x 256 x 256 grid of voxels we actually need a 3D RGB texture with dimensions of 256 x 256 x 1536. This is 288 megabytes. However, with DXT1 compression I estimate that number will drop to about 64 megabytes, meaning we could have eight voxel maps cascading out around the player and still only use about 512 megabytes of video memory. This is where those new 16-core CPUs will really come in handy!

I added the lighting calculation for the normal Vec3(0,1,0) into the visual representation of our voxels and lowered the resolution. Although this is still just direct lighting, it is starting to look interesting: The last step is to downsample the direct lighting to create what is basically a mipmap.
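The storage figures above can be checked with some quick arithmetic. DXT1 stores 4 bits per texel, which gives about 48 MB for the base level; the roughly 64 MB estimate in the text lines up once a full mip chain (about 4/3 of the base size) is included. The helper names here are illustrative only:

```cpp
// Quick check of the storage math: 256^3 voxels with six lit faces stored
// as a 256 x 256 x 1536 RGB texture.
long long VoxelTexels()       { return 256LL * 256 * 1536; }  // six faces stacked along one axis
long long UncompressedBytes() { return VoxelTexels() * 3; }   // 3 bytes per RGB texel = 288 MB
long long Dxt1Bytes()         { return VoxelTexels() / 2; }   // DXT1 is 4 bits per texel = 48 MB
```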
We do this by taking the average values of each voxel node's children:

void VoxelTree::BuildMipmaps()
{
	if (level == 0) return;
	int contribs[6] = { 0 };
	for (int i = 0; i < 6; ++i)
	{
		directlighting[i] = Vec4(0);
	}
	for (int ix = 0; ix < 2; ++ix)
	{
		for (int iy = 0; iy < 2; ++iy)
		{
			for (int iz = 0; iz < 2; ++iz)
			{
				if (kids[ix][iy][iz] != nullptr)
				{
					kids[ix][iy][iz]->BuildMipmaps();
					for (int n = 0; n < 6; ++n)
					{
						directlighting[n] += kids[ix][iy][iz]->directlighting[n];
						contribs[n]++;
					}
				}
			}
		}
	}
	for (int i = 0; i < 6; ++i)
	{
		if (contribs[i] > 0) directlighting[i] /= float(contribs[i]);
	}
}

If we start with direct lighting that looks like the image below: When we downsample it one level, the result will look something like this (not exactly, but you get the idea): Next we will begin experimenting with light bounces and global illumination using a technique called cone tracing.

Josh

