Everything posted by JMK

  1. SSAO in Vulkan:

     [animated GIF demonstrating the SSAO effect]

  2. A new update is available that adds post-processing effects to the Leadwerks 5 beta. To use a post-processing effect, you load it from a JSON file and apply it to a camera like so:

        auto fx = LoadPostEffect("Shaders/PostEffects/SSAO.json");
        camera->AddPostEffect(fx);

     You can add as many effects as you want, and they will be executed in sequence. The JSON structure looks like this for a simple effect:

        {
            "postEffect": {
                "subpasses": [
                    {
                        "shader": {
                            "vertex": "Shaders/PostEffects/PostEffect.vert.spv",
                            "fragment": "Shaders/PostEffects/SSAO.frag.spv"
                        }
                    }
                ]
            }
        }

     Multiple subpasses are supported for custom blurring and chains of shaders. This Gaussian blur effect uses several intermediate buffers to blur and downsample the image:

        {
            "postEffect": {
                "buffers": [
                    { "size": [0.5, 0.5] },
                    { "size": [0.25, 0.25] },
                    { "size": [0.125, 0.125] }
                ],
                "subpasses": [
                    {
                        "target": 0,
                        "shader": {
                            "vertex": "Shaders/PostEffects/PostEffect.vert.spv",
                            "fragment": "Shaders/PostEffects/blurx.frag.spv"
                        }
                    },
                    {
                        "target": 1,
                        "shader": {
                            "vertex": "Shaders/PostEffects/PostEffect.vert.spv",
                            "fragment": "Shaders/PostEffects/blury.frag.spv"
                        }
                    },
                    {
                        "target": 2,
                        "shader": {
                            "vertex": "Shaders/PostEffects/PostEffect.vert.spv",
                            "fragment": "Shaders/PostEffects/blurx.frag.spv"
                        }
                    },
                    {
                        "shader": {
                            "vertex": "Shaders/PostEffects/PostEffect.vert.spv",
                            "fragment": "Shaders/PostEffects/blury.frag.spv"
                        }
                    }
                ]
            }
        }

     There is a new file at "Config/settings.json". This file contains information the engine reads when it initializes. You can specify a default set of post-processing effects that will automatically be loaded whenever a camera is created. If you don't want any post-processing effects, you can either change this file or call Camera::ClearPostEffects() after creating a camera. Customizable properties are not yet supported, but I plan to add these so you can modify the look of an effect on the fly.

     Fixed the physics bug reported by @wadaltmon.

     Other changes:

     • EnablePhysics is renamed to SetPhysicsMode.
     • EnableGravity is renamed to SetGravityMode.
     • EnableSweptCollision is renamed to SetSweptCollision.
     • COLLISION_CHARACTER is renamed to COLLISION_PLAYER.
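     As a quick illustrative sketch (not taken from the post itself), here is how the calls above could be combined to replace the default effect stack. CreateWorld, CreateCamera, and the "Blur.json" file are assumptions used only for the example; LoadPostEffect, AddPostEffect, and ClearPostEffects are the commands described above.

        auto world = CreateWorld();       // assumed setup calls
        auto camera = CreateCamera(world);

        // Drop whatever defaults were loaded from "Config/settings.json"
        camera->ClearPostEffects();

        // Effects run in the order they are added
        camera->AddPostEffect(LoadPostEffect("Shaders/PostEffects/SSAO.json"));
        camera->AddPostEffect(LoadPostEffect("Shaders/PostEffects/Blur.json"));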
  3. Insane: Apple is switching the Mac to ARM chips (not joking): https://appleinsider.com/articles/20/06/19/how-to-play-games-on-an-arm-mac

    1. aiaf

      Less power consumption with decent performance; maybe not a bad idea in the long term.

      Most likely Vulkan won't be supported by Apple; I think they started with Metal before Vulkan was released.

      It seems like some major games get ported anyway, so they still have enough users...

       

    2. JMK

      MoltenVK allows Vulkan to run on Apple devices.

      The problem is that the new computers will be much slower than the old computers. The Mac will just be a 15" iPhone.

    3. JMK

      Most of your Steam games no longer work in Catalina.

      What made the Mac good was when it became more compatible with Windows. This will be very bad. I was going to buy an iMac, but not now!

  4. The collision type doesn't affect the behavior, though; it is just a filter for collisions.
  5. No, the player controller does not use the same physics as normal rigid bodies.
  6. It took three days, but I've got a basic post-processing effect working with Vulkan now. 🤪
  7. The new engine uses a fixed time step in the main loop.
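     To illustrate the general idea, here is a generic fixed-time-step loop sketch. This is not the engine's actual loop code; UpdateGame and Render are placeholder names. Game logic advances in fixed increments while rendering runs at whatever rate it can.

        #include <chrono>

        void RunLoop()
        {
            using clock = std::chrono::steady_clock;
            const double step = 1.0 / 60.0; // 60 updates per second
            double accumulator = 0.0;
            auto last = clock::now();

            while (true)
            {
                auto now = clock::now();
                accumulator += std::chrono::duration<double>(now - last).count();
                last = now;

                while (accumulator >= step)
                {
                    // UpdateGame(step); // physics and logic advance by a fixed step
                    accumulator -= step;
                }
                // Render(); // rendering is decoupled from the update rate
            }
        }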
  8. Yes, that is the same functionality, although I don't think you can have a global variable like that in a shader unless you use atomics, which you almost certainly do not want to do. It's also best to do things on a time basis. You will always get the current time in the shader, but setting variables each frame can be problematic, because for every game update there might be 2-3 frames rendered in the rendering thread, so values that should change smoothly would not actually be smooth. It's better to set a start and stop time and let the shader do the interpolation based on the current time, which is updated each render frame.
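     For illustration only, here is the interpolation described above expressed in C++. In practice this math would live in the shader, with the current time supplied by the renderer each frame; the function name and signature are made up for the example.

        // Illustrative helper, not engine API: interpolate a value between a
        // start time and a stop time using the current time.
        float InterpolateByTime(float currentTime, float startTime, float stopTime,
                                float startValue, float stopValue)
        {
            if (stopTime <= startTime) return stopValue;
            float t = (currentTime - startTime) / (stopTime - startTime);
            if (t < 0.0f) t = 0.0f;
            if (t > 1.0f) t = 1.0f;
            return startValue + (stopValue - startValue) * t;
        }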
  9. I am working out the details now, but that should be doable.
  10. I believe the subpass approach described above would cover that case, because you can copy the contents of the framebuffer to another image and then access that image the next frame.
  11. Define "alter"? You mean change the source code? Shaders in Vulkan are loaded from compiled SPV files. This is nice because there is never any question whether a shader will compile successfully on one card or another. It would be possible to integrate the shader compiler code into the engine, but that's not really something I see a need for.
  12. Yes, that should be possible. Instead of using shader uniforms, you will just have access to a set of up to maybe 16 float values, or something like that, and the JSON strings will give them names for identification. So you could call something like effect->SetProperty("strength", 0.5).

      You don't need a shader to render to a texture. There is a command called Camera::SetTarget() or Camera::SetRenderTarget (check the docs) that does this.

      Depth textures never get blurred. You can't blur them and get useful results.

      I think the default post effects will be indicated in a settings.json file. The engine will just load up whatever effects that file indicates by default. You can call Camera::ClearPostEffects() and then add your own shaders, or modify the settings file. The point is, I know that if I don't have some default settings then 90% of screenshots in the new engine will have no post-processing effects, and it will look bad.

      Post effects are drawn in the order they are added to the camera. You can use separate effects. I am just saying that if you know you want bloom, SSAO, and tone mapping, it is more efficient to pack it all into one single shader rather than drawing it in several passes.

      The default blur will just downsample the image several times, down to something very small like 1x1, so you could access whatever blur level you want: maybe level 4 for bloom and the last level for iris adjustment. The only downside is that customized blurring would not really be possible. With this design I don't see any way to make that horizontal blur bloom effect (I guess it's called "anamorphic" bloom). Still, that is a fairly rare edge case. It would be easier for me to just hard-code an additional blur option than to implement some complicated flowchart scheme.

      Are there any other weird cases where a linear progression drawing from one buffer to the next isn't going to be sufficient? Are there any other weird custom blurs I should be thinking about, or anything of that nature?

      Hmmm, perhaps the answer is not a flowchart, and not hard-coded options, but a series of subpass settings in the JSON file, something like this:

        {
            "postEffect": {
                "buffers": [
                    { "width": 0.5, "height": 0.5 },
                    { "width": 0.5, "height": 0.5 },
                    { "width": 0.25, "height": 0.25 },
                    { "width": 0.25, "height": 0.25 },
                    { "width": 0.125, "height": 0.125 },
                    { "width": 0.125, "height": 0.125, "index": 0 }
                ],
                "shader": "Shaders/PostEffects/Bloom.frag.spv",
                "subpasses": [
                    { "source": "COLOR", "target": 0, "shader": "Shaders/PostEffects/Utility/hblur.frag.spv" },
                    { "source": 0, "target": 1, "shader": "Shaders/PostEffects/Utility/vblur.frag.spv" },
                    { "source": 1, "target": 2, "shader": "Shaders/PostEffects/Utility/hblur.frag.spv" },
                    { "source": 2, "target": 3, "shader": "Shaders/PostEffects/Utility/vblur.frag.spv" },
                    { "source": 3, "target": 4, "shader": "Shaders/PostEffects/Utility/hblur.frag.spv" },
                    { "source": 4, "target": 5, "shader": "Shaders/PostEffects/Utility/vblur.frag.spv" }
                ],
                "parameters": [
                    { "name": "overdrive", "value": 1.2 },
                    { "name": "cutoff", "value": 0.15 }
                ]
            }
        }
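      A rough usage sketch of the property idea described above. SetProperty is only planned at this point, so the exact signature is an assumption; the parameter names come from the JSON example.

        auto fx = LoadPostEffect("Shaders/PostEffects/Bloom.json");
        camera->AddPostEffect(fx);

        // Override the defaults declared in the JSON "parameters" block
        fx->SetProperty("overdrive", 1.2f);
        fx->SetProperty("cutoff", 0.15f);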
  13. I am starting work on the new post-effects system and am creating this topic for discussion. The system works by loading a JSON file that defines a shader, adjustable shader settings the user can modify, and which textures are required. In the new system you do not have to blur images yourself. Instead, just set "requireBlurredColor" to true and you will get a blurred image of the previous color texture in the effects stack. This was a problem even in some of @shadmar's great effects, because a lot of his scripts did not account for a changing buffer size.

      The system works like this:

        auto fx = LoadPostEffect("PostEffects/Bloom.json");
        camera->AddPostEffect(fx);

      You no longer set a shader or script as a post effect. There are no scripted post effects at all, because no Lua code runs in the rendering thread.

      "Bloom.json" will look something like this. Note that no vertex shader is defined:

        {
            "postEffect": {
                "shader": "Shaders/PostEffects/Bloom.spv",
                "requireBlurImage": true,
                "parameters": [
                    { "name": "overdrive", "value": 1.2 },
                    { "name": "cutoff", "value": 0.15 }
                ]
            }
        }

      The only new effect I can think of that I want to add is per-pixel motion blur. In addition we should have:

      • Bloom / iris adjustment
      • DOF (near and far, with adjustable ranges)
      • SSAO (still useful for fine details)
      • Godrays (may work for all lights, not just directional; I think I can use a raytracing approach for this)

      I think I will skip SSR, because voxel raytracing replaces it and provides more consistent results. There are lots of other little effects that could be added, like film grain and grayscale, but I think those are best left as an external pack of shaders made by me or other developers. Distance fog is built into the main shaders and does not require a post-processing effect. I might actually remove the camera SetGamma command and merge this with some kind of tone-mapping effect. I think bloom, tone mapping, and SSAO will be enabled in all created cameras by default. In fact, these can probably be combined into a single shader.

      Am I missing anything else important? Is there another effect I should add, or a problem I have not accounted for?
  14. Press the Install button, and the DLC contents will be added to your current project.
  15. Version 1.0.0

    The classic multi-tool for a variety of uses. The model is in glTF format; textures are included in PNG and DDS formats.

    $4.99

  16. Version 1.0.0

    13 downloads

    This subterranean environment has been converted into glTF binary format.

    Free

  17. OBJ model saver plugin source code has been added to the Plugin SDK: https://github.com/Leadwerks/PluginSDK

  18. I am also adding model saving capabilities to the plugin SDK so you can load a model in code and save it as a different format. I am eager to get some of my old models into GLTF format.
  19. UU3D is good but does not import Leadwerks scenes.
  20. A new update is available for beta testers.

      The dCustomJoints and dContainers DLLs are now optional if your game is not using any joints (even if you are using physics).

      The following methods have been added to the Collider class. These let you perform low-level collision tests yourself:

      • Collider::ClosestPoint
      • Collider::Collide
      • Collider::GetBounds
      • Collider::IntersectsPoint
      • Collider::Pick

      The PluginSDK now supports model saving, and an OBJ save plugin is provided. It's very easy to convert models this way using the new Model::Save() method:

        auto plugin = LoadPlugin("Plugins/OBJ.dll");
        auto model = LoadModel(world, "Models/Vehicles/car.mdl");
        model->Save("car.obj");

      Or create models from scratch and save them:

        auto box = CreateBox(world, 10, 2, 10);
        box->Save("box.obj");

      I have used this to recover some of my old models from Leadwerks 2 and convert them into glTF format. There is additional documentation now on the details of the plugin system and all of its features and options.

      Thread handling is improved, so you can run a simple application that handles 3D objects and exits without ever initializing graphics. Headers are now stricter about private and public members and methods. Fixed a bug where directional lights couldn't be hidden. (Check out the example for the CreateLight command in the new docs.)

      All the Lua scripts in the "Scripts\Start" folder are now executed when the engine initializes, instead of when the first script is run. These are executed for all programs automatically, which is useful for automatically loading plugins or setting up workflows. If you don't want to use Lua at all, you can delete the "Scripts" folder and the Lua DLL, but you will need to load any required plugins yourself with the LoadPlugin command.

      Shadow settings are simplified. In Leadwerks 4, entities could be set to static or dynamic shadows, and lights could use a combination of static, dynamic, and buffered modes. You can read the full explanation of that feature in the documentation. In Leadwerks 5, I have distilled it down to two commands. Entity::SetShadows accepts a boolean: true to cast shadows and false not to. Additionally, there is a new Entity::MakeStatic method. Once this is called on an entity, it cannot be moved or changed in any way until it is deleted. If MakeStatic() is called on a light, the light will store an intermediate cached shadow map of all static objects. When a dynamic object moves and triggers a shadow redraw, the light will copy the static shadow buffer to the shadow map and then draw any dynamic objects in its range. For example, if a character walks across a room with a single point light, the character model has to be drawn six times, but the static scene geometry doesn't have to be redrawn at all. This can result in an enormous reduction of rendered polygons. (This is something id Software's Doom engine does, although I implemented it first.) In the documentation example the shadow polygon count is 27,000 until I hit the space key to make the light static. The light then renders the static scene (everything except the fan blade) into an image, and thereafter that cached image is copied to the shadow map before the dynamic scene objects are drawn. This causes the number of shadow polygons rendered each frame to drop dramatically, since the whole scene does not have to be redrawn.

      I've started using animated GIFs in some of the documentation pages and I really like it. For some reason GIFs feel so much more "solid" and stable. I always think of web videos as some iframe thing that loads separately, lags, doesn't work half the time, and is embedded "behind" the page, but a GIF feels like a natural part of the page.

      My plan is to put 100% of my effort into the documentation and make it as good as possible. Of course, an increased emphasis on one thing necessarily means a decreased emphasis on something else. What am I reducing? I am not going to create a bunch of web pages explaining what great features we have, because the documentation already does that. I also am not going to attempt to make "how to make a game" tutorials. I will leave that to third parties, or defer it into the future. My job is to make attractive and informative primary reference material for people who need real usable information, not to teach non-developers to be developers. That is my goal with the new docs.
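      As a rough illustration of the simplified shadow commands described above: SetShadows and MakeStatic come from this post, while CreatePointLight and the model path are assumptions made only for the example.

        auto light = CreatePointLight(world);            // assumed creation call
        auto fan = LoadModel(world, "Models/fan.mdl");   // hypothetical dynamic object

        fan->SetShadows(true);  // true = cast shadows, false = don't
        light->MakeStatic();    // light caches a shadow map of all static geometry

        // After MakeStatic(), a shadow redraw copies the cached static shadow
        // map first and then renders only dynamic objects such as the fan blade.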
  21. If you're a beta tester, the new features are now entered into the new docs system. See Physics > Collider.