Rastar

Everything posted by Rastar

  1. Rastar

    Realistic Penumbras

    Wow, great stuff, congrats!
  2. Very excited that you're considering large-scale terrain for Leadwerks 5, that topic's been an interest of mine for ages. I particularly find the algorithm for "Real-Time Deformable Terrain Rendering with DirectX 11" by Egor Yusov in GPU Pro 3 interesting. They basically observe that terrain at any point on the grid usually only deviates slightly from a value linearly interpolated from its neighbors. They use that to only store the delta from that interpolation and get a data set that can be very efficiently compressed. The terrain is then reconstructed at run time from those delta sets according to the required LOD level. There might be some interesting ideas for you in there (and by now OpenGL also provides the relevant features like texture arrays etc.)
  3. That's cool, have to try them out. I had already started working on some environment probes myself for my PBR experiments, but without access to the editor the workflow is a little cumbersome. Question: Is there/will there be an API for generating the cubemaps at runtime? I know this is time-consuming, but for changing outdoor conditions (time of day) it would be nice if the maps could be regenerated at runtime, say every 10 minutes or so.
  4. Ooooooh, that's one feature I've been eagerly waiting for! Really happy that you're improving the graphics of Leadwerks again. Question: I'd be interested in playing around with several tone mapping operators (I guess similar to what the "iris adjustment" does) - will the iris adjustment be a simple post-processing effect attached to the camera (in which case it could be easily replaced)?
  5. Which reminds me: Ages ago I did a couple of blog posts here about tessellation: http://www.leadwerks.com/werkspace/blog/117/entry-1185-pass-the-tessellation-please-part-1/ http://www.leadwerks.com/werkspace/blog/117/entry-1195-pass-the-tessellation-please-part-2/ Maybe there's something helpful in there as well.
  6. Well, you can't divide a triangle into an arbitrary number of triangles, because the tessellation factors are limited (IIRC factors up to 64 are guaranteed, more might be available). But usually that's sufficient. A good approach often is to calculate the tessellation factors based on the screen-space size of the triangle edge. That way the tessellation automatically changes with the distance from the camera. Here is an example: http://codeflow.org/entries/2010/nov/07/opengl-4-tessellation/ The author's doing that for terrain tessellation, but the principle is the same.
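The screen-space approach above can be sketched as follows. This is a rough CPU-side illustration with my own names and a simplified projection (the edge is assumed roughly perpendicular to the view direction); in practice this runs in the tessellation control shader with the real projection matrix.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Estimate a per-edge tessellation factor so each tessellated sub-edge
// covers roughly a fixed number of pixels on screen.
float edgeTessFactor(float worldEdgeLength, float distanceToCamera,
                     float screenHeightPx, float verticalFovRadians,
                     float targetPixelsPerSegment = 16.0f) {
    // Approximate on-screen length of the edge in pixels.
    float pixels = worldEdgeLength * screenHeightPx /
                   (2.0f * distanceToCamera * std::tan(verticalFovRadians * 0.5f));
    // Hardware guarantees tessellation factors up to 64.
    return std::clamp(pixels / targetPixelsPerSegment, 1.0f, 64.0f);
}
```

Because the estimate shrinks with distance, nearby patches automatically get more triangles than distant ones, with no explicit LOD switching.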
  7. Parallels only supports OpenGL 2.1 (I think it's similar with other VMs), so Leadwerks won't run, no matter what GPU you have.
  8. Well done, will check it out! Yeah, I was thinking along similar lines. By the way, I used the upper 4 bits of the material flags (normal alpha) channel to store metalness (it should normally just be 1 or 0 anyway), and roughness in fragData2.a, thus could leave the transparency alpha as is.
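The bit layout described above could look like this. This is a hypothetical sketch with my own names, not the engine's actual code: the engine's material flags stay in the lower 4 bits of the 8-bit channel, and a 0/1 metalness value occupies the upper 4 bits.

```cpp
#include <cassert>
#include <cstdint>

// Pack material flags (lower 4 bits) and a binary metalness value
// (upper 4 bits) into one 8-bit G-buffer channel.
uint8_t packFlagsMetalness(uint8_t flags, bool metal) {
    return static_cast<uint8_t>((flags & 0x0F) | (metal ? 0xF0 : 0x00));
}

uint8_t unpackFlags(uint8_t packed)     { return packed & 0x0F; }
bool    unpackMetalness(uint8_t packed) { return (packed >> 4) != 0; }
```

Since metalness is effectively binary for most materials, losing its precision this way costs little, and roughness keeps a full 8-bit channel of its own.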
  9. Unfortunately I can't post those textures, since these are proprietary ones by gametextures.com. You could get them with a trial account. But I will make some better comparison screenshots soon.
  10. Yes, that works. But I am trying to get the content of a render buffer:

      Texture* tex = Texture::Create(width, height);
      Buffer* buffer = Buffer::Create(width, height, 1, 0);
      buffer->SetColorTexture(tex);
      buffer->Enable();
      buffer->Clear();
      world->Render();
      buffer->Disable();
      Context::SetCurrent(context);
      tex = buffer->GetColorTexture();
      char* pixelBuf = new char[tex->GetMipmapSize(0)];
      tex->GetPixels(pixelBuf);

      A call to mat->SetTexture(tex) works, probably because the texture handle is being passed, but the above call to GetPixels() doesn't. As I said, I assume the texture isn't actually retrieved from the GPU into main memory.
  11. When I try to get the pixels of a Buffer's color texture, I get an exception. The texture is fine and can e.g. be set to a texture slot, so I guess the data isn't actually retrieved from the GPU? Is there a way to do this?
  12. Mmmmh, it's been a while since I did some programming in Leadwerks, and things seem to have changed slightly... If I see this right, for C++ projects the Main.lua script (which contains the game loop) is executed in App::Start(). When that ends, the C++ execution continues after that call in App::Start() and then in App::Loop() (which basically does nothing). Also, since everything is done in Main.lua, the World, Context, Camera and Window member variables are all null. I can change this according to my needs, but I think for C++ projects there should be a template with a clean lifecycle implementation done in C++. When I choose a C++ project, I prefer to use Lua for encapsulated entity behavior, but C++ for the heavy lifting in the background.
  13. When running an app from the IDE (I don't have an App.lua, so it's trying to execute Main.lua), the following line in App::Start()

      //Invoke the start script
      if (!Interpreter::ExecuteFile(scriptpath)) {

      will throw an error:

      First-chance exception at 0x76C23E28 in MTLighting.debug.exe: Microsoft C++ exception: common::CNamedArgsUndefinedNameException at memory location 0x00CFE7A4.
      First-chance exception at 0x76C23E28 in MTLighting.debug.exe: Microsoft C++ exception: common::CNamedArgsUndefinedNameException at memory location 0x00CFE774.

      The execution jumps out here and continues (when stopping the application with Esc) at

      //Call the App:Start() function
      Interpreter::GetGlobal("App");
      if (Interpreter::IsTable())

      in App::Start(), and then executes App::Loop() (which returns right away).
  14. Actually I did, but in all those back-and-forth changes between the two pipelines I must have made a mistake. I created a clean new project and did the "classic" shot again: http://images.akamai.steamusercontent.com/ugc/318997198277429861/AA729F838AEFA16B397DE94A706744F4ED18E51C/ But the direct comparison is really difficult. E.g. in this new shot the light intensity is higher (1.2 instead of 0.8) than in the PBR shots, to show some highlights.
  15. That's actually not too easy, because the assets have to be so different. I tried the following: Used a barrel model from Dexsoft's Industrial3 collection and applied a rusted steel texture from gametextures.com (that is available in both classic and PBR variants): Classic http://images.akamai.steamusercontent.com/ugc/318997198276597375/6C91BE3851233BE9365238615B71AC72E1012060/ and PBR http://images.akamai.steamusercontent.com/ugc/318997198276599144/73A9480C1634DF36C7D0BE8E3DBBAF46143C8179/
  16. Genebris, that question is a great excuse to post yet another screenshot: Of course it is ;-) http://steamcommunity.com/sharedfiles/filedetails/?id=620641512 To be fair, no small part of this goes towards the texture quality - the statue uses a single 4k texture created by the Quixel suite, using one of its smart materials.
  17. I am using spherical harmonics (think Marmoset Skyshop) which have 9 float parameters, so the 4 channels of the ambient color aren't sufficient. Come to think of it: I am currently using just one cubemap, but there could be many in the scene, so the coefficients (and reflection cubemap values) would depend on the object's position. It might be best to keep these calculations in the materials.
  18. I can't give you a direct comparison. It should be slower, but not by an awful lot. Apart from the ambient lighting there aren't more texture lookups, and that would be an unfair comparison anyway since it's an additional feature. But yes, the shader calculations are more involved (more dot products, a pow() for the Schlick approximation etc.). My code isn't yet optimized, and some optimizations would actually need changes to the Leadwerks engine: the conversion of textures from gamma space to linear space can be done by the GPU using its sRGB sampling. Right now I have to transform the colors using pow(col, 2.2) and back using pow(col, 1/2.2). I am currently doing the ambient lighting in the materials because I don't know how to pass the required data to the lighting shaders.
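The gamma round trip mentioned above, written out per channel. This is just the pow() math from the post done on the CPU for illustration; in the shader the same calls run per fragment, and hardware sRGB sampling would make the first conversion free.

```cpp
#include <cassert>
#include <cmath>

// Convert one color channel from gamma (storage/display) space to
// linear space, where lighting math is correct.
float toLinear(float gammaValue)  { return std::pow(gammaValue, 2.2f); }

// Convert a linear-space result back to gamma space for display.
float toGamma(float linearValue)  { return std::pow(linearValue, 1.0f / 2.2f); }
```

Note that mid-grey 0.5 in gamma space is only about 0.22 in linear space, which is exactly why adding colors in gamma space skews the result.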
  19. Physically-based Rendering (PBR), often also called Physically-based Shading (PBS), has taken the world of game engines and content creation tools by storm over the last couple of years. And for good reason: Art assets can be produced in a much more predictable and consistent way, since their properties directly relate to (measurable) real-world data, like the diffuse color of a material or the luminous flux (in lumens) of a light source. Even better, those assets also behave predictably and consistently under varying lighting conditions. And since I really like the results of PBR pipelines (and love fooling around with shaders) I had a go (actually a second one) at implementing PBR in Leadwerks. So what is (different in) PBR? There are many good resources on this around (e.g. the tutorials by Marmoset http://www.marmoset.co/toolbag/learn), but in a nutshell:
      - Special care is taken to make the reflection off a material energy-conserving. Meaning: There can't be more light coming out of a material than went in (unless it is an emitter, of course).
      - The properties of a material are modeled after its real-world cousin. This is especially obvious for metals: Metals have no diffuse reflection (which is actually the main contributor to any material's reflection in a classical pipeline). Physically, the diffuse reflection of a material consists of light that is absorbed and then re-emitted (with the diffuse color of the respective material). Metals don't do that (much) - any light hitting them is reflected right away, never entering the surface. As a consequence, the diffuse color (usually called albedo) of a metal is pitch black.
      - Everything is shiny: Even non-glossy non-metals have a (low) amount of reflectivity, which is especially apparent at grazing angles (Fresnel effect). Most materials don't have a colored reflection, although some metals do (like gold or copper).
You will find two main workflows for PBR pipelines: specular-gloss and metalness-roughness. These are basically just two different ways of specifying a material's properties, one giving more artistic freedom and fewer artifacts (specular-gloss), the other being slightly more intuitive and memory/bandwidth friendly (metalness-roughness). Since I don't have access to the setup of the G-buffers in Leadwerks' deferred renderer, I went with the metalness-roughness variant, since I could squeeze that into the current setup. Apart from modifying the lighting shaders to use the different rendering algorithm (I used GGX specular lighting), it was important to include indirect specular lighting, because otherwise metals would be mostly black. The standard way to do this is to use special cubemaps (I created mine using https://www.knaldtech.com/lys/). I also added a simple form of diffuse IBL (image-based lighting) using spherical harmonics. Some other things are important when adopting a PBR lighting algorithm:
- Use linear space. PC monitors actually use gamma space, which is why most texture files are also encoded in gamma space. The problem here is that adding several colors in gamma space gives an incorrect result (for a more detailed description see http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html). Therefore, textures have to be converted to linear space, and the rendering result must be converted back to gamma space for displaying on a monitor.
- The accuracy of an 8-bit-per-channel frame buffer (which is what Leadwerks currently uses) does not yield the best results; 16 bits per channel would be preferable. And some textures (cubemaps, e.g.) should actually be in 16-bit formats as well.
But enough chit-chat, what does all this actually look like?
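As an aside, two of the building blocks named above (the GGX distribution and the Schlick Fresnel approximation) can be sketched like this. These are the standard textbook formulas written in C++ for illustration, with my own variable names; the actual shader code is GLSL and more involved.

```cpp
#include <cassert>
#include <cmath>

// Schlick's approximation of the Fresnel term: reflectance rises from
// the base value f0 (head-on) towards 1.0 at grazing angles.
float fresnelSchlick(float cosTheta, float f0) {
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}

// GGX (Trowbridge-Reitz) normal distribution function: how strongly
// microfacet normals concentrate around the half vector H.
float distributionGGX(float nDotH, float roughness) {
    float a  = roughness * roughness;
    float a2 = a * a;
    float d  = nDotH * nDotH * (a2 - 1.0f) + 1.0f;
    return a2 / (3.14159265f * d * d);
}
```

The Fresnel curve is what makes even dull dielectrics shiny at grazing angles, and the GGX peak sharpening at low roughness is what gives smooth materials their tight highlights.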
First of all, we have the "usual" diffuse lighting, here by a directional light http://images.akamai.steamusercontent.com/ugc/318997058028854065/DDFD846082BF282E8CC7279CBF5FE022A03EE367/ and its corresponding specular companion http://images.akamai.steamusercontent.com/ugc/318997058028856863/0BD31B453D6ABD8B16E7435CBD9EE86649D92C4D/ Together with the diffuse ambient term http://images.akamai.steamusercontent.com/ugc/318997058028860663/961C026CA6DB56192FFB356C9F6BCA53BB893CAC/ and the indirect specular http://images.akamai.steamusercontent.com/ugc/318997058028863634/8925ED33DDDD375EC0161B843F465BCCD3AC737E/ this adds up to the total lighting seen at the beginning of the page http://images.akamai.steamusercontent.com/ugc/318997058028866463/8771AB07CB9F5D472F034A4EC83EA4A96A82EA06/ So, what's next? Well, actually I'm not convinced I have done all this right - there are some artifacts I have to look at, and I'm not sure if the calibration is right. Also I need to write a cubemap generator to create those textures in-engine. Stay tuned!
  20. At some point you will have to just grind your teeth and get through that kind of code. Lua is actually one of the nicer languages for beginners, so it could be even worse. But if you're more the graphical/artistic kind of person, maybe a visual programming language like https://scratch.mit.edu is a better starting point for grasping fundamental elements like loops, conditional expressions etc.
  21. Rastar

    Terrain Tessellation

    I think with "high-quality" Josh is not so much referring to texture size (2k should be enough) but to how meaningful the displacement map is. This is a grey-scale map with black meaning "no vertical displacement" and white "full displacement". Often it is generated by some tool from the diffuse map (a photo of a rock surface, say), but this will be noisy due to colored specks etc. Using that as a displacement map will create unrealistic spikes. It's better to generate it from rocks etc. sculpted in a 3D modelling tool.
  22. This is really a bit mask. You start with a value of 1 (2^0), which in binary is 00000001. If you add 2 (2^1), the second bit from the right will be set as well (00000011). So the actual number 3 doesn't have a special meaning - it just encodes which bits are true or false. Now, the value is stored as a number between 0.0 and 1.0 in fragData2, so you have to divide the value by 255 (the equivalent of 11111111).
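The arithmetic above, written out. The flag names here are generic placeholders for illustration; the point is that the combined bits survive the trip through a 0..1 G-buffer channel and back.

```cpp
#include <cassert>

// Two example bit flags (names are placeholders, not the engine's).
const int FLAG_A = 1; // 2^0 -> binary 00000001
const int FLAG_B = 2; // 2^1 -> binary 00000010

// Store the combined flags as a 0.0..1.0 value, as in fragData2.
float encodeFlags(int flags)   { return flags / 255.0f; }

// Recover the integer bit mask from the sampled 0..1 float.
int decodeFlags(float stored)  { return static_cast<int>(stored * 255.0f + 0.5f); }
```

After decoding, individual flags are tested with a bitwise AND, just like the `(2 & materialflags)` check in the lighting shaders.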
  23. Leadwerks uses a little trick to render selected items differently (this is relevant for the editor). If you look e.g. into the fragment shader of default.shader

      if (ex_selectionstate>0.0) materialflags += 2;

      so if an entity is selected, the corresponding material flag will be set. Now, in the lighting shaders (e.g. Lighting/directionallight.shader) you'll find the lines

      //Blend with red if selected
      if ((2 & materialflags)!=0)
      {
          sampleoutput = (sampleoutput + vec4(1.0,0.0,0.0,0.0))/2.0;
      }

      so red will be added to the output color.
  24. I'd be interested to know what the revenue split will be for the curated workshop items.
  25. First, you should write your color values to fragData0. Leadwerks uses a deferred renderer - that means you don't set the fragment color yourself, but rather write your data to several buffers in the shaders; afterwards the engine does the lighting computations and actually sets the fragment color. Unfortunately, most OpenGL tutorials that you find use a forward renderer, so you can't simply apply them 1:1 to Leadwerks, but rather have to understand the general concepts and adapt them to this engine. Then, you don't need the position uniform in the vertex shader. You create three vertices and add them to a triangle. When Leadwerks issues the draw call for your model, the vertex shader will be executed for every single vertex of your model, and its data like position and uv coordinates are automatically passed to the shader. I actually haven't thought through whether your code is equivalent to this, but the usual lines for the coordinate transformation are

      vec4 modelvertexposition = entitymatrix_ * vec4(vertex_position,1.0);
      gl_Position = projectioncameramatrix * modelvertexposition;

      where the first line transforms the vertex position from object space to world space, and the second line transforms that into camera (eye) space and applies the perspective projection.
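The two-step transform above can be illustrated on the CPU. This is a minimal sketch with my own types (column-major 4x4 matrices, as GLSL uses them); the matrix names mirror the shader uniforms but the code is otherwise just an illustration.

```cpp
#include <array>
#include <cassert>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<float, 16>; // column-major, like GLSL

// Multiply a column-major 4x4 matrix with a column vector.
Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{0.0f, 0.0f, 0.0f, 0.0f};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[col * 4 + row] * v[col];
    return r;
}

// A simple model ("entity") matrix: translation by (x, y, z).
Mat4 translation(float x, float y, float z) {
    return Mat4{1,0,0,0, 0,1,0,0, 0,0,1,0, x,y,z,1};
}
```

Transforming an object-space vertex by the entity matrix and then by the (here: identity) projection-camera matrix mirrors the two shader lines exactly.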