Everything posted by Josh

  1. Josh

    Voxel Code

    Well, let's see how Quake 2 looks when Nvidia releases the raytraced version in a few days. What I am seeing seems to give much better results than voxel approaches. Voxels are pretty good for blurry GI, but the reflections raytracing provides are much better. If I become confident that raytracing is actually going to catch on and work well, then it is probably better to work on that. Voxels have limitations: they don't really work with animated models, you have to figure out a really hacky way to account for the texture color at each voxel, they aren't high-resolution enough for good reflections, and they only extend a limited distance in front of the camera, so vast landscapes don't get any enhancement. There is also a delay while the voxel data is updated, so the GI sort of fades smoothly as the scene changes.

    So there's one approach that will still take a lot of work and will probably never produce results people are totally happy with, and another approach that produces perfect results, has a nice marketing campaign already in place, and requires Vulkan, which puts us in a much smaller group of companies that can even access it. It also meshes well with some of the CAD / aerospace work we do. Based on this, there's enough there for me to want to experiment with the technology and see if it is appropriate. That's as much as I know right now.
  2. Josh

    Voxel Code

    The way the routine works is it distributes a grid of instances across the scene with a random offset. There is no "exact" placement of one instance because there is no position stored, just a grid with a sort of random offset within the space the density value allows. When you click with the vegetation tool, you aren't saying "place one instance here" you are saying "this is an area in which trees are allowed to grow". I got the idea when I walked by an orchard and saw the trees spaced exactly in rows, and then noticed that in nature the same thing happens, although not as precisely, for the same reason...to maximize the sunlight and nutrients each tree gets.
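    The placement scheme described above can be sketched like this (a minimal stand-alone version; CellHash, InstancePosition, spacing, and jitter are illustrative names, not the engine's actual API):

```cpp
#include <cstdint>

// Cheap integer hash -> float in [0,1), deterministic per grid cell.
// Because the offset is derived from the cell coordinates, no per-instance
// position ever needs to be stored.
static float CellHash(int32_t x, int32_t z, uint32_t salt)
{
    uint32_t h = static_cast<uint32_t>(x) * 374761393u
               + static_cast<uint32_t>(z) * 668265263u + salt;
    h = (h ^ (h >> 13)) * 1274126177u;
    return static_cast<float>(h & 0xFFFFFFu) / 16777216.0f;
}

struct Vec2 { float x, z; };

// Position of the instance belonging to grid cell (cx, cz).
// 'spacing' is the grid cell size; 'jitter' in [0,1] scales the random
// offset, so each instance stays inside its own cell.
Vec2 InstancePosition(int32_t cx, int32_t cz, float spacing, float jitter)
{
    Vec2 p;
    p.x = (cx + jitter * CellHash(cx, cz, 0u)) * spacing;
    p.z = (cz + jitter * CellHash(cx, cz, 1u)) * spacing;
    return p;
}
```

    Any cell's instance can be recomputed at any time from the cell coordinates alone, which is why the painted vegetation areas only need to record where growth is allowed.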
  3. Josh

    Voxel Code

    If there was a big marketing campaign for the same exact thing everyone would be singing the praises of Nvidia TreeTek(R)
  4. Josh

    Voxel Code

    Trees are not stored in memory at all. They are totally dynamic, so you can have millions, and they are extremely fast.
  5. Is there a text file in the project?
  6. I now have different materials with textures working in Vulkan. The API allows us to access every loaded texture in any shader, although some Intel chips have limitations and will require a fallback. This is interesting because some of our design decisions in Leadwerks 4 were made because a shader could only access 16 textures. Terrain clipmaps were a good solution to that problem, but since the limitation no longer exists it may be time to revisit this design. We could, for example, implement a shader that can access any loaded texture and use a single RGBA texture to indicate which texture should be used at each terrain point. This would allow up to 256 different layers. Best of all, the number of texture layers would have no effect on speed; it would run at the same speed no matter how many textures you use. In fact, the whole idea of "layers" becomes obsolete and is not at all descriptive of what is happening. This would also eliminate the blurriness and weird filtering that can occur with clipmaps, and give us pixel-perfect terrain at any distance.

    An application has been uploaded for beta subscribers which will load and display any Leadwerks map with the new Vulkan renderer. Our new lighting system will work seamlessly with Vulkan, but before I continue with lighting I want to resolve some problems in the current scope of the renderer. I've worked out a huge piece of the core renderer design now, and I think things will get easier soon.
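    A minimal CPU-side sketch of the index-map idea, assuming one 8-bit material index per terrain point (TerrainIndexMap and its methods are illustrative names, not engine API):

```cpp
#include <cstdint>
#include <vector>

// One byte per terrain point selects which of up to 256 material textures
// to sample there. A shader would do the same lookup per pixel; here the
// lookup is modeled on the CPU.
struct TerrainIndexMap
{
    int size = 0;                  // terrain resolution (size x size)
    std::vector<uint8_t> indices;  // one material index per terrain point

    explicit TerrainIndexMap(int s) : size(s), indices(s * s, 0) {}

    void SetMaterial(int x, int y, uint8_t materialIndex)
    {
        indices[y * size + x] = materialIndex;
    }

    // Which material texture to sample at this terrain point.
    uint8_t GetMaterial(int x, int y) const
    {
        return indices[y * size + x];
    }
};
```

    Because every point is just an index, adding more materials costs nothing at runtime; there is no per-layer blending pass.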
  7. The entity pivot will be at the very bottom of the physics shape, since character models are usually created with the origin at the feet.
  8. Josh

    Voxel Code

    We will see. One thing that is pushing me towards raytracing is that my experience with Vulkan has shown me that branded technology always attracts more interest. Everything else aside, a technology with a logo someone else spent a lot of money to promote will attract more interest than something I invent myself, like our vegetation system, which I feel is totally revolutionary but not properly appreciated. In fact, our whole marketing message is going to switch from "I figured out a super efficient way to make graphics faster" to "I am using Vulkan to make graphics fast and I guess other devs are too stupid to use it right lol" even though Vulkan was a small part compared to what I implemented first in OpenGL. It's just more digestible if what I am telling people conforms to their previous expectations. I realized this when I went to GDC and when I talked about performance the first response was a big grin and the word "Vulkan?"
  9. Josh

    The Asset Loader Class

    It just runs the script.
  10. In Turbo (Leadwerks 5) all asset types have a list of asset loader objects for loading different file formats. There are a number of built-in loaders, but you can add your own by deriving the AssetLoader class or creating a script-based loader. Another new feature is that any scripts in the "Scripts/Start" folder get run when your game starts. Put those together, and you can add support for a new model or texture file format just by dropping a script in your project. The following script can be used to add support for loading RAW image files as a model heightmap.

    function LoadModelRaw(stream, asset, flags)

        --Calculate and verify heightmap size - expects 2 bytes per terrain point, power-of-two sized
        local datasize = stream:GetSize()
        local pointsize = 2
        local points = datasize / pointsize
        if points * pointsize ~= datasize then return nil end
        local size = math.sqrt(points)
        if size * size ~= points then return nil end
        local log2size = math.log(size) / math.log(2)
        if log2size ~= math.floor(log2size) then return nil end

        --Create model
        local modelbase = ModelBase(asset)
        modelbase.model = CreateModel(nil)
        local mesh = modelbase.model:AddMesh()

        --Build mesh from height data
        local x, y, height, v
        local textureScale = 4
        local terrainHeight = 100
        for x = 1, size do
            for y = 1, size do
                height = stream:ReadUShort() / 65536
                v = mesh:AddVertex(x, height * terrainHeight, y, 0,1,0, x/textureScale, y/textureScale, 0,0, 1,1,1,1)
                if x > 1 and y > 1 then
                    mesh:AddTriangle(v, v - size - 1, v - size)
                    mesh:AddTriangle(v, v - 1, v - size - 1)
                end
            end
        end

        --Finalize the mesh
        mesh:UpdateBounds()
        mesh:UpdateNormals()
        mesh:UpdateTangents()
        mesh:Lock()

        --Finalize the model
        modelbase.model:UpdateBounds()
        modelbase.model:SetShape(CreateShape(mesh))

        return true
    end

    AddModelLoader(LoadModelRaw)

    Loading a heightmap is just like loading any other model file:

    auto model = LoadModel(world, "Models/Terrain/island.r16");

    This will provide a temporary solution for terrain until the full system is finished.
  11. clear() will not delete objects that pointers stored in the vector point to. It also does not deallocate the vector's memory, which makes it ideal for something you fill and empty over and over again. (I believe shrink_to_fit() is the call that actually allocates a new, smaller block of memory.) In the new engine, which uses shared pointers, clearing a vector will cause the contained objects to be deleted if they are not being used anywhere else.
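    A small stand-alone sketch of that behavior in standard C++ (Widget and SurvivesClear are illustrative names):

```cpp
#include <memory>
#include <vector>

// clear() destroys the vector's elements but does not release its capacity,
// so a container you fill and empty repeatedly avoids reallocation.
// With raw pointers, clear() never deletes the pointed-to objects.
// With shared_ptr elements, clear() drops one reference; the object is
// deleted only if nothing else still owns it.

struct Widget { int value = 0; };

// Returns true if the widget survived clearing the vector that held it.
bool SurvivesClear(std::shared_ptr<Widget> outsideOwner)
{
    std::vector<std::shared_ptr<Widget>> v;
    v.push_back(outsideOwner ? outsideOwner : std::make_shared<Widget>());
    std::weak_ptr<Widget> watch = v.back();
    v.clear();               // element destroyed, capacity kept
    return !watch.expired(); // alive only if an outside owner remains
}
```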
  12. Josh

    Voxel Code

    I've decided not to continue my work on voxel global illumination. I was pretty far along, but other, better techniques (raytracing) are now available. This library can be used to turn mesh data into voxels: voxelizer.zip. I was using a geometry shader to visualize the data: voxel.geom. Here is my code for using it:

    std::vector<float> Mesh::BuildVoxels(const Vec3& scale, const float voxelsize, AABB& bounds)
    {
        std::vector<float> cloud;
        vx_mesh_t mesh;
        mesh.nvertices = CountVertices();
        if (mesh.nvertices == 0) return cloud;

        std::vector<float> positions;
        positions.resize(CountVertices() * 3);
        Vec3 pos;
        for (int v = 0; v < CountVertices(); ++v)
        {
            pos = GetVertexPosition(v) * scale + voxelsize * 0.5f; // add this to align voxels to grid
            positions[v * 3 + 0] = pos[0];
            positions[v * 3 + 1] = pos[1];
            positions[v * 3 + 2] = pos[2];
        }
        mesh.vertices = (vx_vertex*)&positions[0];

        std::vector<unsigned int> indices;
        indices.reserve(CountIndices());
        for (int t = 0; t < CountTriangles(); ++t)
        {
            indices.push_back(GetTriangleVertex(t, 0));
            indices.push_back(GetTriangleVertex(t, 1));
            indices.push_back(GetTriangleVertex(t, 2));
        }
        mesh.nindices = indices.size();
        mesh.indices = &indices[0];
        mesh.normalindices = nullptr;
        mesh.colors = nullptr;

        auto pointcloud = vx_voxelize_pc(&mesh, voxelsize, voxelsize, voxelsize, voxelsize * 0.01f);
        cloud.resize(pointcloud->nvertices * 3);
        for (int v = 0; v < pointcloud->nvertices; ++v)
        {
            cloud.at(v * 3 + 0) = pointcloud->vertices[v].x - voxelsize * 0.5f;
            cloud.at(v * 3 + 1) = pointcloud->vertices[v].y - voxelsize * 0.5f;
            cloud.at(v * 3 + 2) = pointcloud->vertices[v].z - voxelsize * 0.5f;
        }
        vx_point_cloud_free(pointcloud);
        return cloud;
    }
  13. BankStream is the easy way to use banks. You just create it and read/write to and from it with the regular stream commands, and the bank will be automatically resized when needed.
  14. Josh

    Leadwerks and C#

    I don't see any technical problems, it's just a matter of maintaining a lot of wrapper code, which is why an automated approach would be best. I took a look at SWIG but could not immediately make heads or tails of it.
  15. Josh

    Leadwerks and C#

  15. I think in the new engine especially, C# will be a viable option. Your game code gets 16 milliseconds to execute independently from the rendering thread, so there are no worries about performance. When using the auto keyword for variables, the only difference between C# and C++ code will be replacing "->" with ".". In the new engine we are also relying more on standard containers, because my new Lua binding code handles them better. There is no CountChildren() or CountVertices(); there is just a vector of objects that contains those things:

    for (int n = 0; n < entity->kids.size(); ++n)
    {
        entity->kids[n]->Hide();
    }

    The linked page says container support is experimental. Maybe I should inquire with the developer; if it will work in the near future, I can use vectors more and avoid unsupported container types: https://github.com/mono/CppSharp/blob/master/docs/UsersManual.md#containers
  16. Having completed a hard-coded rendering pipeline for one single shader, I am now working to create a more flexible system that can handle multiple material and shader definitions. If there's one way I can describe Vulkan, it's "take every single possible OpenGL setting, put it into a structure, and create an immutable cached object based on those settings that you can then use and reuse". This design is pretty rigid, but it's one of the reasons Vulkan is giving us an 80% performance increase over OpenGL. Something as simple as disabling backface culling requires recreation of the entire graphics pipeline, and I think this option is going away. The only thing we use it for is the underside of tree branches and fronds, so that light appears to shine through them, but that is not really correct lighting: if you shine a flashlight on the underside of a palm frond, it won't brighten the surface if we are just showing the result of the backface lighting. A more correct way to do this would be to calculate the lighting for the surface normal and for the reverse vector, and then add the results together for the final color. To give the geometry faces in both directions, a plugin could be added to the model editor that adds reverse triangles for all the faces of a selected part of the model. At first the design of Vulkan feels restrictive, but I also appreciate the fact that it has a design goal other than "let's just do what feels good".

    Using indirect drawing in Vulkan, we can create batches of batches, sorted by shader. This feature is also available in OpenGL, and is in fact used in our vegetation rendering system. Of course, the code for all this is quite complex. Draw commands, instance IDs, material IDs, entity 4x4 matrices, and material data all have to be uploaded to the GPU in memory buffers, some of which are more or less static, some of which are updated each frame, and some of which are updated for each new visibility set. It is complicated stuff, but after some time I was able to get it working. The screenshot below shows a scene with five unique objects being drawn in a single draw call, accessing two different materials with different diffuse colors. That means an entire complex scene like The Zone will be rendered in one or just a few passes, with the GPU treating all geometry as if it were a single collapsed object, even as different objects are hidden and shown. Everyone knows that instanced rendering is faster than drawing unique objects, but at some point the number of batches can get high enough to be a bottleneck. Indirect rendering batches the batches to eliminate this slowdown. This is one of the features that will help our new renderer run an order of magnitude faster, for high-performance VR and regular 3D games.
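    A CPU-side sketch of the "batches of batches" idea, using a stand-in struct that mirrors the layout of Vulkan's VkDrawIndexedIndirectCommand so it runs without a GPU (Batch and BuildIndirectCommands are illustrative names, not the engine's API):

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Stand-in mirroring the field layout of VkDrawIndexedIndirectCommand.
struct DrawIndexedIndirectCommand
{
    uint32_t indexCount;
    uint32_t instanceCount;
    uint32_t firstIndex;
    int32_t  vertexOffset;
    uint32_t firstInstance;
};

struct Batch { int shaderID; uint32_t indexCount; uint32_t instances; };

// Groups batches by shader and packs one indirect command per batch,
// assigning contiguous instance ID ranges into the shared per-instance
// data buffer (matrices, material IDs, and so on).
std::map<int, std::vector<DrawIndexedIndirectCommand>>
BuildIndirectCommands(const std::vector<Batch>& batches)
{
    std::map<int, std::vector<DrawIndexedIndirectCommand>> commands;
    uint32_t firstInstance = 0; // offset into the per-instance buffer
    for (const auto& b : batches)
    {
        commands[b.shaderID].push_back({ b.indexCount, b.instances, 0, 0, firstInstance });
        firstInstance += b.instances;
    }
    return commands;
}
```

    Each shader group's command vector would then be uploaded to a GPU buffer and submitted with a single vkCmdDrawIndexedIndirect call, so the number of CPU draw calls stays tiny no matter how many batches there are.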
  17. One of the nice things about the Steam P2P networking is the game is not dependent on a central server always being maintained.
  18. Josh

    Textures in Vulkan

    Not to be outdone by Nintendo, I have added the Vulkan Fog of Death (TM).
  19. Josh

    Textures in Vulkan

    Yes. Once I get the basics done all the work I did in OpenGL will transfer over easily because the same shaders can be used.
  20. Graphics pipelines in Vulkan are tied to both a shader and a context. I want to store a map of these in the context, using the shader as the map key. I can use a weak_ptr to the shader for the key so it does not create a circular reference. That is what the third template parameter in the map definition below does:

    class Context
    {
    public:
        std::map<std::weak_ptr<Shader>, std::shared_ptr<Pipeline>, std::owner_less<std::weak_ptr<Shader>>> pipelines;

        void Draw(std::shared_ptr<Shader> shader)
        {
            auto iter = pipelines.find(shader);
            if (iter == pipelines.end())
            {
                iter = pipelines.insert({ shader, CreatePipeline(shader) }).first;
            }
            auto pipeline = iter->second;
            //Begin drawing...
        }
    };

    When the shader goes out of scope and is deleted, the associated pipelines in each context will still be stored in the map. Is there any way I can set this up so that deleting the shader automatically clears all the map values associated with that key, in each context? The only solution I can think of is to loop through a list of all contexts and remove the values in the shader destructor.
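    For reference, a minimal stand-alone version of the weak_ptr-keyed map pattern, with trivial stand-ins for Shader and Pipeline; the pruning function is one possible approach (lazy cleanup on lookup or per frame, instead of notifying every context from the destructor), not a definitive answer:

```cpp
#include <cstddef>
#include <map>
#include <memory>

struct Shader {};   // stand-ins for the engine classes
struct Pipeline {};

// owner_less orders weak_ptr keys by control block, so expired keys still
// compare consistently and can be found and erased safely.
using PipelineMap = std::map<std::weak_ptr<Shader>, std::shared_ptr<Pipeline>,
                             std::owner_less<std::weak_ptr<Shader>>>;

// Lazy cleanup: erase entries whose shader key has expired.
// Returns how many entries were removed.
std::size_t PruneExpired(PipelineMap& pipelines)
{
    std::size_t removed = 0;
    for (auto it = pipelines.begin(); it != pipelines.end(); )
    {
        if (it->first.expired()) { it = pipelines.erase(it); ++removed; }
        else ++it;
    }
    return removed;
}
```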
  21. Josh

    Textures in Vulkan

    What exactly are "PBR files"?
  22. Josh

    Textures in Vulkan

    I finally got a textured surface rendering in Vulkan, so we have now officially surpassed StarFox (SNES) graphics. Although StarFox did have distance fog. 😂

    Vulkan uses a sort of "baked" graphics pipeline. Each surface you want to render uses an object you have to create in code that contains all material, texture, shader, and other settings. There is no concept of "just change this one setting" like in OpenGL. Consequently, the new renderer may be a bit more rigid than Leadwerks 4, in the interest of speed. For example, the idea of 2D drawing commands you call each frame is absolutely a no-go. (This was likely anyway, due to the multithreaded design.) A better approach would be persistent 2D primitive objects you create and destroy. I won't lose any sleep over this because our overarching design goal is performance.

    Right now I have everything hard-coded and am using only one shader and one texture, in a single graphics pipeline object. Next I need to make this more dynamic, so that a new graphics pipeline can be created whenever a new combination of settings is needed. A graphics pipeline object corresponds pretty closely to a material. I am leaning towards storing a lot of settings we presently keep in texture files in material files instead. This also resolves the problem of storing these extra settings in a DDS file: textures become more of a dumb image format, while material settings are used to control them. Vulkan is a "closer to the metal" API, and that may pull the engine in that direction a bit. That's not bad.
    I like using JSON data for file formats, so the new material files might look something like this:

    {
        "material": {
            "color": "1.0, 1.0, 1.0, 1.0",
            "albedoMap": {
                "file": "brick01_albedo.dds",
                "addressModeU": "repeat",
                "addressModeV": "repeat",
                "addressModeW": "repeat",
                "filter": "linear"
            },
            "normalMap": {
                "file": "brick01_normal.dds",
                "addressModeU": "repeat",
                "addressModeV": "repeat",
                "addressModeW": "repeat",
                "filter": "linear"
            },
            "metalRoughnessMap": {
                "file": "brick01_metalRoughness.dds",
                "addressModeU": "repeat",
                "addressModeV": "repeat",
                "addressModeW": "repeat",
                "filter": "linear"
            },
            "emissiveMap": {
                "file": "brick01_emissive.dds",
                "addressModeU": "repeat",
                "addressModeV": "repeat",
                "addressModeW": "repeat",
                "filter": "linear"
            }
        }
    }

    Of course getting this to work in Vulkan required another mountain of code, but I am starting to get the hang of it.
  23. Okay, please wait for an update. It might even be the case that the new Rift is not supported in OpenVR yet, so you might also want to check with Valve on GitHub.