Blog Comments posted by Josh
-
7 hours ago, Thirsty Panther said:
terrain->SetElevation(x, y, height);
Are the values of these in metres? i.e. is the distance from 1,0 to 2,0 one metre?
By default, a terrain's size is (resolution.x, 1.0, resolution.y), so one terrain point occurs every meter, and the maximum height is 1.0. You can scale the terrain however you want to adjust the maximum height. So this would give you a terrain that is 512x512 meters, with a max height of 100 meters:
auto terrain = CreateTerrain(world, 512, 512);
terrain->SetScale(1, 100, 1);
When you call SetElevation(), the vertical scale of the terrain is taken into account, so the height you set is in meters. If you call SetHeight() instead, then the range is between 0.0 and 1.0.
I tried some other designs, but this is the only one I could consistently remember when working with it, so I think it is the most intuitive.
-
3 hours ago, Thirsty Panther said:
Ooooh caves!
You could also take another terrain and turn it upside down to make the ceiling of a cave level.
-
The documentation display just does a search and replace when the file is loaded, so I can switch it back and forth very easily. These don't appear to be working in Visual Studio yet.
-
Here is a rounded arch. The tessellation settings were deduced from the mesh. The only part it missed is that center piece. There's no gap there; it just forms a straight edge along the center. There's not actually any indication in the mesh that the part is supposed to be rounded, so I am leaning towards making the automatic calculation conservative, with anything beyond that manually adjusted on a per-polygon basis. The mesh is only 34 quads, so manually adjusting some faces shouldn't be a problem.
-
Here we have an arch shape that has tessellation settings automatically deduced from the mesh topology. Notice that all faces tessellate in only one direction. I estimate this mesh is about 1,500 quads. If it were tessellated on two axes, like the arch above, it would be around 25,000 quads (50,000 triangles). So managing the tessellation carefully should make a huge difference in performance.
-
8 hours ago, IceBurger said:
What if the "extra faces" are needed for adding detail to the geometry?
If a displacement map is in use, then the face is fully tessellated. I think I can actually do some texture reads to figure out whether tessellation is needed or not in each area, so you should be able to do things like have just a few bricks sticking out of a wall, and only the area around the bricks gets tessellated.
-
This is so cool. This mesh was processed automatically after creating a box, so the tessellation is curving the right way on the flat faces. I know now how to write a better algorithm for processing the mesh, and it will eliminate some of the extra polys you see here...
If I can just make a bevel work on that flat face, I will be totally satisfied with this. Real-life industrial objects tend to be simple variations of bevelled surfaces and cylinders because they are still limited by the manufacturing process.
-
@Genebris Maybe there is a problem with the lighting / normals in that tess shader. It also looks like the displacement map is barely being used, like the bricks are not sticking out very far.
-
If you create a bent cylinder like this:
auto model = CreateCylinder(world, 5, 10, 8, 3, 1);
float a;
auto piv = CreatePivot(NULL);
piv->SetPosition(10, 0, 0);
for (auto mesh : model->lods[0]->meshes)
{
    for (auto& vertex : mesh->vertices)
    {
        vertex.position.y += 5.0f;
        a = -90.0f * (vertex.position.y / 10.0f);
        piv->SetRotation(0, 0, 0);
        vertex.position.y = 0.0f;
        vertex.position = TransformPoint(vertex.position, model, piv);
        piv->SetRotation(0, 0, a);
        vertex.position = TransformPoint(vertex.position, piv, model);
        vertex.normal = TransformNormal(vertex.normal, piv, model);
        vertex.tangent = TransformNormal(vertex.tangent, piv, model);
        vertex.bitangent = TransformNormal(vertex.bitangent, piv, model);
        if (vertex.tessnormal != Vec3(0.0f)) vertex.tessnormal = TransformNormal(vertex.tessnormal, piv, model);
    }
    mesh->Finalize();
    model->UpdateBounds();
}
Here is the result. Not bad. I think the quad interpolation needs a little improvement, but it's doing what it should do:
-
Here it is using quads for the sides, creating only the polygons necessary to represent the detailed shape:
This will need further development, but it's extremely promising. This could provide unlimited detail independent of view location, and it does it without requiring more complex models. You really could model stuff at about this complexity and let the system upsample all the curves automatically. Even if you have to go through and fine-tune some vertices, it's not difficult because you can model things at such a simple resolution. This totally simplifies hard surface modeling:
It's the fulfillment of the promise that hardware tessellation initially failed to deliver. It kind of reminds me of "patches" for curved surfaces in the Quake 3 level editing tools.
I am wrapping this up now, but I think this feature will definitely play a big role in the future.
-
We can also be smarter about the distribution of polygons, so tessellation doesn't need to create a lot of unnecessary polygons. The cylinder cap uses the minimal number needed to seal the gaps. If I used a separate mesh made up of quads for the sides, I could reduce the tessellation in one direction, so it would just create a lot of tall skinny polygons around the sides:
-
I've investigated these thoroughly, and my conclusion right now is that C++20 modules are completely broken in Visual Studio. If you put a module in a static library and then import that library into another project, you will get an undefined reference linker error for even the simplest function if it isn't defined in the .ixx file. If you just declare the function (like a header), then it links and runs fine. I even downloaded the 2022 preview version, and the behavior is exactly the same.
It does look like modules do exactly what I want, Microsoft just doesn't have them working right yet.
-
To build C++ modules, all exported classes and functions go in a single .ixx file like this:
#include "UltraEngine.h"

export module UltraEngine;

namespace UltraEngine
{
    //Export classes
    export Entity;
    export World;
    export Display;
    export Window;

    //Export enums
    export WindowStyles;

    //Export functions
    export std::shared_ptr<World> CreateWorld();
    export std::vector<std::shared_ptr<Display> > GetDisplays();
    export shared_ptr<Window> CreateWindow(const WString&, const int, const int, const int, const int, shared_ptr<Display>, const WindowStyles);

    //Export overloaded function
    export shared_ptr<Window> CreateWindow(const WString&, const int, const int, const int, const int, shared_ptr<Window>, const WindowStyles);
}
It looks like default arguments are supported, so everything looks good:
-
I think I messed up a bit. C++ modules are probably a better way to control what gets exposed. Instead of this:
#include "UltraEngine.h"

using namespace UltraEngine;

int main(int argc, const char* argv[])
{
    ...
Your code will look like this:
import UltraEngine;

using namespace UltraEngine;

int main(int argc, const char* argv[])
{
    ...
And there is no need for any header files for the engine library at all. Hopefully. These are quite new and I'm not completely sure how they work.
-
@klepto2 Communication between threads is very interesting. Each system has a command buffer you add instructions to like this:
commandbuffer->AddCommand(std::bind(&RenderMaterial::SetColor, rendermaterial, color));
It can also use lambda functions, but I try to avoid them because they are difficult to debug.
The World::Render method signals a semaphore that tells the rendering thread "another batch of commands is ready whenever you are". Then the rendering thread executes them all like this:
for (const auto& func : commands) { func(); }
That's how every system works, rendering, physics, animation, navigation, etc.
It would be entirely possible to do something like this:
commandbuffer->AddCommand(std::bind(&WaveWorks::Initialize, vkinstance)); commandbuffer->AddCommand(std::bind(&WaveWorks::Update, waveworksinstance));
And that code would get executed in the rendering thread, along with whatever else was going on.
-
12 hours ago, reepblue said:
Will this update trickle down to the app kit? Interested in giving this a peek.
The full engine will be out soon enough, with all the functionality of UAK included.
12 hours ago, klepto2 said:
Keeping everything structured and separate with a public-only API is a really good step forward. This will make access from other languages much easier, as you just need to target the public interface. But you need to keep in mind that the integration of some third-party code, especially in Vulkan, may need access to some low-level details, e.g. the Vulkan context; these pointers should be made public in some way as well. Otherwise I really like the way UltraEngine is structured.
Each threaded system has its own API, and these are usually not so friendly to use. The commands are like RenderNode::SetOrientation(matrix, position, quaternion, scale, color) and have strict rules on how they should be used. And they are also in a bit of flux.
Any custom Vulkan code would need to be run in a callback, since all rendering is taking place asynchronously. It will be interesting to see what kinds of things advanced devs would like to do with it once they start using it.
-
Doc URLs have been flattened a bit:
https://www.ultraengine.com/learn/CreateCamera
Getting rid of the language subfolder in the URL is better for SEO and simplifies it a bit, since I am not going to try to make different products based off the same API.
-
I will get a better measure of performance if I test compute shaders with post-processing effects. It's a little difficult to get a good reading with this because you have several things going on.
-
It looks like I overlooked the performance cost of blurring the shadow image. The jury is still out on whether the compute shader with imageStore() is faster than a fragment shader, but I think VSMs are going to be the high-quality option. Regular depth shadow maps have almost no performance cost on low-end hardware but don't look as nice.
The final scene render with VSMs is in fact faster, so if the shadow is static the VSM comes out ahead, but redrawing a variance shadow map incurs a pretty significant cost, at least on low-end hardware.
Finalizing Terrain
in Development Blog
A blog by Josh in General
I don't know exactly what they're doing, but in Ultra a material uses a shader family instead of individual shaders. This is a JSON file that specifies variations of the shader for all the different rendering options it may use:
https://www.ultraengine.com/community/blogs/entry/2443-shader-families/
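As a rough illustration of the idea (not the actual file format; the key names, variant names, and shader paths here are all invented), a shader family file might map rendering options to shader stage combinations like this:

```json
{
    "variants":
    {
        "default":
        {
            "vertex": "Shaders/PBR.vert.spv",
            "fragment": "Shaders/PBR.frag.spv"
        },
        "animated":
        {
            "vertex": "Shaders/PBR_anim.vert.spv",
            "fragment": "Shaders/PBR.frag.spv"
        },
        "tessellation":
        {
            "vertex": "Shaders/PBR.vert.spv",
            "control": "Shaders/PBR.tesc.spv",
            "evaluation": "Shaders/PBR.tese.spv",
            "fragment": "Shaders/PBR.frag.spv"
        }
    }
}
```

See the linked blog entry above for the real format.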