Everything posted by Josh

  1. ChatGPT provides tips on how to cook humans. Basically you can get it to say anything if you add the words "ethical" and "consensual".
  2. The fix will be in the next update for 1.0.2.
  3. Yes, Leadwerks does this. It just requires a displacement map. I have not implemented this in Ultra yet because I want to have an exact definition of how it works.
  4. Well, we could multiply the final output of the terrain system by the overlay color. It might be a good idea. However, you might also want to consider using the overlay color to control which material appears at which point, and its strength, as you did above. I am not sure which will work out better. I feel like an assigned material would be better...if it looks good. I expect to be trying this fairly soon with my lunar visualization stuff.
  5. How would you suggest blending it?
  6. One of the issues I had was that everything tended to form patterns at 0/45/90/135... degrees, because it only read exact pixel values. Ultra has the Pixmap::Sample method, which uses bilinear filtering, so you can take a sample from any angle around the pixel you are working on and get an accurate result.
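    For anyone curious what the bilinear part buys you, here is the general technique in stand-alone form (this is the textbook version, not Ultra's exact implementation):

        #include <algorithm>
        #include <cmath>
        #include <vector>

        // Sample a single-channel image at fractional coordinates (x, y)
        // by blending the four surrounding pixels. The result varies
        // smoothly, so samples taken at arbitrary angles around a pixel
        // don't snap to the 0/45/90/135-degree patterns that exact
        // lookups produce.
        float SampleBilinear(const std::vector<float>& pixels, int width, int height, float x, float y)
        {
            int x0 = std::clamp(int(std::floor(x)), 0, width - 1);
            int y0 = std::clamp(int(std::floor(y)), 0, height - 1);
            int x1 = std::min(x0 + 1, width - 1);
            int y1 = std::min(y0 + 1, height - 1);
            float fx = x - std::floor(x);
            float fy = y - std::floor(y);
            float top = pixels[y0 * width + x0] * (1.0f - fx) + pixels[y0 * width + x1] * fx;
            float bottom = pixels[y1 * width + x0] * (1.0f - fx) + pixels[y1 * width + x1] * fx;
            return top * (1.0f - fy) + bottom * fy;
        }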
  7. I am kind of curious to see what these look like. Looks like NASA uses the LAS file format for LIDAR data. https://gliht.gsfc.nasa.gov/index.php?section=34
  8. The reason I like DDS is because it includes the width and height information, and it specifies the pixel format. A 16-bit heightmap, for example, could consist of half-floats or unsigned shorts. With a raw file there's no way of knowing what format it is.
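    For reference, this is roughly how a DDS file begins (see Microsoft's DDS programming guide for the authoritative definition); the dimensions and pixel format travel with the data:

        #include <cstdint>

        // Simplified DDS header. A loader can read this and know the
        // image size and whether the pixels are half-floats, unsigned
        // shorts, etc. A raw dump carries none of this information.
        struct DDSPixelFormat
        {
            uint32_t size;     // always 32
            uint32_t flags;
            uint32_t fourCC;   // e.g. "DX10" for the extended header
            uint32_t rgbBitCount;
            uint32_t rMask, gMask, bMask, aMask;
        };

        struct DDSFileStart
        {
            uint32_t magic;    // "DDS "
            uint32_t size;     // always 124
            uint32_t flags;
            uint32_t height;   // stored in the file
            uint32_t width;    // stored in the file
            uint32_t pitchOrLinearSize;
            uint32_t depth;
            uint32_t mipmapCount;
            uint32_t reserved1[11];
            DDSPixelFormat pixelFormat;
            uint32_t caps[4];
            uint32_t reserved2;
        };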
  9. r16 is not an image format. You can do this: pixmap->pixels->Save("Map.r16"); BTW, DDS files do support a variety of formats including floats and half-floats. I've been working with these lately and I think these are my preference. There may be some errors in the current DDS saving code for some formats, though.
  10. You want more, actually. I would probably split them up into a set of 1024x1024 tiles. It's not important to create these now; I am more just investigating to answer some questions for future development.
  11. Bro, what did you do??! Wait, this is a very simple error on my part. The error checking has an error. 🤪
  12. L3DT's activation system seems to be broken. Does anyone have any REALLY big 16-bit heightmaps, like maybe 32768 x 32768 or bigger?
  13. I will add this in an update to Ultra Engine 1.0.2 this week. It won't be difficult. Vertex colors are supported. I'm assuming this will be just one big mesh with millions of vertices. I honestly don't know if Ultra Engine will be much faster than Unity in this situation, because the bottleneck will probably be the vertex pipeline, and both engines are probably running at the hardware's max speed under those conditions. This is actually the exact reason why I keep telling everyone it is important to keep the vertex structure as compact as possible.
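    To illustrate (this is not necessarily Ultra's actual layout), compare a naive all-float vertex with a packed one:

        #include <cstdint>

        // Naive layout: 15 floats = 60 bytes per vertex.
        struct FatVertex
        {
            float position[3];
            float normal[3];
            float tangent[3];
            float texcoords[2];
            float color[4];
        };

        // Packed layout: 28 bytes per vertex. Normals and tangents fit
        // in signed bytes, texcoords in half-floats, color in four bytes.
        struct PackedVertex
        {
            float    position[3];  // 12 bytes
            int8_t   normal[4];    // 4 bytes (xyz plus one byte of padding)
            int8_t   tangent[4];   // 4 bytes
            uint16_t texcoords[2]; // 4 bytes, half-float bit patterns
            uint8_t  color[4];     // 4 bytes
        };

    With millions of vertices, cutting the vertex size in half roughly halves the memory the vertex pipeline has to move every frame.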
  14. It's not currently supported, but it would not be hard to add: https://www.ultraengine.com/learn/CreateMesh?lang=cpp Can I ask what you want to use it for?
  15. 3ds max project files for testing color precision for Lunar visualization: lunarhdrtest.zip
  16. Comparison of HDR and LDR textures generated from original NASA data, for rendering the Lunar surface.
  17. There will be an update for 1.0.2 later this week with fixes and some new features for image processing.
  18. It looks like it has trouble coming to rest, since the surface is not flat. I suppose no matter which way the cube falls, it's never going to be on a flat surface. In any case, I will check this out. You might also check out the cube sphere, as this will provide a more even distribution of polygons: https://www.ultraengine.com/learn/CreateCubeSphere?lang=cpp
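    As an aside, the reason the cube sphere's triangles are distributed more evenly is that it is just a subdivided cube with every point pushed out onto the sphere by normalizing. A minimal sketch (Vec3 here is a stand-in type, not the engine's):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Project a point on the unit cube's surface onto the unit
        // sphere. Subdividing each cube face into a grid and projecting
        // the points like this avoids the pole pinching you get with a
        // latitude/longitude sphere.
        Vec3 CubeToSphere(const Vec3& p)
        {
            float len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
            return { p.x / len, p.y / len, p.z / len };
        }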
  19. I left this running for several hours and it only got through about 5000 out of 98,000 columns...which means it would take about three days to process! Not good. I tried enabling write caching on my HDD, but it actually runs at the same speed as the USB drive (which Windows says does not allow write caching), after some long initial pauses. I think reading and writing to disk at the same time is probably just a really bad idea, even if they are two different disks.

    The reason I am doing this is because the image size is irregular and I want to resize the whole thing to power-of-two, which it is very close to. I can't really split the image up into tiles at this resolution because it doesn't divide evenly. Well, maybe this one could be split into 12 x 4 tiles, but other images might not work so well, and it adds another layer of confusion.

    In order to maintain accuracy I think I will need to implement a Pixmap::Blit method that can optionally accept floating point coordinates that act like texture coordinates. We already have Pixmap::Sample and that will help a lot. That way I can create small tile images in system memory, one at a time, and blit the big pixmap stored in virtual memory onto the tiles, without creating any distortion in the image data. When you finish each tile you save it to a file and then move on to the next area, but you aren't constantly switching between read and write to process each pixel.

    I'm going to write a blog about this but I want to keep my notes here so I can go back and view it later.
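    Roughly the shape of the loop I have in mind, with Sample doing the bilinear work (treat the method and member names here as a sketch of a possible API, not finished code):

        // Walk the output image in tile-sized chunks. For each tile,
        // bilinearly sample the huge source pixmap at fractional
        // coordinates, like texture coordinates, save the tile, and
        // move on. Reads and writes never interleave per-pixel.
        void SaveTiles(shared_ptr<Pixmap> source, const int outwidth, const int outheight, const int tilesize)
        {
            for (int ty = 0; ty < outheight / tilesize; ++ty)
            {
                for (int tx = 0; tx < outwidth / tilesize; ++tx)
                {
                    auto tile = CreatePixmap(tilesize, tilesize, source->format);
                    for (int y = 0; y < tilesize; ++y)
                    {
                        for (int x = 0; x < tilesize; ++x)
                        {
                            // Map the output pixel center to a fractional source position.
                            float sx = (tx * tilesize + x + 0.5f) / outwidth * source->size.x;
                            float sy = (ty * tilesize + y + 0.5f) / outheight * source->size.y;
                            tile->WritePixel(x, y, source->Sample(sx, sy));
                        }
                    }
                    tile->Save("tile_" + String(tx) + "_" + String(ty) + ".dds");
                }
            }
        }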
  20. I am working with a 128 GB USB drive now. These sure have come down in price!

    Strange results when I try resizing a 109440 x 36482 image using StreamBuffers. I am printing the time elapsed for every 1000 pixels that get processed. The routine is doing reads from one file and writes to another. CPU usage dropped to zero and it just hung. The longest pause was one full minute! After that it started going fast again, and keeps buzzing along happily at 1000 pixels every 30 milliseconds:

    Resizing albedo...
    30 38 30 30 30 31 161 65629 1355 1363 1400 1367 1409 1385 9208 16202 1355 634 5705 11285 8836 3867 562 5278 8825 8839 7043 5396 8838 11279 8636 1921 489 63 63 62 64 72 82 2673 293 340 327 322 2774 360 328 357 356 2758 314 306 332 348 2748 309 322 297 333 2785 316 313 308 304 2734 204 29 28 30 29 28 29 29 28 29 31 28 29 29 29 28 29 31 29 29 ...
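    The timing itself is nothing fancy, just a millisecond print every 1000 pixels, along these lines (ProcessPixel is a stand-in for the real per-pixel read/write):

        #include <chrono>
        #include <cstdint>
        #include <iostream>

        void ProcessPixel(uint64_t i); // stand-in for the real work

        // Print the milliseconds each batch of 1000 pixels takes. The
        // stalls show up as occasional huge numbers in the output.
        void TimedLoop(uint64_t totalpixels)
        {
            auto start = std::chrono::steady_clock::now();
            for (uint64_t i = 0; i < totalpixels; ++i)
            {
                ProcessPixel(i);
                if ((i + 1) % 1000 == 0)
                {
                    auto now = std::chrono::steady_clock::now();
                    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(now - start).count() << "\n";
                    start = now;
                }
            }
        }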
  21. Because the culling iterates through the hierarchy and stops if shadows are disabled. This is the intended behavior.
  22. Did you enable shadows on the pivot?
  23. This is an interesting situation. The simple answer is you are moving too quickly, or rather accelerating too quickly. Instantaneous teleportation of one meter is not realistic.

    The more complicated answer is that the engine doesn't know a shadow needs to be updated until the rendering call begins, but at that point it already has the visibility list it is going to use, which does not include the light casting that shadow, since the culling thread didn't include it: the shadow had not yet been invalidated when the culling process began. Therefore, the renderer has to wait until the next visibility list is received before the shadow gets updated.

    I suppose a persistent visibility list for each shadow might be a way of dealing with that, but under normal conditions an object will begin accelerating more gradually, which will trigger a shadow update without a big mismatch between the visible geometry and the shadow map info. A persistent list would also have the unwanted side effect of preventing resources from being freed from memory until all shadows are refreshed.
  24. We have plenty of space for images. JPEG is best for screenshots.
  25. Since all physics commands are executed on a separate thread, that would not really work.