Rick

Nanite technology


The UE5 demo dropped. It talks about Nanite technology. The idea seems to be that your models can be ultra detailed: no funny business like normal maps or anything, just the straight-up source model with millions of polygons from your modelling package, and the technology automatically adjusts the level of detail in real time, sort of like tessellation, by streaming data from the model as needed. They say it's sort of like mipmaps. Just curious about Josh's take on something like this. It would be pretty cool to set an overall per-frame polycount budget and have a system like this automatically work out how much data from the visible high-detail source models gets rendered in real time.

 

From a long-term perspective, what's interesting is if that frame polycount budget were dynamic based on the graphics card in question: then almost any computer could run almost any game made this way, since the models would be adjusted to the power of the card. On the other side, as graphics card technology gets better, your game from "10 years ago" would automatically look better without any changes from the devs, because the engine would detect that the card can handle more detail and raise the polycount budget!

 

Curious on opinions about this idea of streaming in highly detailed models and applying compression algorithms to them in real time to determine the level of detail to send to the renderer.
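To make the budget idea concrete, here's a toy sketch (my own invention, not anything from the demo) of spending a per-frame triangle budget across visible models, with nearer models getting most of it. All names and the 1/distance² weighting are made-up assumptions:

```python
# Hypothetical sketch: spend a per-frame triangle budget across visible models,
# giving nearer models more detail. Not how Nanite actually works internally.

def pick_lod_levels(models, budget):
    """models: list of (full_tri_count, distance); returns triangles per model."""
    # Weight each model by 1/distance^2 so close models get most of the budget.
    weights = [1.0 / (d * d) for _, d in models]
    total = sum(weights)
    chosen = []
    for (full, _), w in zip(models, weights):
        share = int(budget * w / total)
        chosen.append(min(full, share))  # never exceed the source model's detail
    return chosen

# A stronger GPU just means a bigger budget: the same assets scale up for free,
# which is the "games age well automatically" idea from above.
levels = pick_lod_levels([(1_000_000, 2.0), (1_000_000, 20.0)], budget=500_000)
```

The dynamic-budget idea then reduces to benchmarking the card once and feeding a bigger or smaller `budget` into the same function.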


I just watched it too.  It looks amazing, of course, but they've never lacked for presentation.  I didn't catch that they don't use normal maps, though.  Does the engine generate them on the fly for the reduced-triangle models, or are they just not needed somehow?

I do like the thought you mentioned of games looking good even 10 years from now, being more age-resistant, and their talk of taking model/LOD optimization out of the pipeline.  That's a very powerful concept.  Someone in the comments did mention how much bigger file sizes must be to accommodate this, but I don't know how that would work either.


It doesn't seem to use normal maps at all. It just compresses the highly detailed model data (triangles), seemingly in real time (or every few frames, probably) depending on the camera distance. The closer you are, the less compressed the model data would be, meaning it's more detailed (more triangles). The farther the model is from the camera, the more compression it would get, so it's less detailed (fewer triangles), since the eye won't notice. Basically dynamic "real-time" LOD, like you noted.  Since the idea of a normal map is to fake detail, and the idea here is to change detail, there wouldn't seem to be much need for a normal map. Yeah, the biggest thing would be disk space, but since that's cheap and getting cheaper by the day, it's probably OK.
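The mipmap comparison can be sketched in a few lines. This is purely illustrative (my own guess at the idea, with invented names and a halving-per-level assumption), not how Nanite actually selects detail, which by all accounts works per-cluster rather than per-model:

```python
import math

# Sketch of the "mipmaps for geometry" idea: each LOD level halves the
# triangle count, and the level is picked from camera distance, just like
# mip selection halves texel density per level.

def lod_for_distance(distance, base_distance=10.0):
    """Every doubling of distance drops one LOD level (like mip selection)."""
    return max(0, int(math.log2(max(distance, base_distance) / base_distance)))

def triangles_at_lod(source_tris, lod):
    return max(1, source_tris >> lod)  # right-shift halves the count per level

# A 1M-triangle scan rendered 80 units away: 3 levels down -> 125,000 tris.
tris = triangles_at_lod(1_000_000, lod_for_distance(80.0))
```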

 

After watching more, perhaps they aren't compressing every model individually but the final scene of detailed models: once you have your entire detailed view, it compresses that before sending it to the renderer. My concern there would be the memory for all those detailed models once loaded. I wonder if this would mean fewer unique models available on screen. The PS5 has 16 GB, but that demo scene might be reusing a lot of the same assets to get around any limit on what you can have loaded at once. Or maybe it's not a big issue.


Right, disk space and memory.  I don't know how they do it, or intend to do it.  A random scanned scene on Sketchfab like this one is a half-gig download, and it's only 1.3M triangles for a single scene, nowhere near the billion per frame they mentioned (let alone the entire scene).  But I suppose if people are used to 40 GB or 60 GB game downloads, they probably won't mind 300 GB or more.  🤔
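For what it's worth, you can run the numbers from that Sketchfab example to see how the extrapolation gets scary fast. Back-of-the-envelope only; real engine data would be heavily compressed and instanced:

```python
# Rough cost per triangle from the example above: a 0.5 GB download for a
# 1.3M-triangle scan (textures included), extrapolated to a billion-triangle
# scene. Illustrative arithmetic only.

GB = 1024 ** 3
bytes_per_tri = 0.5 * GB / 1_300_000   # ~413 bytes per triangle at that density
scene_tris = 1_000_000_000             # "a billion per frame", as quoted
naive_size_gb = scene_tris * bytes_per_tri / GB
# ~385 GB for one billion-triangle scene at Sketchfab-like density, so
# aggressive compression and asset reuse are clearly mandatory.
```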


Call of Duty is already 190 GB, so with something like this pushed to the max it would most likely be in the TBs.


I think it's just streaming in detail of scanned geometry at different resolutions based on distance, very similar to a lot of planet-rendering techniques. I think it would be a bit of a challenge to come up with the models, though.

I've seen a lot of stuff like this created from LIDAR data that is then polygonized.

24 minutes ago, Rick said:

Call of Duty is already 190 GB, so with something like this pushed to the max it would most likely be in the TBs.

Thankfully that's an extreme outlier: https://gamerant.com/pc-games-file-size-hd-space-biggest-huge/  But point made.  It could get ridiculous, at least from our current perspective.  I'm sure HD and RAM makers are drooling at the possible demand.

6 minutes ago, JoshMK said:

I think it would be a bit of a challenge to come up with the models though.

I'd guess at some partnership between Epic and Quixel (they do have 82 results for Quixel in their store).  It makes sense to control as many aspects as reasonably possible.
 


I do think you need something like the auto-LOD system. Nobody likes making LOD models anyway.

 


In other news, they're also competing further with Steam by launching free Epic Online Services, which are independent of Unreal Engine, meaning you can use them with any engine.  It says it even has analytics built in.


Josh, you might want to look at these online services. A fair number of companies are doing this, and it gives developers a way to add a lot of online features via web API calls that are common to most games. It's basically your own web API on top of Azure/Amazon that helps manage user IDs, inventories, achievements, etc. Might be a good extra income source.
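Something like this, in toy form. Everything here is invented for illustration (class and method names included); a real service would expose these as JSON-over-HTTPS endpoints backed by cloud storage:

```python
import json

# Toy sketch of the "common web API" idea: one service tracking user IDs,
# inventories, and achievements for any game or engine.

class OnlineService:
    def __init__(self):
        self.players = {}  # user ID -> per-player record

    def _player(self, player_id):
        return self.players.setdefault(
            player_id, {"inventory": [], "achievements": []})

    def add_item(self, player_id, item):
        self._player(player_id)["inventory"].append(item)

    def grant_achievement(self, player_id, name):
        p = self._player(player_id)
        if name not in p["achievements"]:  # idempotent: no duplicate grants
            p["achievements"].append(name)
        return json.dumps(p)  # responses would be JSON over HTTP

svc = OnlineService()
svc.add_item("rick", "sword")
reply = svc.grant_achievement("rick", "first_blood")
```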


All this is about to become really cool, isn't it?

Between ray tracing and this Nanite modelling (whatever its name is), games will become so beautiful in 100 years.

But sadly we will not get to experience that future.

I just hope games won't lose their story quality over this.

Like: in the past, pen-and-paper RPGs were 90% story and 10% dice-rolling combat.

Now it's 95% gun/sword fighting with 5% poor story.

Could you make a Morrowind-style game with ray tracing and this Nanite modelling?

(I know I shouldn't be so pessimistic; Witcher 3 was a very nice game.)

The challenge for Leadwerks 5 could be wonderful if it included such "wow" high tech combined with its traditional, much-loved easy-to-use feel.


I could actually see implementing something like this to display LIDAR data, but I don't think it's going to be practical for game development.

23 hours ago, Rick said:

It doesn't seem to use normal maps at all. It just compresses the highly detailed model data (triangles), seemingly in real time (or every few frames, probably) depending on the camera distance. The closer you are, the less compressed the model data would be, meaning it's more detailed (more triangles). The farther the model is from the camera, the more compression it would get, so it's less detailed (fewer triangles), since the eye won't notice. Basically dynamic "real-time" LOD, like you noted.  Since the idea of a normal map is to fake detail, and the idea here is to change detail, there wouldn't seem to be much need for a normal map. Yeah, the biggest thing would be disk space, but since that's cheap and getting cheaper by the day, it's probably OK.

 

After watching more, perhaps they aren't compressing every model individually but the final scene of detailed models: once you have your entire detailed view, it compresses that before sending it to the renderer. My concern there would be the memory for all those detailed models once loaded. I wonder if this would mean fewer unique models available on screen. The PS5 has 16 GB, but that demo scene might be reusing a lot of the same assets to get around any limit on what you can have loaded at once. Or maybe it's not a big issue.

Digital Foundry released this video; it's a bit more of a "deep dive" into the tech we've seen so far.

It appears there are still normal maps involved, but they're not bespoke; the tech seems to generate them automatically. At least that's how I understood it.

Here's the vid...


Besides that, my take on the demo is that, like the tech demos for previous Unreal Engines, it's intended to showcase the new tech at a spectacular level, pushed to the extremes. That is, I don't think we're going to see games routinely using 30-million-triangle models, or billions of triangles in a single scene.  I think it's more about showcasing what the tech can do on a grand scale.

If you think back to the UE4 "Elemental" demo, it showcased millions of particles, and various other new features in a very spectacular, highly detailed cut-scene. But I don't recall seeing any games that tried pushing their environments or detail to that level, except perhaps in scripted cutscenes, etc. It just wasn't practical in a typical gameplay setting.

I think the same thing applies here.

But again, that's just my take.

