
Virtual Texture Terrain

Josh


The Leadwerks 2 terrain system was expansive and very fast, which allowed rendering of huge landscapes. However, it had some limitations. Texture splatting was done in real-time in the pixel shader, and because of hardware texture unit limitations, only four textures per terrain were supported. This limited the artist's ability to make terrains with a lot of variation. The landscapes were beautiful, but somewhat monotonous.

With the Leadwerks 3 terrain system, I wanted to retain the advantages of terrain in Leadwerks 2, but overcome some of the limitations. There were three different approaches we could use to increase the number of terrain textures.

  • Increase the number of textures used in the shader.
  • Allow up to four textures per terrain chunk. These would be determined either programmatically based on which texture layers were in use on that section, or defined by the artist.
  • Implement a virtual texture system like id Software used in the game "Rage".

Since Leadwerks 3 runs on mobile devices as well as PC and Mac, we couldn't use any more texture units than we had before, so the first option was out. The second option is how Crysis handles terrain layers. If you start painting layers in the Crysis editor, you will see "old" layers disappear as you paint new ones on. This struck me as a bad approach because it would either involve the engine "guessing" which layers should have priority, or a tedious process of user-defined layers for each terrain chunk.

A virtual texturing approach seemed like the ideal choice. Basically, this would render near sections of the terrain at a high resolution and far sections at low resolutions, with a shader that chose between them. If done correctly, the result should be the same as using one impossibly huge texture (like 1,048,576 x 1,048,576 pixels) at a much lower memory cost. However, there were some serious challenges to overcome, so much so that I added a disclaimer in our Kickstarter campaign basically saying "this might not work".
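To put the memory savings in perspective, actually storing a 1,048,576 x 1,048,576 texture at 4 bytes per pixel would require:

1048576 * 1048576 * 4 = 4398046511104 bytes = 4 terabytes

so a texture of that size can only ever exist virtually.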

Previous Work

id Software pioneered this technique with the game Rage (a previous implementation was in Quake Wars). However, id's "megatexture" technique had some serious downsides. First, the data size requirements of storing completely unique textures for the entire world were prohibitive. "Rage" takes about 20 gigs of hard drive space, with terrains much smaller than the size I wanted to be able to use. The second problem with id's approach is that both games using this technique have some pretty blurry textures in the foreground, although the unique texturing looks beautiful from a distance.

[Image]

I decided to overcome the data size problem by dynamically generating the megatexture data, rather than storing it on the hard drive. This involves a pre-rendering step where layers are rendered to the terrain virtual textures, and then the virtual textures are applied to the terrain. Since id's art pipeline was basically just conventional texture splatting combined with "stamps" (decals), I didn't see any reason to permanently store that data. I did not have a simple solution to the blurry texture problem, so I just went ahead and started implementing my idea, with the understanding that the texture resolution issue could kill it.

I had two prior examples to work from. One was a blog from a developer at Frictional Games (Amnesia: The Dark Descent and Penumbra). The other was a presentation describing the technique's use in the game Halo Wars. In both of these games, a fixed camera distance could be relied on, which made the job of adjusting texture resolution much easier. Leadwerks, on the other hand, is a general-purpose game engine for making any kind of game. Would it be possible to write an implementation that would provide acceptable texture resolution for everything from flight sims to first-person shooters? I had no idea if it would work, but I went forward anyway.

Implementation

Because both Frictional Games and id had split the terrain into "cells" and used a texture for each section, I tried that approach first. Our terrain already gets split up and rendered in identical chunks, but I needed smaller pieces for the near sections. I adjusted the algorithm to render the nearest chunks in smaller pieces. I then allocated a 2048x2048 texture for each inner section, and used a 1024x1024 texture for each outer section:

[Image]

The memory requirements of this approach, at 4 bytes per pixel, can be calculated as follows:

1024 * 1024 * 4 * 12 = 50331648 bytes (twelve 1024x1024 outer sections)

2048 * 2048 * 4 * 8 = 134217728 bytes (eight 2048x2048 inner sections)

Total = 184549376 bytes = 176 megabytes

176 megs is a lot of texture data. In addition, the texture resolution wasn't even that good at near distances. You can see my attempt with this approach in the image below. The red area is beyond the virtual texture range, and only uses a single low-res baked texture. The draw distance was low, the memory consumption high, and the resolution was too low.

[Image]

This was a failure, and I thought maybe this technique was just impractical for anything but very controlled cases in certain games. Still, I wasn't ready to give up without trying one last approach. Instead of allocating a texture for each grid section, I tried creating a radiating series of textures extending away from the camera:

[Image]

The resulting resolution wasn't great, but the memory consumption was a lot lower, and terrain texturing was now completely decoupled from the terrain geometry. I found that by adjusting the distances at which the textures switch, I could get pretty good resolution in the foreground. I was using only three texture stages, so I increased the number to six and found I could get good resolution at all distances, using just six 1024x1024 textures. The memory consumption for this was only 24 megabytes, a very reasonable number. Since the texturing is independent of the terrain geometry, the user can fine-tune the texture distances to accommodate flight sims, RPGs, or whatever kind of game they are making.

[Image]
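For illustration, here is a minimal sketch in C++ of how distance-based stage selection might work. Everything here is hypothetical: the names are made up, the real engine performs the equivalent lookup in the shader, and the doubling of each stage's range is an assumption, since the actual distances are user-adjustable. The memory figure checks out:

1024 * 1024 * 4 * 6 = 25165824 bytes = 24 megabytes

// Hypothetical sketch: six concentric virtual texture stages radiate out
// from the camera. Stage 0 covers the nearest terrain at the highest
// effective resolution; each later stage covers a larger range with the
// same 1024x1024 texture, so its texel density is lower.
const int kStageCount = 6;
const float kInnerRange = 16.0f; // world units covered by stage 0 (assumed)

// Returns the stage to sample for a point at the given distance from the
// camera, or -1 if the point lies beyond all stages (in which case a
// single low-res baked texture would be used instead).
int SelectStage(float distanceFromCamera)
{
    float range = kInnerRange;
    for (int stage = 0; stage < kStageCount; stage++)
    {
        if (distanceFromCamera <= range) return stage;
        range *= 2.0f; // assumed: each stage covers double the previous range
    }
    return -1;
}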

The last step was to add some padding around each virtual texture, so the virtual textures would not have to be completely redrawn each time the camera moves. I used a padding value of 0.25 times the texture range, so each virtual texture only gets redrawn once in a while.
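Here is a minimal sketch of that padding rule, again with hypothetical names rather than the engine's actual data structures: each stage tracks the camera position it was last baked around, and is only redrawn when the camera drifts past the padded boundary.

#include <algorithm>
#include <cmath>

// Hypothetical sketch of the 0.25 padding rule: each stage's texture is
// baked around a center point and covers 25% more range than strictly
// needed, so small camera movements do not force a redraw.
struct VirtualTextureStage
{
    float range;   // world-space range this stage must cover
    float centerX; // world-space center the texture was last baked at
    float centerZ;
};

bool NeedsRedraw(const VirtualTextureStage& stage, float camX, float camZ)
{
    const float padding = stage.range * 0.25f; // the 0.25 value from the article
    float dx = std::fabs(camX - stage.centerX);
    float dz = std::fabs(camZ - stage.centerZ);
    return std::max(dx, dz) > padding;
}

When NeedsRedraw returns true, the terrain layers (and any stamps) would be re-rendered into that stage's texture around the new camera position, and the stored center updated.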

Advantages of Virtual Textures

First, because the terrain shader only has to perform a few lookups per pixel with almost no math, the new terrain shader runs much faster than the old one. Since the bottleneck for most games is pixel fillrate, this will make Leadwerks 3 games faster. Second, this allows us to use any number of texture layers on a terrain, with virtually no difference in rendering speed. This gives artists the flexibility to paint anything they want on the terrain, without worrying about budgets and constraints. A third advantage is that this allows the addition of "stamps": decals rendered directly into the virtual texture. This lets you add craters, clumps of rocks, and other details directly onto the terrain. The cost of rendering them in is negligible, and the resulting virtual texture runs at the exact same speed, no matter how much detail you pack into it. Below you can see a simple example. The smiley face is baked into the virtual texture, not rendered on top of the terrain:

[Image]
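As a rough sketch of how a stamp might be baked (the Texture type and the DrawImageToTexture call below are stand-ins, not actual Leadwerks API): the decal is rendered once into the stage texture, after which the terrain shader samples that texture exactly as before, so the stamp adds zero per-frame cost.

struct Texture; // opaque engine texture handle (stand-in)

// Stand-in for whatever render-to-texture draw call the engine provides.
void DrawImageToTexture(Texture* target, Texture* image,
                        float x, float y, float w, float h);

// Hypothetical sketch: bake a decal ("stamp") into one virtual texture stage.
void BakeStamp(Texture* stageTexture, Texture* stamp,
               float worldX, float worldZ, float stampSize,
               float stageCenterX, float stageCenterZ, float stageRange)
{
    // Map the stamp's world position into the stage's 0..1 texture space.
    // The stage covers stageRange world units in each direction from its center.
    float u = (worldX - stageCenterX) / (stageRange * 2.0f) + 0.5f;
    float v = (worldZ - stageCenterZ) / (stageRange * 2.0f) + 0.5f;
    float size = stampSize / (stageRange * 2.0f); // stamp size in texture space

    DrawImageToTexture(stageTexture, stamp,
                       u - size * 0.5f, v - size * 0.5f, size, size);
}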

Conclusion

The texture resolution problem I feared might make this approach impossible was solved by using a graduated series of six textures radiating out around the camera. I plan to implement some reasonable default settings, and it's only a minor hassle for the user to adjust the virtual texture draw distances beyond that.

Rendering the virtual textures dynamically eliminates the high space requirements of id's megatexture technique, and also gets rid of the problems of paging texture data dynamically from the hard drive. At the same time, most of the flexibility of the megatexture technique is retained.

Having the ability to paint terrain with any number of texture layers, plus the added stamping feature, gives the artist a lot more flexibility than our old technique offered, and it even runs faster than the old terrain. This removes a major source of uncertainty from the development of Leadwerks 3.1, and it turned out to be one of my favorite features in the new engine.



Comments

One of the tools I work with pretty much lets you paint up to 32 layers across very large datasets but renders them down on export so you never have results as good as the source.

 

One of the biggest limitations we had when building the Afghan map was the 10 meter square resolution and shortcuts made to fit everything into memory.

 

Interesting stuff.


TL;DR :P I'll assume you're doing cool things with terrain textures.

 

Will there be any way to query what's under our feet on the terrain? ie could we tell if we are standing on the smiley face?


Tessellation and displacement are hard to marry with collisions. I know one unreleased engine does this; it doesn't seem trivial.

 

Back to textures, what if I have surface imagery of Oahu Hawaii (say it was a car racing game like Test Drive Unlimited) and want texture stages 4-5 to be a raw sat image. For low altitude rendering, stages 0-3 as m-textures. What different approaches would facilitate handling this kind of game scenario? We might want to graduate between sat images with pre-computed splats based on altitude.

 

"Gods eye" to unit view, and back again.

 

Would the mega-textures be computed at run-time, or during map load time or a longer tool export operation? Your description implies you're having much success with small drawing operations, but how far will it scale I wonder?


Indeed we can't ask LE3 to be a AAA engine from a big tech team, but is vector terrain not possible one day in LE3? (from the Halo tech link... don't give ideas to your users lol)

Will there be terrain LOD (adjustable?) in LE3?

Especially adjustable for mobile optimisation (fewer polygons to display).

Great terrains are coming :)


Interesting article and awesome stuff! The concentric textures reminded me of the geometry clipmapping technique for terrains as described in GPU Gems 2. Maybe the two could be used together for vast terrains?


@yougroove: do you mean voxel?

 

What is most amazing is how Josh is able to do this all by himself. I mean, how many people were experimenting with megatextures in Rage and other games, and for how long? Okay, the idea is out there and there are some good reads here and there, but to be able to build such a thing like it's nothing: that is really impressive.


Interesting...interesting indeed! :)

Would be great to have some kinda road/river tool in the editor.

Maybe even an option to attach sounds to terrain texture materials?

Any plans for that, Josh? :)

> Back to textures, what if I have surface imagery of Oahu Hawaii (say it was a car racing game like Test Drive Unlimited) and want texture stages 4-5 to be a raw sat image. For low altitude rendering, stages 0-3 as m-textures. What different approaches would facilitate handling this kind of game scenario? We might want to graduate between sat images with pre-computed splats based on altitude.

The base map / blend method I used in LE2 worked well for large-scale satellite images. The splatted images are blended together to create one large baked texture for long-range rendering.

 

> Tessellation and displacement are hard to marry with collisions. I know one unreleased engine does this; it doesn't seem trivial.

I don't think this is a problem because tessellated geometry is small compared to the physics geometry.

 

> Would the mega-textures be computed at run-time, or during map load time or a longer tool export operation? Your description implies you're having much success with small drawing operations, but how far will it scale I wonder?

The whole megatexture never exists at once; parts of it are drawn on the fly. Since it's working now, it will work independently of terrain size. Distant terrain is slightly blurred, but in Leadwerks 2 we actually went to great lengths to get this effect, with the special "blur mipmaps" setting. Blurred terrain textures in the distance actually look better, because the blur eliminates obvious tiling.

> I don't think this is a problem because tessellated geometry is small compared to the physics geometry.

I think Flexman was thinking of something different than simply tessellating terrain. What if you were to use a displacement map as a decal to tessellate a bomb crater? Once the crater was made, the physics mesh would not be anywhere close. You would somehow have to tessellate the physics mesh also.


I will have to experiment some more and see what it can do. This is by far the most flexible terrain system I've ever worked with.


I'm guessing the texture stages are generated on the GPU? So this doesn't give you the constant VRAM usage that mega-textures traditionally do, but does give you unlimited texture layers and decals w/ only a small VRAM overhead?


> I'm guessing the texture stages are generated on the GPU? So this doesn't give you the constant VRAM usage that mega-textures traditionally do, but does give you unlimited texture layers and decals w/ only a small VRAM overhead?

Yes, it's all created on the GPU. The VRAM usage is constant, and will probably weigh in at just 24 MB.

