DDS Compression Types


Marcus

Is there any reason to, or gain from, saving a texture without alpha channel information using DXT5? As far as I understand, if you save a texture without alpha information as DXT5, it just creates a blank alpha channel and increases the file size.

Vista | AMD Dual Core 2.49 GHz | 4GB RAM | nVidia GeForce 8800GTX 768MB

Online Portfolio | www.brianmcnett.com


So oildrum.dds is 171 KB. If I open it in Photoshop and save it using DXT5 and then re-open it, it now has a blank alpha channel and goes up to 342 KB (the same size as oildrumdot3, which I would expect, since that one has an alpha that is used for the spec map). Am I still missing something? Is DXTC5 something different than DXT5? I don't get an option for DXTC5 with the nVidia plugins for Photoshop, just DXT5.


Sorry, I know how to get a dds with an interpolated alpha, my question was if there was a reason to save a texture using DXT5 when you do not want an alpha (such as diffuse textures with no transparency). Josh seemed to be indicating that you should never use DXT1, but I thought the only difference between DXT1 and DXT5 was that DXT5 is able to retain a 4 bpp alpha channel. I did not think there was any difference between DXT1 and DXT5 in terms of RGB channel compression artifacts.

 

So I have always used DXT1 for textures without alpha channels, and DXT5 for textures with alpha channels. Josh's statement has me wondering if this is the wrong thing to do.


I'd say if you aren't seeing any artifacts and have tested on a number of different graphics cards, then use DXT1; or you can take Josh's word for it and just use DXT5.

 

If you are looking for some sort of proof, I'm guessing you won't get it. It's probably best to just go with what the creator of the engine says.



What do you mean you don't know how? If you are using the Photoshop plugin, I just told you how. That's exactly the format I use. It's in the drop-down box that comes up in Photoshop.

post-234-12657410690994_thumb.png

AMD Phenom II x6 1100T - 16GB RAM - ATI 5870 HD - OCZ Vertex 2 60GB SSD


To quote from that article for your DXT1 question:

2.1 Object-Space DXT1

 

DXT1 [3, 4], also known as BC1 in DirectX 10 [5], is a lossy compression format for color textures, with a fixed compression ratio of 8:1. The DXT1 format is designed for real-time decompression in hardware on the graphics card during rendering. DXT1 compression is a form of Block Truncation Coding (BTC) [6] where an image is divided into non-overlapping blocks, and the pixels in each block are quantized to a limited number of values. The color values of pixels in a 4x4 pixel block are approximated with equidistant points on a line through RGB color space. This line is defined by two end-points, and for each pixel in the 4x4 block a 2-bit index is stored to one of the equidistant points on the line. The end-points of the line through color space are quantized to 16-bit 5:6:5 RGB format and either one or two intermediate points are generated through interpolation. The DXT1 format allows a 1-bit alpha channel to be encoded, by switching to a different mode based on the order of the end points, where only one intermediate point is generated and one additional color is specified, which is black and fully transparent.
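The block layout described in that quote can be made concrete with a short sketch. This is my own illustration of 4-colour-mode decoding, not code from the paper, and it ignores the 1-bit-alpha mode:

```python
# Hypothetical sketch of decoding one 4x4 DXT1 colour block (4-colour mode only).
# Each block stores two 16-bit 5:6:5 endpoints and sixteen 2-bit palette indices.

def decode_565(c):
    # Expand a 16-bit 5:6:5 RGB value to three 8-bit channels.
    r = (c >> 11) & 0x1F
    g = (c >> 5) & 0x3F
    b = c & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def decode_dxt1_block(color0, color1, indices):
    """color0/color1: 16-bit 5:6:5 endpoints; indices: 32-bit value holding
    sixteen 2-bit palette indices. Returns 16 RGB tuples."""
    c0, c1 = decode_565(color0), decode_565(color1)
    # Two intermediate points, equidistant on the line between the endpoints.
    c2 = tuple((2 * a + b) // 3 for a, b in zip(c0, c1))
    c3 = tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))
    palette = [c0, c1, c2, c3]
    return [palette[(indices >> (2 * i)) & 0x3] for i in range(16)]
```

Note how every pixel in the block is forced onto one of only four points on a single line through RGB space, which is where the banding comes from.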

 

Although the DXT1 format is designed for color textures this format can also be used to store normal maps. To compress a normal map to DXT1 format, the X, Y and Z components of the normal vectors are mapped to the RGB channels of a color texture. In particular for DXT1 compression each normal vector component is mapped from the range [-1, +1] to the integer range [0, 255]. The DXT1 format is decompressed in hardware during rasterization, and the integer range [0, 255] is mapped to the floating point range [0, 1] in hardware. In a fragment program the range [0, 1] will have to be mapped back to the range [-1, +1] to perform lighting calculations with the normal vectors. The following fragment program shows how this conversion can be implemented using a single instruction.

 

# input.x = normal.x [0, 1]
# input.y = normal.y [0, 1]
# input.z = normal.z [0, 1]
# input.w = 0

MAD normal, input, 2.0, -1.0

 

Compressing a normal map to DXT1 format generally results in rather poor quality. There are noticeable blocking and banding artifacts. Only four distinct normal vectors can be encoded per 4x4 block, which is typically not enough to accurately represent all original normal vectors in a block. Because the normals in each block are approximated with equidistant points on a line, it is also impossible to encode four distinct normal vectors per 4x4 block that are all unit-length. Only two normal vectors per 4x4 block can be close to unit-length at a time, and usually a compressor selects a line through vector space which minimizes some error metric, such that none of the vectors are actually close to unit-length.
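The MAD instruction in the quoted fragment program is just a fused multiply-add. As a hypothetical sketch in plain Python (assuming 8-bit per-channel storage), the full decode path from stored bytes back to signed normal components looks like:

```python
# Sketch of the decode path described above: hardware maps stored [0, 255]
# bytes to [0, 1], then the shader's MAD maps [0, 1] to [-1, +1].

def decode_stored_normal(rgb_bytes):
    return tuple((b / 255.0) * 2.0 - 1.0 for b in rgb_bytes)
```

So a stored value of 128 decodes to roughly 0.0, and 255 decodes to exactly +1.0.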


so if I am reading that article correctly, DXT1 is meant for color textures but DXT5 handles normal maps better. This works out with the current suggested practice of storing the spec map as the alpha in the normal map, although I agree that being able to use DXT5nm and 3Dc looks appealing (although it sounds like it would mean that you would no longer be able to store your spec as the alpha channel).



Do NOT use DXT1, ever!

My job is to make tools you love, with the features you want, and performance you can't live without.


To quote from the same article again:

 

2.2 Object-Space DXT5

 

The DXT5 format [3, 4], also known as BC3 in DirectX 10 [5], stores three color channels the same way DXT1 does, but without the 1-bit alpha channel. Instead of the 1-bit alpha channel, the DXT5 format stores a separate alpha channel which is compressed similarly to the DXT1 color channels.

 

Unless I am reading that wrong, DXT5 RGB compression is done the same way as DXT1 RGB compression (meaning both are lossy), and the only difference between the two is how the alpha channel information is saved. However, the advice to listen to the creator of the engine sounds like good advice to me, so I will go ahead and follow it... Thanks.


(Deep breath)

 

Ok. Please bear with me here and read the entire post very carefully before responding and/or flaming me into oblivion for dragging this on.

 

I know I do not have very much Leadwerks Community street cred, but I believe that this is an extremely important topic considering that DXT5 textures are exactly twice as large as DXT1, and that texture memory is such a huge piece of the overall memory footprint in a game. For example, a 2048 x 2048 texture saved as a DXT5 is 5.5 MB, whereas a 2048 x 2048 texture saved as a DXT1 is 2.75 MB. When added up over a few hundred textures (even if they aren't all 2048s), this becomes a significant consideration very quickly.
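Those figures are easy to sanity-check: DXT1 is 4 bits per pixel, DXT5 is 8, and a full mipmap chain adds roughly a third on top of the base level. Here is a rough calculation (my own sketch; actual .dds files also contain a small header, so on-disk sizes will differ slightly):

```python
# Rough size estimate for block-compressed DDS textures, base level plus a
# full mipmap chain. bits_per_pixel is 4 for DXT1 and 8 for DXT5.

def dds_size(width, height, bits_per_pixel, mipmaps=True):
    total, w, h = 0, width, height
    while True:
        # Block-compressed formats round each dimension up to a multiple of 4.
        bw, bh = max(w, 4), max(h, 4)
        total += bw * bh * bits_per_pixel // 8
        if not mipmaps or (w == 1 and h == 1):
            break
        w, h = max(w // 2, 1), max(h // 2, 1)
    return total

dxt1 = dds_size(2048, 2048, 4)  # base level alone is 2 MiB
dxt5 = dds_size(2048, 2048, 8)  # exactly twice the DXT1 size at every level
```

With the mip chain included, the results land in the same ballpark as the 2.75 MB and 5.5 MB figures above, and the DXT5 file is exactly double the DXT1 file.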

 

My initial question in this thread was intended to figure out where I was wrong, which I assumed I was. However, after very carefully reading the document linked to by Niosop (http://developer.nvidia.com/object/real-time-normal-map-dxt-compression.html) and performing a few very specific tests, I am pretty confident that using DXT5 for diffuse textures with no need for an alpha channel is wasteful.

 

So here is my understanding based on my reading and tests. DXT1 is 4 bpp and DXT5 is 8 bpp. So obviously DXT5 has twice as many bits per pixel as DXT1, which makes it sound like it would be better quality all around. The problem is that the additional 4 bits per pixel in DXT5 are entirely in the alpha channel. Or in other words, there is as much bit information in the alpha channel of a DXT5 texture as all three RGB channels combined.
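Counting the bits per 4x4 block makes this explicit. Assuming the layouts described in the quoted material (two 5:6:5 colour endpoints plus sixteen 2-bit indices; two 8-bit alpha endpoints plus sixteen 3-bit indices):

```python
# Per-block bit budget for a 4x4 (16-pixel) block, per the quoted descriptions.
DXT1_COLOR_BITS = 2 * 16 + 16 * 2  # two 5:6:5 endpoints + 2-bit indices = 64
DXT5_ALPHA_BITS = 2 * 8 + 16 * 3   # two 8-bit endpoints + 3-bit indices = 64
DXT5_TOTAL_BITS = DXT1_COLOR_BITS + DXT5_ALPHA_BITS  # 128

assert DXT1_COLOR_BITS / 16 == 4   # DXT1: 4 bits per pixel
assert DXT5_TOTAL_BITS / 16 == 8   # DXT5: 8 bits per pixel, half of it alpha
```

The alpha block alone is as large as the entire colour block, which is exactly why copying a channel into the alpha slot survives compression better.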

 

Here are my tests, with side-by-side examples to compare. I took a source PSD document, then saved from the source into DXT1 and from the source into DXT5 (I did not simply re-save the DXT1 as a DXT5). There is a noticeable difference between the source and both compressed textures, but I cannot see any difference at all between DXT1 and DXT5 (the artifacting is even identical). I then did the same thing, but with the red channel only, to see the specific difference (if any) in a single channel, and got the same results. In the last column, I copied the red channel into the alpha channel and then saved it as a DXT5. This is where the results were more interesting. The red channel, when copied into the alpha channel and saved there, is much higher quality than the one saved into the actual red channel of the texture. This, and the technical explanation of 3Dc compression of normal maps on nVidia's site, is what made me realize that all of the additional 4 bpp in a DXT5 are in the alpha channel.

 

post-230-12658318659578_thumb.jpg

 

The other, simpler test that I tried was taking any of the diffuse terrain textures that were saved as DXT5 and re-saving them as DXT1, then putting them side by side and comparing. Again, I cannot tell a difference.

 

post-230-12658319681465_thumb.jpg

 

The only difference that I could see is that the DXT5 has a blank white alpha channel and is twice as big (because it has 4 bpp stored in the blank white alpha channel). Please do the same tests yourself to see if you get the same results.

 

Also, most of the textures that come with the SDK are actually DXT1 (or at least they are exactly half as big as their Dot3 counterparts that have the spec map in the alpha channel). This includes all of the Underground, weapons, FPS shooter hands, etc. The only ones that I could find that appear to be DXT5 are mostly terrain textures.

 

Again, I am not trying to be argumentative or play gotcha with anyone, especially not the creator of an amazing piece of software that most definitely has already forgotten more about real time graphics than I will ever know. I just want to make 100% sure that everyone understands what exactly the differences are between DXT1 and DXT5 before committing to doubling the size of all of their textures that don't actually require an alpha channel.

 

Here is another link that explains it better than I am (please if you read nothing else in this post, read the info in this link): http://www.fsdeveloper.com/wiki/index.php?title=DXT_compression_explained

That means that in total 128 bit are used for the 16 pixels (32 for the palette, 32 for the colour indices and 64 for the alpha information).

Also, according to this, saving it as DXT1 with 1-bit alpha will result in poorer quality in the red, green, and blue channels (not to be confused with standard DXT1, which doesn't store alpha information and handles the RGB channels the same as DXT5).
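That quality drop comes from DXT1's mode switch: the order of the two 16-bit endpoints selects between the 4-colour opaque mode and a 3-colour mode whose fourth palette entry is transparent black. A hypothetical sketch of the palette construction, based on the descriptions quoted above:

```python
# Sketch of DXT1 mode selection: endpoint order picks the palette layout.

def _decode_565(c):
    return ((c >> 11 & 0x1F) * 255 // 31,
            (c >> 5 & 0x3F) * 255 // 63,
            (c & 0x1F) * 255 // 31)

def dxt1_palette(color0, color1):
    """Returns (palette, transparent_index); transparent_index is None in
    the opaque 4-colour mode."""
    c0, c1 = _decode_565(color0), _decode_565(color1)
    if color0 > color1:
        # 4-colour opaque mode: two interpolated points, no transparency.
        c2 = tuple((2 * a + b) // 3 for a, b in zip(c0, c1))
        c3 = tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))
        return [c0, c1, c2, c3], None
    # 3-colour mode: one midpoint; palette entry 3 is transparent black.
    c2 = tuple((a + b) // 2 for a, b in zip(c0, c1))
    return [c0, c1, c2, (0, 0, 0)], 3
```

In the 3-colour mode every block gives up one of its four palette entries to transparency, which is where the extra RGB error comes from.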

 

(exhales)


Sounds reasonable to me, thanks for doing the work of making visual demonstrations for us. I think I'll start using DXT1 for my textures without alphas. It'll help make up for the uncompressed ones I do for normal maps.

Windows 7 x64 - Q6700 @ 2.66GHz - 4GB RAM - 8800 GTX

ZBrush - Blender

