To render transparent images in games, we usually use images that include an alpha channel specifying the opacity of each pixel. Here's an example of an image being composited onto a new background using an alpha channel:
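The standard "over" compositing operation can be sketched in a few lines. This is a minimal Python sketch assuming colors and alpha are floats in [0, 1]; the function name is just for illustration:

```python
def composite(fg_color, fg_alpha, bg_color):
    # Standard "over" blend: the foreground contributes in proportion
    # to its alpha, the background fills in the rest.
    return fg_alpha * fg_color + (1.0 - fg_alpha) * bg_color

# A 50%-opaque white pixel over a black background lands halfway between.
print(composite(1.0, 0.5, 0.0))  # -> 0.5
```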
There are several reasons why we would need to downscale an image. In 3D games, the most common reason is to create mipmap chains. A mipmap chain stores copies of the image, each level halved in each dimension, down to a single pixel. These smaller images allow us to draw distant objects and glancing angles more accurately and efficiently.
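Building such a chain is straightforward: keep box-filtering 2x2 blocks until one pixel remains. Here's a minimal sketch, assuming a square, power-of-two grayscale image stored as a list of rows of floats (the function name is my own):

```python
def build_mipmaps(image):
    # image: square, power-of-two list of rows of grayscale floats.
    chain = [image]
    while len(chain[-1]) > 1:
        prev = chain[-1]
        half = len(prev) // 2
        # Each new pixel is the average of a 2x2 block in the level above.
        chain.append([
            [(prev[2 * y][2 * x] + prev[2 * y][2 * x + 1] +
              prev[2 * y + 1][2 * x] + prev[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(half)]
            for y in range(half)
        ])
    return chain

# A 4x4 image yields three levels: 4x4, 2x2, and 1x1.
levels = build_mipmaps([[1, 1, 3, 3],
                        [1, 1, 3, 3],
                        [5, 5, 7, 7],
                        [5, 5, 7, 7]])
print(len(levels), levels[-1])  # -> 3 [[4.0]]
```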
Scaling down images with alpha channels can cause a significant problem. For example, if we try to composite the downscaled example image, we get this:
The background color starts 'bleeding' into the image. The usual way to fix this is to manually paint the fully transparent pixels so their colors match the nearby visible pixels, so that it doesn't matter if they bleed in. Here's an example of that technique, created by layering the image onto blurred copies of itself.
This can take a lot of time for artists, and some image exporters make it impossible. For example, when Photoshop exports .png files it automatically makes all transparent pixels white. Fortunately, there's a more elegant solution that doesn't require any extra effort on the artist's part. Usually the image is scaled down by averaging every group of four pixels to find the color of each new pixel:
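That straight average looks like this for a single 2x2 block of RGBA pixels. This is a minimal sketch (function name is my own) that shows the flaw: the color channels of fully transparent pixels drag the average toward their hidden color:

```python
def downscale_naive(px):
    # px: four (r, g, b, a) tuples, channels as floats in [0, 1].
    # Straight average of every channel, ignoring alpha entirely.
    return tuple(sum(p[i] for p in px) / 4.0 for i in range(4))

# One opaque red pixel next to three invisible black pixels:
# the red is dimmed to 0.25 even though the black was never visible.
print(downscale_naive([(1, 0, 0, 1),
                       (0, 0, 0, 0),
                       (0, 0, 0, 0),
                       (0, 0, 0, 0)]))  # -> (0.25, 0.0, 0.0, 0.25)
```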
Instead of doing a straight average, we can compute a weighted average of the color channels, using each pixel's alpha value as its weight. This means that transparent pixels will have very little effect on the color. The opaque colors will bleed out into the background instead of vice versa.
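The alpha-weighted version of the same 2x2 average can be sketched like this (again a minimal illustration with my own function name, colors as floats in [0, 1]):

```python
def downscale_weighted(px):
    # px: four (r, g, b, a) tuples, channels as floats in [0, 1].
    total_alpha = sum(p[3] for p in px)
    if total_alpha == 0.0:
        # Fully transparent block: the color is never visible anyway.
        return (0.0, 0.0, 0.0, 0.0)
    # Weight each color channel by its pixel's alpha; alpha itself is
    # still the plain average of the four alpha values.
    color = tuple(sum(p[i] * p[3] for p in px) / total_alpha
                  for i in range(3))
    return color + (total_alpha / 4.0,)

# Same block as before: the red stays fully saturated, and only the
# alpha drops to reflect the partial coverage.
print(downscale_weighted([(1, 0, 0, 1),
                          (0, 0, 0, 0),
                          (0, 0, 0, 0),
                          (0, 0, 0, 0)]))  # -> (1.0, 0.0, 0.0, 0.25)
```

Compositing that result onto any background gives a 25%-opaque bright red, rather than a muddy dark red, which is exactly what the original image would look like shrunk down.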
The final result is that the composition will look correct at any level of scaling, regardless of the initial background color.
Please let me know if you've encountered this problem before, or if you have an alternate solution!