GeekToo wrote:You determine the brightness of every pixel in the 32bpp source sprite by simply adding r+g+b (ranging from 0-3*255).
That's OK from our "programmer's" PoV, but it isn't an obvious/transparent thing from a gfx artist's/designer's PoV.
GeekToo wrote:...Hope this makes some sense, else I will try to make some charts.
As for me, you're perfectly clear in what you write and everything I've read so far makes good sense. Still, IMO, if we're discussing here "how to implement CC properly" in the broad sense of "how" (rather than something like "what is the minimal set of changes to the currently used algo so it performs slightly better"), then we should look at the problem from a slightly different angle.
Let me try to summarize the requirements that seem important to me wrt CC and the rest of the recolouring matters:
- We want to make the process of designing new 32bpp sprites suitable for CC recolouring as simple as possible from the gfx artist's PoV. Ideally the artist could simply render the 32bpp sprite with whatever colours it was designed with, and then render the same pic with different output settings producing a mono-coloured mask that simply covers the areas to be recoloured into CC by the engine. The same goes for other types of recolouring like church colours, city building colours, etc.
- Translating a source 32bpp pixel into a recoloured CC pixel should be as obvious/transparent/intuitive as possible from the artist's PoV. We shouldn't do something like "pure white translates into CC, neutral gray into black, and the rest of the pixels turn red on days from Monday till Friday and blue on weekends".
- The scheme used should allow easy and fast processing of the existing palette-remap-based recolouring for legacy 8bpp stuff.
- It should be as computationally cheap as possible (no FP, or the minimum possible amount of it; minimal branching; a minimal count of mul/div/mod ops and memory accesses).
- Recoloured pixels should be able to map to both pure black and pure white, with as smooth a gradient transition in between as possible.
What we have at our disposal from the source data perspective are five distinct bytes representing each pixel. IMO, if we assume that the expected CC recolouring result is pixels being various lights/shades of a single tone (i.e. we don't want to produce, say, red pixels when recolouring some sprite into dark blue CC), it is sufficient to store only two bytes for every pixel to be CC recoloured, namely "target lightness" and "target alpha", so we're good to go here. It also means that we have three bytes (r, g and b) to somehow convert into a single value to be used as the "target lightness". The way we do this conversion determines whether it will be "intuitively obvious" for the artist to use. With the scheme used currently it is not.
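To illustrate, here's a minimal sketch of such a conversion, using integer Rec. 601-style luma weights instead of a plain r+g+b sum (the function name and exact weights are my own choice for illustration, not anything from the trunk code):

```cpp
#include <cstdint>

/* Hypothetical sketch: collapse r, g, b into a single "target lightness"
 * byte using integer Rec. 601-style luma weights.  The weights sum to
 * 256, so the result stays in 0..255 and needs only a shift, no division. */
static inline uint8_t RgbToLuma(uint8_t r, uint8_t g, uint8_t b)
{
	return (uint8_t)((77 * r + 150 * g + 29 * b) >> 8);
}
```

An artist-friendly property of weighting like this is that pure grey input (r == g == b) maps to exactly that grey level, so a grayscale-rendered mask translates 1:1 into target lightness.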
The next question to deal with is whether we should make a distinction between the colours available in the CC palette range. I.e. we could stick with "CC recolouring" meaning mapping into a pre-defined set of colours hardcoded into the engine (or supplied as special "32bpp recolour sprites" in GRFs, which we would then have to "invent"), or we could use the palette colour as a "base colour" to work with, like it is done now. To me the second variant seems preferable, while the first one has the potential to be less complicated wrt computational complexity.
Now, having written the above, here is my vision of a possible solution (as it looks to me now; the view evolves as I spend more time thinking about this):
- 32bpp portion of the sprite: we either require grayscale here (i.e. r == g == b) or perform an RGB => grayscale conversion on the fly using some proper luma weighting algo (CIE L* is my favourite here, but it requires more computation than simple gamma-corrected luma weighting; a simpler approximation, like Blitter::MakeGrey, could be used if we stick with the "on the fly conversion to grayscale" approach). What we want is to produce a single luma value to work with later on.
- 8bpp mask portion of the sprite: keep it the way it is now, pointing to the in-game palette index that would be used as the "base colour" for recolouring.
- Recolouring process: the goal is to have the resulting in-game pixel luminosity in direct proportion to the source 32bpp pixel luminosity. The HSL model fits well here. RGB->HSL values for palette colours could be pre-calculated on palette init (and re-calculated on PaletteAnimate() calls for the affected parts). HSL->RGB conversion isn't computation-free heaven, but it is something that can be done in integer math sufficiently fast. We use the source sprite's 32bpp luma as a factor to modify the base CC colour's L value so that for source luma == DEFAULT_BRIGHTNESS we end up with L unchanged, for source luma == 0 we end up with L == 0, and for source luma == MAX_BRIGHTNESS we end up with L == 255. Basically this means we want to interpolate between three given points using any interpolation technique we find reasonable. It could be simple linear interpolation or something fancier (but computationally demanding) like plain old cubic splines, or even "forget about performance" variants like weighted B-splines. I'd start with simple linear interpolation and stay with it as long as it produces visually pleasant-enough results.
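The three-knot linear interpolation described above could be sketched like this. This is only a sketch under assumptions: DEFAULT_BRIGHTNESS == 128 and MAX_BRIGHTNESS == 255 on a 0..255 luma scale (the real engine constants may differ), and the function name is mine:

```cpp
#include <cstdint>

static const int DEFAULT_BRIGHTNESS = 128; /* assumed midpoint on a 0..255 scale */

/* Piecewise-linear interpolation of the L channel with three knots:
 * luma == 0 -> 0, luma == DEFAULT_BRIGHTNESS -> base_l (palette colour
 * unchanged), luma == 255 -> 255.  Integer-only, one mul and one div. */
static inline uint8_t InterpolateL(uint8_t base_l, uint8_t luma)
{
	if (luma <= DEFAULT_BRIGHTNESS) {
		/* Dark segment: scale base_l linearly down towards 0. */
		return (uint8_t)(base_l * luma / DEFAULT_BRIGHTNESS);
	}
	/* Light segment: scale linearly from base_l up towards 255. */
	return (uint8_t)(base_l + (255 - base_l) * (luma - DEFAULT_BRIGHTNESS) / (255 - DEFAULT_BRIGHTNESS));
}
```

Note that both segments hit their endpoints exactly, so full black and full white are always reachable regardless of the base colour, which satisfies the black/white requirement from the list above.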
How does it differ from the existing scheme:
a) luma a.t.m. is assumed to be the same as the V value of the pixel converted into HSV. That is not the best choice, and it's one of the reasons behind the tendency to produce overbright pixels (illustrated here:
http://en.wikipedia.org/wiki/HSL_and_HSV#Disadvantages);
b) AdjustBrightness, which currently serves as the interpolating function, has a response curve that "skyrockets to the lights" as brightness increases and only starts to "compensate for overbright" when a channel value hits the maximum of 255. If you plot the Y'601 or Y'709 curve for this algo, it looks like a straight line up to DEFAULT_BRIGHTNESS and then the rise rate slows down a bit (the second derivative decreases). For most CC colours it hits the maximum value of 255 long before the input V reaches 255. Changing the divisor to 128 simply stretches the response curve 2x relative to the input V value, giving much-wanted precision in the dark part (V < 128) but cropping the light part, so for some (probably most) CC colours saturation into white is no longer achieved even at input V = 255.
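To see the early saturation numerically, here is a deliberately simplified stand-in for the current scaling. This is not the real AdjustBrightness (which also redistributes overbright into the other channels); it keeps only the basic c * brightness / 64 shape with clamping, to show where the curve flatlines:

```cpp
#include <algorithm>
#include <cstdint>

/* Simplified stand-in for the current divisor-64 channel scaling:
 * linear up to the clamp, then flat at 255.  Only for illustrating
 * the response curve, not a faithful copy of the trunk code. */
static inline uint8_t ScaleChannel(uint8_t c, uint8_t brightness)
{
	return (uint8_t)std::min(c * brightness / 64, 255);
}
```

For example, a channel value of 200 already clamps to 255 at brightness 82, nowhere near the input maximum of 255 -- which is the "hits max long before" effect described above.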
I've made a spreadsheet with graphs in OpenOffice Calc which could be used as a good "learn by experimenting with numbers" tool. You can find it attached, along with two PDFs exported from it illustrating the response curves of the existing implementation vs. the proposed HSL-based implementation for divisors of 64 and 128. IMO looking at the graphs helps a lot to understand the problem we face and what benefits an HSL-based approach would provide.
P.S. Looking at the graphs, it actually becomes obvious that for a "plain simple linear interpolated" L response curve there's no need at all to convert to/from HSL: recolouring would just be a matter of calculating three linearly interpolated values (r, g and b) with knots at input V == DEFAULT_BRIGHTNESS, where r, g and b are set to the values taken from the company colour. Using HSL and interpolating the L value would only be reasonable if the response curve interpolation calculations were way more expensive than the HSL->RGB conversion.
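A sketch of that HSL-free variant, again assuming DEFAULT_BRIGHTNESS == 128 (the struct and function names are mine, purely illustrative):

```cpp
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

static const int DEFAULT_BRIGHTNESS = 128; /* assumed knot position */

/* Interpolate one channel with knots at 0 -> 0, DEFAULT_BRIGHTNESS -> cc
 * (the company colour's channel value), 255 -> 255. */
static inline uint8_t LerpChannel(uint8_t cc, uint8_t luma)
{
	if (luma <= DEFAULT_BRIGHTNESS) return (uint8_t)(cc * luma / DEFAULT_BRIGHTNESS);
	return (uint8_t)(cc + (255 - cc) * (luma - DEFAULT_BRIGHTNESS) / (255 - DEFAULT_BRIGHTNESS));
}

/* Recolour a pixel directly in RGB: three lerps, no HSL round-trip. */
static inline Rgb RecolourPixel(Rgb cc, uint8_t luma)
{
	return { LerpChannel(cc.r, luma), LerpChannel(cc.g, luma), LerpChannel(cc.b, luma) };
}
```

The per-pixel cost is three integer mul/div pairs and one branch, with the company colour acting as the middle knot for all three channels at once.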