Thursday, June 29, 2006

Gamma, Again! (Part VI)

In Gamma, Again! (Part V) I expressed befuddlement at the fact that CRT-based studio monitors alter their native gamma figure, which is supposedly 2.5, to 2.2. I was citing Dr. Raymond Soneira's four-part series titled "Display Technology Shootout" in Widescreen Review magazine, Sept.-Dec. 2004, online in PDF form here and accessible directly as a web page here. Herein, more about the studio monitor issue.

To recapitulate my earlier posts, gamma is a number that tells how quickly the luminance (L) produced by a television or computer monitor's screen rises as the video signal's analog voltage or digital code level (V) steadily ascends from the minimum possible value (black) to the maximum, which in video parlance is "reference white," assuming all three primaries, red, green, and blue, are equally represented in the signal.

Mathematically, the equation

L = V^Ɣ

(Ɣ is the Greek letter gamma) represents the transfer function of the monitor, assuming its BRIGHTNESS or BLACK LEVEL control has been carefully adjusted so that it produces minimum L precisely for minimum V.

If its BRIGHTNESS or BLACK LEVEL control has been set either too low or too high, the equation becomes
L = (V + e)^Ɣ

where e represents the amount of the black-level adjustment error. Failure to take e into account leads to an erroneous assumption that Ɣ itself has changed. If black level is set too low, Ɣ seems to rise, as the picture seems to gain contrast. If black level is set too high, Ɣ seems to go down, as the image contrast seemingly lessens.
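To see how strong the illusion is, here is a quick sketch of my own (not something from the sources cited here): it simulates a display whose true gamma is 2.5 but whose black level is misadjusted by e, then fits a pure power law to the result. The apparent gamma swings noticeably in the direction just described.

```python
import numpy as np

def apparent_gamma(e, true_gamma=2.5):
    """Fit a pure power law L = V**g to a display whose actual response
    is L = (V + e)**true_gamma, i.e. one with a black-level error of e."""
    V = np.linspace(0.1, 1.0, 50)                   # normalized signal levels
    L = (V + e) ** true_gamma
    slope, _ = np.polyfit(np.log(V), np.log(L), 1)  # log-log slope = apparent gamma
    return slope

for e in (-0.05, 0.0, +0.05):
    print(f"black-level error {e:+.2f}  ->  apparent gamma {apparent_gamma(e):.2f}")
```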


Video cameras are standardized with an inverse transfer function which, a bit oversimplified, looks like

V = L^(1/Ɣ)

where L is now the scene luminance arriving at the camera's image sensor and Ɣ is (approximately) the assumed gamma of the TV monitor. The exponent here is for the purpose of gamma correction. It serves to (almost) neutralize the gamma of the TV, for an end-to-end power of (close to) one.

By "end-to-end power" I mean what you get when you multiply the mathematical power or exponent in the second equation by that in the first. I parenthesized the words "approximately," "almost," and "close to" above because considerations of rendering intent or viewing rendering dictate that the end-to-end power actually ought to be somewhat greater than one.

As I said in my earlier post, one reason for this is that a video display's luminance levels are tiny fractions of the real-world luminances that arrive at the camera's image sensor from the original scene. Another is that we customarily frame video images in unnaturally dark surrounds and view them in darkened or semi-darkened rooms. A third is that our TV screens usually cannot achieve the ultra-wide contrast ratios found in nature.

These ideas about rendering intent and end-to-end power come from Charles Poynton's excellent textbook, Digital Video and HDTV: Algorithms and Interfaces. Poynton is a video and color imaging guru who speaks with a great deal of authority about such matters. I contacted him by e-mail and asked him to comment on Dr. Soneira's notion that our TVs at home ought to emulate the 2.2 gamma of studio monitors.


The issue is one that, after all, boils down to the end-to-end power of the video delivery system. When gamma correction in the video source — a camera or film scanner — uses an exponent whose advertised value is 0.45 (close to 1/2.2) but whose effective value, once the linear segment near black is taken into account, is about 0.5, and the resulting signal is delivered to a home TV whose gamma is 2.5, the end-to-end power is 0.5 x 2.5, or 1.25. That tends to provide optimal results when the image is viewed in a dim-but-not-pitch-black environment, says Poynton's book.

But images that are viewed in total darkness ought to have 1.5 as their end-to-end power, Poynton says. If the effective gamma-correction exponent is raised from 0.5 to 0.6, a gamma-2.5 TV yields that end-to-end power. (If the gamma correction stays the same, my calculator tells me, adjusting the TV's gamma to 3.0 would have the same effect. But how many TVs can be adjusted to 3.0 gamma?)
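To make the arithmetic concrete, here is a tiny sketch of my own of the whole chain, scene luminance to gamma-corrected signal to screen luminance. The exponents are the ones just discussed; the 0.18 "scene gray" value is merely a convenient test tone, not anything from Poynton.

```python
def reproduced(scene, camera_exponent, display_gamma):
    """Scene luminance -> gamma-corrected signal -> screen luminance.
    The net effect is a single power: scene ** (camera_exponent * display_gamma)."""
    signal = scene ** camera_exponent      # gamma correction at the camera
    return signal ** display_gamma         # decoding gamma at the display

for cam, disp in [(0.5, 2.5), (0.6, 2.5), (0.5, 3.0)]:
    print(f"camera {cam} x display {disp} -> end-to-end {cam * disp:.2f}; "
          f"0.18 scene gray lands at {reproduced(0.18, cam, disp):.3f}")
```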

So the effective gamma-correction exponent is crucial to rendering intent. But here's where the studio monitor, used by "the creative people involved in making a program [to] approve their final result," as Mr. Poynton so succinctly puts it in his e-mail reply to me, makes its crucial appearance.


The studio monitor has, of course, its own gamma exponent, which determines how much image contrast there is in movie scenes that are rendered on its screen. If the "creative people" in, say, a DVD post-production facility don't see enough contrast, they can in effect raise the original film-to-video scanner's gamma-correction exponent to provide a more contrasty result on the eventual DVD.

But what happens when the post house monitor has, say, 2.2 gamma and your TV's is fully 2.5? The end-to-end power of the video delivery system as a whole thereby rises above what it would otherwise be, right?

That's not necessarily bad, mind you. Remember the example I gave above, in which hiking the gamma-correction exponent from 0.45 to 0.6 made for an ideal image as viewed in a totally dark viewing environment on a gamma-2.5 TV? It suggests that "creative people" in post houses, who use (according to Dr. Soneira) gamma-2.2 studio monitors to tweak images as they are being viewed in the post facility under subdued lighting conditions, wind up producing just the right amount of image contrast for DVDs watched in pitch-dark home theaters whose displays have gamma figures notably higher than 2.2. True?


Well, maybe. Mr. Poynton now says, in his kind e-mail response to me, "Current practice — as far as I can determine, after a decade or more of work — is that studio monitors have 2.4 gamma" (!). He also suggests that he now finds 2.4 to be a "more realistic" estimate of the inherent gamma of a CRT, a fact of life upon which the need for gamma correction was originally based. Studio monitors, of course, are usually CRTs.

Meanwhile, owing to what are perhaps misinterpretations by TV makers of official standards for video-production engineers and television studios, consumer TVs may be getting built-in gamma exponents less than 2.4. Poynton:

Rec. 709 [the broadcast standard for modern digital HDTV] standardizes the factory setting of a camera's gamma correction, but fails to mention viewing rendering, and misleadingly includes an inverse (code-to-light) function. Inclusion of the inverse function suggests its use in a monitor, but actually the function would yield scene-referred values, not display-referred (rendered) values. And no video textbook — save mine — even mentions the issue! It's a mess.

I think that by "suggests its use in a monitor" he means, here, that Rec. 709 specifies an inverse function whose gamma-correction exponent amounts to an "advertised power" of 0.45, which is roughly equal to 1/2.2. "Advertised power," in Poynton's terms, refers to the fact that "taking into account the scaling and offset required to include the linear segment [at the lower end of the curve, the effective exponent] is effectively 0.51." But that's not the key thing here.
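For the curious, the Rec. 709 camera transfer function itself is easy to tabulate: a linear segment (slope 4.5) below 0.018, and 1.099·L^0.45 − 0.099 above it. The little sketch below is mine, not Poynton's; it compares the standard curve against pure 0.45 and 0.51 power laws, and over most of the tonal range the standard curve hugs the 0.51 power, which is the point of the "advertised" versus "effective" distinction.

```python
def rec709_oetf(L):
    """Rec. 709 camera (opto-electronic) transfer function."""
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

for L in (0.02, 0.1, 0.18, 0.5, 1.0):
    print(f"L={L:<4}  Rec.709={rec709_oetf(L):.3f}  "
          f"pure 0.45 power={L ** 0.45:.3f}  pure 0.51 power={L ** 0.51:.3f}")
```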

The key thing is, rather, that all manner of people, TV makers included, see that advertised exponent of roughly 1/2.2 or 0.45 codified in the Rec. 709 standard and think consumer HDTVs ought consequently to have a 2.2 gamma exponent.

So even CRT-based consumer HDTVs (what few of them there are) may be using digital signal processing (DSP) to change what would otherwise be an inherent 2.4 gamma to 2.2! And makers of non-CRT displays — plasmas, LCDs, etc. — are following suit. Their (DSP-imposed) gamma figures are basically the same as modern CRTs': 2.2, or thereabouts.


Mr. Poynton brings up another subtle but interesting point in his e-mail to me:

Rec. 709 has an advertised power of 0.45 but taking into account the scaling and offset required to include the linear segment [in the region at the lower end of the curve, near black] it is effectively 0.51. For 2.4-power display, end-to-end power is about 1.2. That's appropriate for a daylight scene with diffuse white at about 30,000 [cd/m2]. Viewing rendering needs to be reduced if the scene is shot at candlelight: End-to-end power should then drop to about 1.1, requiring effective 0.46 at the camera (requiring reducing advertised gamma to about 0.40).

My interpretation: in a brightly lit daylight scene, sunlight will reflect off a white sheet of paper with a luminance of about 30,000 candelas per square meter, or 30,000 cd/m2. A televised image of that scene needs an end-to-end power of about 1.2 when viewed (I assume) in dimly lit but not pitch-black environs.

But a scene shot in candlelight will have a maximum luminance lower than that by several orders of magnitude. The TV screen's output luminance will now be able to nearly match that of the original scene. Since one of the main reasons for using an end-to-end power well greater than 1.0 has disappeared for this particular scene, end-to-end power "should then drop to about 1.1."

Since the TV cannot be expected to make that gamma adjustment on the fly, the effective power or exponent of the camera (or film scanner or post-house tweaking station) ought to drop to 0.46. In view of the difference between effective power and advertised power, the latter ought to be reduced to 0.40 for a candlelit scene.
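My own back-of-the-envelope check of those numbers: if the end-to-end power is simply the camera's effective exponent times the display's gamma, then for a 2.4-gamma display the required camera exponent is just the target power divided by 2.4.

```python
def camera_exponent_for(end_to_end, display_gamma=2.4):
    """Effective camera exponent needed to hit a target end-to-end power."""
    return end_to_end / display_gamma

print(f"daylight scene  (end-to-end 1.2): {camera_exponent_for(1.2):.2f}")  # ~0.50
print(f"candlelit scene (end-to-end 1.1): {camera_exponent_for(1.1):.2f}")  # ~0.46
```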


I admit that all this stuff about gamma may seem like material for a Ph.D. thesis. Again, why should we care?

The main reason is that the best front- and rear-projection HDTVs today, properly adjusted and calibrated and with excellently produced video source material, can produce images that are stunningly film-like. Flat-panel displays are not far behind. We are very close to home-theater nirvana.

Achieving proper end-to-end power figures will put us even closer.

It looks to me as if that holy grail requires co-operation between program producers such as home-video post houses and HDTV makers. The people responsible for scanning films and tweaking the image for consumer DVDs have some latitude to adjust their effective gamma-correction exponents on a scene-by-scene basis, as Mr. Poynton suggests. But because gamma correction has to stick to providing a digital signal that can be encoded without artifacts in just 8 bits per color per pixel, that latitude is limited.

So it seems to me that our HDTVs need gamma adjustment capabilities.


HDTVs and front projectors with the ability for the end user to tailor gamma already exist, of course. As far as I can tell, gamma adjustment is even becoming a common feature on pricey high-end HDTVs. With these TVs, users are able to change the gamma setting to take into account such things as how dark or light their viewing environment is and what their own preferences are concerning image contrast. Users can also change gamma for DVDs or other video sources they feel have been rendered too dark or too light.

Those end-user gamma adjustments ought now to enter the mainstream. And they ought to be implemented in such a way as to tell the user exactly which gamma exponent (2.2? 2.4?) the TV is using.

In fact, it would be nice if TV makers would include in their TVs' firmware test patterns and software routines similar to SuperCal by which to measure and calibrate gamma. For that matter, why not include calibrating patterns for black level, white level, hue and saturation, etc.?

Tuesday, June 27, 2006

Gamma, Again! (Part V)

In Gamma, Again! (Part IV), I fussed with trying to determine the actual native gamma of one of my HDTVs, a Hitachi plasma. Now I'd like to get back to the main subject: what is gamma, and why should we care?

Gamma is basically the nonlinear way in which the light output or luminance of a TV screen (a.k.a. its intensity) represents the many possible levels of red, green, and blue, the three primary colors in the video signal.

The higher the display's gamma happens to be in a range from 1.0 to 2.5 or so, the darker and more contrasty the image.


Imagine a "ramp" test pattern:



In it, moving from the left edge of the screen to the right, the input signal rises in level from the minimum possible, for black, to the maximum possible intensity, for white, with all three color primaries present in equal amounts. As the signal level goes up, luminance lags behind. Due to this lag, the luminance output at, say, the halfway point across the screen, is actually a lot lower than it would seem by visual inspection.

That's because this lag is not apparent to the eye: the eye's lightness perception of the various levels of luminance is itself nonlinear. The human visual system tends to exaggerate the lightness variations at lower levels of luminance and compress those at higher levels. That's why this test pattern seems to reach its middle level of lightness right in the center of the left-to-right sweep.
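That perceptual quirk is easy to put numbers on. Here's a sketch of my own using the standard CIE 1976 lightness formula (not anything from the posts I'm citing): on a gamma-2.5 display, the 50% signal point of the ramp emits only about 18% of peak luminance, yet its computed lightness comes out near 50 on the 0-100 L* scale, i.e. roughly mid-gray to the eye.

```python
def cie_lightness(Y):
    """CIE 1976 L* (0..100) for a relative luminance Y in 0..1."""
    return 116 * Y ** (1 / 3) - 16 if Y > 0.008856 else 903.3 * Y

gamma = 2.5
V = 0.5                              # signal level at the middle of the ramp
L = V ** gamma                       # what the screen actually emits
print(f"relative luminance at mid-signal: {L:.3f}")                 # ~0.18 of peak
print(f"perceived lightness (L*): {cie_lightness(L):.0f} of 100")   # ~49
```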


If input signal level is V — for voltage, with analog signals; for video level, with digital signals — then the screen luminance L of the TV is given by the equation

L = V^Ɣ

where Ɣ, the Greek letter gamma, represents gamma. This function is in effect computed three times for each separate pixel in the image: once for red, once for green, and once for blue.

But why? Why not use the simpler function L = V, where the gamma exponent is effectively 1?

There are several reasons, it turns out. The most basic of these reasons is that cathode ray tubes, which are inherently nonlinear, operate with an intrinsic gamma of 2.5 or thereabouts.

That's the most fundamental reason, then, why signals intended for display on CRTs have always been gamma-corrected. A video camera creates V for each color primary according to a function something like

V = L^(1/Ɣ)

where L is the scene luminance at the camera's image sensor and Ɣ is (approximately) the assumed gamma of the TV.

I say "approximately" because the actual denominator of the exponent, in the gamma-correction equation, is typically 2.2, not 2.5. (Moreover, due to the fact that for very low values of L the functional relationship shown above is replaced with a straight line segment, the overall exponent is in effect slightly changed again; I'll ignore that nuance for now.)

Because the camera's actual gamma-correction exponent is roughly 1/2.2, or 0.45, gamma correction at the video camera serves to almost but not quite neutralize the gamma of a standard CRT display, which is nominally 2.5. Because the neutralization is incomplete, the final displayed image appears to have slightly more contrast than it would if the gamma correction were complete.


There are several reasons why the camera's gamma-correction exponent doesn't, and shouldn't, fully offset the actual display gamma. One reason is that a video display's luminance levels are tiny fractions of the real-world luminances that arrive at the camera's image sensor from the original scene. Another is that we customarily frame video images in unnaturally dark surrounds and view them in darkened or semi-darkened rooms. A third is that our TVs usually cannot achieve the ultra-wide contrast ratios found in nature.

All three of these reasons result in the need to "goose" image contrast. The best way to do that is to ensure that the gamma correction that is done in the camera does not fully offset the gamma of the display.


Yet according to Dr. Raymond Soneira's four-part series titled "Display Technology Shootout" in Widescreen Review magazine, Sept.-Dec. 2004, studio CRT monitors used in tweaking video images before they are broadcast or rendered on DVD typically have decoding gammas of 2.2, not 2.5. "Current CRTs," he writes in the second part of his series (in WR, Oct. 2004, p. 68; the article can be accessed directly as a web page here), "typically have a native gamma in the range of 2.3 to 2.6, so the gamma of 2.20 for Sony (and Ikegami) CRT studio monitors is actually the result of signal processing."

Whatever it's the result of, a gamma of 2.20 seems to violate the maxim that camera inverse-gamma ought not to fully compensate the gamma of the display. If the camera exponent is approximately 1/2.2, or 0.45, and the display gamma is 2.2, then (roughly speaking, at least) full compensation does occur.

I simply can't yet explain why studio CRT-based monitors, using digital signal processing, alter their native gamma figure, which is nominally 2.5, to 2.2.


I mentioned above that the basic reason why gamma correction is done in the video camera and for all other video sources is that CRTs are inherently nonlinear: the luminance they produce is not a linear function of signal voltage.

I also mentioned that the eye's lightness response to luminance is itself nonlinear, such that lower/darker levels of luminance are exaggerated, in terms of their apparent lightness, while higher/brighter levels are more compressed. That was why the ramp test pattern shown above seems to place its middle lightness level right in the center of its horizontal sweep, although the actual luminance at that point is far less than half the luminance of white at the right edge of the ramp.

By a strange coincidence, the eye's own version of "gamma correction," a perceptual trick by which its lightness response is not a linear function of luminance, approximately matches that done in video signal encoding to offset the gamma inherent in CRT picture tubes!

That is, a scientist's graph of the eye's lightness response to luminance has very close to the same shape (and thus the relevant equation has approximately the same exponent, 0.4 or 1/2.5) as a graph of the video gamma correction function.

As a result, a gamma-corrected video signal bears an approximately linear relationship to perceived lightness — though, as I have already said, not to measurable luminance. This is a second reason why gamma correction is done in video. A camera that does gamma correction responds to luminance patterns focused on its image sensor much as the human visual system responds to luminance patterns focused on its retina.

Another way of stating it is to say that gamma-corrected video has perceptual uniformity. (I am drawing here from Charles Poynton's excellent book, Digital Video and HDTV: Algorithms and Interfaces.) Each successive increase in gamma-corrected digital code value over the available range from 0-255 (or 16-235) boosts perceived lightness (though not physical luminance) by the same barely detectable amount. (A similar statement is true for analog video signals expressed in IRE units from 0-100, though each step up in signal level — say, from 50 IRE to 51 IRE — involves a more-than-minimally-detectable boost in lightness.)


Perceptual uniformity in gamma correction works out nicely for two reasons. First, the visibility of video noise, especially troublesome in darker parts of the scene having luminances at the low end of the available range, is effectively minimized.

In the absence of display nonlinearity and gamma correction in the camera, digital video might instead be encoded in a "linear-light" domain. If 8 bits per primary color per pixel were used, the range of available code values would be (at most) 0-255. Suppose the "correct" code value for a gray pixel (ignoring color) were 50, but due to the presence of noise in the circuits of the video camera, it is instead encoded as 51. The seemingly tiny difference in code value would produce a 51/50 = 102/100 = 1.02 = 102% ratio of actual luminance to intended luminance.

That is, the actual luminance on the monitor screen would be 2% higher than it ought to be. But differences in luminance of just 1% can be detected by the eye, at least under certain conditions. So the noise in the non-perceptually uniform, linear-light signal is apt to be noticeable.

However, gamma correction of the luminance at the image sensor of a camera into a perceptually uniform signal domain according to a 1/2.5 power function compresses relative (percentage) variations in luminance (a 2% wiggle in luminance becomes only about a 0.8% wiggle in the gamma-corrected signal), and with them what I'll call the low-end noise. The luminance-plus-noise quantity which in the above example prompted an erroneous code value of 51 might, with gamma correction, yield a number like 50.4. But since only integer codes are allowed, this would be rounded to 50, and the noise would disappear!
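Here is a minimal sketch of that comparison, using the same hypothetical schemes as above (full-range 8-bit codes, a 1/2.5 encoding exponent); the two test tones are chosen so that each lands on code 50 in its own scheme.

```python
def linear_code(L):
    """Hypothetical 8-bit linear-light code for relative luminance L."""
    return 255 * L

def gamma_code(L):
    """8-bit code after 1/2.5 gamma correction."""
    return 255 * L ** (1 / 2.5)

L_lin = 50 / 255            # tone that sits at code 50 in the linear scheme
L_gam = (50 / 255) ** 2.5   # (much darker) tone that sits at code 50 after gamma correction

# add 2% luminance noise to each and see whether the integer code changes
print(f"linear:          {linear_code(L_lin):.1f} -> {linear_code(L_lin * 1.02):.1f}")  # 50.0 -> 51.0
print(f"gamma-corrected: {gamma_code(L_gam):.1f} -> {gamma_code(L_gam * 1.02):.1f}")    # 50.0 -> 50.4
```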

A similar logic also applies to analog video signals. Without gamma correction, low-end noise would be more of a problem than it is, simply because the eye is more sensitive to lightness variations at the low end of the tonal range than at the high end.


Noise at the high end of the tonal scale or lightness range is much less of a problem. Again, imagine an 8-bit tonal scale with codes 0-255. If camera noise takes a "correct" pixel value up one level from 200 to 201, the ratio is just 201/200 = 100.5/100 = 1.005 = 100.5%. A mere 0.5% rise in luminance is not detectable to the eye, which under the best of circumstances needs a 1% jump in luminance for differences to be visible.

But there is a separate problem which affects the middle portion and high end of the tone scale in digital video. Poynton calls it the "code 100" problem, and it is the second reason why digital video needs to be gamma-corrected into a perceptually uniform domain.

The "code 100" problem has to do with the need to provide a minimum of a 30:1 contrast ratio between the brightest-possible parts of a scene and the darkest-possible parts. Without gamma correction, the codes from 0 to 100 in an 8-bit encoding scheme, with values from 0-255, have to be thrown out, for reasons similar to the discussion of noise above. That is, each successive code increment (say, code 50 to code 51) provides a much-more-than-barely-detectable boost in output luminance, in a linear-light encoding system.

Accordingly, what should be shades of color or gray that blend indistinguishably into one another instead exhibit banding or false contouring: visible striations that were not present in the original subject matter.

Thus, the codes from 0-100 have to be thrown out and never used. Black has to be identified with code 100, not code 0. (Remember, we are talking here about a hypothetical linear-light method of 8-bit digital encoding, not what is actually done in the real world of digital video.)

If white is at code 255 and black is at code 100, then the ratio between the two is only 255/100, or 2.55:1. That's way too low, when 30:1 is considered the minimum acceptable ratio.

In order to get a contrast ratio that meets or exceeds 30:1, you have to go to 12-bit linear-light coding. Then white is at code 4095, not 255, and black is at 100, for fully a 40.95:1 contrast ratio.

But many of the available codes are in effect wasted; they're not perceptually useful. For example, the eye can't see the difference between any two codes in the range from 4001 to 4040, because the luminance associated with the code at the top of the range is less than 1% above that associated with the code at the bottom.
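The arithmetic behind those figures is easy to check; here is a quick sketch of the hypothetical linear-light schemes described above:

```python
import math

print(f"8-bit linear,  white 255 / black 100:  {255 / 100:.2f}:1")    # 2.55:1
print(f"12-bit linear, white 4095 / black 100: {4095 / 100:.2f}:1")   # 40.95:1

# adjacent codes near the top of the 12-bit scale differ by far less than
# the roughly 1% luminance step the eye can just detect
print(f"4040 vs 4001: ratio {4040 / 4001:.4f}")                       # ~1.0097, under 1%

# number of 1%-spaced steps actually needed to span 100..4095
print(f"perceptually distinct steps: {math.log(4095 / 100) / math.log(1.01):.0f}")  # ~373
```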

Gamma-correcting the signal into a perceptually uniform lightness domain allows the same amount of perceptually useful information, with a similarly acceptable contrast ratio, to be shoehorned into pixels of just 8 bits per primary color, not 12 bits. The approximately 1/2.5 power function that converts camera luminance amounts into a gamma-corrected video signal effectively "squeezes out" the wasted code levels. This, then, is the solution to the "code 100" problem.

It solves that problem while also dealing with the low-level noise that would also plague a linear-light 8-bit system, if codes below 100 weren't tossed out. That's why Poynton says gamma correction allows video signals to make maximum effective use of digital channel bandwidths.


So we have seen several reasons for gamma-correcting a video signal:

  • To precompensate for the nonlinearity of a CRT
  • To suppress low-level video noise
  • To code for perceptual uniformity
  • To avoid the "code 100" problem
  • To avoid wasted digital codes
  • To reduce the number of bits needed per pixel
  • To maximize effective use of the bandwidth of the digital channel
  • To maximize effective use of the capacity of a digital recording device

Many of these are, of course, merely ways of saying the same things in different words, when you come right down to it. The first three apply to analog and digital video, while the others are digital-specific. In fact, the last six, all of which have to do with perceptual uniformity, show why gamma correction would need to be done even if the luminance-to-voltage curve of a CRT were perfectly linear.

That is, gamma correction would have to be done even if a CRT display's native gamma exponent were a linear 1.0, rather than around 2.5, simply because in the digital video age coding for perceptual uniformity pays off so handsomely.


At this point in the discussion we may justifiably take a moment to thank our lucky stars. For it is an extremely fortunate coincidence that the gamma-correction equation which best imposes perceptual uniformity on the digital video signal is for all intents and purposes identical to the equation which best precompensates the gamma of a CRT picture tube!

If this were not so, video engineers would have to choose between a camera transfer function with an exponent which best precompensates a CRT's inherent gamma (which under these hypothetical assumptions would not be 2.5) and one which, in Poynton's words on p. 258, "makes maximum perceptual use of the channel." The latter constraint, says Poynton, requires video coding of an image in such a way as "to minimize the visibility of noise, and to make effective perceptual use of a limited number of bits per pixel" — while at the same time sidestepping the "code 100" problem as it relates both to a too narrow contrast ratio and to wasted code values.

But since a CRT electron gun's intrinsic response to signal voltage very neatly mimics the eye's perceptual response to scene luminance, both gamma-correction goals can be served by the same camera transfer function!

Sunday, June 25, 2006

Gamma, Again! (Part IV)

In a series of several articles, most recently Gamma, Again! (Part III), I have been exploring gamma, a measure of how the amount of luminance, L, that is output by a TV screen or computer monitor relates to the input signal's various possible brightness levels. Gamma is actually the exponent of the video signal level, V, in the equation

L = V^Ɣ

where Ɣ, the Greek letter gamma, represents gamma.

The originators of video images normally assume their images will be displayed on TVs that have a standard gamma of 2.2 — or, at most, 2.5. If the TV's gamma is too high, the image will be too dark — especially the darker portions of the image. But if gamma is too low, the image will appear washed out.

Gamma is nominally a constant that does not vary over the range of V, the possible brightness levels of the image's pixels. When that is the case, test patterns like this "Gamma Chart" from Ovation Multimedia's Avia: Guide to Home Theater calibration DVD can be used to measure the gamma of a TV:




This chart sets nine squares of various levels of gray against a background which, when viewed at a distance or with a squint, blends pure white and pure black in 50-50 proportions. That yields a 50% gray. The grays in the patches are nominally much lighter than 50% gray, but the gamma of the display acts to darken them. The label of the square which, when duly gamma-darkened, matches the 50% gray of the blended background gives the value of the display's gamma. (You may have to interpolate between two squares if gamma is odd — say, 1.9 or 2.3.)
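The logic of the chart is simple enough to sketch (my own illustration; I don't know the exact code values Avia uses). The blended background averages to 50% of peak luminance, so the square that matches it is the one whose signal level, raised to the display's gamma, comes out at 0.5, i.e. V = 0.5^(1/gamma):

```python
# signal level (normalized 0..1) that a display of a given gamma darkens
# to 50% of peak luminance, matching the black/white blended background
for gamma in (1.6, 1.8, 2.0, 2.2, 2.4, 2.6):
    V = 0.5 ** (1 / gamma)
    print(f"gamma {gamma}: square encoded at {V:.2f} matches the 50% background")
```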

Another gamma test pattern may be found here:



This one seems to give like results. But I find that there are several caveats that need to be kept in mind when trying to determine gamma using charts like these. For example, they don't necessarily work right if I display them on my TV screen via an S-video connection from my laptop, an older Mac PowerBook.

This computer does not have a video card per se. All it has is an external monitor connection and an S-video connection. When I hook the computer to my plasma TV via an S-video cable and play the Avia Gamma Chart test pattern into the TV via the DVD Player software on my Mac, I seem to get a gamma reading that is way too low.

What the exact gamma measurement is is hard to say. When I play the Avia Gamma Chart through the laptop's DVD Player software into the TV, the squares in the pattern are so light gray that gamma 1.6 has to be far too high.

When I play the screen snap I have taken of the Avia Gamma Chart into the TV via the Preview app, I seem to get somewhat darker squares and a gamma reading that is only slightly less than the lowest on the chart, 1.6.

When I play the same screen snap via the Firefox browser, the squares get lighter, but not as light as with DVD Player.

When I play the second test pattern shown above via Preview, it seems to say my TV has a gamma of 1.0 (!).

When I play that second test pattern via Firefox, gamma seems to read a smidgen above 0.8 (!).

Those low readings can't be anywhere near correct! Can they????


The issue is confused by several factors. One is that Macs use so-called ColorSync "display profiles" to tailor their screen images. These profiles involve, among other things, a target gamma setting. I can make different profiles that have different target gammas, and whichever one I make active will control all screen output ... with certain exceptions!

One exception seems to be that (on my laptop but not on my desktop iMac, for whatever reason) the display profile is ignored by the DVD Player software.

Another exception seems to be that when I use an external TV to "mirror" the screen of my laptop via an S-video hookup, the display profile does not affect the external TV.

Also confusing the issue is the fact that in various of the above scenarios the Avia Gamma Chart has to be scaled to fit a certain window size or screen resolution. It doesn't scale well: the supposedly equal-width horizontal lines of alternating black and white show up with different scaled widths, resulting at best in a regular oscillation of the blended background intensity and at worst in irregular spacings that shoot the blending all to hell.

The problem seems not to affect the second test pattern, whose lines are closer together. But it's anybody's guess how accurate it is in my situation. There are just too many X factors between it and my TV screen.


After I wrote the above it occurred to me that I was missing a bet. I suddenly realized that I might be able to use the same Apple Display Calibrator software which makes display profiles for my laptop computer's internal screen to make one for my Hitachi plasma.

This application opens from within the Mac'sSystem Preferences: Displays control panel when I click on the Color tab. I hadn't quite realized at first that Display Calibrator opens two windows, one for the built-in display of my PowerBook and another for the television connected via an S-video cable. When I select the Color tab in that second window, I find myself in Display Calibrator making a display profile for my plasma TV!

The process involves setting the TV's contrast to maximum (100/100) and its brightness to a level (40/100) where a dark gray apple in the middle of a darker gray square is just visible. Then there are five steps involving adjusting a vertical slider until an apple in the middle of a similar gray square perfectly matches the square in lightness. Each time, the apple and the square are at different intensity-level pairs, the result being that the Display Calibrator measures the gamma of the display at five points along the entire brightness gamut.

There is also, in each of the same five panels, a horizontal/vertical slider that adjusts the apple's chromaticity coordinates for a neutral gray. I didn't have to touch the chromaticity sliders at all.

So the succession of five interactive panels can be thought of as "gamma+chromaticity" measurements. After going through them, I was presented with a panel that let me set the target gamma that I wished to achieve. I could choose any gamma from 1.0 up to ... well, I don't remember how high the slider went. I could also click on TV Standard 2.2 or PC Standard 2.5 or even Linear 1.0, or I could check "Use Native Gamma" (which is what I did) to accept the native gamma of the TV.

Next came a panel that let me set the TV's target white point. Again, I opted for the native white point that had just been measured.

After a couple of further administrative steps, I was finally presented with a profile summary which told me that my native gamma was 1.03 (!) and my native white point was 6507K.

These chromaticity coordinates, which relate to those on the 1931 CIE chromaticity diagram, were also reported:


                     x       y
Red phosphor:        0.630   0.340
Green phosphor:      0.295   0.605
Blue phosphor:       0.155   0.077
Native white point:  0.313   0.329


That was with the "Standard" color temperature selected on the TV, by the way. With another setting I expect I may have gotten different chromaticity coordinates, at least for the white point.


The real scoop here is that the Hitachi's "native gamma" was a supposedly near-linear 1.03. In fact, I was unable to adjust a couple of the panels' gamma sliders far enough downward to get the magic apple-in-the-square to fully disappear, and I suspect had I been able to do so I might have read a native gamma of 1.0!

This made little sense to me, as a native gamma of 1.0 or thereabouts would seemingly make for a very washed-out picture. So I decided to investigate further by obtaining a shareware monitor calibrator for the Mac, SuperCal.

SuperCal does what Display Calibrator does, but with a more elaborate user interface. I won't attempt to describe all the steps needed to calibrate a monitor, but the steps that actually determine gamma are of some interest. They involve adjusting a slider until the top and bottom halves of a colored square set against a black background appear to be of equal brightness.

You do this several times for each of the three primary colors, red, green, and blue. Each time, you adjust another slider so that the adjustment you are presently making is for a different video level along the possible gamut from black to maximum intensity. As you make these adjustments, a graph inset in one corner of the screen shows you the actual gamma curve you are creating, point by point!

That's what gave me the insight I needed to figure out what was really going on.

For the graph is actually a gamma-correction graph. It shows the inverse of the gamma curve of the monitor, not the monitor's gamma curve per se. For example, if the gamma of the monitor happens to be 2.2, the graph shows

L = V^(1/2.2)

The rationale? What is actually being determined by SuperCal is what gamma-correction curve is needed in order to exactly neutralize the inherent gamma of the monitor.

Once it knows that, it can go on to produce a ColorSync profile for regular use with that monitor. In creating this profile, SuperCal lets the user decide which "target gamma" to use. For instance, the user may select a target gamma of 2.5, such that anything that appears henceforth on the monitor screen looks like it would on a Windows PC, where the usual gamma is 2.5.

In effect, SuperCal first neutralizes the "native gamma" of the monitor, and then it simulates whatever "target gamma" the user opts for.


It occurred to me that I might expect SuperCal to show me a triad of gamma-correction curves, one for each primary color, that were absolutely linear, if I were to use it and not Display Calibrator to calibrate my Hitachi plasma, while it is hooked to my laptop via S-video.

Unlike the usual case in which the monitor's gamma-correction curve humps upward away from the slope-1.0 diagonal of the graph, I would expect it to hug that diagonal slavishly — if, that is, my earlier experiments were correct, and the "native gamma" of the Hitachi is indeed 1.0.

And that's exactly what happened. In each case, the red, green, or blue gamma-correction plot SuperCal showed me was a ruler-straight line coincident with the slope-1.0 diagonal of the graph.

All of which brought home to me what I said above: SuperCal and Display Calibrator are in the business of measuring a monitor's native gamma so that they can neutralize it with the appropriate gamma-correction curve and then (optionally) replace it with the target gamma of the user's choice.


Once my thinking about the above became clear, it next occurred to me that the "native gamma" of my Hitachi plasma was reading at 1.0, or linear, in my Mac-to-Hitachi S-video scenario because something else on the Mac is already gamma-correcting the signal.

That is, there appears to be an independent software or hardware gamma-correction function built into the Mac's S-video-out signal path. Since this path "knows" the target monitor is a television receiver, it perhaps uses the inverse of the TV-standard gamma, 2.2, as the exponent of its signal-transformation function:

L = V^(1/2.2)

That's a guess as to the particular inverse exponent it uses, of course. But whatever the actual exponent, it appears to be one that exactly inverts the true native gamma of my Hitachi. If it were not, then SuperCal would surely show me gamma-correction plots that do not precisely hug the slope-1.0 diagonal!


So I have to conclude that, although I don't yet know the actual true native gamma of my Hitachi plasma, I do know that (a) it matches what the Mac "expects" any television it feeds at its S-video output to have (possibly 2.2) and (b) it is, as it should be, constant over the video brightness range (or SuperCal could not show me a straight slope-1.0 gamma-correction function).

Friday, June 23, 2006

Gamma, Again! (Part III)

In Gamma, Again! (Part I) and Gamma, Again! (Part II) I discussed gamma, a number which describes a video display's luminance (its light output) as it relates to the input signal's various levels (the range of possible pixel-by-pixel brightnesses, from 16 to 235 for digital video). Gamma determines image contrast.

If gamma is fairly high — say, 2.5 or more — shadows in images may appear too deep and dark. Portions of the TV screen whose pixel values are in the low end of the 16-235 brightness range will be rendered with less luminance than if gamma were relatively low. (In fact, all brightness levels besides pure black at pixel code 16 and pure white at pixel code 235 will be dimmer, the higher gamma is.)

As gamma drops from 2.5 to, say, 2.2 or even 1.8, dark areas of the screen begin to open up to visual inspection. The input pixel values don't change, of course, but their output luminance levels go up. And the effect is greater at the lower end of the brightness range than at the higher.


Note that "brightness range" is a phrase I use loosely to connote the entire possible gamut of input signal levels. For digital video, this gamut is expressed in terms of numeric code values that apply separaately to each pixel contained in the image. These values range from 16 at the low end (black) to 235 at the high end (white). Values between 17 and 234 represent shades of gray.

Or, if just one primary color is being considered — say, red — 16 is (again) black and 235 is red at its maximum possible brightness. Values between 17 and 234 represent the darker reds that are found at the low end of the scale and, at the high end, increasingly brighter reds.

When just one primary is involved in defining a particular pixel of the image, the code value of each of the other two primaries is presumably at its minimum, 16, for that pixel. In more general cases, the pixel's other two primaries also have code values somewhere between 16 and 235. If all three primaries have the same code value — say, 16 or 100 or 200 or 235 — they mix to make "colorless" black (code 16), or gray (codes between 17 and 234), or white (code 235).

Code values may also be defined over a 0-255 range, instead of 16-235. 0-255 is the range used for computer graphics. 16-235 is the range nominally used for television video. But some televisions and some video sources such as DVD players prefer the 0-255 brightness range as well.

Analog video uses voltages — numbers of millivolts — to express a signal's brightness range. Also in common use are "IRE units," from 0 to 100, which can apply to either analog or digital signals. Finally, a brightness range is sometimes "normalized" to fit within arbitrary brackets such as 0.0 to 1.0.
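Since these representations come up again and again, here is a small sketch of my own that converts among them, assuming the usual studio-swing mapping (black at code 16, white at 235) with full-range 0-255 as the alternative:

```python
def normalize(code, full_range=False):
    """Map an 8-bit code value to a normalized 0.0-1.0 brightness level."""
    black, white = (0, 255) if full_range else (16, 235)
    return (code - black) / (white - black)

def to_ire(code, full_range=False):
    """Express the same level in IRE units, 0 (black) to 100 (white)."""
    return 100 * normalize(code, full_range)

print(to_ire(16))     # 0.0   -> black, studio swing
print(to_ire(235))    # 100.0 -> reference white
print(to_ire(126))    # ~50   -> roughly mid-signal
```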

However it is defined, this brightness range from black to white/maximum brightness is the basis for gamma. A TV's gamma function causes it to reproduce the values in the brightness range in nonlinear fashion: output luminance does not rise in one-to-one lockstep with the input signal. The higher gamma is, the more nonlinear the reproduction. Gamma 2.5 produces a deeper, darker, and more contrasty image than gamma 2.2.


Existing video standards assume that TVs have a gamma of 2.2 ... though cathode-ray tubes or "picture tubes" actually have a native gamma of 2.5! In CRTs today, the difference between gamma 2.5 and gamma 2.2 is the result of digital signal processing. Values from look-up tables (LUTs) buried in the TV's digital circuitry effectively modify the native transfer characteristic of the TV.

Non-CRT displays — flat-panel plasmas, LCD panels and rear projectors, DLP rear projectors, etc. — also need specific transfer functions: CRT-like ones, if they are to look like CRTs. But here, in a recent review of a super-pricey plasma HD monitor, the Mitsubishi PD-6130, we see that TV designers can have other ideas in mind. According to reviewer Gary Merson of Home Theater magazine:

"The set has a four-level gamma control that adjusts the logarithmic relationship between input signal level and display level. Good gamma contributes to subtle changes in brightness, and video is ideal at a gamma value of 2.2. The PD-6130's number-four setting had an average value of 2.05 over the entire brightness range. (The value changed in different parts of the range, which is why I specify an average here.) At the number-one setting, the average gamma measured 1.78."


This one short paragraph says a lot:

  1. Some HDTVs let users choose among various gamma settings.
  2. The gamma choices they offer can have arbitrary names/numbers, not the actual gamma values themselves.
  3. Experts like Merson look for a nominally standard gamma setting of 2.2.
  4. What they actually find can be a much different gamma.
  5. Some TVs' gamma figures actually change at different levels over the available pixel-brightness range.
  6. These TVs' actual average gamma figures, at any setting, are apt to be lower than the standard 2.2.

Sub-2.2 gammas make for a brighter image overall, a selling point at the video store. Gamma values that vary with pixel brightnesses let TV makers goose the picture for impressive blacks and snappy highlights, even as the sub-2.2 average gamma boosts the overall image.

Will such an eye-grabbing image satisfy us at home, in the long run? Maybe not. It would be nice if we could switch to a standard gamma of 2.2, straight across the entire brightness range.


But how? There are a great many obstacles. Obstacle #1 is the fact that we as end users generally have no way to measure the gamma of our TVs.

True, Ovation Multimedia's Avia: Guide to Home Theater, a calibration DVD, has among its test patterns a "Gamma Chart." My experience with it, however, is that the result is at best an average number which doesn't tell much about how the TV's gamma varies over the brightness range.

Moreover, the Avia-reported gamma figure can change if the user alters the TV's brightness/contrast settings or turns on such digital enhancement features as "black enhancement," "dynamic contrast," and the like.

Are there other ways to measure gamma? Ovation does also offer Avia: Pro, which boasts a "Gamma Response and Linearity to Light Test" that I unfortunately know little about. This seven-DVD test suite is aimed at professionals and ultra-serious home enthusiasts and costs around $350.

There are also various video setup/calibration products available from DisplayMate Technologies that run on Windows. I, as a Mac user, can't use them.

Other setup/calibration products exist as well, but it's not clear how many of them claim to let end users calibrate gamma.


Then there is the problem that many TVs simply don't provide user-accessible menu options for gamma adjustment. Their gamma adjustment capabilities are typically buried in their so-called service menus, which provide a series of parameters that let professional technicians, often using special instruments, optimize the picture. Most grayscale calibrations, for instance, are done in TV service menus.

Another problem is that, as mentioned, various user-available controls and features can alter gamma, as a side effect, so once you have somehow succeeded in measuring gamma and perhaps even tailoring it to your liking, other tweaks you might make can put you right back at square one.


So maybe the best policy is simply to become "gamma-aware": to learn to recognize what our various tweaks and settings might do to the tone scale, as some experts call the gray scale of our TV screens. ("Gray scale" also describes the TV's "color temperature" — e.g., 6500K — and how neutral or tint-free it renders grays of various brightnesses in the image.)

Being gamma-aware might mean that, as we adjust a TV's brightness/black level setting, we simply realize that gamma is going in the opposite direction. Accordingly, if we raise black level, gamma drops. If we lower black level, gamma rises.

Being gamma-aware might mean that we use the "special" features of our "digital" TVs, such as "black enhancement" and "dynamic contrast," advisedly, since these features often affect gamma. In particular, they can make otherwise "straight" gammas "nonlinear."

By that I mean that ordinarily a luminance-vs.-signal plot, drawn on log-log axes, is expected to be a straight line, with gamma its numerically constant slope. But various digital signal processing tricks, such as "black enhancement" and "dynamic contrast" — by whatever names the TV maker chooses to call them — bend the log-log plot. It no longer has a constant slope. Rather, its slope is now a variable function of values in the brightness range.


That's not necessarily all bad, by the way.

Suppose, for instance, you watch TV in a brightly lit room, and because of it you need to boost the TV's brightness above the nominally correct black level setting, in order to see all the shadow detail that is present in the picture. That lowers effective gamma ... with the perhaps unwanted side effect of reducing color saturation. If you find a user-menu setting, such as "black enhancement," that counteracts that side effect by moving the picture's effective gamma back up in the other direction, you may want to turn it on.

"Gamma-bending" user options (we might call them) are like the various sound profiles provided by a stereo: "clear," "live," "flat," "beat," "pop," etc. Most of them are by design far from flat or linear in their frequency response. Or, turning on a stereo's "bass boost" button doesn't yield a flat response curve, either ... but it can sure make the music sound subjectively better.

Likewise, "gamma-bending" user options can enhance the TV-viewing experience.


But when "gamma-bending" is built by a TV's manufacturer into each and every luminance response curve the TV offers, with no "straight" gamma-2.2 setting available, things are not so good. As Dr. Raymond Soneira's Grayscale/Color Accuracy Shootout article says, non-standard gamma can shift image contrast, image brightness, hue, and color saturation in ways subtle but real. We should not have to put up with it if we don't want to.

Also, when a TV's so-called "gamma" control simply "stretches" the lower portion of a luminance response curve, says Soneira — rather than change its overall slope, as it should — image artifacts such as banding/false contouring can result. I take "stretching" a portion of the brightness range or luminance response curve as equivalent to what I mean by "gamma-bending." It can give subjectively pleasing results — but at a cost to image accuracy.

Wednesday, June 21, 2006

Gamma, Again! (Part II)

In Gamma, Again! (Part I) I tried to show that gamma is an important characteristic of a television display or computer monitor. Now I'd like to show why and how it might be manipulated.

To recap, gamma is a number, usually in the range of 1.8 to 2.5 (or so), that describes how changes in a display's luminance or light output, L, track with changes in its input video signal level, V. V may be expressed in millivolts, in IRE units from 0 to 100, or in code values from 0 to 255 (or 16 to 235). Luminance can be measured in foot-Lamberts (ft-L) or candelas per square meter (cd/m2).

Specifically, gamma is the exponent or power-of-V in this functional relationship between V and L:

L = V^Ɣ

As such, when this function is plotted on log-log axes, the result is a straight line(!) with gamma as its slope. The larger gamma is, the steeper the graph's slope ... and the greater the "apparent" image contrast on the screen.

I say "apparent" because gamma doesn't make blacks any blacker or whites whiter. But at signal levels in between o-IRE reference black and 100-IRE peak white, increased display gamma makes us perceive shadows as being "deeper" or "darker." Conversely, if display gamma is decreased, the image appears "flatter," in terms of its "contrast," even as the overall "brightness" of the image goes up.

So what is the "correct" gamma for a display?

That simple question does not, unfortunately, have a simple answer. Rather, there are several possible answers.


The process of answering this question begins with noting that, historically, the concept of display gamma began as a concession to an operating characteristic of the cathode ray tube, or CRT. The familiar color "picture tube" in use today has three electron guns, one for each primary color: red, green, or blue. Each gun fires a beam of electrons at phosphors of the appropriate hue on the inner surface of the face of the tube, exciting the phosphors and making them emit colored light. The more electrons that are fired per unit of time, the brighter the phosphors glow.

An electron gun fires when and if a voltage is applied across its individual cathode and the common, positively charged grid inside the neck of the picture tube. But, crucially, the number of electrons fired per unit of time is not a linear function of the applied voltage. This is where the original idea of a gamma exponent comes in. It characterizes the mathematical relationship that determines the rate of an electron gun's firing as a function of its applied voltage.

As such, the fixed value of gamma in a modern CRT is typically 2.5, according to Charles Poynton in his book Digital Video and HDTV: Algorithms and Interfaces.

However, writes Poynton (p. 268), "The FCC NTSC standard has, since 1953, specified R'G'B' encoding for a display with a 'transfer gradient (gamma exponent) of 2.2'."


Let's break that down into smaller chunks. First of all, NTSC stands for National Television System Committee. In 1953, under the aegis of the Federal Communications Commission (FCC), the NTSC promulgated the television standard that is still in use today for standard-definition television transmissions in the U.S.A. These SDTV signals, also known in today's parlance as 480i, are in full color, though in 1953 color television was yet a brand new technology, still awaiting implementation.

Gamma correction is applied by broadcasters to the NTSC signal in the following way. Three color primaries, R (for red), G (for green), and B (for blue), are produced by the image sensor of a video camera. These three signals are immediately altered to, respectively, R', G', and B'. This is done by means of applying a mathematical transfer function whose exponent, the encoding gamma, is roughly the reciprocal of the display's inherent decoding gamma.

Later, R', G', and B' are matrixed at the TV studio to form Y', Pb, and Pr. The "black and white" part of the signal is, accordingly, gamma-corrected Y', or luma. Pb is, apart from a scale factor, (B' - Y'). Pr is likewise (R' - Y'). From these three transmitted video "components" the television receiver will be able easily to recover B' and R'. It can also derive (G' - Y') and thus G' itself.
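A small sketch of the matrixing, using the Rec. 601 luma coefficients that apply to this NTSC-era standard-definition encoding (and leaving out the scale factors that turn the raw color differences into Pb and Pr):

```python
KR, KG, KB = 0.299, 0.587, 0.114     # Rec. 601 luma coefficients

def encode(r, g, b):
    """Form luma and the two (unscaled) color-difference signals from R'G'B'."""
    y = KR * r + KG * g + KB * b
    return y, b - y, r - y           # Y', (B' - Y'), (R' - Y')

def decode(y, by, ry):
    """Recover R', G', B' at the receiver."""
    r, b = ry + y, by + y
    g = (y - KR * r - KB * b) / KG   # G' falls out of the luma equation
    return r, g, b

print(tuple(round(c, 3) for c in decode(*encode(0.25, 0.50, 0.75))))   # (0.25, 0.5, 0.75)
```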

Once the television receiver has recovered R', G', and B', it uses each as a voltage to drive the CRT's appropriate electron gun directly (assuming, that is, that it is a CRT). Since the picture tube is intrinsically nonlinear, as reflected in its native decoding gamma exponent of 2.5, in effect R', G', and B' are automatically turned back into R, G, and B at the face of the cathode ray tube itself!


But what's this about a "transfer gradient (gamma exponent) of 2.2"? Shouldn't that be 2.5, in view of the fact that "modern CRTs," per Poynton (p. 268), "have power function laws very close to 2.5"?

For the encoding gamma used by broadcast TV stations when they gamma-correct their signals, the reciprocal of the decoding gamma is, supposedly, typically used. So if 2.2 is assumed to be the display's decoding gamma — not 2.5 — the encoding gamma becomes 1/2.2, or 0.45. If a decoding gamma of 2.5 is assumed, on the other hand, the encoding gamma would nominally be 1/2.5, or 0.4.

Poynton, however, advises not taking this discrepancy "too seriously." For one thing, "the FCC statement," says Poynton, "is widely interpreted to suggest that encoding should approximate a power of 1/2.2," in spite of a decoding exponent that may be (and is) other than 2.2.


According to Dr. Raymond Soneira's four-part series, "Display Technology Shootout," published in Widescreen Review magazine, Sept.-Dec. 2004, studio CRT monitors used in tweaking video images before they are broadcast or rendered on DVD typically have decoding gammas of 2.2, not 2.5. "Current CRTs," he writes in the second part of his series (in WR, Oct. 2004, p. 68), "typically have a native gamma in the range of 2.3 to 2.6, so the gamma of 2.20 for Sony (and Ikegami) CRT studio monitors is actually the result of signal processing."

Translation: the original signals, once they've been gamma-corrected as discussed above, are fed to a CRT studio monitor to make sure they look right. This monitor is apt to have a "native gamma" in the range of 2.3 to 2.6 — say, 2.5. But signal processing — probably digital signal processing — that takes place within the monitor's electronics, prior to the final display device, makes the monitor operate as if its decoding gamma is 2.2, not 2.5.

How is this done?

Typically, it's done with "look-up tables." Each possible code value for the input signal — R', B', and G' are treated separately, but alike — can be looked up in a table in the memory of the digital signal processing (DSP) portion of the display's internal circuitry. The code values for each pixel of an input video frame are stored in a "frame buffer." Then they are in effect replaced by new code values that result when the original values are looked up in the look-up table or LUT. (When red, green, and blue primary colors are involved, and not just a grayscale image, the tables are color look-up tables, or CLUTs.)

As a result, each input code value is replaced with an output value in accordance with a mathematical transfer function which (in this case) converts the input signal so as to make the monitor's native gamma, 2.5, look instead like gamma 2.2.
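Here is a minimal sketch of such a LUT, assuming a native 2.5-gamma CRT being made to present an effective 2.2 decoding gamma (the principle, not any particular monitor's firmware):

```python
NATIVE_GAMMA = 2.5    # assumed intrinsic CRT response
TARGET_GAMMA = 2.2    # decoding gamma the monitor should appear to have

# each 8-bit input code is pre-distorted so that, after the CRT's native
# 2.5-power response, the overall result follows a 2.2-power curve
lut = [round(255 * (v / 255) ** (TARGET_GAMMA / NATIVE_GAMMA)) for v in range(256)]

v = 128 / 255
print((lut[128] / 255) ** NATIVE_GAMMA)   # ~0.219: what the screen emits
print(v ** TARGET_GAMMA)                  # ~0.219: what a true gamma-2.2 display would emit
```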

If the resulting image on the studio monitor doesn't, for whatever reason, yield the proper "depth of contrast" (shall we call it), technicians can revise the original signal's encoding gamma to make it look right after all. Maybe the original encoding gamma, 1/2.2 = 0.45, doesn't suit the source material to a T. Tweaking encoding gamma slightly — to, say, 0.44 — may be indicated.

At the end of the tweaking process, the result is an image that looks "perfect" on a monitor whose decoding gamma is 2.2, not 2.5. Why? Because of the decoding gamma of the studio monitor that is used in tailoring the image — given that the studio monitor's native gamma of 2.5 is altered, via DSP look-up tables, to an effective decoding gamma of 2.2.


All of this suggests that, if you have a CRT monitor or TV in your home, you'd like its native gamma of perhaps 2.5 to be digitally altered to an effective value of 2.2 as well. That way, you could at least in theory see the image of, for example, a movie on DVD just the way studio technicians saw it as they were massaging its essential parameters during the process of authoring the DVD.

Right?

In fact, even if you have a plasma, LCD, or other non-CRT display, an effective decoding gamma of 2.2 would seem to be the absolute holy grail of faithful image rendition.

Right?

Well, maybe it is. Or maybe not.


A lot depends on exactly how bright or dim your viewing environment is, compared with that in which the studio technicians were tweaking the images on their gamma-2.2 monitors.

Suppose you pride yourself on having a home theater in which there is zero illumination other than that produced by the screen itself. What if the studio techs were not working their magic in such a pitch black environment?

As Poynton points out in chapter 9 on "Rendering Intent," what really matters is how well a display system's "end-to-end power exponent" suits the purposes for which the images are intended to be used, in a particular viewing environment.

When you take the encoding exponent used to generate the gamma-corrected video signal and multiply it by the receiving TV's effective decoding gamma exponent, you get the "end-to-end power exponent" of the transmission system as a whole. And — perhaps surprisingly — this end-to-end exponent had better be greater than 1.0!


If the end-to-end exponent were exactly 1.0, the overall system would be perfectly linear, and that's not good.

Why? There are several reasons. One, in video images the amount of luminance is scaled way down from that of the original scene. Unless the end-to-end exponent compensates for this, the on-screen version of the image will seem to have much less apparent contrast (and also much less colorfulness).

Two, when video is watched in the dark or near-dark, the apparent contrast range of the image decreases, in what is known as the "surround effect" (Poynton p. 82). Normally, the human visual system artificially "stretches" the apparent contrast that exists among objects seen in a bright "surround," as in a brightly lit original scene. When that same interior-of-the-scene detail is arbitrarily framed and viewed on a video screen in a dimly lit room, this surround effect is not triggered, and the image contrast is perceived as unnaturally flat.

Three, the actual contrast ratio of the video or film camera is apt to be much less than the 1000:1 or more found in nature. And the display device's own limitations typically constrain the actual contrast in the viewed image yet further.

Gamma-linear transmission systems don't take such realities into account. Hence they don't yield images whose "image contrast" or "tone scale" is perceptually correct.


If the decoding gamma is 2.5 and the encoding gamma is effectively 0.5, their product is 1.25 — an end-to-end exponent well-suited for TV images being watched in dim, but not totally dark, surroundings. Or so says Poynton (p. 85). This ideal situation is what can happen if the "advertised exponent" used in the encoding portion of the system — in, say, the television studio or the DVD post-production facility — is 0.45, or 1/2.2.

Here's what that means: you start with an "advertised" encoding gamma exponent of 0.45, or 1/2.2, and then you alter the curve it generates on graph paper so that the portion near its left end becomes a straight line segment rather than a curve. This avoids the unwieldy situation of a graph with infinite slope at its leftmost point.

But it also changes the effective overall encoding gamma of the system as a whole to something more like 0.5.

As a result, when you compute the end-to-end exponent of the system as a whole, you multiply the effective decoding gamma of the TV (Poynton simply assumes it's 2.5) by 0.5, not 0.45. The result is 1.25, an end-to-end exponent which gives you image contrast well-suited to a dim-but-not-pitch-black viewing environment.
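
To put some code behind those numbers, below is my own Python sketch of a Rec. 709-style encoding function, whose straight-line segment near black is exactly the modification just described, followed by the end-to-end multiplication. The constants are the published Rec. 709 ones; calling the overall curve "effectively a 0.5 power" is Poynton's approximation, not something the code derives:

    def encode_709(L):
        """Rec. 709-style gamma correction: a linear toe near black,
        then an 'advertised' 0.45-power segment."""
        if L < 0.018:
            return 4.5 * L
        return 1.099 * L ** 0.45 - 0.099

    advertised_exponent = 0.45   # 1/2.2, the nominal figure
    effective_exponent = 0.5     # what the whole curve behaves like (Poynton's approximation)
    crt_decoding_gamma = 2.5     # assumed native CRT exponent

    print(effective_exponent * crt_decoding_gamma)   # 1.25, suited to dim-surround viewing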


But what if, as I asked earlier, your home theater is instead pitch black? In that case, Poynton says, you want an end-to-end exponent of 1.5, not 1.25.

Why's that?

Basically, says Poynton (pp. 84-85), it has to do with the fact that turning the lights all the way off in the viewing environment is apt to prompt you to reduce the display's black level, via its brightness control setting. Otherwise, you may place a "gray pedestal" beneath all of the display's luminance levels, and blacks in particular may look not black but dark gray.

Put the opposite way, turning the lights on in a formerly pitch-black TV room typically creates more luminance on and reflected by the screen than just that produced by the TV itself. So the TV's brightness control must typically be turned up so that low-level details in the picture don't get swamped by the ambient illumination.

When you reduce the TV's black level for viewing in a totally dark room, the slope of the display's log-log gamma plot goes up — from 2.5 to about 2.7, in Poynton's example. As a result, the end-to-end exponent (still assuming an effective encoding exponent of 0.5) rises above 1.25, toward the 1.5 figure appropriate for a dark surround.

Using similar logic, Poynton shows that the brightness or black level setting of the display will typically be boosted above that necessary in a dimly lit environment, if the viewing room is brightly lit. Now the effective decoding gamma drops to about 2.3, pulling the end-to-end exponent down toward the 1.125 figure appropriate for a brightly lit surround.


But here's the catch. Video engineers apparently assume that you don't want to know all of the above, that you don't habitually adjust your TV's brightness depending on how many lights are on in your viewing room, and that you have no control whatsoever over gamma. So they impose a "rendering intent" on their images right from the start.

According to Poynton, they assume you'll be watching video on a (gamma 2.5) CRT in a dimly lit room, so they use an effective encoding exponent of 0.5 — or actually 0.51 — based on an "advertised" encoding exponent of 0.45.

The same goes for the producers of the films that eventually find their way to DVD via a so-called "video transfer" process. They use camera negative film and photographic print film that together impose what amounts to an end-to-end exponent of fully 1.5 — suitable for a totally darkened theater.

The creators of computer graphics, on the other hand, assume you'll be viewing them on a screen (again, one that presumably has native gamma 2.5) in a brightly lit office. So they arrange for their encoding gamma to drop to 0.45, based on an "advertised" exponent of 0.42.


This all means that, if you look at images in different (brighter or darker) surroundings than those envisioned by the creators' "rendering intent," you may want to be able to change the effective decoding gamma of your TV or monitor.

Or, if your display has been designed to have an effective decoding gamma of 2.2, not 2.5, you may (or may not) want to be able to change it.

Notice, here, a subtle source of confusion. One authority, Poynton, says video engineers expect your television to decode their signal using an effective gamma of 2.5. Another authority, Soneira, says video sources are tweaked to look right at gamma 2.2, not 2.5, since that is the effective decoding gamma of the typical studio monitor that is used in color- and contrast-balancing the source material.

So, which decoding gamma is "right"? The answer depends on assumptions made by people in far-off studios who don't really know exactly what your viewing environment is like, which effective gamma your TV really has ... and, furthermore, what your own idiosyncratic viewing preferences are, in terms of image contrast and desired colorfulness.

Moreover, when the source material is a movie on film, there is a video-transfer step in the signal delivery chain. In it, a colorist operating a telecine or film scanner decides how best to render an image originally produced on (I assume) camera negative or print film that has its own characteristic "gamma exponent" or "transfer function." The film has been shot and processed with an eye to giving it a "look" that may or may not be at all "natural."

How does the colorist respond? Possibly by choosing an unusual encoding exponent that lets the resulting video output look as "unnatural" as the film seen in theaters. But will that choice stand up to being viewed in your TV room, with your lighting situation?

Again, you might like to be able to take control over your TV display's gamma.


Doing so by means of adjusting brightness is, unfortunately, not good enough. Changing the black level of your TV might be an appropriate response to altered room lighting conditions, as in Poynton's discussion. It has the previously noted side effect on gamma: gamma goes up as black level is reduced, and vice versa. But it does not follow that you'd like to manipulate gamma, as a rule, by manipulating black level.

If you adjust black level properly for a given viewing environment, you will render reference black (0 IRE) in the video signal with minimal luminance on the screen, as is right and proper. Raise black level, and reference black turns into an ever lighter shade of gray. Lower it, and information in low-video-level portions of the image (say, 5 IRE) can't be distinguished from black.

So you don't want to use the TV's brightness or black level control to manipulate gamma, as a general rule.

So how do you manipulate your TV's gamma? That will be the subject of Gamma, Again! (Part III).

Monday, June 19, 2006

Gamma, Again! (Part I)

In the past I have tried several times to explain gamma, a crucial but hard to understand characteristic of TV displays (and also computer monitors) which affects contrast, brightness, hue, and saturation and thus how the TV picture looks to the eye. See HDTV Quirks, Part II, Gamma and Powerful LUTs for Gamma for two of my more elaborate attempts. Now I'd like to bring up the subject again and perhaps correct some of the mistaken impressions I left before.

Basically, a TV's gamma is a "transfer function" that decides how much luminance it will produce at various input signal levels:

[Graph: relative luminance output L plotted against relative video signal level V on linear axes, for gamma exponents of 1.0 (red), 1.8 (blue), and 2.5 (orange)]

At every possible level of the input video signal, V, the TV will produce a certain amount of light output. This is its apparent brightness or luminance, L. Mathematically, the functional relationship is:

L = VƔ

where Ɣ, the Greek letter gamma, is the exponent of the V-to-L function.

A (rough) synonym of luminance is intensity. Luminance or intensity may be stated in absolute units such as candelas per square meter (cd/m2) or foot-Lamberts (ft-L). In the graph above it is represented on the vertical axis in relative units from 0.0 to 1.0.

The input video signal may be a voltage in millivolts, for analog signals, or an integer, known as a code value or pixel value, for digital signals. The V in the equation above stands for either voltage or (code or pixel) value. The digital codes or pixel values that determine luminance or intensity are integers in the range 0-255 for computer applications, or 16-235 for video applications. Also, IRE units from 0 (or 7.5) to 100 may be used to represent either analog or digital signal levels.

Above, I put relative signal-level units from 0.0 to 10.0 along the horizontal axis, and I omit the 0.0 level entirely — pure or reference black, that is — to avoid trying to take the logarithm of zero later on. 10.0 represents peak white, also known as reference white. Everything between 0.0 and 10.0 is a shade of gray — for now, I'm ignoring color.

Setting aside gamma 1.0 (the red graph), a TV's "gamma curve" is nonlinear. The blue graph, representing gamma 1.8, is in a sense less nonlinear than the orange graph, representing gamma 2.5. The higher the TV's gamma number, the less linear is the TV's luminance output as a function of its input video signal.
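
A few lines of Python make the difference concrete (my own illustration; the 50-percent signal level is just an arbitrary test point):

    # Relative luminance produced at a mid-level input signal (V = 0.5)
    # for three display gammas. The higher the gamma, the darker the midtones.
    for gamma in (1.0, 1.8, 2.5):
        V = 0.5
        L = V ** gamma
        print(f"gamma {gamma}: L = {L:.3f}")

    # gamma 1.0: L = 0.500
    # gamma 1.8: L = 0.287
    # gamma 2.5: L = 0.177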


The higher the gamma, the more image contrast there seems to be:

[Two sample images, side by side: the left captioned "For gamma = 1.8," the right captioned "For gamma = 2.5"]


The photo on the left looks best when the monitor's gamma is 1.8 and is "too contrasty" when the monitor's gamma is 2.5. The Goldilocks on the right looks "just right" at gamma 2.5 and looks too washed out at gamma 1.8. So what an image ultimately looks like depends not only on the decoding gamma of the monitor or TV, but also on the encoding gamma that has been used in creating the image.

Encoding gamma needs to bear a roughly inverse relationship to decoding gamma. If the monitor has decoding gamma 2.5, the encoding gamma exponent ought to be roughly 1/2.5, or 0.4. In practice, for technical reasons having to do with "rendering intent," encoding gamma for gamma-2.5 television images is often modified slightly, to 0.5. The process of applying encoding gamma to TV images prior to broadcast is called gamma correction.
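
As a quick sanity check, here's the round trip for a mid-gray scene value, using Poynton's approximate figures (the 0.18 "mid-gray" is just a conventional photographic reference point, not something drawn from the video standards):

    scene_L = 0.18              # a typical mid-gray scene luminance, relative to 1.0
    encoded = scene_L ** 0.5    # gamma correction with an effective exponent of ~0.5
    displayed = encoded ** 2.5  # CRT decoding gamma, assumed to be 2.5

    print(round(displayed, 3))  # 0.117 -- a bit darker than 0.18,
                                # i.e. an end-to-end power of 1.25, not 1.0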


Although the "gamma curve" of a TV or computer monitor actually curves when plotted on linear axes as above, when it is plotted on log-log axes, it typically becomes a straight line:

[Graph: the same gamma curves replotted on log-log axes, where each becomes a straight line whose slope equals the gamma exponent]

The logarithm of the relative input video signal level now appears along the horizontal axis. On the vertical axis, it's the logarithm of the relative luminance output. Switching to log-log plotting allows the gamma "curve" to appear straight. Its mathematical slope is now equal to gamma itself, the exponent of the mathematical function which relates output luminance L to input video level V.
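
If you want to verify that the slope really is gamma, a least-squares fit in log-log space will do it. This is just a sketch using NumPy; the original graph obviously didn't need any code:

    import numpy as np

    gamma = 2.5
    V = np.linspace(0.01, 1.0, 100)   # relative signal levels, avoiding zero
    L = V ** gamma                    # the display's luminance response

    slope, intercept = np.polyfit(np.log10(V), np.log10(L), 1)
    print(slope)                      # ~2.5: the fitted slope recovers gamma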


The word "luminance" is unfortunately ambiguous. Under the subtopic "Confusing Terminology" in the Wikipedia article on "Gamma Correction," luminance is defined in two ways:

  • the apparent brightness of an object, taking into account the wavelength-dependent sensitivity of the human eye
  • the encoded video signal ... similar to the signal voltage
I am using the first definition, where the "object" whose "brightness" is in question is part of a scene being rendered by a video camera. Eventually that same object with its associated apparent brightness or luminance is a part of a video image displayed on a TV screen or computer monitor.

I called the encoded video signal V above. It could also be called Y', since, in video, Y is the name given to the luminance of the original scene as picked up by the video camera and converted into an electronic signal, and Y' (wye-prime) is that electronic signal after it has undergone gamma correction and is en route to the receiving TV or monitor. To distinguish the two signals, the first, Y, is called (video) "luminance" and the second, Y', is called luma.

Note that Y' ("luma"), which is derived from Y ("luminance"), transmits to the receiving TV or monitor nothing but colorless shades of gray falling along a continuum extending from reference black to peak white. Even so, Y — or actually Y' — is derived from three color signals detected by the video camera: R (for red), G (for green), and B (for blue). Each of these single-color signals is gamma-corrected individually to become, respectively, R', G', and B'.

Accordingly, to make monochrome Y' into a color image, it must be accompanied by two color-difference signals. When the signals are analog, these color-difference signals are called Pb and Pr. In digital video, they are Cb and Cr. Pb (or Cb) is the difference between the gamma-corrected blue signal, B', and Y'. Pr (or Cr) is the difference between the gamma-corrected red signal, R', and Y'. The gamma-corrected green signal, G', can be recovered when Y', Pr (or Cr), and Pb (or Cb) are received. A Y'PbPr (or Y'CbCr) signal is often called a "component video" signal.
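
For the record, here is how Y' and the color-difference pair are formed from R', G', and B' in standard-definition (Rec. 601) video. One detail I glossed over above: Pb and Pr are not the raw differences B' minus Y' and R' minus Y', but scaled versions of them. The weights and scale factors in this sketch are the standard Rec. 601 ones:

    def rgb_to_ypbpr(R, G, B):
        """Rec. 601 luma and color-difference components, from gamma-corrected
        R', G', B' values in the range 0.0 to 1.0."""
        Y = 0.299 * R + 0.587 * G + 0.114 * B   # luma: a weighted sum of the primaries
        Pb = (B - Y) / 1.772                    # scaled blue-minus-luma
        Pr = (R - Y) / 1.402                    # scaled red-minus-luma
        return Y, Pb, Pr

    # The receiver can reconstruct G' from Y', Pb, and Pr, which is why only
    # these three components need to be transmitted.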


Assume two TVs or monitors, one using gamma 1.8 and one using gamma 2.5, are adjusted so that they produce the same luminance output for a peak white input signal. From the log-log plot above we see that as the input signal level decreases, the two sets' luminance outputs diverge more and more. Accordingly, differences in gamma show up more noticeably at low input signal levels than at high. Levels near peak white are relatively unaffected by gamma differences. Levels nearer to the signal's reference black level are, on the other hand, strongly affected by gamma.
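
Here's a tiny numerical illustration of that divergence (my own arithmetic; the 0.1 and 0.9 signal levels are chosen arbitrarily as "low" and "high" test points):

    # Luminance from a gamma-2.5 set as a fraction of the luminance from a
    # gamma-1.8 set, at a low and a high signal level. The gap is much larger
    # in the shadows.
    for V in (0.1, 0.9):
        ratio = (V ** 2.5) / (V ** 1.8)   # equals V^0.7
        print(V, round(ratio, 2))

    # At V = 0.1 the gamma-2.5 set puts out only about 20% of what the
    # gamma-1.8 set does; at V = 0.9 it puts out about 93%, nearly the same.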

Qualitatively, this fact means that "gamma deepens shadows." It doesn't really make the luminance of shadow details drop below that of reference black. But it does make it "take longer" for gradual increases in the video signal level, starting at the signal level for reference black and sweeping upward toward that for peak white, to produce concomitant increases in output luminance levels. The image "stays darker longer."

In fact, luminance when gamma is greater than 1.0 only fully "catches up" with luminance for gamma 1.0 at the very top of the scale, at peak white. Every signal level below peak white produces less screen luminance when gamma is higher.

Still, when we see an image displayed at, say, gamma 2.5 side-by-side with the same image at gamma 1.8, we are apt to say that the shadows are "deeper," not that the less dim areas are somehow "brighter." This, again, is because differences in log-log gamma plots are wider, and thus show up more noticeably, at low input signal levels than at high.


A log-log gamma plot for a given TV will vary in slope quite a bit depending on how the brightness control is set. A TV's brightness control actually sets its black level, the luminance it will produce for an input video signal level that equates to pure black.

This incoming video level for pure black is, in digital TV, either 0 or 16, depending on whether the useful range is set as 0-255 or 16-235. In analog video, it is 0 millivolts (or 54 mV, if so-called 7.5% setup is used for the broadcast signal). From now on, I'll refer to it, using the well-known "IRE units," as 0 IRE. (I'll ignore the possibility of 7.5% setup, which would put black at 7.5 IRE.)

The signal's reference white or peak white level can be 255 or 235 in digital video. In analog video, it can be either 700 mV or 714 mV. Whatever units are used, reference or peak white is also said to be at 100 IRE.
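
For reference, converting between IRE units and 8-bit code values in the 16-235 video range is just a linear mapping. The helper below is my own and assumes no 7.5% setup:

    def ire_to_code(ire):
        """Map 0-100 IRE onto the 16-235 digital video range (no 7.5% setup)."""
        return round(16 + ire / 100.0 * (235 - 16))

    print(ire_to_code(0))     # 16  -- reference black
    print(ire_to_code(100))   # 235 -- reference (peak) white
    print(ire_to_code(50))    # 126 -- roughly mid-scale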

The luminance output level for 100-IRE reference or peak white is set by the TV's contrast control. It ought instead to be called something like video level or gain. It could also appropriately be called "brightness," if the actual brightness control were renamed "black level" and a control which tailors gamma per se were added in the form of a more appropriately named "contrast" control.


Once the white level is set via the contrast control as we know it today, then — assuming nothing is overtly done to change gamma — changes to the setting of the brightness control change gamma anyway. In effect, adjusting the brightness or black level control pivots the log-log gamma plot around its upper end at 100 IRE.

Imagine that the TV's brightness control has been carefully set such that a 0-IRE input signal produces the least amount of luminance the TV is capable of producing, and a 1-IRE input signal produces just enough more luminance to show up with a higher degree of visible lightness in a pitch-black room. You can then, in principle, measure the TV screen's luminance output at various signal levels from 0 IRE to 100 IRE and plot the luminance figures against the input levels on log-log axes. The slope of the resulting plot is, let us say, 2.5, which means that the TV is operating at gamma 2.5.

Now, imagine turning up the brightness control. Every luminance figure at every IRE level of input will go up ... but the ones at lower IRE levels will go up more than the ones at higher IRE levels. At 100 IRE, there will be no change in luminance whatsoever. In effect the log-log plot, while remaining a straight line, swings upward. It pivots clockwise around its rightmost end point at 100 IRE.

It therefore has a shallower slope. Instead of 2.5, the slope might (depending on how far the brightness control is turned up) drop to about 2.3 — which means that the TV is now operating at a gamma figure of 2.3.

If, on the other hand, you imagine turning down the brightness control below its carefully chosen optimum, the log-log plot pivots in the other (i.e., counterclockwise) direction; it takes on a steeper slope; the TV's operating gamma goes up to, say, 2.7.
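
You can simulate the pivot numerically. In the sketch below (mine, with an arbitrarily chosen offset), a small black-level error e is added to the signal before the 2.5-power display responds, the result is re-normalized at peak white, and the log-log slope is re-fitted. A positive e (brightness set too high) lowers the fitted gamma; a negative e raises it:

    import numpy as np

    def fitted_gamma(e, native_gamma=2.5):
        """Fit the log-log slope of L = (V + e)^gamma, renormalized so that
        peak white (V = 1.0) still produces L = 1.0."""
        V = np.linspace(0.1, 1.0, 200)
        L = np.maximum(V + e, 1e-6) ** native_gamma   # guard against non-positive values
        L = L / L[-1]                                 # pivot the plot at 100 IRE
        slope, intercept = np.polyfit(np.log10(V), np.log10(L), 1)
        return slope

    print(fitted_gamma(0.0))     # ~2.5: correctly set black level
    print(fitted_gamma(+0.05))   # lower: brightness turned up, gamma appears to drop
    print(fitted_gamma(-0.05))   # higher: brightness turned down, gamma appears to rise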

If you turn the brightness control up from its optimum setting, furthermore, deep blacks will be rendered as dark grays, while if you turn brightness down, low-level video information will be rendered no different than black. Shadow detail will be "swallowed," in other words. In addition to affecting a TV's operating gamma, misadjusted brightness can have other deleterious effects on the image you see.

So changes to a TV's brightness control can alter its operating gamma (as I'll call it) in either direction away from its nominal gamma.


Whether operating or nominal, gamma is important. It not only affects image contrast — how deep and pervasive its shadows and darker elements appear to be — it also affects overall image brightness as well as the hue and saturation of various colors.

We have already seen that an increase to gamma makes every input video signal level between 0 IRE and 100 IRE appear on screen with less luminance. Everything on the screen appears darker and dimmer — though the effect is greater, the lower the original input signal level. Since most people say they prefer a "brighter" picture, TVs often are designed to operate at a gamma that is lower than they really "ought" to.

At first it seems odd to note that gamma affects colors, when it seems to be more of a black-and-white thing. But any color other than a pure red, green, or blue primary at maximum signal level (100 IRE) is indeed affected by gamma.

For example, if a certain color of brown is to be represented, it may (let's say) be made of red at 100 IRE and green at 50 IRE, with blue entirely absent. (This analysis is adapted from Dr. Raymond Soneira's article, "Display Technology Shootout: Part II — Gray Scale and Color Accuracy," in the October 2004 issue of Widescreen Review. This article is available online in PDF form here. Dr. Soneira is the president of DisplayMate Technologies. His article can be accessed directly as a web page here.) The 100-IRE red won't be affected by gamma, but the 50-IRE green will.

If gamma is relatively high, 50 IRE green will be reproduced at the TV screen with lower luminance than if gamma is relatively low. As a result, the hue of brown will appear redder (because green contributes less) at higher gamma and less red (because green contributes more) at lower gamma.

Next, imagine replacing some of the red in the input signal with blue: say, a brown that is 75 IRE red, 50 IRE green, and 25 IRE blue. Now, because all three color primaries are represented, the brown is no longer a fully saturated one. Instead, the 25 IRE of blue combines, in effect, with 25 IRE of the red signal and 25 IRE of the green signal to make a shade of gray.

That leaves 50 IRE of red and 25 IRE of green. Both of these will be affected by gamma, but the latter (because lower in signal level) will be affected more. Just as before, gamma differences will change the hue of brown.

But this time, gamma will also affect the luminance of the shade of gray produced by the combining of 25 IRE worth of red, green, and blue signal. If gamma is relatively high, this gray will have relatively low luminance, and the brown will appear on screen purer and more saturated. If gamma is relatively low, the brown will appear less pure and take on more of a pastel shade.
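
Putting numbers on Dr. Soneira's brown example (my own arithmetic, using his signal levels): the snippet below works out the on-screen output of each primary at gamma 1.8 and at gamma 2.5, and you can see the green component losing ground to the red at the higher gamma:

    # 75 IRE red, 50 IRE green, 25 IRE blue, expressed as fractions of peak white.
    signal = {"red": 0.75, "green": 0.50, "blue": 0.25}

    for gamma in (1.8, 2.5):
        out = {name: round(v ** gamma, 3) for name, v in signal.items()}
        ratio = out["green"] / out["red"]   # how much green survives, relative to red
        print(gamma, out, "green/red =", round(ratio, 3))

    # gamma 1.8: red 0.596, green 0.287, blue 0.082, green/red = 0.482
    # gamma 2.5: red 0.487, green 0.177, blue 0.031, green/red = 0.363
    # At the higher gamma the mix is proportionally redder, and the gray
    # contribution traceable to the blue channel is smaller, so the brown
    # also looks purer and more saturated.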

So gamma affects the hue of all colors (not just brown) that mix two or three primaries together in different proportions. It also affects the saturation of all colors that mix all three primaries together in different proportions.

Sunday, June 18, 2006

2006 FIFA World Cup on HDTV

U.S. striker Brian McBride wins the ball against Italy on Saturday, June 17, 2006
It's the most widely viewed sporting event in the world, according to Wikipedia: soccer's quadrennial World Cup, now taking place in Germany under the aegis of the sport's international governing organization, FIFA. According to the FIFA.com website (this article), "The cumulative audience over the 25 match days of the 2002 event reached a total of 28.8 billion viewers. ... These impressive figures make the 2002 FIFA World Cup Korea/Japan™ the most extensively covered and viewed event in television history."

Here in the United States, where soccer is at best a pastime and not the obsession it has long been in most of the rest of the world, every World Cup game is, for the first time ever, available in glorious 720p high-definition television, on either ABC, ESPN, or ESPN2. You have, of course, to be able to receive all of these networks in their high-def glory, if you want access to all the games. That means you probably require either digital cable or broadcast satellite reception in your household, since only ABC is available over the air.

Yesterday I watched Team USA battle the Italians to a 1-1 tie on ABC-HD, courtesy of the digital ABC affiliate here in Baltimore: television station WMAR, broadcasting over the air on channel 52, though WMAR-HD comes into my house on Comcast Cablevision's digital channel 210. (I find that I have to be careful to select channel 210, by the way, and not cable channel 12, which is a standard-def, analog version of the same WMAR fare.)

I watched the Italy game on a "DVR-delayed" basis. My digital cable box has a built-in digital video recorder, which I set up in advance to record ABC-HD's coverage. The coverage began at 2:00 PM here in this Eastern time zone. I watched it beginning at around 6:30. That way I could zip over all the commercials in the pre-game, halftime, and post-game shows. I could also replay moments in the game that I wanted to see again.

I watched the coverage on the 32" Hitachi plasma in my basement TV room, not on my 61" Samsung DLP rear projector in the living room. (For comparison, I may use the Samsung to watch the U.S.-Ghana match on Thursday.) The image was admittedly small, at my roughly 12-foot seating distance, but quite good. Overall, I'd say that HD adds a lot to soccer coverage.

It does so because, due to the nature of a game in which the ball can move so swiftly and unpredictably, the camera has to keep large portions of the field or "pitch" in view at all times. This wide-angle mandate makes the players appear as very small figures on the screen. It's not easy to identify them by either their looks or their jersey number, especially since the run of normal soccer play sees nearly every player move to just about every position on the field at some time or other. A so-called center midfielder, for instance, is apt to show up just about anywhere, from one end of the field to the other, possibly along the sidelines as well as in the middle of the pitch.

In HD on a relatively tiny screen such as my Hitachi's, the problem by no means goes away. Still, the sharp and colorful hi-def image does provide more identification cues than a non-HD image would. For one thing, the jersey numbers are easier to read. And it's easier to pick up on the players' hair colors/styles, plus their sundry skin tones. Even their distinctive shapes and sizes as human individuals are more apt to survive the rigors of TV transmission and show up meaningfully on your screen.

Color helps, and HDTV color is better — because of its wider intensity gamut — than SDTV color. The Italian national team is called "The Azzurri" ("The Blue") because of the bright blue hue of their uniforms, which contrasted sharply on my plasma screen with the white tops and dark navy shorts of our Yanks. And when the red cards came out in abundance, as an overzealous referee sent two Americans and one Italian off for various sorts of dangerous play, they showed up rather strikingly on my screen — as did the blood streaming down the face of American forward Brian McBride after Italian midfielder Daniele de Rossi elbowed him hard in the face. That was the foul that drew the initial ejection of the match.

So it's safe to say that HD can highlight the fact that soccer is a blood sport, after all.


Can HDTV make soccer more of a mainstream sport here in America, then? It's not impossible. Still and all, we Americans like our sports coverage "up close and personal," which means we like to see the faces of the players and coaches as often as we can. That's a tall order with soccer, since there are few pauses in the far-flung action into which a closeup shot might be inserted.

Baseball and American football are pause-laden by comparison. Basketball and hockey take place in confined arenas where zoomed-in camera shots don't risk missing the action. NASCAR races offer lots of opportunities for (previously recorded) head shots of everyone concerned to be superimposed over the festivities. In fact, most of our favorite sports are far more TV-friendly than soccer. The only one that comes readily to mind that is not is lacrosse — which is more of a niche or cult sport, anyway. (I'd like to see its popularity grow, rest assured. And that could indeed happen with more HDTV coverage.)

I'd like to see soccer coverage employ more picture-in-picture insets, split screens, etc. It would be great if the guy who's about to take a shot on goal could be framed in an inset close shot while another camera watched the broad sweep of action from afar. But that's not terribly realistic, since in any given "buildup" by an offensive team as it draws into goal-scoring range, any one of about seven players could wind up taking the shot — if there is a shot.


More realistic might be to put a semi-permanent "close-up cam" on whichever key player the broadcast team wants us to focus on at any given stretch of the game. It could have been U.S. midfielder Landon Donovan for most of yesterday's match, since he was roundly criticized by Coach Bruce Arena (and by himself) for lackluster play in the Americans' shameful loss to the Czech Republic. Donovan played his heart out against Italy, especially after his team was reduced to nine men early in the second half. It's too bad we couldn't see more of him on the screen.

Later in the game, Donovan's co-midfielder DaMarcus Beasley could have been spotlighted. He was benched as a starter after the Czech Republic game and then came on late in the Italy match as a substitute. Despite the announcers' hopes that he might spark a winning goal, he seems to me to have taken very few chances in his brief minutes on the field. I would have liked to have been able to keep a closer eye on him.

But inset shots, when they do pop into view, always seem to find the corner of the screen where the ball is about to go. There probably needs to be some as-yet-unavailable way to coordinate the director's doubled-up use of screen real estate with exactly how the camera operators are framing their shots on a second-by-second basis. If the ball darts behind a screen inset that is being shown atop the main picture, have the main camera pan or tilt just enough to bring the ball back in view — that kind of thing.

Cooler yet would be some way for the viewer to hide or show at will what amounts to a picture-in-a-picture, using the remote control to bring up the inset and move it to one corner or another of the screen. With digital TV transmissions, that's not an absolute impossibility, but I won't hold my breath for it, either.

All in all, I'd say that HD enhances soccer coverage, but it won't put it over the top in Americans' estimation any time soon, because the game as seen on TV simply doesn't lend itself to the kind of up-close-and-personal visuals we crave. For example, the news photo which I borrowed for the top of this piece is not the type of framing you're ever likely to get on TV, hi-def or not. That's too bad, because a photo like this conveys how hard it is to play soccer well ... and the TV coverage simply doesn't.