Friday, July 22, 2005

Why We Watch

Martin Scorsese in
My Voyage to Italy
Why We Fight was the umbrella title of a series of documentaries made by director Frank Capra as home-front propaganda during World War II. Martin Scorsese could have subtitled his 1999 documentary My Voyage to Italy (Il mio viaggio in Italia) Why We Watch.

Specifically, why we watch movies.

Mr. Scorsese, whose films as a director have gathered awards and acclaim, from Mean Streets (1973) to The Aviator (2004), takes us by the hand into the world of the Italian films he grew up with as an American son of transplanted Sicilian immigrants. Coming of age in New York City in the late 1940s and early 1950s, he saw the Neorealist renaissance of post-WWII Italian cinema through a tiny glowing screen, as filler programming in the early days of television. Even if he didn't fully understand the films of the likes of Roberto Rossellini at such a tender age, they moved him deeply.

They still do, because more than anything else, they demand of us our full and honest humanity and compassion for all God's children.

Scorsese's two-part documentary originally aired on Turner Classic Movies, on cable, in 2003, a follow-on to his 1995 A Personal Journey with Martin Scorsese Through American Movies. As far as I know, My Voyage to Italy has not aired since, nor is it scheduled to air again — much less in high-definition. (The documentary is available as a two-disc DVD set, which is how I'm viewing it.) So I am cheating a bit to include this post in my What's on HDTV? blog. I admit that.

Yet I think it a worthy inclusion because, in this day and age — as was mentioned in Parade magazine's "Personality Parade" column in the July 17, 2005, issue — movies are no longer made for adults. They're made for adolescents. And the souls of adolescents are not the most notable in the world for their compassion.


Today, whether adolescents or grownups, we are not accustomed to wearing our hearts on our sleeves. After watching My Voyage to Italy, you may want to have your tailor get in touch with your surgeon.

Not that there's an ounce of sentimentality, schmaltz, or false pathos in the films Scorsese presents to us, or the way he presents them — just the opposite. For example, at the outset of the journey he leads us on (I admit I haven't yet gone further than Neorealism), he gives us his own personal tour of the films that were called Neorealist because they took an un-glamorized view of life as it was lived in a torn-up country, Italy, which had long been prey to fascism and then been overrun by German Nazis and American liberators in turn.

Under the circumstances, there wasn't much basis for glamorization. Which meant there was all the more basis for compassion, humanity, and fellow-feeling. Rossellini and other Neorealists like Vittorio De Sica showed everything, mean and noble, in Italian and European life at the time, and made the recovering world know that no man is an island.

What's more, they showed the world that movies, a once-fluffy art form, were uniquely able to reveal to us the totality of the human condition, warts and all, and thereby stir our tenderest feelings.


I have to admit, when it comes to tender feelings, or even to being open to seeing beauty amid ugliness and flowers amid dirt, I personally can't hold a candle to Martin Scorsese. My personal inclination is to try to kayo the ugliness and clean up the dirt, and only then — just maybe — appreciate the beauty.

Which is why I feel so glad to have this man of the gentlest possible voice and of the widest possible mercy sit beside me in the movie theater, in effect, and whisper in my ear what to look for and what to think about as the extended clips of old B&W movies in a foreign tongue (with subtitles) unreel before my eyes.

The movies Scorsese extols are powerful, but it's all too easy for an ugliness-kayoer/dirt-cleaner-upper from way back like me to resist their power at a pre-conscious level. Which is the reason, I suppose, that when Rossellini made The Miracle, a segment of his film L'Amore (1948), in which a woman gives birth to a child she believes is the Christ child, Catholic officials denounced it.

The Miracle's main character, Nanni, portrayed by Anna Magnani, has more religious faith than sense or sanity. She's taken advantage of by a stranger, a wanderer she thinks is St. Joseph, who she imagines has come along in answer to her prayer. (The stranger is played by Federico Fellini.) Pregnant, she winds up an outcast. As we watch the tender scene in which Nanni has her baby, alone, without prospects, redeemed by the simple fact that she has brought forth the "miracle" of new life, Scorsese tells us what it all means to him:

Rossellini is communicating something very elemental about the nature of sin. It's a part of who we are, and it can never be eliminated. For him, Christianity is meaningless if it can't accept sin and allow for redemption. He tried to show us that this woman's sin, like her madness, is nothing in comparison to her humanity.

***


"It's odd to remember," Scorsese continues, "that this film was the cause of one of the greatest scandals in American movie history. When The Miracle opened at the Paris Theater in Manhattan, Cardinal Spellman, who was the cardinal of New York at the time, and the Legion of Decency — that was the Catholic watchdog organization — called the movie a blasphemous parody, and they mounted a campaign to have it pulled from the theaters."

The case wound up in the Supreme Court, which in 1952 struck down movie censorship based on allegations of blasphemy. That's important because it paved the way for the frankness of Scorsese's and other filmmakers' films in coming decades.

But more important to me than the legal precedent is what The Miracle says about compassion and humanity, sin and redemption. It says that the true Christian attitude is not to be such an ugliness-kayoer and dirt-cleaner-upper as I know I tend to be and as Cardinal Spellman and the Legion of Decency were back in their day. It is rather to see the beauty in each of God's creatures and to view their flawed humanity, not only with the utmost of honesty, but also with the utmost of compassion.

And to be reminded of that is, ultimately, why we watch movies.

Tuesday, July 19, 2005

Restoring Movie Images Digitally

It's probably no secret that this blogger thinks of movies as reason number one to climb on the HDTV bandwagon. To look their best in hi-def form, classic films, middle-aged films, and even fairly recent ones often need restoration work. In Robert A. Harris, Film Restorer Extraordinaire I discussed the work of one of film restoration's leading lights, a man whose contributions (usually along with partner James Katz) include preserved versions of Vertigo, My Fair Lady, Spartacus, Lawrence of Arabia, and Rear Window.

Now I'd like to take a look at the contributions to film restoration that are arriving from an entirely different quarter: John Lowry's Lowry Digital Images.

LDI has been responsible for returning the first three Star Wars films "to their original glory for DVD," says "Creating the Video Future," an article in the November 2004 edition of Sound & Vision. Ditto, the Indiana Jones collection. Ditto, all twenty of the James Bond films. Ditto, Snow White and the Seven Dwarfs; Singin' in the Rain; North by Northwest; Gone with the Wind; Now, Voyager; Mildred Pierce; Roman Holiday; Sunset Boulevard; and Citizen Kane.


The Lowry process is not unlike that used for digital intermediate work (see The Case of the Data-Processing DP). The first step is to scan into the digital domain each frame of the film negative, often at 4K resolution (see 2K, 4K, Who Do We Appreciate?). At four seconds per frame, I calculate it must take 691,200 seconds, or 192 hours, to scan in a two-hour film at 4K. That's a ratio of 96 scan hours per one running hour, excluding coffee breaks!
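Here is that back-of-the-envelope arithmetic as a tiny Python sketch, using the four-seconds-per-frame figure cited above and the standard 24 frames per second (actual scanner throughput will of course vary):

```python
# Back-of-the-envelope 4K scan time for a two-hour feature
FRAMES_PER_SECOND = 24        # standard theatrical frame rate
SECONDS_PER_FRAME = 4         # per-frame 4K scan time cited above
RUNNING_TIME_HOURS = 2

frames = RUNNING_TIME_HOURS * 3600 * FRAMES_PER_SECOND   # 172,800 frames
scan_seconds = frames * SECONDS_PER_FRAME                # 691,200 seconds
scan_hours = scan_seconds / 3600                         # 192 hours

print(f"{frames:,} frames take {scan_hours:.0f} hours to scan")
print(f"That's {scan_hours / RUNNING_TIME_HOURS:.0f} scan hours per running hour")
```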

Lowry stores the entire film as data on servers that hold 6 terabytes apiece. A terabyte is 2 to the 40th power, or 1,099,511,627,776 bytes. Think of it as 1,000 gigabytes. Lowry has a total of 378 terabytes of storage hooked to his high-speed network. Perhaps the Pentagon would like to send a task force to check out how it all works.

That data network at LDI also interconnects 600 dual-processor Macintosh computers whose task it is to process all that stored data, using complex algorithms selected and parameterized according to the needs of the particular movie, in order to tweak the look of each of the 172,800 frames that make up a two-hour flick.

After a visual review of each scene on a video monitor and a possible re-parameterization and rerun of the automatic process, exceptionally problematic material is touched up "by hand."

When all that is done, the result is a "digital negative." The digital negative is said by Sound & Vision's Josef Krebs to be "every bit as good as the camera negative — but without the wear, tear, and deterioration." The digital negative can be used to spin off versions for DVD, HDTV, standard-def TV, or digital cinema, a filmless methodology which involves using digital video projectors to throw the image onto the screen in commercial theaters. Or it can be output on film for optical projection.


Darth Vader in Star Wars
For the story on how this process gave the original Star Wars films new life, check out "John Lowry: Three Months and 600 Macs" on the Apple Computer website. Job One was to obliterate dirt and scratches, Lowry says. The next challenge was to conceal the telltale changes in contrast, graininess, and definition that accompanied the optical special-effects scenes in those pre-CGI days. Lowry says he also "sharpened it end-to-end, reduced the granularity and got rid of the flicker."

"When you use a computer," says Lowry, "if you can understand the problem, it's always solvable." Torn film, fading, chemical deterioration — they all succumb to digital remedies.


4K-resolution scanning is not always used as the basis for his digital restorations, Lowry says ... after having extolled 4K as uniquely giving the ability "to capture everything that is on that negative, which probably has a limit somewhere in the 3- to 4K range."

The "information on a film," he says, "
usually rolls off between 3 and 4K. We've experimented at 6K, but, frankly, it's pointless on a standard 35mm film frame until there are better camera lenses and film stocks." So, 4K looks like a winner when it comes to really, really capturing the whole image present on the original camera negative. Also, Lowry says, "with 4K the colors are more vibrant, subtle, and sharper."

Even so, Lowry did standard-def (!) transfers for North by Northwest; Now, Voyager; and Citizen Kane. Snow White and the Seven Dwarfs and Giant were his first digital restorations at high definition (1080p, I assume). Next came the step up to 2K, for Roman Holiday and Sunset Boulevard; they had to be scanned from duplicates of their original negatives, with accordingly lower resolution to begin with, to be captured digitally.

Lowry's current maximum resolution, 4K, is newer yet. "Now we do most of our transfers at high-def and 2K," he says, "but we also do a whole range of movies in 4K." I'll bet the demand for 4K will, despite the extra cost, soon outstrip that for 2K and HD. I'll furthermore bet that there's already little remaining demand for SD restorations.

Along those lines, Lowry says that he is doing nine of the twenty James Bonds in 4K, the rest in HD. I'll wager the day will soon come when the films' owners wish they'd scanned them all in 4K.


Film preservationist Robert Harris, in his "Yellow Layer Failure, Vinegar Syndrome and Miscellaneous Musings" column at TheDigitalBits.com, has sometimes been critical of the Lowry process. One reason is that Harris describes himself as a "proponent" of film grain. "Grain is our friend," he has written. The Lowry process can remove most or all of the grain in a film image. Not just the excessive grain which builds up whenever film is duplicated from camera negative to interpositive to internegative ("dupe" negative) to final print — all the grain, period.

Lowry's S&V response: "We generally try to even out the grain so changes don't interfere with the storytelling. I don't recommend taking all the grain out of anything. I used to do more of that. We took out too much grain on Citizen Kane, but on Casablanca we left more in and it looks better. There's something comforting about grain for the viewer." It sounds like Lowry has become a late convert to the grain-is-our-friend school of film preservation.

Another complaint Robert Harris has made is that the Lowry process doesn't really restore or preserve the film per se. It leaves the photographic images which were originally recorded on celluloid strictly alone. It does not yield a replicated negative or fine-grain positive separation masters for archival use. It does not result in any film elements whatever, other than (optionally) a new generation of release prints derived from the "digital negative."

But release prints are not archival. They're the end of the line, photo-optically speaking. You can't use them as sources for further duping, since they lack the negative's high level of detail, low level of grain, wide dynamic range/contrast ratio, and so forth. And, due to "blocking," the shadows and highlights in a final print are drained of nuance.

So the Lowry process places the entire burden for archiving film's images squarely in the digital-video domain. The film isn't preserved for posterity; at best, the data is.

Though I don't know that I can put my finger on the source — I believe it to have been an interview I read with prime digital-cinema mover George Lucas — I've read that Lowry's precious data eventually is remanded to the studio to which the film belongs. Or else the studio provides Lowry with the (empty) storage media in the first place, and then receives the (filled) media back at the end of the process. Something like that.

That means the studio winds up in charge of archiving the data. In the past, studios have been (shall we say) notably remiss in taking proper care of their archives.

What's more, data-file formats are notorious for their quick obsolescence. The file formats used for digital intermediates have yet to be standardized, and I'm sure those Lowry uses for "digital negatives" are just as idiosyncratic. In twenty years, will machines still be able to read these files? That's Robert Harris' main concern, I gather.


Still, the process John Lowry uses to restore not film per se but film's images is an impressive one. It's clearly the wave of the future. Hollywood is already beating a path to his door.

Monday, July 18, 2005

2K, 4K, Who Do We Appreciate?

In The Case of the Data-Processing DP I talked about how film production is being revolutionized by the digital intermediate: transferring movies to the domain of digital video for sprucing up the final image and adding computer-generated imagery (CGI). DI would be tough to do were it not for newfangled film scanners and data-capable telecines which can accomplish so-called 2K and 4K scans.

In the highest-definition digital TV format available over the air, 1080i, the width of the 16:9-aspect-ratio image holds fully 1,920 pixels. This number is accordingly the upper limit on "horizontal spatial resolution," which in turn means that no detail whose width is tinier than 1/1920 the screen's width — a single pixel — can be seen.

In a 2K scan, the number of pixels across the width of the scanned film frame goes up to at most 2 to the 11th power, or 2,048. In a 4K scan, that upper limit is doubled, to 4,096.

The actual count of pixels produced horizontally in a 2K or 4K scan can be less than the stated upper limit — for example, when the aperture used by the film camera to enclose the film frame doesn't allow the image to span the whole 35-mm frame from "perforation to perforation," or when room is left along one edge of the film for a soundtrack. Furthermore, a lesser number of pixels will be generated vertically than horizontally, since the image is wider than it is tall.

Even so, 2K and 4K scans generally betoken levels of visual detail higher (though, in the case of 2K, only somewhat higher) than a so-called "HD scan" at 1920 x 1080.


With the advent of high-definition 1080i DVDs expected imminently (see HDTV ... on DVD Real Soon Now?), do yet-more-detailed film scans at sampling rates such as 2K and 4K have any relevance to us now?

Apparently, yes. Experts seem to agree that it's a good idea to digitize anything — audio, video, or what-have-you — at a sampling frequency a good deal higher than the eventual target rate. The term for this is "oversampling."

For example, The Quantel Guide to Digital Intermediate says on pp. 17-18, "Better results are obtained from ‘over sampling’ the OCN [original camera negative]: using greater-than-2K scans, say 4K. All the 4K information is then used to produce a down converted 2K image. The results are sharper and contain more detail than those of straight 2K scans. Same OCN, same picture size, but better-looking images."

Furthermore, according to "The Color-Space Conundrum," a Douglas Bankston two-part technical backgrounder for American Cinematographer online here and here:

A 2K image is a 2K image is a 2K image, right? Depends. One 2K image may appear better in quality than another 2K image. For instance, you have a 1.85 [i.e., 1.85:1 aspect ratio] frame scanned on a Spirit DataCine [film scanner] at its supposed 2K resolution. What the DataCine really does is scan 1714 pixels across the Academy [Ratio] frame (1920 from [perforation to perforation]), then digitally up-res it to 1828 pixels, which is the Cineon Academy camera aperture width (or 2048 if scanning from perf to perf, including soundtrack area). ... Now take that same frame and scan it on a new 4K Spirit at 4K resolution, 3656x2664 over Academy aperture, then downsize to an 1828x1332 2K file. Sure, the end resolution of the 4K-originated file is the same as the 2K-originated file, but the image from 4K origination looks better to the discerning eye. The 4096x3112 resolution file contains a tremendous amount of extra image information from which to downsample to 1828x1332. That has the same effect as oversampling does in audio.

In 1927, Harry Nyquist, Ph.D., a Swedish immigrant working for AT&T, determined that an analog signal should be sampled at twice the frequency of its highest frequency component at regular intervals over time to create an adequate representation of that signal in a digital form. The minimum sample frequency needed to reconstruct the original signal is called the Nyquist frequency. Failure to heed this theory and littering your image with artifacts is known as the Nyquist annoyance – it comes with a pink slip. The problem with Nyquist sampling is that it requires perfect reconstruction of the digital information back to analog to avoid artifacts. Because real display devices are not capable of this, the wave must be sampled at well above the Nyquist limit – oversampling – in order to minimize artifacts.

So avoiding artifact creation — "the Nyquist annoyance" — is best done by oversampling at more than twice the sampling frequency of the intended result and then downconverting to the target frequency.
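For anyone who wants to see the oversampling idea in miniature, here is a rough one-dimensional analogy in Python. It is emphatically not the scanner pipeline, just an illustration under invented numbers: a "detail" finer than the target Nyquist limit aliases into false detail when sampled directly at the target rate, but is largely filtered away when captured at four times the rate and then downconverted.

```python
# A rough 1-D analogy to oversampling a film frame: a "detail" above the target
# Nyquist limit aliases if sampled directly at the target rate, but is largely
# filtered away if captured at 4x the rate and then downconverted.
import numpy as np
from scipy.signal import decimate

TARGET_RATE = 2000            # stand-in for a "2K" sampling rate
FINE_DETAIL = 1300            # above the target Nyquist limit of 1000

t = np.arange(0, 1, 1 / (4 * TARGET_RATE))          # oversampled ("4x") time base
captured = np.sin(2 * np.pi * FINE_DETAIL * t)

naive = captured[::4]                 # throw away samples: the detail aliases in-band
filtered = decimate(captured, 4)      # low-pass filter first, then downsample

print("energy after naive downsample   :", round(float(np.mean(naive ** 2)), 3))
print("energy after filtered downconvert:", round(float(np.mean(filtered ** 2)), 3))
```

In image terms, the energy that survives the naive path shows up as false detail; the filtered path simply discards what the target resolution cannot represent.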


This would seem to imply that it would be wise to transfer a film intended for Blu-ray or HD DVD at a scan rate of at least 2K — but 4K would be better!

It's hard to know how much the "Nyquist annoyance" would mar a typical 2K-to-HD transfer, since (see View Masters) most projected 35-mm movies have way less than 1920 x 1080 resolution to begin with. Original camera negatives generally have up to 4K resolution, or even more, but by the time release prints are made from internegatives which were made from interpositives, you're lucky to have 1K resolution, much less HD or 2K.

So if the "Nyquist annoyance" became an issue with an OCN-to-2K-to-HD transfer, the transfer could be intentionally filtered below full 1920 x 1080 HD resolution and still provide more detail than we're used to seeing at the movies.


All the same, video perfectionists will surely howl if filtering is done to suppress the "Nyquist annoyance." They'd presumably be able to see a real difference between a filtered OCN-to-2K-to-HD transfer and an unfiltered OCN-to-4K-to-HD transfer, given the opportunity.

All of which tells me the handwriting is on the wall. Someday in the not-far-distant future, aficionados are going to be appeased only by HD DVD or Blu-ray discs that are specifically "4K digitally scanned," or whatever the operative jargon will be. All of the farsightedness ostensibly displayed today by studios using "HD scans" for creating standard-def DVDs — intending one day to reuse those same transfers for Blu-ray or HD DVD — will turn out to be rank nearsightedness. Sometimes the march of progress turns into a raging gallop!

Sunday, July 17, 2005

Robert A. Harris, Film Restorer Extraordinaire

James Stewart in Rear Window
Robert A. Harris is a film archivist, preservationist, and restorer — a wizard who in partnership with James Katz a few years ago made the restored version of one of my favorite films, Alfred Hitchcock's Vertigo, possible. Harris' artistry (with or without Katz) has also given such films as My Fair Lady, Spartacus, and Lawrence of Arabia new leases on life. Another Hitchcock masterpiece, Rear Window, has also benefitted from the Harris-Katz touch.

Robert writes an occasional column, "Yellow Layer Failure, Vinegar Syndrome and Miscellaneous Musings," at TheDigitalBits.com. It's a hoot and a half.

Robert adores DVDs and home video in general for providing movie studios with money incentives to retrieve older movies from the dustbin, restore them to as near pristine form as possible, and issue them on plastic discs which will forever beguile new generations of film buffs in their homes.


Reading through Robert's column archive is an education in itself. Before I started reading him, I was (vaguely) aware that really old films, mostly silents, were falling by the wayside due to nitrate-based film stocks which (who knew, way back then?) decompose over time.

But for most of the post-silent era, acetate-based "safety" stocks were used whose chemistry doesn't turn them into goop with time, heat, and humidity. So I assumed that just about every film I'd ever seen and loved in my post-1947 tenure on this earth was safe, right?

Wrong, celluloid breath!

The films of my youth are, albeit for different reasons, in as much danger as the cinematic contemporaries of The Perils of Pauline.

For one thing, apparently the switchover to safety film didn't begin until 1950. All the films of Hollywood's "Golden Age" were on nitrate, including such movies in glorious three-strip Technicolor as Gone With the Wind and The Wizard of Oz (both 1939).


Three-strip Technicolor? Its cameras made three separate negatives, with light passing through multiple optical pathways and filters, to represent each of the three "subtractive" color primaries: cyan, yellow, and magenta. Then each color's negative was transferred to a separate black-and-white "matrix," which was then bathed in dye of an appropriate hue and pressed against an originally clear, colorless print film. The dye thereby transferred permanently to the print, after which the process was repeated for the other two colors. At the end, the print film bore all three separate color "records" superimposed atop one another in perfect registration.

For restorationists like Robert Harris, classic three-strip Technicolor films can be a challenge. You can't simply take an existing Technicolor release print, assuming it's in the best possible shape, and copy it onto another piece of film or transfer it to video. It's too contrasty to copy or transfer properly.

So you have to go back at least to the black-and-white matrices, or, better still, to the original camera negatives, if they can be found. (I'm not clear on whether the matrices used in Technicolor restorations need to be ones that were never subjected to the dye imbibition process I just described.)


Modern color print film doesn't do things the way three-strip Technicolor did. Instead, it uses so-called "monopack" film, which bears three layers of emulsion, one for each subtractive primary. This is the so-called Eastmancolor system, and it's been around (using a succession of types of Kodak film stock over the years) since the early 1950s.

But the Eastmancolor system presents its own problems to the restorationist. Some of the film stocks from the late 1950s, for instance, have been prone to "yellow layer failure" in which the yellow record on the print fades, leaving magenta and cyan to render distorted hue combinations when the print is projected.

A potential workaround is to locate "separation masters" made at the time the film was produced on fine-grain, low-contrast positive black-and-white film stock. One fine-grain "sep master," or just "sep," would often be made for each color record, using appropriate filters. (For example, I'm guessing — since colors on negative film are the complements of the colors in a scene — a green filter would be used to make a record on black-and-white film of the magenta record on the original camera negative.)

Making sep masters was a nod to archival needs, since they played no real role in producing the actual release prints. If seps were made at all, they were often not checked to see if they were made properly. Even if they were, years of uncertain storage conditions may have caused differential shrinkage of the celluloid, meaning the three seps no longer superimpose to make one image. The list of potential woes for the restorationist is a long one ...


Of course, the most basic woe is the inability to find any usable "elements" — pieces of film — for entire movies or portions thereof. A lot of films or scenes have simply disappeared, or so it's seemed until troves of cinema's past glory have from time to time been rediscovered collecting dust in some collector's attic or misfiled in some musty film archive. That's when restoration wizards like Robert Harris and James Katz can really swing into action.

One of the primary reasons usable elements are hard to find for the early widescreen epics Robert Harris specializes in is that, in those days, release prints to be used in movie theaters were direct copies of the cut-and-assembled camera negative. To make each new print, the negative had to be reused to make "contact print" after contact print ... which was hell on the negative. Eventually it got so scratched and worn, not to mention dirty and falling apart due to failed editing splices, that it became useless.

Later on, the practice arose of making a small number of "interpositives," from each of which a manageable number of "internegatives" were generated. The internegatives were then used to make the release prints, a procedure which preserved the camera negative for posterity.

Another problem plaguing the restorationist's task is "vinegar syndrome," the tendency of chemical reactions in so-called safety film stock's base to gradually and unstoppably ruin the film element by turning the cellulose triacetate in the base to acetic acid, the essence of vinegar. In older 35mm prints using the four-track stereo soundtrack popular in the 1950s, the magnetic stripe used for the additional sound channels even acts as a catalyst in accelerating the vinegar syndrome. Jeepers!


Harris' column is about more than just film restoring, by the way. It's really about the love he has, and we all should have, for the history of the silver screen. Thanks to DVDs, and to laserdiscs before them, and now to high-definition video, that history and that glory is not likely to fade out, ever, as Harris is continually pointing out. As long as money can be made resurrecting the treasures of the past and putting them out on home video, Hollywood will not let (say) the stunning dance routines of Fred Astaire and Ginger Rogers die. (Many of their movies from the 1930s are due on DVD soon.)

So, each time a nugget of ancient or not-so-ancient movie gold reaches DVD, Robert is apt to highlight it in his column. Those who are interested in film classics but don't particularly care about the travails of restoring them will find Robert's words fun reads, even so.


You can read more about Robert A. Harris and James Katz in "Robert A. Harris: Tilting at Hollywood" here. The Rear Window project is documented here.

Saturday, July 16, 2005

Powerful LUTs for Gamma

Way back in HDTV Quirks, Part II, Gamma I tried to give some idea what "gamma" means with respect to TV displays. I'm not sure I succeeded. I'd like to try again.

Here again are two versions of a single photo:

For gamma = 1.8
For gamma = 2.5

The image on the left is intended to be viewed on a Macintosh computer monitor whose inherent "attitude" toward rendering contrast is to prefer the subtle over the dramatic. The image on the right is aimed at a PC monitor with an intrinsically more dramatic approach to rendering contrast. So, in view of their two different target monitors, the image on the left has more contrast built in than the image on the right. On your computer monitor — no matter which type it is — the image on the left will look more dramatically "contrasty" than the one on the right, simply because it has more contrast built into it.

A monitor's (or TV's) "attitude" toward rendering contrast is betokened by its gamma. A Mac monitor typically has a gamma of 1.8: the level of the input video signal is raised to the 1.8 power to compute how much light, or luminance, the monitor's screen will produce.

A PC monitor typically has a higher gamma figure of 2.5. Its input video signal is raised to the 2.5 power, not the 1.8 power.

Gamma makes the luminance output of a monitor or TV nonlinear with respect to input signal levels. If luminance were linear — i.e., if gamma were 1.0, not 1.8 or 2.5 — the monitor or TV would provide too little contrast.

Here is a graph (click on it to see a larger version) comparing gamma 1.0, gamma 1.8, and gamma 2.5:



The horizontal axis represents the level of the input video signal, relative to an arbitrary maximum of 10.0. This signal can be an analog voltage or a digital code level; it can represent either the black-and-white "luminance" signal per se or any one of the three primary color components, red, green, or blue.

The vertical axis represents the amount of light (or luminance) the monitor or display produces for each input voltage or code level. Again, this value is computed and plotted relative to an arbitrary maximum value — this time, 1.0.

Notice that only the gamma-1.0 plot is linear. Real TVs and real monitors typically have gammas between 1.8 and 2.5, so their gamma plots curve.


Take the gamma-1.8 plot. Specifically, the plot sags downward such that low input levels toward the left side of the graph — say, input signal level 4.0 — have luminances way lower than they would be for gamma 1.0. Then, as input signal levels rise toward 10.0, the gamma-1.8 plot gets closer to the gamma-1.0 plot. It never quite catches it — until the relative input level is at 100% of its maximum, that is — but it's like a racehorse closing fast at the end of the race, after falling behind early on.

When the input signal level is low — say, 2.0 — increasing it by 1.0 to the 3.0 level has only a minor effect on luminance output. But going from 7.0 to 8.0, again a boost of 1.0, increases luminance output quite a bit. The fact that the gamma curve sags downward and then hurries upward causes details to emerge more slowly out of shadows than out of more brightly lit parts of the scene.

What's true of gamma-1.8 is even more true of gamma 2.5. Gamma 2.5's plot sags downward even more than gamma 1.8's, so shadows are yet deeper and, by comparison, highlights are more dramatic with gamma 2.5 than with gamma 1.8.
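To put numbers on all this, here is a quick Python check of the power law, with input levels expressed as fractions of full scale (0.2 corresponds to "2.0" on the graphs above). Notice that the step from 0.2 to 0.3 buys far less added luminance than the step from 0.7 to 0.8, and that the effect is stronger at gamma 2.5 than at gamma 1.8:

```python
# Relative luminance out for a relative signal level in, at three gammas
def luminance(signal, gamma):
    """Both signal and result are fractions of full scale (0.0 to 1.0)."""
    return signal ** gamma

for level in (0.2, 0.3, 0.7, 0.8):
    print(f"input {level:.1f} ->  "
          f"gamma 1.0: {luminance(level, 1.0):.3f}   "
          f"gamma 1.8: {luminance(level, 1.8):.3f}   "
          f"gamma 2.5: {luminance(level, 2.5):.3f}")
```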


A gamma curve is actually a straight line when plotted on log-log axes:



Here, I've replaced the original axes with their logarithmic equivalents. That is, I've replaced each tick mark along the horizontal axis with one for its point's logarithm. For instance, instead of plotting 4.0 at 4.0, I've plotted it at log 4.0, which is 0.602. 9.0 is now at log 9.0, or 0.954. 10.0 is at log 10.0, or 1. And so on.

A similar mathematical distortion has been done to the vertical axis. The result: gamma "curves" that are all straight lines!

This is to be expected. If

luminance = signal ^ gamma

then, as math whizzes already know,

log(luminance) = gamma x log(signal)

Math whizzes will also recognize that gamma is the slope of the log-log plot. This slope has a constant value when gamma is the same at every input signal level, which is why the log-log plot is a straight line.


Why is this important? One reason is that, as with the two versions of the single image shown above, the gamma that was assumed for the eventual display device — and built into the image accordingly — ought truly to be the gamma of the actual display device. Otherwise, the image can wind up with too much or too little contrast.

When we say that the assumed display gamma is "built into" the image, in the world of television and video we're talking about gamma correction. A TV signal is "gamma-corrected" at the camera or source under the assumption that the eventual TV display will have a gamma of 2.5, which is the figure inherent in the operation of each of the three electron guns of a color CRT.

So the original camera signal (or signals, plural, one each for red, green, and blue) gets passed through a transfer function whose exponent is, not 2.5, but rather its inverse, 1/2.5, or 0.4. (Actually, for technical reasons having to do with the eye's interpretation of contrast in a presumably dimly lit TV-viewing environment, the camera's gamma-correction exponent is altered slightly from 0.4 to 0.5.)
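A bare-bones Python sketch of that round trip, ignoring everything except the exponents (the 0.5 figure is the one cited above; note that 0.5 followed by 2.5 gives an end-to-end exponent of about 1.25 rather than 1.0, which is exactly the dim-surround adjustment just mentioned):

```python
# Simplified gamma-correction round trip: encode at the camera, decode at the CRT
CAMERA_EXPONENT = 0.5   # camera-side figure cited above (roughly 1/2.5, nudged upward)
CRT_GAMMA = 2.5         # assumed gamma of the CRT's electron guns

def camera_encode(scene_light):      # relative scene luminance, 0.0 to 1.0
    return scene_light ** CAMERA_EXPONENT

def crt_display(signal):             # gamma-corrected signal, 0.0 to 1.0
    return signal ** CRT_GAMMA

for scene in (0.1, 0.25, 0.5, 1.0):
    signal = camera_encode(scene)
    print(f"scene {scene:.2f} -> signal {signal:.3f} -> displayed {crt_display(signal):.3f}")
```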

When the display device is not a CRT, it has no electron guns, and its "gamma" is completely ersatz. If the device is digital, a look-up table or LUT is used to instruct the actual display — say, an LCD panel or DLP micromirror device — how much luminance to produce for each digital red, green, or blue component of each pixel. Each color component of each pixel is, usually, an 8-bit digital value from 0 to 255. When that 8-bit code value is looked up in the LUT, another 8-bit value is obtained which determines the actual output luminance.


What will that second 8-bit value be? If we set aside questions about how the TV's contrast, brightness, color, and tint controls work, the answer depends mainly on the ersatz "gamma curve" built into the LUT.

If the LUT's ersatz "gamma curve" is based strictly on the exponent 1.8 or the exponent 2.5, the graphs shown above depict how the digital LUT translates input signal values to output luminances.

But the "gamma" of the LUT may reflect a different exponent, one not found in the graphs, such as 1.6 or 2.2 or 2.6. The TV's designers may have chosen this "gamma curve" for any one of a number of reasons having presumably to do with making the picture more subjectively appealing, or with camouflaging the inadequacies of their chosen display technology.

Or the "gamma" of the LUT may reflect more than one exponent! That is, the "gamma" at one video input level may be different from that at another. At a relatively low 20% of the maximum input level (video mavens call it a 20 IRE signal), the ersatz "gamma" may be, say, 2.5. At 40% (40 IRE), it may be 1.6. At 80% (80 IRE), 2.2.

So there is nothing to keep a digital TV from having a wide range of ersatz "gamma" exponents built into its LUT, such that (at the extreme) a slightly different "gamma" may be applied to each different input code level from 0 through 255 (or each IRE level from 0 to 100).

If the wide-ranging ersatz "gamma" underlying a digital TV's internal look-up table were to be plotted on log-log axes, it would no longer be a straight line! A straight-line log-log gamma curve assumes a single, constant gamma exponent, independent of the input code value (or analog voltage). Thus, when the gamma exponent varies with input level, the log-log gamma plot is no longer a straight line.
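A crude Python illustration of such a level-dependent "gamma," with breakpoints and exponents invented purely for demonstration (a real design would blend them smoothly), confirms that the log-log slope is no longer constant:

```python
# A look-up "gamma" whose exponent changes with input level. The breakpoints and
# exponents below are invented for illustration only.
import numpy as np

def exponent_for(level):
    if level < 0.3:
        return 2.5        # deep shadows
    if level < 0.6:
        return 1.6        # midtones
    return 2.2            # highlights

levels = np.arange(1, 256) / 255.0
lum = np.array([lvl ** exponent_for(lvl) for lvl in levels])

# For a single constant exponent, the slope of log(lum) vs. log(level) would be
# the same everywhere. Here it isn't:
slope = np.gradient(np.log10(lum), np.log10(levels))
print("slope near black:", round(float(slope[10]), 2))    # about 2.5
print("slope near white:", round(float(slope[-10]), 2))   # about 2.2
```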

Even a CRT-based TV, if it processes its input signal in the digital domain, can play such games with gamma. True, its electron guns have their built-in "true gamma," but that single, simple figure (say, 2.5) can be distorted at will by means of a digital look-up table. The LUT is more powerful than the electron gun!


In the service menu of any modern television, CRT-based or otherwise, there is apt to be a way to select "gamma" — not the "true gamma" of a CRT electron gun, which can't be changed, but the "ersatz gamma curve" built into an internal look-up table.

On my Samsung DLP rear-projection TV, there is in fact a GAMMA parameter in the service menu. It can be set to values ranging from 0 through 5 (values higher than 5 apparently being meaningless). Each different setting affects the "attitude" the TV has toward contrast by (I assume) re-specifying the "gamma curve" of its internal look-up table.

As best I can tell, few if any of these GAMMA choices implement a single, simple gamma exponent that is constant for every video input level, à la an old-fashioned, "LUT-less" analog CRT. The various log-log curves are not, presumably, straight. Nor do the GAMMA numbers, 0-5, have any relationship to the ersatz "gamma" exponents being selected. For example GAMMA 2 does not choose an ersatz "gamma" exponent of 2.0.

I have yet to encounter any authoritative discussion of why these alternate GAMMA settings are put there, or how they are intended to be used. Enthusiasts on various home-video forums are wont to claim that, for Samsung DLPs like mine, GAMMA 0, GAMMA 2, or GAMMA 5 is "better" or "more CRT-like" than the original "factory" setting, GAMMA 4.

But my own experiments with these alternate settings have not really borne those claims out.

Even so, I have learned that twiddling with digital, LUT-based "gamma" (because it's so ersatz) can alter the "look and feel" of the picture on the screen in myriad ways which may well appeal to you, with your digital TV display, if not me with mine. And, in the end, that's why I've gone to the trouble of trying to explain what's going on when you adjust the "gamma" underlying your digital TV's powerful internal LUT!

Friday, July 15, 2005

View Masters

When I was a lad in the early-to-mid 1950s, I first got the movies-at-home bug ... and at that age, I'd scarcely even been to the movies. I had a friend, Jimmy S., about three years older than I. He was, I'd say, 10 when I was 7. And he had a projector!

It was, I think, an 8-mm or 16-mm projector which he as a budding movie buff must have bugged his bemused gas-station-owner dad to buy him, along with reels of Abbott and Costello and monster movies and old silents and cartoons which he was constantly itching to show me. I was admittedly a bit antsy about sitting through an entire movie at that age. But that was duck soup compared to sitting still while he, perched at my bench-style blackboard, chalk in hand, storyboarded every scene from the latest movie his mother had taken him to in the theater.

I was more the hands-on type, and what I could really get into, patience-wise, was our attempts, Jimmy's and mine, to overlap stills from those old stereoscopic View-Master slide reels and put 3D images on the wall! 3D was all the rage at the time, remember. And View-Masters must have been a brand new product then.

I gather they're still around. A cardboard ring or reel contains a series of photos of, say, scenes from the latest Disney movie. Each photo appears twice, at opposite edges of the circular reel, so that viewing the pair through a special stereoscopic viewer brings the scene into 3D. But the View-Master projector doesn't do that; it projects just one of the paired pictures at a time. So, Jimmy reasoned, why not get two copies of each ring and use his projector (which could also accept View-Master reels) to superimpose the other version of the image on the wall? That surely would reproduce the 3D effect, right?

We spent hours making it work, on several occasions at his house and mine. It never happened. The image on the wall never went 3D.


Yet that must have been the origin of what seems to have turned into a lifelong obsession to become, in essence, my own "view master." By that I mean watching movies in my home and having them look "just like they look in the theater."

And a lot of others want the same thing, as the huge success of DVDs bears out. (I've lost touch with Jimmy S., but I imagine he must have the world's largest DVD collection.)

So what is "just like in the theater," then? What is it that we "view masters" are striving for?

Well, we want size. If not in the league of the local cineplex, where the screen can be bigger than any single wall in our house, at least big enough that we have to turn our heads to take in both sides of the image.

And we want quality, a commodious notion which subsumes things like high brightness, deep darkness, and vivid color, as well as overall clarity and freedom from distracting noise and artifacts.

Then there's the holy grail of grails: filmlike resolution, right? That's what high-definition video basically is: the ability to boost the accustomed resolution of our TV screens.


So, how high is high?

How much detail must the HDTV be able to resolve to be "filmlike"?

A little research reveals to me that it's not as much, not as high, as one might think. For example, The Quantel Guide to Digital Intermediate says on p. 16, "The image resolution of 35mm original camera negative (OCN) is at least 3,000 pixels horizontally; some would say 4000 ... ." Yet, the guide continues on p. 17, "The traditional chemical process of generating a final film print by copying [specifically, by film-to-film contact printing] changes the spatial resolution of the images from that held on the OCN. Although the OCN resolution is up to 4K digital pixels, the resolution reduces every time it is copied – to interpositive, internegative and finally to print, where the resolution is closer to 1K pixels."

Here, the designation "K" represents not 1,000 but 210, or 1,024. Either way, depending on the number of intermediate stages between the original camera negative and the projected print, the horizontal resolution of the typical pre-digital-age 35-mm movie print could be less than the horizontal resolution of a 720p HDTV, which is 1,280 pixels!

"George Lucas’s Star Wars prequels were predominantly shot in high definition at 1920 x 1080," the guide says (p. 16). And, "while grading Pinocchio for digital projection and working at 1280 x 1024 resolution, the director and main actor, Roberto Benigni, was surprised to find his frown lines visible in the digital projected final when they were not in the film print. As a result these had to have their focus softened on the digital final!" (p. 17).

(What's "grading"? It's "individually adjusting the contrast of the [red, green, and blue] content of pictures to alter the colour and look of the film." A synonym is "timing" or "color timing.")

So even 720p HDTV can offer as much spatial resolution as we usually get at the movie theater, and 1080i can offer as much as George Lucas considered necessary for his latest Star Wars epics.

Merely gaining access to HDTV accordingly will play a key role in making each of us "view masters" in our own home domain!

Thursday, July 14, 2005

The Case of the Data-Processing DP

What do these recent movies have in common?

  • The Lord of the Rings trilogy (2001, 2002, 2003)
  • Spider-Man (2002) and Spider-Man 2 (2004)
  • Seabiscuit (2003)
  • The Aviator (2004)
  • A Very Long Engagement (2004)
  • The Passion of the Christ (2004)
  • Collateral (2004)
  • Ray (2004)
  • The Village (2004)

According to "The Color-Space Conundrum," a Douglas Bankston two-parter for American Cinematographer, online here and here, "all went through a digital-intermediate (DI) process." (Some of the above information also came from the Internet Movie Database.)

DI is a way of doing post-production on a film after scanning its negative, frame by frame, into the digital-video realm. Once converted to bits and bytes, the image can be manipulated in numerous ways. Visual effects such as computer-generated imagery (CGI) can be added during the DI process. DI is also useful for adjusting the overall look and feel of the film, manipulating its darkness or lightness, contrast, color saturation, and hue palette via "color grading" or "color timing."

Once DI has been completed, the result is typically returned to the film domain via laser recorders that beam light in three separate colors onto the emulsion of a color print filmstock.

CGI and DI today go hand in hand. Antedating the digital-intermediate era, though, CGI has been around at least since "the science-fiction thriller Westworld (1973) and its sequel, Futureworld (1976)." Other CGI groundbreakers included Tron (1982) and The Last Starfighter (1984). Of course, one of the most famous early CGI pioneers was the first Star Wars film (1977).

In those days CGI, after being created in a computer, was usually projected on a screen and filmed. Optical tricks were used to combine the projected images with live-action shots. One of the first movies to place digitally composited images directly on film was Flash Gordon (1980), in some of its sequences.

A "turning point" happened in 1998, with Pleasantville. The idea of the film was to place full-color modern-day characters in scenes that evoked black-and-white 1950s television. After the film was shot in color, a computer was used to selectively desaturate just the background on nearly 1,700 individual shots — not the whole film. This proto-DI work was not CGI; it amounted to an extreme form of color grading or timing.

O Brother, Where Art Thou?
O Brother, Where Art Thou? (2000) was reputedly the first film to undergo a full-fledged, no-frame-left-behind DI process. The object was to "achieve a highly selective, hand-painted postcard look for the Depression-era film without every shot being a visual effect." Director of photography Roger Deakins not only shot the movie, but he also spent "10 weeks helping to color-grade the film" during its post-production. In so doing, he may have become the world's first "data-processing DP"!

Cinematographer Robert Richardson shot The Aviator (2004) and then worked with visual effects supervisor Robert Legato during a post-production DI process to match the look of old two-strip and three-strip Technicolor films from the personal library of the film's director, Martin Scorsese — the film was a bio of Howard Hughes, who during Technicolor's heyday in the 1930s was a Hollywood honcho. This was a case of the increasingly common "data-processing DP" striking again!

In the future, it looks as if most big-budget Hollywood films will be doing "the digital-intermediate thing" — even those films with little or no CGI. Crucial to the look of what we will see in theaters will be, of course, the "colorist": the post-production technician who arranges for all those color-grading or color-timing palettes and densities to appear. But looking over the colorist's shoulder, guiding his or her decisions, will inevitably be the "data-processing DP," no longer simply a master of light and lens.

The payoff will be an increasing number of films whose look can only be described as "stunning." But there will be a second advantage for us end users as well. Transfers of movies to HDTV and DVD can now be derived directly from the finished digital intermediate, rather than scanning back into the digital domain a finished piece of film with all of its analog imperfections. That could result in movie images being nearly as stunning in the home as they are in the cineplex.

Tuesday, July 12, 2005

Color Me ... a Fan of Color (Part 3)

Here is another installment of an ongoing series, the most recent of which was Color Me ... a Fan of Color (Part 2). The topic before the house: what are the assets and liabilities of high-definition video when it comes to rendering color cinematically?

In the prior installments, I introduced the idea that different color spaces have different characteristics that grossly or subtly affect the look of an image on film or video. Every type of color film stock, whether for use as a camera negative or as a positive print, has its own associated color space, meaning the gamut of colors it can actually render. The same is true of high-definition video as opposed to standard-def video; each of the two uses a (very slightly) different color space.

The CIE XYZ gamut
The diagram to the right gives us some visual idea of what all this means. Its source is "The Color-Space Conundrum," a two-part article found online at the American Cinematographer website here and here. In particular, it is one of the several diagrams which may be clicked on from within the text of page 6 of the second part of the article.

In the diagram, according to its caption, "the innermost cube represents the colors that can be reproduced by a particular RGB display device." A television or video projector, whether high-def or standard-def, is just such an "RGB display device."

Furthermore, the AC caption continues, "the curved surface illustrated by colored contour lines represents the boundary of colors visible to human beings; that it lies outside the RGB cube shows that many [visible] colors cannot be reproduced by an RGB device." That much is fairly self-explanatory, if disappointing.

As f0r "the outermost figure, the white wireframe rhombohedron ... represents the colors that can be specified by the CIE XYZ primaries." The X, Y, and Z primaries, as defined by international standards of the CIE (Commission Internationale de l'Eclairage), define an all-encompassing color space. To some subset of XYZ any "lesser" color space, such as RGB, can be mapped. XYZ's compass is so large, it even includes "colors" beyond those the eye can actually see.

So the XYZ rhombohedron has many more colors (visible and otherwise) than the RGB cube has. (A "color" is some combination of "hue," such as orange or teal; "saturation," such as vivid orange or muted teal; and "value," such as light vivid orange or dark muted teal.) In particular, a range of extremely saturated colors that the eye is capable of seeing and responding to lie outside the RGB cube.


Color negative and print film uses a differently positioned cube, for a different color space. Film's color space is represented by a cube whose axes are C, M, and Y, not R, G, and B. C (for cyan), M (for magenta), and Y (for yellow) are the secondary colors or subtractive primaries. Each can be made by adding equal amounts of exactly two of the additive primaries: R (for red), G (for green), and B (for blue). For example, adding equal amounts of R (red) and B (blue) produces M (magenta).

Color film uses a strategy of subtracting CMY secondary colors from white light, whereas TV and video use a strategy of adding RGB primaries to make white light. That's why film uses a CMY color space whose cube does not necessarily match RGB's in the diagram above.

(Actually, film uses a CMYK color space, where K stands for "blacK." By itself, CMY does not produce convincing blacks. So an independent "primary color," K, is used as a layer in color film stock to make blacks more solid and convincing. In what follows, I will simply ignore K.)
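In idealized form, the relationship between the two spaces is just one subtraction per channel. Here is a minimal Python sketch of that idea (real film dyes are nowhere near this tidy, and K is ignored, as above):

```python
# Idealized conversion between additive RGB and subtractive CMY, each channel a
# fraction from 0 to 1. Real film dyes are far less well-behaved than this.
def rgb_to_cmy(r, g, b):
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    return (1 - c, 1 - m, 1 - y)

print(rgb_to_cmy(1.0, 0.0, 1.0))   # red + blue light (magenta) -> dye (C=0, M=1, Y=0)
print(cmy_to_rgb(0.0, 0.0, 1.0))   # yellow dye only -> light (R=1, G=1, B=0), i.e. yellow
```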

The CMY cube, were it to be drawn into the diagram, would (for one thing) have a whole different orientation than the RGB cube has, with its corners in different places than those of the RGB cube. For another thing, the primary-color chromaticities — the x and y coordinates of the specific cyan, magenta, and yellow hues that are used to define the CMY cube — make for a CMY cube with a different color gamut than the RGB cube's.

If the CMY gamut were wider than RGB's, as it apparently is, that would mean that some of the colors available in CMY as used in film don't "make the cut" when film is transferred to video. Those colors would instead be represented by colors available in the RGB gamut which approximate the original colors.

Furthermore, for like reasons having to do with the two color-space gamuts not overlapping precisely at their edges, there may be colors available in RGB that can't be accurately reproduced in CMY.

In the view of Ed Milbourn at HDTV Magazine, as stated in "Ed's View - The 'Film Look'," today "the video color gamut is somewhat greater than that of film." (Here, Milbourn is not referring to the old, highly saturated look of classic Technicolor.) "This," he continues, "gives video the capability of reproducing a slightly wider range of colors than film, adding to the color 'snap' of HDTV video. All of these factors (and a few others) combine to give film color a more muted, realistic look than video."

So HDTV color is not, and cannot be, as cinematic as it would be if it used the CMY color space common to color film stocks instead of the narrower-gamut RGB color space upon which video is standardized!

More on HDTV color in Part 4 (which I may not get to for a while).

Wednesday, July 06, 2005

Color Me ... a Fan of Color (Part 2)

In Color Me ... a Fan of Color (Part 1), I began sneaking up on some sort of formal understanding of how the color on HDTV can look so vivid, nuanced, sharp, and even downright cinematic. Douglas Bankston's two-part series on "The Color-Space Conundrum," found online at the American Cinematographer website here and here, formed the basis for my discussion.

To review briefly: color perception being so personal and idiosyncratic, color scientists needed a way to standardize the millions of possible combinations of hue, intensity/saturation, and light-to-dark "value" in an objective way. Enter, accordingly, the CIE chromaticity diagram. It mapped all possible real-world "color spaces" into one large, abstract three-dimensional solid which could for convenience be represented on a flat, two-dimensional graph as a shark fin-shaped triangle made of every conceivable hue and saturation.

Every real-world application of color imaging, however, uses a narrowed gamut or palette which, for technical reasons, typically excludes one or more ranges of the purer, more intense colors lying around the outer regions of the diagram. This narrowed gamut is the application's "color space." The outer limits to any particular application's gamut stem from many factors, one of which has to do with whether the application uses an additive color process or a subtractive color process.


Additive colors
TV systems, HD and otherwise, use an additive color process which builds the image out of red, green, and blue (RGB) pixels. In theory, when you add together the three primary colors in various quantities and proportions, you can make any color under the sun. It can be either light or dark, depending on the total quantities of R, G, and B. And it can be either saturated or muted, depending on the relative contribution of the smallest of the three additive primaries, compared with the summed quantities of the other two primaries.

For example, you can make pure brown by mixing two small dollops of red with one small dollop of green. Make the two dollops big instead of small, and you get pure orange, not brown. You can think of brown as "dark orange" and of orange as "light brown." But both are pure, maximally saturated colors, since the third primary, blue, is wholly absent. Start adding in a little blue, though, and the brown/orange starts verging away from a pure, maximally saturated hue and toward a (dark, for brown; light, for orange) gray. Before it becomes fully gray, it's any of a huge number of muted, desaturated versions of brown or orange.

If you add R, G, and B in equal amounts, that's when you get an actual gray — or, if you use large enough dollops, white. White and the various shades of gray can be thought of as maximally desaturated colors.
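Here is the brown/orange example in rough numbers, as a small Python sketch. The proportions are illustrative only, and the "saturation" figure is a crude one (one minus the smallest primary divided by the largest), just to make the point that brown and orange are both fully saturated while adding blue, or equalizing all three primaries, drains the color away:

```python
# The brown/orange example above in rough numbers; proportions are illustrative
def saturation(r, g, b):
    # Crude measure: 1.0 when one primary is absent, 0.0 when all are equal
    return 1 - min(r, g, b) / max(r, g, b)

examples = {
    "brown (2 parts R, 1 part G, small dollops)": (0.4, 0.2, 0.0),
    "orange (same 2:1 ratio, big dollops)":       (1.0, 0.5, 0.0),
    "muted orange (a little blue mixed in)":      (1.0, 0.5, 0.3),
    "gray (equal amounts of all three)":          (0.5, 0.5, 0.5),
}
for name, (r, g, b) in examples.items():
    print(f"{name}: saturation {saturation(r, g, b):.2f}")
```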

In terms of the CIE chromaticity diagram, TV imaging systems typically select, in defining their color space, a "red," a "green," and a "blue" that are not as pure as those at the "corners" of the CIE diagram. In the real world, it is hard to find sources of light that will produce absolutely pure primary colors and can be used in direct-view TV displays or in front and rear projectors' "light engines."

Consequently, the real-world primaries — the colors of the actual phosphors (or whatever) that are used to light up TV images — impose a gamut restriction on the color space of the TV. The palette of colors that are actually able to be reproduced by the TV or the signal transmission system which feeds it can be diagrammed as a subsidiary triangle within the larger CIE diagram.

TV colors
To the right is a version of the CIE chromaticity diagram (it's from an online essay on colorimetry standards found here) which illustrates the subsidiary triangle of television color reproduction. We see the customary shark-fin shape of the spectral locus/line of purples. In its center is a point marked D65, representing a standard chromaticity for white. A so-called black-body radiator of light will produce D65 when raised to the temperature 6504 degrees Kelvin, or 6504 K.

(Actually, the x and y coordinates of D65 were fixed before physicists decided that a key numerical constant, Planck's constant, needed to be revised slightly. That revision shifted the so-called Planckian locus, the curve representing the various colors of white produced by black-body radiators when heated to different absolute temperatures. So D65 wound up just off the locus; 6504 K is the color temperature associated with the point on the locus nearest D65.)

Within the spectral locus/line of purples perimeter in the diagram above (the outer "shark fin") are two triangles, one solid and one dashed. The solid one represents the SMPTE 170M color space, defined by the Society of Motion Picture and Television Engineers for standard-definition TV. It is sometimes (somewhat inaccurately) spoken of as the International Telecommunication Union's "Rec. 601" color space. (Not shown is the ITU's "Rec. 709" color space for HDTV. Its corners, however, are quite close to those of SMPTE 170M/Rec. 601.)

The dashed triangle encloses the long-obsolete color space originally defined in 1953 for NTSC color TV transmissions in the United States. Notice that it has a wider gamut of colors (since it's a bigger triangle) than SMPTE 170M/Rec. 601. That wide 1953 gamut was narrowed over the next few years in response to consumer preferences for brighter color TV displays. With the phosphors available, you could have either a dim-but-wide color gamut or a bright-but-narrow color gamut, and people preferred the latter.
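For the curious, here is a small Python sketch of what those triangles mean in practice. The (x, y) chromaticity coordinates are the commonly published figures for the two standards (treat them as illustrative rather than gospel), and the test point is simply a vivid green I picked because it falls between the two green primaries:

    # Chromaticity coordinates (x, y) of the primaries, as commonly published.
    SMPTE_170M = {"R": (0.630, 0.340), "G": (0.310, 0.595), "B": (0.155, 0.070)}
    NTSC_1953  = {"R": (0.670, 0.330), "G": (0.210, 0.710), "B": (0.140, 0.080)}

    def signed_area(p, q, r):
        # Twice the signed area of triangle p-q-r; the sign says which side of edge p-q point r is on.
        return (q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])

    def in_gamut(space, point):
        # A chromaticity is reproducible only if it lies inside the triangle of primaries.
        r, g, b = space["R"], space["G"], space["B"]
        signs = [signed_area(r, g, point), signed_area(g, b, point), signed_area(b, r, point)]
        return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

    vivid_green = (0.26, 0.61)
    print(in_gamut(NTSC_1953, vivid_green))   # True  -- inside the big 1953 triangle
    print(in_gamut(SMPTE_170M, vivid_green))  # False -- outside the narrower modern gamut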


Subtractive colors
Movies are made with film stocks that employ a subtractive color process. Cyan, magenta, and yellow dye layers in the film absorb, respectively, red, green, and blue from the white light of the projector lamp.

If you hold a strip of film up to a white light and it looks yellow, for example, that's because a yellow dye in the film has subtracted all the blue from the initially white light. Only the red and green wavelengths of light get through; they combine to make yellow.

If the strip looks green, then yellow and cyan dye layers working together have subtracted, respectively, blue and red from the white light, leaving green.

If the strip looks black, then yellow, cyan, and magenta dye layers have all been at work. They have subtracted, respectively, blue, red, and green wavelengths, leaving no light whatever, or black. (In practice, though, color films employ ways of making black look solider and more convincing than simply subtracting red, blue, and green from white.)
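The same idea in toy-Python form, with idealized "block" dyes whose densities run from 0.0 (no dye) to 1.0 (complete absorption of the complementary primary); real film dyes are messier than this, of course:

    # Idealized subtractive mixing: each dye layer absorbs one additive primary
    # from the projector's white light.
    def transmitted(cyan, magenta, yellow, white=(1.0, 1.0, 1.0)):
        r, g, b = white
        r *= 1.0 - cyan      # the cyan dye layer absorbs red
        g *= 1.0 - magenta   # the magenta dye layer absorbs green
        b *= 1.0 - yellow    # the yellow dye layer absorbs blue
        return (r, g, b)

    print(transmitted(0, 0, 1))   # yellow dye only: blue removed -> (1.0, 1.0, 0.0), looks yellow
    print(transmitted(1, 0, 1))   # cyan + yellow: red and blue removed -> (0.0, 1.0, 0.0), green
    print(transmitted(1, 1, 1))   # all three dyes at work -> (0.0, 0.0, 0.0), black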

Additive color systems' color spaces, unfortunately, do not map perfectly to those of subtractive color systems, and vice versa. That's one reason why transferring film to video can create color mismatches.

More on HDTV color later, in Part 3.

Tuesday, July 05, 2005

Color Me ... a Fan of Color (Part 1)

To this blogger, one of the fine things about HDTV is how great the color in the image looks. It is at once vivid, nuanced, and sharp. It looks more cinematic than anything else I have seen on a TV screen.

There are of course various reasons why HDTV color is better than that of standard-definition TV. I'd like to explore some of them, but to a certain extent I'm held back by how daunting the subject is, technically speaking. So I intend to approach the topic with little baby steps.

One good place to begin is with a pair of articles, written by Douglas Bankston, from the archives of American Cinematographer. They compose a two-part series on "The Color-Space Conundrum." Part One is here, and Part Two is here.

"Humans can see an estimated 10 million different colors," writes Bankston, "and each of us perceives every color differently." We remember colors in real-life scenes, furthermore not as colors per se but as contexts: "A memory color is not 'green'; a memory color is 'grass'." When what we think of as "true colors" are actually contexts of personal memory, how do we bring any degree of objectivity to our use of color in scenes we manufacture, as in the movies.

One way is to describe "color spaces" scientifically. When we do that, we first need to map those 10 million different colors — their hues, their intensities of vividness versus mutedness, their values of darkness versus lightness — onto a highly conceptual three-dimensional set of axes. One of the maps we have come up with is the one experts call "the CIE chromaticity diagram."

CIE Chromaticity Diagram
The objective three-dimensional map — or, actually, a two-dimensional projection of it onto a flat piece of paper — can then be used as a point of comparison for all real-world color spaces, such as the one used in HDTV or the quite similar one used in SDTV.

Loosely, the CIE chromaticity diagram has a horseshoe/triangular shape, with red, green, and blue hues occupying the areas near the vertices. This is no accident. The retina of the human eye has three types of color receptors or "cones": one type for red, one for green, and one for blue. We see all colors in terms of these three basic colors, called primary colors.

In fact, all other colors can be thought of as mixtures of two or more of these three additive primaries. For example, there are three possible ways to mix exactly two primaries in equal amounts, leaving out the third:

  • Red + green = yellow
  • Green + blue = cyan
  • Blue + red = magenta

Yellow, cyan, and magenta are accordingly called the secondary colors or subtractive primaries. If we were to place, say, a magenta filter in front of a source of white light, what it would actually be doing is subtracting green from the light — green is the color complementary to magenta — while passing through the mixture of red and blue which we call magenta. This is why magenta (like yellow and cyan) is called a "subtractive primary."
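A few lines of Python make the arithmetic of that plain. The unit-intensity primaries and the on/off filter are my own toy simplification, not anything from a video standard:

    # Toy additive primaries at unit intensity.
    red, green, blue, white = (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)

    def mix(a, b):
        # Additive mixing: light from two sources simply sums, channel by channel.
        return tuple(x + y for x, y in zip(a, b))

    print(mix(red, green))    # (1, 1, 0) -> yellow
    print(mix(green, blue))   # (0, 1, 1) -> cyan
    print(mix(blue, red))     # (1, 0, 1) -> magenta

    def magenta_filter(light):
        # A magenta filter subtracts the complementary primary (green) and passes the rest.
        r, g, b = light
        return (r, 0, b)

    print(magenta_filter(white))   # (1, 0, 1) -> the red-plus-blue mixture we call magenta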

The outer periphery of the CIE chromaticity diagram has the most vivid — i.e., purest — colors. As we move inward, away from the very edges of the diagram, we encounter all the colors that have some contribution made to them by all three primaries, red, green, and blue. At the diagram's very edge, at least one primary is always lacking. At the triangle's "corners," two primaries are lacking.

When I talk about the "edge" of the chromaticity diagram, I refer to the part of the outline that has a horseshoe or shark-fin shape. This is the so-called spectral locus. The straight line connecting the "blue corner" with the "red corner" is not part of the spectral locus. It is the line of purples. Every hue along it is a shade of purple that cannot be made by a single wavelength of light. Rather, contributions from both red and blue wavelengths are required.

3D CIE Diagram
When we move increasingly far away from the spectral locus/line of purples, we eventually get to the very "center" of the CIE chromaticity diagram. This is the area in which all three primaries are present in equal proportions, and the result is a "colorless" white. If we were to imagine moving from that white point gradually to a succession of other points located behind the page — we are now moving into the diagram's third dimension, that of depth — the white would turn first light gray, then darker and darker shades of gray, and finally black. (See diagram at right for the "Blk" point behind the two-dimensional horseshoe.)

The 3D CIE diagram shown here actually represents an LMS color space, where the L axis corresponds to the long-wavelength (roughly, red-sensitive) cones of the retina, the M axis is for medium-wavelength (green-sensitive) cones, and the S axis is for short-wavelength (blue-sensitive) cones.

Most of the time, the term "color space" really refers to a subset of the full three-dimensional CIE chromaticity diagram, a narrowed gamut or palette which, for technical reasons, typically excludes some of the purer colors on the outer regions of the diagram. It is with reference to maps like the CIE diagram that we can translate from, say, the color space used by a particular type of motion picture film to the color space used by HDTV.

Color spaces are usually defined with reference to associated sets of axes. For example, a color space defined with respect to red, green, and blue primaries uses R, G, and B as its axes and is called an RGB color space. There can also be color spaces defined with respect to (for example) X, Y, and Z axes, where X (not R) represents long-wavelength reds, Y (not G) medium-wavelength yellows/greens, and Z (not B) short-wavelength blues.

The X, Y, and Z axes, not R, G, and B, are the ones used for (most) representations of CIE-defined chromaticity. Moreover, the Y axis corresponds to video "luminance." Since it is possible to transform RGB to XYZ, and (with gamut limitations) XYZ back to RGB, for the most part the two sets of axes are equivalent.
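In fact the transform is nothing more exotic than a 3-by-3 matrix. Here is a Python sketch using the commonly published matrix for linear Rec. 709 RGB with a D65 white point; notice that the middle row, all by itself, computes Y, the luminance:

    # Commonly published linear Rec. 709 RGB -> CIE XYZ matrix (D65 white point).
    RGB_TO_XYZ = [
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],   # this row alone yields Y, the luminance
        [0.0193, 0.1192, 0.9505],
    ]

    def rgb_to_xyz(r, g, b):
        return tuple(row[0] * r + row[1] * g + row[2] * b for row in RGB_TO_XYZ)

    # Reference white (1, 1, 1) should land on the D65 white point of the CIE diagram.
    X, Y, Z = rgb_to_xyz(1.0, 1.0, 1.0)
    print(round(X / (X + Y + Z), 4), round(Y / (X + Y + Z), 4))   # approximately 0.3127 and 0.3290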

There are many other color-space possibilities, but (just as the LMS color space does) they all map to the actual 3D CIE diagram, and thus to the 2D CIE diagram which is its projection in two dimensions. Accordingly, at least in theory, any color space and its associated set of axes can be mapped to any other color space and set of axes.

In practice, though, some colors that are present in one color space may not be present in another color space. In the real world, the mapping of one color space to another is not always complete. This is what I meant above by "gamut limitations." If one set of axes or color space is mapped to another, it may turn out that a particular color that is being mapped does not appear in the target color space — it is an "out of gamut" color — and so can only be approximated by a color that does appear in the target space.

For example, a deeply saturated red which is present in the color space of a motion-picture film stock may not be present in the color space used for HDTV transmissions. On even the best, most color-accurate HDTV, it will be stood in for by a slightly desaturated red primary.
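Here is what that looks like numerically, in the same vein as the sketch above but self-contained, using the commonly published inverse matrix (CIE XYZ back to linear Rec. 709 RGB). The "film red" chromaticity is just a hypothetical, deeply saturated point near the spectral locus that I made up for illustration:

    # Commonly published CIE XYZ -> linear Rec. 709 RGB matrix (D65 white point).
    XYZ_TO_RGB = [
        [ 3.2406, -1.5372, -0.4986],
        [-0.9689,  1.8758,  0.0415],
        [ 0.0557, -0.2040,  1.0570],
    ]

    def xyz_to_rgb(x, y, z):
        return tuple(row[0] * x + row[1] * y + row[2] * z for row in XYZ_TO_RGB)

    # A hypothetical, deeply saturated "film red": chromaticity (0.70, 0.29), near the locus.
    x, y, Y = 0.70, 0.29, 0.2
    X, Z = x * Y / y, (1 - x - y) * Y / y
    rgb = xyz_to_rgb(X, Y, Z)
    print([round(c, 3) for c in rgb])                       # green and blue go negative; red overshoots 1
    print([round(min(1.0, max(0.0, c)), 3) for c in rgb])   # clipped into range: the nearest the HDTV primaries can get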

More on color later, in Part 2.

Weekend Pleasures

Over the Fourth of July 2005 weekend just past, your faithful blogger watched two movies on HBO-HD that you may want to add to your list of good movies that you shouldn't have missed.

In addition, I began watching the episodes I recorded during the (non-hi-def) USA Network's The 4400 marathon. This series, now wrapping up its second year, was recommended to me by a good friend. It's about a group of 4,400 people mysteriously re-deposited in the present after having been abducted at various times in the past. The identity of the abductor: apparently, someone or something having to do with the future. The returnees haven't aged, and they have very little idea what has happened to them. But each gradually wakes up to the fact that he or she has acquired some sort of strange, abnormal power ... . A good show.

As for the movies, they were Woody Allen's virtually unnoticed 2003 film Anything Else, starring Allen, a hapless Jason Biggs, and a devastatingly delightful Christina Ricci; and The Cooler, starring William H. Macy, Alec Baldwin, and Maria Bello, a film which marked the directorial debut of Wayne Kramer in (also) 2003.

In garnering fully 6,721 votes at the Internet Movie Database, The Cooler has stimulated more than half again as much expression of opinion as Anything Else, with a piddling 3,922 tallies. IMDB'ers rate the former at a quite decent 7.0 out of 10, the latter at a mediocre 6.4. (Star Wars: Episode III — Revenge of the Sith rates 7.9 with nearly 60,000 voters, a level of enthusiasm and support with which I fully agree.)

Personally, I would reverse the two rankings of the "lesser" films. I think The Cooler is a good film, Anything Else a very good one.


Anything Else
In Anything Else, Allen is in my opinion quite close to the form he exhibited earlier in his career with Annie Hall (1977) and Hannah and Her Sisters (1986). That is, he's doing well what he does best: skewering the lengths to which we will go, all of us, to both conceal and justify our cravings — to ourselves, to our friends, to our shrinks, and especially to those we say we love.

When we crave to be with some significant other, we will say and do absolutely anything to bring that result about. Then, later, when we inevitably crave to be apart from him or her, we will again stop at nothing to insure that outcome. We'll even say the diametric opposite of what we once claimed was, with us, undying truth ... if it'll help us undo what we so foolishly did at a moment in time when we couldn't live without the person we can no longer live with.

Woody had me rolling on the floor in a scene thus described in this review: "Amanda's [Ricci's] blowsy divorcee mom Paula (Stockard Channing) unexpectedly crashes at the couple's apartment, with mom fancying herself as a piano-playing chanteuse who eventually takes up with a coke-snorting horse whisperer." When Channing straight-facedly tells Ricci, frantically hunting as usual for her Valium, that the younger man she seriously intends to take up with gives his occupation as "horse whisperer," I was no more good.

I think Woody Allen's once-major acclaim has gone into near-total eclipse for reasons having little to do with how good his films continue to be. For whatever reason, the times have turned against him. Quel dommage!


The Cooler
The Cooler seems to have parceled out some of Woody's lost acclaim to Wayne — Kramer, that is. Kramer, born in South Africa and a resident of L.A. since 1986, was 39 years old when he directed his first movie. (Allen was 34 when he directed his first from-scratch flick, 1969's Take the Money and Run.) Kramer also co-wrote The Cooler, a film about the effect love has on Lady Luck.

Picture William H. Macy (it's not hard to do) as a man whose luck is so bad, he's in the employ of casino owner Alec Baldwin to stop by gaming tables where patrons on a roll are raking in the moolah. As soon as Macy appears, their luck turns sour. His personal black cloud has, it seems, rained on their parade.

Can such a man ever be lucky in love? Turns out maybe he can, with casino waitress Maria Bello ... though at first her interest in him has been bought and paid for by Baldwin. Baldwin, for his part, is one of those characters who is so selfish, he sometimes does generous things ... for people he thinks can help him hold onto the power he was accustomed to wielding in the "Old Vegas," now rapidly vanishing in favor of the Disneyfied entertainment megaplexes which he loathes.

The plot's outcome in The Cooler makes all the difference in the world, and it's hard to guess what it will be until the last roll of the dice comes up ... whatever it does come up (I'm not telling!). Oddly, the plot outcome in Anything Else, as in most other Woody Allen films, makes no real difference whatsoever. Whatever it happens to be, no one winds up any wiser, truth be told. All the main characters' resolutions to do better or be smarter will, as everyone in the audience knows, go totally poof! the very next time they begin to crave whatever it is those selfsame resolutions and wise sayings stand in the way of.

Happy HD'ing!

Saturday, July 02, 2005

Night Glows: Collateral on HBO-HD

Typical scene from
Collateral
Last night I watched Michael Mann's 2004 thriller Collateral, recorded on my second HD DVR cable box (ye-e-e-s-ss!), and viewed on my 32" Hitachi plasma HDTV. The HBO-HD rendering was phenomenal. A great movie in its own right, and (I couldn't help feeling) a homage to Hitchcock's "bad guy's guilt splashes all over the wrong man's life" films, it was doubly good because of its tour de force cinematography.

Jamie Foxx, as Max, is the "wrong man" in Mann's opus. He's a late-night L.A. taxi driver who agrees to convey Tom Cruise, as Vincent, around the city to transact five quick pieces of business ... which turn out to be drug-world assassinations. Foxx takes major gas, but can't get away. So we're treated to scene after scene shot in the various kinds of artificial illumination you'd expect to find in a taxi cruising the nighttime streets of a major urb that never sleeps. The actors' faces, dark-toned for Foxx, pale for Cruise, show up only because they're bathed in the ephemeral glows emanating from tungsten and sodium street lighting, neon signage, taxi dashboards, laptop screens, cell phones — you name it.

There are also a number of interior scenes: fluorescent-lit hospitals, garish disco clubs, underlit parking garages, etc. Again, as with the exteriors, the cinematographic artistry is great.


It turns out that there are two surprises concerning the eye-candy camera work of Collateral. One, it was the work of two separate directors of photography. Two, it was mostly shot direct to video, not on celluloid. This article from American Cinematographer tells the story.

Paul Cameron (Man on Fire, Gone in 60 Seconds) was the lead cinematographer while the film was in preparation and during the first three weeks of actual shooting, after which "creative differences" with director Mann (Ali) sent him packing. His replacement for the rest of the film was Dion Beebe (Chicago). How interesting that such remarkable photography could be achieved without a single unifying force behind it.

The use of high-definition video cameras rather than film permitted, the article says, "the extensive night-exterior work in Collateral to make the most of available light in and around Los Angeles." Cameron: “Using HD was something [director] Michael [Mann] had already settled on by the time I came aboard. He wanted to use the format to create a kind of glowing urban environment; the goal was to make the L.A. night as much of a character in the story as Vincent and Max were.”

Two different types of video cameras were used for the night exteriors, the Sony/Panavision F900 and the Thomson Grass Valley Viper FilmStream. They had in common that they shot footage in "24p" — jargon for the 1080p/24 digital video format. So each frame nominally had 1,080 scan lines (but see below), scanned progressively, all at once, rather than odd-even interlaced. The cameras recorded what they shot on internal and external digital tape decks and/or computer hard drives.

To get the lively "glowing urban environment" look they wanted, the filmmakers had to "push" the video gain of their cameras above normal levels, at the risk of adding too much video "noise" to the night-lit image. This wasn't easy to meter, since what looked fine on video monitors on the set didn't necessarily look good on the final "film-out" — i.e., when the video footage was transferred to celluloid. In fact, special electroluminescent display panels were constructed to throw enough light of the desired "slightly cool green" color temperature on the actors' faces to hold HD video noise at bay. All this was done in the full, if counterintuitive, knowledge that the oddities of the eventual film stock used would alter those of the raw HD video images, giving the final result that was anticipated.


The shape of the 1,920 x 1,080-pixel video frame used for Collateral was 2.40:1, slightly wider than one of Hollywood's two most commonly used widescreen aspect ratios, 2.35:1. (The other is 1.85:1.) This entailed using two different strategies with the two camera types. The Thomson Viper camera could do "a native internal anamorphic squeeze." It used digital signal processing to narrow the 2.40:1 (or, actually, 2.37:1) image frame to fit within a 1,920 x 1,080 or 1.78:1 box, which would then be printed to film. Optics in the film projector would eventually unsqueeze the image back to 2.40:1.

The Sony F900 couldn't do that. It had to crop the image vertically from 1,920 x 1,080 to approximately 1,920 x 764, later optically squeezed for printing to film. That meant that there was less vertical resolution in the F900 footage than in the Viper.
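For those who like the arithmetic spelled out, here is a rough Python sketch of the two strategies. These are nominal figures based purely on the frame geometry; the article's roughly-764-line figure for the F900 presumably reflects the production's exact extraction area, which I don't know:

    # Nominal geometry of a 16:9 HD frame versus a 2.40:1 (really 2.37:1) release shape.
    frame_w, frame_h = 1920, 1080
    container_ar = frame_w / frame_h          # 1.78:1

    # Viper-style anamorphic squeeze: compress the wide picture horizontally into the
    # 16:9 frame, then let the projection optics stretch it back out.
    squeeze = container_ar / 2.37
    print(round(squeeze, 2))                  # 0.75 -- i.e. a 4/3 horizontal unsqueeze later

    # F900-style crop: keep the full width, discard scan lines top and bottom.
    cropped_h = round(frame_w / 2.40)
    print(cropped_h)                          # 800 active lines, nominally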

Most of Collateral's interiors were shot right to film, the old-fashioned way ... with the notable exception of the ultra-dark office building where the movie's climactic sequence begins. I was blissfully unaware of any difference between the video footage and the direct-to-film. The only problem I had with HBO's rendition was that it was cropped to 1.78:1 to please the every-pixel-must-be-lit crowd, which resulted in some ludicrous framing, particularly in the scene where Max talks to Felix, an underworld potentate, in his night club.

That quibble aside, I thought the look of Collateral on HBO-HD was good enough to be Exhibit A in any case that one might make for buying an HDTV right now, today!

Friday, July 01, 2005

HDTV ... on DVD Real Soon Now?

The picture we get from DVDs can look pretty darn good. But its resolution is limited to 700+ pixels across a 16:9 screen, and 480 in the vertical direction. It would be nice to have truly hi-def DVDs, with resolutions up to 1,920 x 1,080, no?

Well, maybe it would.

Such DVDs are in fact coming, probably very soon. Problem is, they're coming in two mutually incompatible formats.

It looks as if the first of these formats, HD DVD, will debut in time for Christmas 2005. The second, Blu-ray, could premiere by spring 2006. HD DVD is being backed by one roster of movie studios, consumer electronics companies, and computer makers, Blu-ray by another. If you want to be able to play every movie released on high-definition DVD, you'll need two players.

Details of the format war that is shaping up between the backers of HD DVD and those of Blu-ray appear in this article from the Los Angeles Times website.

HD DVD and Blu-ray squeeze three to five times the customary amount of information onto a disc that looks about like a DVD. Where a standard DVD holds a "measly" 4.7 gigabytes, HD DVD holds a minimum of 15 GB, Blu-ray a minimum of 25 GB. Those are figures for single-layer discs. Adding semi-transparent extra layers can double the disc capacities.
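If you want to check my "three to five times" claim, a couple of lines of Python will do it, using the single-layer figures above:

    dvd, hd_dvd, blu_ray = 4.7, 15.0, 25.0                   # single-layer capacities, GB
    print(round(hd_dvd / dvd, 1), round(blu_ray / dvd, 1))   # 3.2 5.3 -- three to five times
    print(hd_dvd * 2, blu_ray * 2)                           # dual-layer: 30.0 and 50.0 GB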

Of course, you need more data capacity for hi-def, lots more. Plus, the powers that be expect consumers to demand yet more special features on hi-def DVD titles. Those take up space.

The secret to all the extra data space is switching from reading the discs with a conventional red laser to a blue one with a shorter wavelength. The shorter the wavelength, the more narrowly focused the laser beam can be. Hence, ultra-tiny digital specks on the reflective surface of the disc can be squeezed more tightly together than ever before.
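A rough Python sketch shows why the wavelength matters so much. The focused spot size scales roughly with wavelength divided by the lens's numerical aperture; the wavelengths and apertures below are the commonly cited figures, so take them as approximations:

    def relative_spot(wavelength_nm, numerical_aperture):
        # Focused spot diameter scales roughly with wavelength / numerical aperture.
        return wavelength_nm / numerical_aperture

    dvd_spot     = relative_spot(650, 0.60)   # conventional red laser
    blu_ray_spot = relative_spot(405, 0.85)   # blue-violet laser, higher-aperture lens
    print(round((dvd_spot / blu_ray_spot) ** 2, 1))   # spot AREA shrinks by roughly 5x

That factor of roughly five squares nicely with the jump from 4.7 GB on a DVD layer to 25 GB on a Blu-ray layer.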

Both formats' players will also play conventional DVDs, never fear. The sticking point is that neither will play the other's discs. The main reason is that HD DVD places the main reflective layer 0.6 millimeters below the clear outer surface of the disc, the distance used for conventional DVDs.

Blu-ray achieves its higher single-layer data capacity by raising the reflective layer to just 0.1 mm below the surface. The downside there is that today's DVD manufacturing plants would need a near-total makeover to handle the change to Blu-ray.

It is, moreover, apparently either too costly or downright impossible to build a player which can shift between the two focus depths and decode both HD DVD and Blu-ray discs. Or, if that's not the real stumbling block, then including all the decoding logic for the other guys' technology is. It involves paying patent royalties no self-respecting consumer electronics firm wants to pay.

Is this economics, or is it politics?

Possibly some of both. The point, again, is that two players will be needed to play all the blue-laser discs that will soon be coming out.


The two sides in the emerging format war each carry a lot of clout. The hardware manufacturers backing HD DVD include Toshiba, NEC, Thomson/RCA, and Sanyo ... plus, implicitly, all 230 members of the DVD Forum, the industry group which "officially" sanctions DVD formats and which has given its blessing to HD DVD. Microsoft is another HD DVD proponent, having announced that Longhorn, its next Windows operating system, will support the format.

And Microsoft is, by the way, the originator of one of the two advanced digital video compression methods available for use with both HD DVD and Blu-ray, Microsoft Windows Media 9 compression. The other available choice is MPEG-4 AVC. Either of these two choices may be used on any given disc. (Standard DVDs use the increasingly obsolete MPEG-2 compression method.)

Equipment makers favoring Blu-ray include Sony, Philips, JVC, Pioneer, Panasonic, Hewlett-Packard, Dell, Hitachi, Samsung, and TDK. The Blu-ray list, Blu-ray advocates point out, also includes many of the numerous DVD Forum members. In the computer world, Apple is in the Blu-ray camp.

First among equals in the HD DVD camp, technology-wise, is Toshiba, primarily responsible for defining the format. Blu-ray is basically a Sony creation. Sony, in fact, plans to roll out its PlayStation 3 console with Blu-ray play capability next spring. That cross-marketing with videogame freaks may tip the format war Sony's way. But stay tuned ... Microsoft may be planning to bring out an HD DVD-equipped Xbox 2 before the PS 3 arrives.

The format war's outcome may depend mostly on which studios line up behind which format, however. So far, New Line, Paramount, Universal, and Warner Bros. have said they'll be releasing only on HD DVD, while Disney, Sony-owned MGM, and Sony's Columbia TriStar will go exclusively with Blu-ray. 20th Century Fox has recently signaled it, too, will be in Blu-ray's camp. Still on the sidelines are DreamWorks and Lion's Gate.

Everyone seems to expect the new players, of whichever format, to be pricey at first ... in the neighborhood of $1,000. The discs themselves will probably cost in the $25-40 range, for both formats.

The Blu-ray Disc Association's official website is here. A centralized repository of information about HD DVD can be found here.