Tuesday, July 26, 2005
Even though the "bit depth" of the "grayscale resolution" is nominally just a barely sufficient 8 bits, that depth can effectively be extended to 10 or 11 bits, using tricks. An early, crude way to do that bit-depth boost is "dithering." "Dark video enhancement," involving a redesigned color wheel, now replaces dithering as a more subtle and successful method of keeping false edges out of darker portions of images.
Now I'd like to talk about another subtle and successful trick which the DLP mavens at TI have dreamed up. It's SmoothPicture, a way to get a DMD with 1080 rows and 960 (not 1920!) columns of micromirrors/pixels to display a 1920 x 1080, maximally high-definition picture that is progressively scanned — i.e., 1080p!
"A DMD that supports SmoothPicture contains an array of pixels aligned on a 45-degree angle," writes Alen Koebel in the August 2005 issue of Widescreen Review magazine. "TI calls this an offset-diamond pixel layout." The revised layout contrasts with the ordinary pixel arrangement, in which all the micromirrors' edges line up with the frame of the DMD.
With this new alignment, from each micromirror can be derived not one but two pixels on the screen. How? Koebel writes, "The incoming image is split into two subframes — the first containing half of the image’s pixels in a checkerboard pattern (odd pixels on odd scanlines and even pixels on even scanlines), and the second containing the remaining half of the pixels."
"During the first half of a video frame," Koebel continues, "the first subframe is displayed on the full [micromirror] array, addressed as 1080 rows of 960 columns. Just before the start of the second half of the video frame, a mirror in the projection path is moved [slightly] to shift the image of the DMD on the screen by exactly half a pixel horizontally. The second subframe is then displayed on the DMD during the second half of the video frame."
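The checkerboard split Koebel describes can be sketched in a few lines of Python. This is my own illustration of the idea only, not TI's actual processing; a frame is just a list of pixel rows, and a zero stands for a mirror held "off":

```python
def split_smoothpicture(frame):
    """Split a frame into the two SmoothPicture-style subframes.
    Subframe 1 keeps the pixels where row + column is even (the
    checkerboard); subframe 2 keeps the remaining half. Pixels not
    carried by a subframe are held at 0 (micromirror off)."""
    sub1 = [[p if (r + c) % 2 == 0 else 0 for c, p in enumerate(row)]
            for r, row in enumerate(frame)]
    sub2 = [[p if (r + c) % 2 == 1 else 0 for c, p in enumerate(row)]
            for r, row in enumerate(frame)]
    return sub1, sub2

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
a, b = split_smoothpicture(frame)
# Together, the two subframes carry every pixel exactly once
assert all(a[r][c] + b[r][c] == frame[r][c]
           for r in range(2) for c in range(4))
```

The half-pixel optical shift between the two subframes is what puts the second checkerboard's pixels in between the first's on the screen.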
The illustration to the right may help in visualizing what is going on. (Click on it to see a larger version.) At the bottom is a representation of part of a mirror. This, I believe, is a separate mirror, one that is "in the projection path," not one of the micromirrors. It swivels slightly such that it forms first one pixel on the screen, and then a second, slightly offset pixel next to it.
Accordingly, the pixels are diamond-shaped, not rectangular. It's too bad Mitsubishi owns the name DiamondVision, no?
Cheating, you say? Not true 1080p? Two points. One, there is a DLP chip that produces full-fledged, honest-to-goodness 1080p — or, actually, it does even better. It produces 2K horizontal resolution, fully 2,048 pixels across by 1,080 up and down. This digital micromirror device is called the DC2K, for "digital cinema 2K," and it's used in groups of three in (you guessed it) expensive digital-cinema projectors.
The DC2K is way too pricey for consumer-level gear, even in single-chip configurations. This is because it's physically larger than other DMDs. The larger the DMD, the harder it is to produce in quantity without flaws. The manufacturing yield is low, making the price correspondingly high.
The second point in favor of SmoothPicture is that the "slight loss of resolution, mostly in the diagonal direction," due to the overlapping pixels on the screen, "has an intended beneficial effect for which the feature is named: it hides the pixel structure."
"This is not unlike the natural filtering that occurs in a CRT projector, due to the Gaussian nature of the CRT’s flying spot," Koebel goes on to say. (By that he means that the moving electron beam in a CRT lights up a tiny circular area of phosphors: a spot. The luminance of the spot rises and then falls again in a bell-shaped, "Gaussian" pattern, proceeding from edge to edge.)
So, says Koebel, "TI’s SmoothPicture is probably the closest a pixel-based display has yet come to the smooth, film-like performance of a projection CRT display."
"Comparative measurements by TI show that a 1080p SmoothPicture rear-projection display exhibits higher contrast at almost all spatial frequencies (in other words, more detail) than two popular competing LCoS rear-projection displays," writes Koebel. I'm guessing that he's referring here to something experts in imaging science call the "modulation transfer function," or MTF.
My understanding of the MTF — which is admittedly quite limited — is that it quantifies an odd characteristic of human vision: an image appears to be much sharper when it has a higher modulation (that is, when it has a higher contrast ratio). At a lower modulation/contrast ratio, the image appears less sharp — even though, technically, it has the same resolution!
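The "modulation" in question is usually Michelson contrast, and the MTF is that modulation plotted against spatial frequency. A toy calculation (my own illustration, not anything measured by TI or quoted in the article):

```python
def modulation(l_max, l_min):
    """Michelson modulation: (Lmax - Lmin) / (Lmax + Lmin).
    Measured at ever-finer test patterns, this traces the MTF curve."""
    return (l_max - l_min) / (l_max + l_min)

# The same pattern on two hypothetical displays: the one that holds
# a deeper black shows higher modulation, hence looks sharper,
# even though both resolve the same number of lines.
assert modulation(100, 1) > modulation(100, 10)
```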
So, I assume, TI's SmoothPicture display gives better contrast ratios than its LCoS competitors, and that higher modulation results in subjectively sharper pictures with subjectively more detail.
Another topic I'd like to touch on briefly is also contrast ratio-related: DynamicBlack.
DynamicBlack is TI's name for automatic iris control. In front projectors there is often a user-adjustable circular iris in the light path which may be widened or contracted much like the iris in the eye. When you buy a front projector and screen, you have many options to choose from as to how large the screen is, how far away from the projector it is, and what the "gain" of the screen is. (Gain has to do with how much of the light from the projector is reflected back toward the audience's retinas. The higher the gain, the brighter the picture.)
With all these variables, it's easy to wind up with a picture that is too bright. One possible remedy: lower the contrast control on the projector. A better remedy: close down the projector's iris somewhat.
But then dark scenes may appear washed out, lacking in detail. You'd like to open the iris all the way just for those scenes.
Under other installation conditions, you might want to do just the opposite. You'd prefer to open the iris wide for bright scenes to get the maximum dazzle effect, then close the iris as much as it can be closed to make dark scenes really dark.
This kind of thing, in fact, is what TI's DynamicBlack is designed to do, automatically.
In so doing, it increases the on-off or scene-to-scene contrast ratio of a DLP display to, says Koebel, "well in excess of 5,000:1; some [displays] may reach 10,000:1."
Not that DynamicBlack actually lowers the display's black level, the way DarkChip technology does. It accordingly has no effect on the “ANSI” contrast ratio measurement for the display, which compares light and dark areas in a single scene. (It is this single-scene or "simultaneous" contrast ratio that the modulation transfer function is concerned with — see above.) DynamicBlack merely makes transitions from high-brightness scenes to low-brightness scenes seem more dramatic.
And that, for now, ends my investigation of the state of the DLP art. I may, however, come back to the subject in future installments. Stay tuned!
Sunday, July 24, 2005
The DLP "light engine" uses either three digital micromirror device (DMD) chips, or one:
Either way, red, green and blue images — in the three primary colors, that is — are formed separately and meet at the screen or at the retina of the eye. Each video frame is either spatially (with three DMDs) or temporally (using one DMD and a color wheel) divided into three subframes, one for each color primary. White source light is changed into the appropriate color for each subframe by means of wavelength-subtracting prisms or transparent colored segments of the spinning color wheel.
But, aside from the hue thus optically afforded it, each subframe is in effect a mosaic in black, white, and shades of gray.
An important question is, how many shades of gray?
The simple (much too simple; see below) answer is that consumer DLP displays and projectors divide the luminance range from black to white (ignoring hue) into 2^8, or 256, steps.
This means that real-world luminance, which actually falls anywhere along the continuous tonal range from black to white, is summarized as 256 discrete shades of gray. Alternately stated, the "grayscale resolution" that is used for consumer-level DLP displays is 256. Yet another way of putting it is that the grayscale's "bit depth" is 8 bits per primary color component: 256 is 2^8, so 8 bits are exactly what is needed to represent all 256 possible integer values from 0 to 255.
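In code, that summarizing is just a scaling to integer levels (a minimal sketch of uniform quantization, my own illustration):

```python
def quantize(luminance, bits=8):
    """Map a continuous luminance in [0.0, 1.0] to one of
    2**bits discrete gray levels (0 .. 2**bits - 1)."""
    levels = 2 ** bits
    return min(int(luminance * levels), levels - 1)

assert quantize(0.0) == 0      # black
assert quantize(1.0) == 255    # white
assert quantize(0.5) == 128    # mid-gray
```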
An 8-bit bit depth is not ideal; it's an engineering compromise. The problem is that the grayscale steps at lower luminance levels are visually too far apart. Instead of seeming to grade smoothly into one another, they form "bands" or "false contours" on the screen.
For mathematical reasons (each increment from step n to step n+1 represents a smaller and smaller percentage of step n as n grows), the banding or false contouring problem tends to go away as luminance level rises. But in darker scenes and in dimly lit areas of ordinary scenes, banding or false contouring can be objectionable.
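The arithmetic behind that, spelled out (my own illustration): the jump from level n to n+1 is a fraction 1/n of the current level, so the steps are perceptually coarse near black and negligible near white.

```python
def step_percent(n):
    """Percentage increase going from gray level n to level n + 1."""
    return 100.0 / n

assert step_percent(4) == 25.0    # near black: a glaring 25% jump
assert step_percent(250) == 0.4   # near white: an invisible 0.4% jump
```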
According to "DLP from A to Z," an excellent article by Alen Koebel in the August 2005 issue of Widescreen Review magazine, "Professional [DLP] video sources use 10 bits per component; professional displays should be able to reproduce at least 8,192 (2^13) gray levels. DLP Cinema projectors, arguably the most advanced manifestations of DLP technology, are currently able to reproduce more than 32,000 levels." (Magazine subscribers may download the WR article in PDF form by clicking on the appropriate hotlink on this page.)
"More than 32,000" equates to 2^15, or 32,768, levels of gray.
Koebel doesn't make it crystal clear, but I'm guessing that the main reason for the lower grayscale bit depth on consumer (i.e., non-professional) DLPs is that they're 1-chip, not 3-chip. Because of the need to use a color wheel, the time of each video frame has to be divided in three parts — or, rather, at least three parts, depending on how many segments there are on the color wheel. In one place in his article, Koebel does say that "having full gray scale resolution for all three colors ... is not feasible in the time available." I think this is what he means by that.
This problem is only exacerbated by the fact that the color wheel is rotated at a speed high enough to have "four (called 4X), five (5X), or six (6X) sets of RGB filters pass in front of the DMD chip during a [single] video frame." This is done to minimize the "rainbow" artifacts I discussed in An Eye on DLP, No. 1. But it reduces the duration of each primary-color subframe and thereby lowers the number of micromirror on-off cycles that can be squeezed into it. That in turn makes for an upper limit on the gray-scale resolution.
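Some illustrative arithmetic on why wheel speed eats into grayscale time. These are my own numbers, assuming a six-segment wheel and a 1/30-second frame:

```python
def subframe_ms(frame_rate=30, segments=6, speed=4):
    """Milliseconds available for one primary-color subframe, given a
    color wheel with `segments` filters spun `speed` times per frame."""
    return 1000.0 / (frame_rate * segments * speed)

# Going from 4X to 6X cuts each subframe's duration by a third,
# leaving fewer micromirror on-off cycles for building the grayscale.
assert subframe_ms(speed=4) > subframe_ms(speed=6)
```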
Several strategies are used to offset the banding/contouring problem at low luminance levels which results from not really having enough bit depth or gray-scale resolution. One of these strategies is "dithering."
In spatial dithering, if a pixel needs an "in-between" shade of gray that is not available due to bit-depth limitations, the pixel is formed into an ad hoc unit with adjacent pixels so that, among all the pixels of the unit, the average gray shade is correct.
Temporal dithering averages different shades of gray assigned to a particular pixel from one video frame to the next to get what the eye thinks is an in-between shade.
"Combined, spatial and temporal dithering typically add two or three bits of 'effective' resolution to the gray scale," says the WR article, "at the cost of increased noise and flicker in the image." If we assume an actual gray-scale resolution of 8 bits, dithering can nominally simulate a bit depth of 10 or 11 bits.
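Here is how temporal dithering's frame averaging buys those extra bits, in a toy sketch of my own (four 10-bit steps fit inside one 8-bit step, so averaging over four frames recovers the in-between shades):

```python
def temporal_dither(target_10bit, frames=4):
    """Approximate a 10-bit gray level with a run of `frames`
    8-bit frames whose time-average equals the target.
    A sketch of the principle only, not TI's algorithm."""
    base, extra = divmod(target_10bit, 4)
    # show `extra` frames one 8-bit step brighter, the rest at base
    return [base + 1] * extra + [base] * (frames - extra)

seq = temporal_dither(515)             # 515 in 10-bit = 128.75 in 8-bit
assert seq == [129, 129, 129, 128]
assert sum(seq) * 4 / len(seq) == 515  # the average hits the target
```

The cost the article mentions (noise and flicker) is visible right in the sketch: the pixel is literally flickering between two adjacent shades.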
The algorithm used for spatial and temporal dithering is much more complex than my description indicates. Suffice it to say that dithering involves an only semi-intelligent attempt to modify the gray levels assigned to an image's pixels in order to smooth over false contours. (And remember that gray shades translate into colors, by virtue of the fact that each pixel of each red, green, or blue primary hue, before it is tinted such by the optics of the DLP light engine, comes with its own associated "shade of gray.")
On my Samsung 61" rear projector, the dithering algorithm is apparently responsible for a "stippling" or "pebbling" effect I can see in black areas of the screen when viewed up close. For example, when there are black letterboxing bars on the screen, I can see patternless, busily moving "noise" in the rendering of black, if I get within inches of the screen — assuming I don't turn brightness down low enough to "swallow" it, that is.
Other people have complained in online forums about single-chip DLPs' temporal dithering in particular. They say it blurs or "pixellates" the trailing edges of brightly colored objects moving rapidly across the screen — sort of like an artificial "tail," where the object is the "comet." I myself have never noticed this effect.
Still, because dithering does produce artifacts, TI has come up with a way to minimize the need for it. It's called dark video enhancement, or DVE.
DVE puts one or two additional segments on the color wheel. In addition to two red, two green, and two blue segments, now a seventh and (depending on the implementation) possibly an eighth segment are introduced for the express purpose of creating what might be called a dark-video subframe (or two subframes) of the entire video frame.
When a red color-wheel segment moves between the light source and the DMD chip, the DMD creates a red subframe. Just the intensity of red is taken into account. When a green segment swings around, the DMD switches to creating a green subframe. When a blue segment is in the proper position, the DMD switches again to take just the intensity of blue into account.
So, logically, when a dark-video segment is active, the DMD produces pixels whose intensity corresponds only to the gray-scale information at the low end of the luminance range. The higher luminance levels are ignored so that the gray-scale resolution at the low end can be given a bit depth that has been increased by one or two bits.
I'm not sure what color the dark-video segment actually is, or whether it is clear or possibly a neutral shade of dark gray. I do know (because the WR article tells me so) that the DMD is "driven only by green-channel data during these segments. Since green contributes nearly 60 percent to the [overall] luminance of each pixel ... this gives a reasonable approximation to having full gray scale resolution for all three colors."
The best I can tell, DVE has been introduced in front projectors only, as of now. I have yet to find a rear projector which has it. (Of course, really expensive front projectors with three DMD chips don't need it, since they lack a color wheel in the first place.)
More on DLP technology in the next installment.
Saturday, July 23, 2005
61" DLP HDTV
This year, 2005, inaugurates DLP technology that (using a clever trick) simulates full-fledged 1920 x 1080 resolution, for 1080i and 1080p support. Full-fledged 1080p resolution (or, actually, at 2048 x 1080 pixels, the slightly better "2K") is now used in digital cinema projectors in commercial theaters.
A 61" Samsung HLN617W DLP-based rear projector, a 2003 model, sits in the living room of yours truly. It has 1280 x 720p resolution.
In the middle of the chip, as you can see, is a silvery window the size of a small postage stamp. The window reproduces the 16:9 aspect ratio of the TV screen. Inside this window is the "guts" of DLP technology: the Digital Micromirror Device, or DMD. It's not one mirror; it's 921,600 tiny mirrors arranged in 720 rows of 1,280 micromirrors each.
|How a DLP rear-projection TV works|
Each individual micromirror is either on or off at any given moment. If on, light from the light source strikes it and bounces through the projection lens onto the screen. If off, the micromirror swivels off-axis on its tiny hinge, and light reflected from it strikes only a light-absorbent material, ideally never reaching the screen.
At any given instant, every micromirror is assigned a primary color, red or green or blue, by virtue of the white light from the light source passing through a filter of that color built into a rapidly spinning wheel. As the wheel spins, the three colors alternate so rapidly that the eye (usually) cannot tell that the TV is producing a red, then a green, then a blue picture on the screen. Instead, the eye sees a full-color image.
A revised image is flashed on the screen numerous times during each video frame. A video frame lasts 1/30 second, but this time period is divided into (at least) three subframes of red, green, or blue. Actually, in most DLP implementations these days, the color wheel has two red segments, two green segments, and two blue segments — plus one or two extra segments I'll get to later — so each video frame's time period is subdivided even further than that.
What's more, each primary-color subframe is further split into multiple time slivers in order to produce a grayscale. More on that below.
Question: how do you render a broad enough range of grays (mentally factoring out hue entirely) when the micromirrors can be only on or off — nothing in between?
Answer: you subdivide each primary-color subframe into several time slices or intervals. Then (let's assume you want the individual micromirror to produce a 50% gray, ignoring hue) you arrange for the micromirror to be on during half of those time slices and off for the other half.
Basically, a technique called binary pulse width modulation (PWM) is used to decide during which intervals, and for how many of them, the mirrors will be on. For instance, for 50% gray the interval whose duration is exactly half that of the subframe is "on," the other intervals "off." For 25% gray, the interval which is one-quarter that of the subframe is the only interval that is "on." For 37.5% gray, the 1/4-subframe (25%) and 1/8-subframe (12.5%) intervals are both activated. And so on.
The intervals that are activated for a particular level of gray are added to get their total duration. Then that duration is divided into many, many ultra-wee time slices which are parceled out evenly over the entire period of the primary-color subframe. This technique is called "bit-splitting."
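The PWM interval selection is nothing more than the binary representation of the gray level; bit-splitting then redistributes the resulting on-time. A sketch of my own, under those assumptions:

```python
def pwm_intervals(gray, bits=8):
    """Fractions of the subframe that are 'on' for a given gray level.
    Each set bit of `gray` activates the binary-weighted interval of
    matching duration: bit 7 is the half-subframe interval, bit 6 the
    quarter-subframe interval, and so on."""
    return [1 / 2 ** (k + 1)
            for k in range(bits)
            if gray & (1 << (bits - 1 - k))]

def bit_split(gray, slices=64, bits=8):
    """'Bit-splitting': take the total on-time and spread it as equal
    slices across the subframe. Returns how many slices are on."""
    return round(sum(pwm_intervals(gray, bits)) * slices)

assert pwm_intervals(128) == [1/2]        # 50% gray: the half interval
assert pwm_intervals(96) == [1/4, 1/8]    # 37.5% gray, as in the text
assert bit_split(96) == 24                # 37.5% of 64 slices are on
```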
Bit-splitting minimizes the effects of certain objectionable grayscale artifacts which some people are able to see when their eyes move to follow an object traveling across the DLP image. (I'm getting much of this information, by the way, from "DLP from A to Z," an excellent article by Alen Koebel in the August 2005 issue of Widescreen Review magazine. Magazine subscribers may download the article in PDF form by clicking on the appropriate hotlink on this page.)
One of the few drawbacks of DLP technology is its "rainbows." Just as swift eye movement can introduce grayscale oddities, it can cause the seemingly full-color picture to fractionate, just for an instant, into distinct red, green, and blue images. What is happening is that the primary-colored images flashed to the screen in rapid succession are getting fanned out, as the eye moves, to different areas of the retina. The neural pathways of human vision can't integrate the images' colors into, say, the intended teal or lemon or mauve.
Some people are more sensitive to "rainbows" than others. For some people, they completely ruin the experience of watching a DLP display. For others — I'm lucky enough to be in this group — they don't happen all that often, if at all.
To minimize the "rainbow" effect, TI learned two tricks early on. First, put two wedges of each primary, red, green, and blue, on the color wheel, not just one. For each rotation of the color wheel, there are accordingly two subframes of each primary color. Second, make the color wheel spin real fast, so that each primary-color subframe is real brief, and there are lots of primary-color subframes within each video frame's timespan. (Those micromirrors will have to pivot a lot faster, of course, but what the hey?)
|Using 3 DMDs|
You use beam-splitting prisms to route white light from a light source to three DMD chips operating in parallel. One beam contains red light for the red image, one green-for-green, and one blue-for-blue.
Thus, three images are formed independently by their respective DMDs; each is given its own primary hue. The three primary-color images are optically combined and shepherded together to the screen.
There's no color wheel in 3-chip DLP. Nor does the video-frame time interval have to be as finely chopped to allow for multiple primary-color subframe intervals — which makes things a lot simpler and allows a more smoothly gradated grayscale.
The problem with 3-DMD DLP projectors is that they're much more expensive than 1-DMD projectors. As a guess, I'd say that the optics alone cost more — imagine having to make sure that three beams carrying three images line up just right.
The real cost boost is in adding two more chips, though. The cost of making DMD chips is fairly high because the manufacturing yield — the percentage of chips that aren't rejected as imperfect — isn't all that high, yet. (As expected, the "yield curve" rises as TI's experience with making the chips accumulates over time.) So when you triple the number of chips, you add quite a lot to the cost of the projector.
As far as I know, there are no 3-chip rear projectors. Only front projectors (including those for digital cinema) are expected by customers to be pricey enough to warrant going to three DMDs.
The Achilles' heel of DLP is its inability to render deep blacks, in the way that CRTs, on the other hand, give deep, satisfying blacks.
No fixed-pixel technology — not DLP, not plasma, not LCD or its variants, LCoS, D-ILA, and SXRD — does black well. As the Widescreen Review article says, "The evolution of DLP technology, since its commercialization in 1996, could be described, as much as anything, as a search for better blacks."
Each fixed-pixel technology has a different reason for weak blacks. In the case of DLP, poor blacks are due to light from the light source that reaches the screen when it shouldn't.
Imagine an all-black image. The uniform-black signal causes every micromirror to turn off, meaning that it swivels away from the direction that carries reflected light to the screen. Instead, all the light is supposed to be beamed into a light absorber.
Well, what happens if the light absorber doesn't absorb all the light? The remaining light from the light source ends up bouncing around ("scattering") inside the RPTV or front projector ... and eventually arrives at the screen, polluting the image by washing it out just slightly.
A second problem has to do with the spaces between the micromirrors. Light reaching these gaps can easily end up bouncing off the DMD's underlying structure and eventually reaching the screen.
A third problem concerns the tiny "dimple" or "via" in the middle of each micromirror where it attaches to its hinge. The larger the via, the greater the amount of light that bounces crazily off it and reaches the screen when it shouldn't.
Problem One, the scatter problem, was addressed early on, at the time of the first commercial DLP displays, by providing the best light absorber possible. Those early DLP displays had an "on-off contrast ratio" of about 500:1. That is, an all-white field produced 500 times more luminance at the screen than an all-black field.
Problem Two, the gap/infrastructure problem, was addressed in two stages. The first, says WR, "was to coat the hinges and surrounding structure under the mirrors with a low-reflectance (light absorbing) material." That "dark metal" technology was dubbed DarkChip1, and it boosted on-off contrast ratio to 1,000:1. Later on, DarkChip3 reduced the size of the gaps between the mirrors, and contrast ratio went up even further.
In between DarkChip1 and DarkChip3 came DarkChip2, which addressed Problem Three, the dimple/via problem. The dimple or via was simply made smaller. That boosted the contrast ratio by giving less area of irregularity on each micromirror off of which light could bounce askew. That change alone deepened black renditions, improving contrast. And it further boosted contrast ratio by offering more area on each micromirror that could properly reflect the source light, making the overall picture brighter.
DarkChip3, when it came along, made the dimple or via yet smaller than DarkChip2 did, in addition to shrinking the inter-mirror gaps. Now, with DarkChip3, DLP black levels are much more respectable than in earlier versions. In fact, they're way better than those on any other non-CRT display.
One of these DarkChip revisions — I'm not quite sure which — made the micromirrors swivel further off-axis in their off position than they had before, reducing image-polluting light scatter within the display or projector. In the jargon, the "micromirror tilting angle" was increased from 10° to 12°. My guess is that this happened at the time of DarkChip1, since the tilting-angle increase surely exposed more of the structure under the mirrors.
More on DLP technology in An Eye on DLP, No. 2.
Friday, July 22, 2005
|Martin Scorsese in|
My Voyage to Italy
Specifically, why we watch movies.
Mr. Scorsese, whose films as a director have gathered awards and acclaim, from Mean Streets (1973) to The Aviator (2004), takes us by the hand into the world of the Italian films he grew up with as an American son of transplanted Sicilian immigrants. Coming of age in New York City in the late 1940s and early 1950s, he saw the Neorealist renaissance of post-WWII Italian cinema through a tiny glowing screen, as filler programming in the early days of television. Even if he didn't fully understand the films of the likes of Roberto Rossellini at such a tender age, they moved him deeply.
They still do, because more than anything else, they demand of us our full and honest humanity and compassion for all God's children.
Scorsese's two-part documentary originally aired on Turner Classic Movies, on cable, in 2003, a follow-on to his 1995 A Personal Journey with Martin Scorsese Through American Movies. As far as I know, My Voyage to Italy has not aired since, nor is it scheduled to air again — much less in high-definition. (The documentary is available as a two-disc DVD set, which is how I'm viewing it.) So I am cheating a bit to include this post in my What's on HDTV? blog. I admit that.
Yet I think it a worthy inclusion because, in this day and age — as was mentioned in Parade magazine's "Personality Parade" column in the July 17, 2005, issue — movies are no longer made for adults. They're made for adolescents. And the souls of adolescents are not the most notable in the world for their compassion.
Today, whether adolescents or grownups, we are not accustomed to wearing our hearts on our sleeves. After watching My Voyage to Italy, you may want to have your tailor get in touch with your surgeon.
Not that there's an ounce of sentimentality, schmaltz, or false pathos in the films Scorsese presents to us, or the way he presents them — just the opposite. For example, at the outset of the journey he leads us on (I admit I haven't yet gone further than Neorealism), he gives us his own personal tour of the films that were called Neorealist because they took an un-glamorized view of life as it was lived in a torn-up country, Italy, which had long been prey to fascism and then been overrun by German Nazis and American liberators in turn.
Under the circumstances, there wasn't much basis for glamorization. Which meant there was all the more basis for compassion, humanity, and fellow-feeling. Rossellini and other Neorealists like Vittorio De Sica showed everything, mean and noble, in Italian and European life at the time, and made the recovering world know that no man is an island.
What's more, they showed the world that movies, a once-fluffy art form, were uniquely able to reveal to us the totality of the human condition, warts and all, and thereby stir our tenderest feelings.
I have to admit, when it comes to tender feelings, or even to being open to seeing beauty amid ugliness and flowers amid dirt, I personally can't hold a candle to Martin Scorsese. My personal inclination is to try to kayo the ugliness and clean up the dirt, and only then — just maybe — appreciate the beauty.
Which is why I feel so glad to have this man of the gentlest possible voice and of the widest possible mercy sit beside me in the movie theater, in effect, and whisper in my ear what to look for and what to think about as the extended clips of old B&W movies in a foreign tongue (with subtitles) unreel before my eyes.
The movies Scorsese extols are powerful, but it's all too easy for an ugliness-kayoer/dirt-cleaner-upper from way back like me to resist their power at a pre-conscious level. Which is the reason, I suppose, that when Rossellini made The Miracle, a segment of his film L'Amore (1948), in which a woman gives birth to a child she believes is the Christ child, Catholic officials denounced it.
The Miracle's main character, Nanni, portrayed by Anna Magnani, has more religious faith than sense or sanity. She's taken advantage of by a stranger, a wanderer she thinks is St. Joseph, who she imagines has come along in answer to her prayer. (The stranger is played by Federico Fellini.) Pregnant, she winds up an outcast. As we watch the tender scene in which Nanni has her baby, alone, without prospects, redeemed by the simple fact that she has brought forth the "miracle" of new life, Scorsese tells us what it all means to him:
Rossellini is communicating something very elemental about the nature of sin. It's a part of who we are, and it can never be eliminated. For him, Christianity is meaningless if it can't accept sin and allow for redemption. He tried to show us that this woman's sin, like her madness, is nothing in comparison to her humanity.
"It's odd to remember," Scorsese continues, "that this film was the cause of one of the greatest scandals in American movie history. When The Miracle opened at the Paris Theater in Manhattan, Cardinal Spellman, who was the cardinal of New York at the time, and the Legion of Decency — that was the Catholic watchdog organization — called the movie a blasphemous parody, and they mounted a campaign to have it pulled from the theaters."
The case wound up in the Supreme Court, which in 1952 struck down movie censorship based on allegations of blasphemy. That's important because it paved the way for the frankness of Scorsese's and other filmmakers' films in coming decades.
But more important to me than the legal precedent is what The Miracle says about compassion and humanity, sin and redemption. It says that the true Christian attitude is not to be such an ugliness-kayoer and dirt-cleaner-upper as I know I tend to be and as Cardinal Spellman and the Legion of Decency were back in their day. It is rather to see the beauty in each of God's creatures and to view their flawed humanity, not only with the utmost of honesty, but also with the utmost of compassion.
And to be reminded of that is, ultimately, why we watch movies.
Tuesday, July 19, 2005
Now I'd like to take a look at the contributions to film restoration that are arriving from an entirely different quarter: John Lowry's Lowry Digital Images.
LDI has been responsible for returning the first three Star Wars films "to their original glory for DVD," says "Creating the Video Future," an article in the November 2004 edition of Sound & Vision. Ditto, the Indiana Jones collection. Ditto, all twenty of the James Bond films. Ditto, Snow White and the Seven Dwarfs; Singin' in the Rain; North by Northwest; Gone with the Wind; Now, Voyager; Mildred Pierce; Roman Holiday; Sunset Boulevard; and Citizen Kane.
The Lowry process is not unlike that used for digital intermediate work (see The Case of the Data-Processing DP). The first step is to scan into the digital domain each frame of the film negative, often at 4K resolution (see 2K, 4K, Who Do We Appreciate?). At four seconds per frame, I calculate it must take 691,200 seconds, or 192 hours, to scan in a two-hour film at 4K. That's a ratio of 96 scan hours per one running hour, excluding coffee breaks!
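The arithmetic, for the record, assuming film's standard 24 frames per second:

```python
def scan_hours(runtime_hours, fps=24, secs_per_frame=4):
    """Hours needed to scan a film frame by frame at 4 s per frame."""
    frames = runtime_hours * 3600 * fps
    return frames * secs_per_frame / 3600

# A two-hour film: 172,800 frames, 192 scan hours,
# i.e. 96 scan hours per running hour
assert scan_hours(2) == 192.0
```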
Lowry stores the entire film as data on servers that hold 6 terabytes apiece. A terabyte is 2 to the 40th power, or 1,099,511,627,776 bytes. Think of it as 1,000 gigabytes. Lowry has a total of 378 terabytes of storage hooked to his high-speed network. Perhaps the Pentagon would like to send a task force to check out how it all works.
That data network at LDI also interconnects 600 dual-processor Macintosh computers whose task it is to process all that stored data, using complex algorithms selected and parameterized according to the needs of the particular movie, in order to tweak the look of each of the 172,800 frames that make up a two-hour flick.
After a visual review of each scene on a video monitor and a possible re-parameterization and rerun of the automatic process, exceptionally problematic material is touched up "by hand."
When all that is done, the result is a "digital negative." The digital negative is said by Sound & Vision's Josef Krebs to be "every bit as good as the camera negative — but without the wear, tear, and deterioration." The digital negative can be used to spin off versions for DVD, HDTV, standard-def TV, or digital cinema, a filmless methodology which involves using digital video projectors to throw the image onto the screen in commercial theaters. Or it can be output on film for optical projection.
|Darth Vader in Star Wars|
"When you use a computer," says Lowry, "if you can understand the problem, it's always solvable." Torn film, fading, chemical deterioration — they all succumb to digital remedies.
4K-resolution scanning is not always used as the basis for his digital restorations, Lowry says ... after having extolled 4K as uniquely giving the ability "to capture everything that is on that negative, which probably has a limit somewhere in the 3- to 4K range."
The "information on a film," he says, "usually rolls off between 3 and 4K. We've experimented at 6K, but, frankly, it's pointless on a standard 35mm film frame until there are better camera lenses and film stocks." So, 4K looks like a winner when it comes to really, really capturing the whole image present on the original camera negative. Also, Lowry says, "with 4K the colors are more vibrant, subtle, and sharper."
Even so, Lowry did standard-def (!) transfers for North by Northwest; Now, Voyager; and Citizen Kane. Snow White and the Seven Dwarfs and Giant were his first digital restorations at high definition (1080p, I assume). Next came the step up to 2K, for Roman Holiday and Sunset Boulevard; they had to be captured digitally from duplicates of their original negatives, which offered lower resolution to begin with.
Lowry's current maximum resolution, 4K, is newer yet. "Now we do most of our transfers at high-def and 2K," he says, "but we also do a whole range of movies in 4K." I'll bet the demand for 4K will, despite the extra cost, soon outstrip that for 2K and HD. I'll furthermore bet that there's already little remaining demand for SD restorations.
Along those lines, Lowry says that he is doing nine of the twenty James Bonds in 4K, the rest in HD. I'll wager the day will soon come when the films' owners wish they'd scanned them all in 4K.
Film preservationist Robert Harris, in his "Yellow Layer Failure, Vinegar Syndrome and Miscellaneous Musings" column at TheDigitalBits.com, has sometimes been critical of the Lowry process. One reason is that Harris describes himself as a "proponent" of film grain. "Grain is our friend," he has written. The Lowry process can remove most or all of the grain in a film image. Not just the excessive grain which builds up whenever film is duplicated from camera negative to interpositive to internegative ("dupe" negative) to final print — all the grain, period.
Lowry's S&V response: "We generally try to even out the grain so changes don't interfere with the storytelling. I don't recommend taking all the grain out of anything. I used to do more of that. We took out too much grain on Citizen Kane, but on Casablanca we left more in and it looks better. There's something comforting about grain for the viewer." It sounds like Lowry has become a late convert to the grain-is-our-friend school of film preservation.
Another complaint Robert Harris has made is that the Lowry process doesn't really restore or preserve the film per se. It leaves the photographic images which were originally recorded on celluloid strictly alone. It does not yield a replicated negative or fine-grain positive separation masters for archival use. It does not result in any film elements whatever, other than (optionally) a new generation of release prints derived from the "digital negative."
But release prints are not archival. They're the end of the line, photo-optically speaking. You can't use them as sources for further duping, since they lack the negative's high level of detail, low level of grain, wide dynamic range/contrast ratio, and so forth. And, due to "blocking," the shadows and highlights in a final print are drained of nuance.
So the Lowry process places the entire burden for archiving film's images squarely in the digital-video domain. The film isn't preserved for posterity; at best, the data is.
Though I don't know that I can put my finger on the source — I believe it to have been an interview I read with prime digital-cinema mover George Lucas — I've read that Lowry's precious data eventually is remanded to the studio to which the film belongs. Or else the studio provides Lowry with the (empty) storage media in the first place, and then receives the (filled) media back at the end of the process. Something like that.
That means the studio winds up in charge of archiving the data. In the past, studios have been (shall we say) notably remiss in taking proper care of their archives.
What's more, data-file formats are notorious for their quick obsolescence. The file formats used for digital intermediates have yet to be standardized, and I'm sure those Lowry uses for "digital negatives" are just as idiosyncratic. In twenty years, will machines still be able to read these files? That's Robert Harris' main concern, I gather.
Still, the process John Lowry uses to restore not film per se but film's images is an impressive one. It's clearly the wave of the future. Hollywood is already beating a path to his door.
Monday, July 18, 2005
In the highest-definition digital TV format available over the air, 1080i, the width of the 16:9-aspect-ratio image holds fully 1,920 pixels. This number is accordingly the upper limit on "horizontal spatial resolution," which in turn means that no detail whose width is tinier than 1/1920 of the screen's width — a single pixel — can be seen.
In a 2K scan, the number of pixels across the width of the scanned film frame goes up to at most 2 to the 11th power, or 2,048. In a 4K scan, that upper limit is doubled, to 4,096.
The actual count of pixels produced horizontally in a 2K or 4K scan can be less than the stated upper limit — for example, when the aperture used by the film camera to enclose the film frame doesn't allow the image to span the whole 35-mm frame from "perforation to perforation," or when room is left along one edge of the film for a soundtrack. Furthermore, a lesser number of pixels will be generated vertically than horizontally, since the image is wider than it is tall.
Even so, 2K and 4K scans generally betoken levels of visual detail higher (though, in the case of 2K, only somewhat higher) than a so-called "HD scan" at 1920 x 1080.
With the advent of high-definition 1080i DVDs expected imminently (see HDTV ... on DVD Real Soon Now?), do yet-more-detailed film scans at sampling rates such as 2K and 4K have any relevance to us now?
Apparently, yes. Experts seem to agree that it's a good idea to digitize anything — audio, video, or what-have-you — at a sampling frequency a good deal higher than the eventual target rate. The term for this is "oversampling."
For example, The Quantel Guide to Digital Intermediate says on pp. 17-18, "Better results are obtained from ‘over sampling’ the OCN [original camera negative]: using greater-than-2K scans, say 4K. All the 4K information is then used to produce a down converted 2K image. The results are sharper and contain more detail than those of straight 2K scans. Same OCN, same picture size, but better-looking images."
Furthermore, according to "The Color-Space Conundrum," a Douglas Bankston two-part technical backgrounder for American Cinematographer online here and here:
A 2K image is a 2K image is a 2K image, right? Depends. One 2K image may appear better in quality than another 2K image. For instance, you have a 1.85 [i.e., 1.85:1 aspect ratio] frame scanned on a Spirit DataCine [film scanner] at its supposed 2K resolution. What the DataCine really does is scan 1714 pixels across the Academy [Ratio] frame (1920 from [perforation to perforation]), then digitally up-res it to 1828 pixels, which is the Cineon Academy camera aperture width (or 2048 if scanning from perf to perf, including soundtrack area). ... Now take that same frame and scan it on a new 4K Spirit at 4K resolution, 3656x2664 over Academy aperture, then downsize to an 1828x1332 2K file. Sure, the end resolution of the 4K-originated file is the same as the 2K-originated file, but the image from 4K origination looks better to the discerning eye. The 4096x3112 resolution file contains a tremendous amount of extra image information from which to downsample to 1828x1332. That has the same effect as oversampling does in audio.
In 1927, Harry Nyquist, Ph.D., a Swedish immigrant working for AT&T, determined that an analog signal should be sampled at twice the frequency of its highest frequency component at regular intervals over time to create an adequate representation of that signal in a digital form. The minimum sample frequency needed to reconstruct the original signal is called the Nyquist frequency. Failure to heed this theory and littering your image with artifacts is known as the Nyquist annoyance – it comes with a pink slip. The problem with Nyquist sampling is that it requires perfect reconstruction of the digital information back to analog to avoid artifacts. Because real display devices are not capable of this, the wave must be sampled at well above the Nyquist limit – oversampling – in order to minimize artifacts.
So avoiding artifact creation — "the Nyquist annoyance" — is best done by oversampling at more than twice the sampling frequency of the intended result and then downconverting to the target frequency.
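As a toy illustration of "oversample, then filter down," here's a Python sketch that reduces a 4K-wide row of samples to 2K by averaging neighboring pairs, a crude box filter standing in for the far more sophisticated downconversion filters real scanners and DI pipelines use:

```python
# A minimal sketch of filtered downconversion: start from a 4K-wide
# row of samples and produce a 2K-wide row by averaging adjacent
# pairs (a crude box filter, for illustration only).
def downsample_by_two(samples):
    """Average adjacent pairs: 2N samples in, N samples out."""
    return [(samples[i] + samples[i + 1]) / 2
            for i in range(0, len(samples) - 1, 2)]

row_4k = list(range(4096))          # stand-in for one 4K scanline
row_2k = downsample_by_two(row_4k)  # filtered down to 2K

print(len(row_2k))  # 2048
print(row_2k[:3])   # [0.5, 2.5, 4.5]
```

Each 2K output pixel is informed by two measured 4K samples, which is (in miniature) why a downconverted 4K scan beats a straight 2K scan.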
This would seem to imply that it would be wise to transfer a film intended for Blu-ray or HD DVD at a scan rate of at least 2K — but 4K would be better!
It's hard to know how much the "Nyquist annoyance" would mar a typical 2K-to-HD transfer, since (see View Masters) most projected 35-mm movies have way less than 1920 x 1080 resolution to begin with. Original camera negatives generally have up to 4K resolution, or even more, but by the time release prints are made from internegatives which were made from interpositives, you're lucky to have 1K resolution, much less HD or 2K.
So if the "Nyquist annoyance" became an issue with an OCN-to-2K-to-HD transfer, the transfer could be intentionally filtered below full 1920 x 1080 HD resolution and still provide more detail than we're used to seeing at the movies.
All the same, video perfectionists will surely howl if filtering is done to suppress the "Nyquist annoyance." They'd presumably be able to see a real difference between a filtered OCN-to-2K-to-HD transfer and an unfiltered OCN-to-4K-to-HD transfer, given the opportunity.
All of which tells me the handwriting is on the wall. Someday in the not-far-distant future, aficionados are going to be appeased only by HD DVD or Blu-ray discs that are specifically "4K digitally scanned," or whatever the operative jargon will be. All of the farsightedness ostensibly displayed today by studios using "HD scans" for creating standard-def DVDs — intending one day to reuse those same transfers for Blu-ray or HD DVD — will turn out to be rank nearsightedness. Sometimes the march of progress turns into a raging gallop!
Sunday, July 17, 2005
|James Stewart in Rear Window|
Robert A. Harris writes an occasional column, "Yellow Layer Failure, Vinegar Syndrome and Miscellaneous Musings," at TheDigitalBits.com. It's a hoot and a half.
Robert adores DVDs and home video in general for providing movie studios with money incentives to retrieve older movies from the dustbin, restore them to as near pristine form as possible, and issue them on plastic discs which will forever inveigle new generations of film buffs in their homes.
Reading through Robert's column archive is an education in itself. Before I started reading him, I was (vaguely) aware that really old films, mostly silents, were falling by the wayside due to nitrate-based film stocks which (who knew, way back then?) decompose over time.
But for most of the post-silent era, acetate-based "safety" stocks were used whose chemistry doesn't turn them into goop with time, heat, and humidity. So I assumed that just about every film I'd ever seen and loved in my post-1947 tenure on this earth was safe, right?
Wrong, celluloid breath!
The films of my youth are, albeit for different reasons, in as much danger as the cinematic contemporaries of The Perils of Pauline.
For one thing, apparently the switchover to safety film didn't begin until 1950. All the films of Hollywood's "Golden Age" were on nitrate, including such movies in glorious three-strip Technicolor as Gone With the Wind and The Wizard of Oz (both 1939).
Three-strip Technicolor? Its cameras made three separate negatives, with light passing through multiple optical pathways and filters, to represent each of the three "subtractive" color primaries: cyan, yellow, and magenta. Then each color's negative was transferred to a separate black-and-white "matrix," which was then bathed in dye of an appropriate hue and pressed against an originally clear, colorless print film. The dye thereby transferred permanently to the print, after which the process was repeated for the other two colors. At the end, the print film bore all three separate color "records" superimposed atop one another in perfect registration.
For restorationists like Robert Harris, classic three-strip Technicolor films can be a challenge. You can't simply take an existing Technicolor release print, assuming it's in the best possible shape, and copy it onto another piece of film or transfer it to video. It's too contrasty to copy or transfer properly.
So you have to go back at least to the black-and-white matrices, or, better still, to the original camera negatives, if they can be found. (I'm not clear on whether the matrices used in Technicolor restorations need to be ones that were never subjected to the dye imbibition process I just described.)
Modern color print film doesn't do things the way three-strip Technicolor did. Instead, it uses so-called "monopack" film, which bears three layers of emulsion, one for each subtractive primary. This is the Eastmancolor system, and it's been around (using a succession of types of Kodak film stock over the years) since the early 1950s.
But the Eastmancolor system presents its own problems to the restorationist. Some of the film stocks from the late 1950s, for instance, have been prone to "yellow layer failure" in which the yellow record on the print fades, leaving magenta and cyan to render distorted hue combinations when the print is projected.
A potential workaround is to locate "separation masters" made at the time the film was produced on fine-grain, low-contrast positive black-and-white film stock. One fine-grain "sep master," or just "sep," would often be made for each color record, using appropriate filters. (For example, I'm guessing — since colors on negative film are the complements of the colors in a scene — a green filter would be used to make a record on black-and-white film of the magenta record on the original camera negative.)
Making sep masters was a nod to archival needs, since they played no real role in producing the actual release prints. If seps were made at all, they were often not checked to see if they were made properly. Even if they were, years of uncertain storage conditions may have caused differential shrinkage of the celluloid, meaning the three seps no longer superimpose to make one image. The list of potential woes for the restorationist is a long one ...
Of course, the most basic woe is the inability to find any usable "elements" — pieces of film — for entire movies or portions thereof. A lot of films or scenes have simply disappeared, or so it's seemed until troves of cinema's past glory have from time to time been rediscovered collecting dust in some collector's attic or misfiled in some musty film archive. That's when restoration wizards like Robert Harris and James Katz can really swing into action.
One of the primary reasons usable elements are hard to find for the early widescreen epics Robert Harris specializes in is that, in those days, release prints to be used in movie theaters were direct copies of the cut-and-assembled camera negative. To make each new print, the negative had to be reused to make "contact print" after contact print ... which was hell on the negative. Eventually it got so scratched and worn, not to mention dirty and falling apart due to failed editing splices, that it became useless.
Later on, the practice of making a small number of "interpositives," from each of which a manageable number of "internegatives" were generated, arose. The internegatives were then used to make the release prints, a procedure which preserved the camera negative for posterity.
Another problem plaguing the restorationist's task is "vinegar syndrome," the tendency of chemical reactions in so-called safety film stock's base to gradually and unstoppably ruin the film element by turning the cellulose triacetate in the base to acetic acid, the essence of vinegar. In older 35mm prints using the four-track stereo soundtrack popular in the 1950s, the magnetic stripe used for the additional sound channels even acts as a catalyst in accelerating the vinegar syndrome. Jeepers!
Harris' column is about more than just film restoring, by the way. It's really about the love he has, and we all should have, for the history of the silver screen. Thanks to DVDs, and to laserdiscs before them, and now to high-definition video, that history and that glory is not likely to fade out, ever, as Harris is continually pointing out. As long as money can be made resurrecting the treasures of the past and putting them out on home video, Hollywood will not let (say) the stunning dance routines of Fred Astaire and Ginger Rogers die. (Many of their movies from the 1930s are due on DVD soon.)
So, each time a nugget of ancient or not-so-ancient movie gold reaches DVD, Robert is apt to highlight it in his column. Those who are interested in film classics but don't particularly care about the travails of restoring them will find Robert's words fun reads, even so.
You can read more about Robert A. Harris and James Katz in "Robert A. Harris: Tilting at Hollywood" here. The Rear Window project is documented here.
Saturday, July 16, 2005
Here again are two versions of a single photo:
|For gamma = 1.8||For gamma = 2.5|
The image on the left is intended to be viewed on a Macintosh computer monitor, whose inherent "attitude" toward rendering contrast is to prefer the subtle over the dramatic. The image on the right is aimed at a PC monitor, which takes an intrinsically more dramatic approach to rendering contrast. So, in view of their two different target monitors, the image on the left has more contrast built in than the image on the right. On your computer monitor — no matter which type it is — the image on the left will look more dramatically "contrasty" than the one on the right, simply because it has more contrast built into it.
A monitor's (or TV's) "attitude" toward rendering contrast is betokened by its gamma. A Mac monitor typically has a gamma of 1.8: the level of the input video signal is raised to the 1.8 power to compute how much light, or luminance, the monitor's screen will produce.
A PC monitor typically has a higher gamma figure of 2.5. Its input video signal is raised to the 2.5 power, not the 1.8 power.
Gamma makes the luminance output of a monitor or TV nonlinear with respect to input signal levels. If luminance were linear — i.e., if gamma were 1.0, not 1.8 or 2.5 — the monitor or TV would provide too little contrast.
Here is a graph (click on it to see a larger version) comparing gamma 1.0, gamma 1.8, and gamma 2.5:
The horizontal axis represents the level of the input video signal, relative to an arbitrary maximum of 10.0. This signal can be an analog voltage or a digital code level; it can represent either the black-and-white "luminance" signal per se or any one of the three primary color components, red, green, or blue.
The vertical axis represents the amount of light (or luminance) the monitor or display produces for each input voltage or code level. Again, this value is computed and plotted relative to an arbitrary maximum value — this time, 1.0.
Notice that only the gamma-1.0 plot is linear. Real TV's and real monitors typically have gammas between 1.8 and 2.5, so their gamma plots curve.
Take the gamma-1.8 plot. Specifically, the plot sags downward such that low input levels toward the left side of the graph — say, input signal level 4.0 — have luminances way lower than they would be for gamma 1.0. Then, as input signal levels rise toward 10.0, the gamma-1.8 plot gets closer to the gamma-1.0 plot. It never quite catches it — until the relative input level is at 100% of its maximum, that is — but it's like a racehorse closing fast at the end of the race, after falling behind early on.
When the input signal level is low — say, 2.0 — increasing it by 1.0 to the 3.0 level has only a minor effect on luminance output. But going from 7.0 to 8.0, again a boost of 1.0, increases luminance output quite a bit. The fact that the gamma curve sags downward and then hurries upward causes details to emerge more slowly out of shadows than out of more brightly lit parts of the scene.
What's true of gamma-1.8 is even more true of gamma 2.5. Gamma 2.5's plot sags downward even more than gamma 1.8's, so shadows are yet deeper and, by comparison, highlights are more dramatic with gamma 2.5 than with gamma 1.8.
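The shadow-compressing behavior of these curves is easy to check numerically. This Python sketch, using the 0-to-10 input scale from the graph above, compares the luminance gained by a 1.0-step boost at the low end versus the high end of the signal range:

```python
# How a gamma curve compresses shadow detail: compare the luminance
# gained by raising the input from 2.0 to 3.0 (shadows) versus
# from 7.0 to 8.0 (highlights), on a relative 0-to-10 input scale.
def luminance(level, gamma, max_level=10.0):
    """Relative light output for a given input level and gamma."""
    return (level / max_level) ** gamma

for gamma in (1.0, 1.8, 2.5):
    step_low = luminance(3, gamma) - luminance(2, gamma)
    step_high = luminance(8, gamma) - luminance(7, gamma)
    print(gamma, round(step_low, 3), round(step_high, 3))
```

For gamma 1.0 the two steps are identical; for gamma 1.8 and even more so for 2.5, the same 1.0-step boost buys far more light in the highlights than in the shadows.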
A gamma curve is actually a straight line when plotted on log-log axes:
Here, I've replaced the original axes with their logarithmic equivalents. That is, I've replaced each tick mark along the horizontal axis with one for its point's logarithm. For instance, instead of plotting 4.0 at 4.0, I've plotted it at log 4.0, which is 0.602. 9.0 is now at log 9.0, or 0.954. 10.0 is at log 10.0, or 1. And so on.
A similar mathematical distortion has been done to the vertical axis. The result: gamma "curves" that are all straight lines!
This is to be expected. If

luminance = (signal)^gamma

then, as math whizzes already know,

log(luminance) = gamma × log(signal)
Math whizzes will also recognize that gamma is the slope of the log-log plot. This slope has a constant value when gamma is the same at every input signal level, which is why the log-log plot is a straight line.
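A few lines of Python confirm that the slope really is gamma, no matter where along the curve you measure it:

```python
import math

# Sanity check: for luminance = signal ** gamma, the slope of the
# log-log plot (rise over run in log units) equals gamma everywhere.
GAMMA = 1.8

def loglog_slope(x1, x2, gamma=GAMMA):
    y1, y2 = x1 ** gamma, x2 ** gamma
    return (math.log10(y2) - math.log10(y1)) / (math.log10(x2) - math.log10(x1))

print(round(loglog_slope(0.2, 0.3), 6))  # 1.8
print(round(loglog_slope(0.7, 0.9), 6))  # 1.8
```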
Why is this important? One reason is that, as with the two versions of the single image shown above, the gamma that was assumed for the eventual display device — and built into the image accordingly — ought truly to be the gamma of the actual display device. Otherwise, the image can wind up with too much or too little contrast.
When we say that the assumed display gamma is "built into" the image, in the world of television and video we're talking about gamma correction. A TV signal is "gamma-corrected" at the camera or source under the assumption that the eventual TV display will have a gamma of 2.5, which is the figure inherent in the operation of each of the three electron guns of a color CRT.
So the original camera signal (or signals, plural, one each for red, green, and blue) gets passed through a transfer function whose exponent is, not 2.5, but rather its inverse, 1/2.5, or 0.4. (Actually, for technical reasons having to do with the eye's interpretation of contrast in a presumably dimly lit TV-viewing environment, the camera's gamma-correction exponent is altered slightly from 0.4 to 0.5.)
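Here's the idealized round trip in Python: the camera encodes with exponent 0.4, the CRT decodes with 2.5, and the scene luminance comes back out linear. (This sketch ignores the 0.4-to-0.5 adjustment just mentioned, as well as the near-black linear segment that real broadcast standards include.)

```python
# Idealized gamma-correction round trip: camera exponent 0.4
# (1/2.5) cancels the CRT's display gamma of 2.5 exactly.
def gamma_correct(linear, exponent=0.4):
    """Camera-side encoding of a linear scene luminance."""
    return linear ** exponent

def crt_display(signal, gamma=2.5):
    """CRT-side decoding of the gamma-corrected signal."""
    return signal ** gamma

scene = 0.18  # mid-gray scene luminance, relative to 1.0
signal = gamma_correct(scene)
print(round(signal, 4))               # the gamma-corrected signal, ~0.5
print(round(crt_display(signal), 4))  # 0.18, back to linear
```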
When the display device is not a CRT, it has no electron guns, and its "gamma" is completely ersatz. If the device is digital, a look-up table or LUT is used to instruct the actual display — say, an LCD panel or DLP micromirror device — how much luminance to produce for each digital red, green, or blue component of each pixel. Each color component of each pixel is, usually, an 8-bit digital value from 0 to 255. When that 8-bit code value is looked up in the LUT, another 8-bit value is obtained which determines the actual output luminance.
What will that second 8-bit value be? If we set aside questions about how the TV's contrast, brightness, color, and tint controls work, the answer depends mainly on the ersatz "gamma curve" built into the LUT.
If the LUT's ersatz "gamma curve" is based strictly on the exponent 1.8 or the exponent 2.5, the graphs shown above depict how the digital LUT translates input signal values to output luminances.
But the "gamma" of the LUT may reflect a different exponent, one not found in the graphs, such as 1.6 or 2.2 or 2.6. The TV's designers may have chosen this "gamma curve" for any one of a number of reasons having presumably to do with making the picture more subjectively appealing, or with camouflaging the inadequacies of their chosen display technology.
Or the "gamma" of the LUT may reflect more than one exponent! That is, the "gamma" at one video input level may be different from that at another. At a relatively low 20% of the maximum input level (video mavens call it a 20 IRE signal), the ersatz "gamma" may be, say, 2.5. At 40% (40 IRE), it may be 1.6. At 80% (80 IRE), 2.2.
So there is nothing to keep a digital TV from having a wide range of ersatz "gamma" exponents built into its LUT, such that (at the extreme) a slightly different "gamma" may be applied to each different input code level from 0 through 255 (or each IRE level from 0 to 100).
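Such a multi-exponent table can be sketched in Python, using as breakpoints the purely hypothetical values from the example above (2.5 at or below 20 IRE, 1.6 up to 40 IRE, 2.2 above); an actual TV's table would be the designer's secret:

```python
# Sketch of an 8-bit "ersatz gamma" look-up table in which the
# exponent varies with input level. The breakpoints are the
# hypothetical ones from the text, not anything a real TV uses.
def ersatz_gamma(fraction):
    """Pick a gamma exponent based on relative input level (0.0-1.0)."""
    if fraction <= 0.2:
        return 2.5
    elif fraction <= 0.4:
        return 1.6
    else:
        return 2.2

lut = []
for code in range(256):
    frac = code / 255
    lut.append(round((frac ** ersatz_gamma(frac)) * 255))

print(lut[0], lut[255])   # 0 255: the endpoints map straight through
print(lut[51], lut[204])  # outputs at 20% and 80% input
```

Plotted on log-log axes, a table like this would show up as a bent line whose slope changes at each breakpoint.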
If the wide-ranging ersatz "gamma" underlying a digital TV's internal look-up table were to be plotted on log-log axes, it would no longer be a straight line! A straight-line log-log gamma curve assumes a single, constant gamma exponent, independent of the input code value (or analog voltage). Thus, when the gamma exponent varies with input level, the log-log gamma plot is no longer a straight line.
Even a CRT-based TV, if it processes its input signal in the digital domain, can play such games with gamma. True, its electron guns have their built-in "true gamma," but that single, simple figure (say, 2.5) can be distorted at will by means of a digital look-up table. The LUT is more powerful than the electron gun!
In the service menu of any modern television, CRT-based or otherwise, there is apt to be a way to select "gamma" — not the "true gamma" of a CRT electron gun, which can't be changed, but the "ersatz gamma curve" built into an internal look-up table.
On my Samsung DLP rear-projection TV, there is in fact a GAMMA parameter in the service menu. It can be set to values ranging from 0 through 5 (values higher than 5 apparently being meaningless). Each different setting affects the "attitude" the TV has toward contrast by (I assume) re-specifying the "gamma curve" of its internal look-up table.
As best I can tell, few if any of these GAMMA choices implement a single, simple gamma exponent that is constant for every video input level, à la an old-fashioned, "LUT-less" analog CRT. The various log-log curves are not, presumably, straight. Nor do the GAMMA numbers, 0-5, have any relationship to the ersatz "gamma" exponents being selected. For example GAMMA 2 does not choose an ersatz "gamma" exponent of 2.0.
I have yet to encounter any authoritative discussion of why these alternate GAMMA settings are put there, or how they are intended to be used. Enthusiasts on various home-video forums are wont to claim that, for Samsung DLPs like mine, GAMMA 0, GAMMA 2, or GAMMA 5 is "better" or "more CRT-like" than the original "factory" setting, GAMMA 4.
But my own experiments with these alternate settings have not really borne those claims out.
Even so, I have learned that twiddling with digital, LUT-based "gamma" (because it's so ersatz) can alter the "look and feel" of the picture on the screen in myriad ways which may well appeal to you, with your digital TV display, if not me with mine. And, in the end, that's why I've gone to the trouble of trying to explain what's going on when you adjust the "gamma" underlying your digital TV's powerful internal LUT!
Friday, July 15, 2005
It was, I think, an 8-mm or 16-mm projector which he as a budding movie buff must have bugged his bemused gas-station-owner dad to buy him, along with reels of Abbott and Costello and monster movies and old silents and cartoons which he was constantly itching to show me. I was admittedly a bit antsy about sitting through an entire movie at that age. But that was duck soup compared to sitting still while he, perched at my bench-style blackboard, chalk in hand, storyboarded every scene from the latest movie his mother had taken him to in the theater.
I was more the hands-on type, and what I could really get into, patience-wise, was our attempts, Jimmy's and mine, to overlap stills from those old stereoscopic View-Master slide reels and put 3D images on the wall! 3D was all the rage at the time, remember. And View-Masters must have been a brand-new product then.
I gather they're still around. A cardboard ring or reel contains a series of photos of, say, scenes from the latest Disney movie. Each slide is doubled along the opposite edge of the circular reel such that looking through a special viewer brings it into 3D. But the View-Master projector doesn't do that; it projects just one of the paired pictures at a time. So, Jimmy reasoned, why not get two copies of each ring and use his projector (which could also accept View-Master reels) to superimpose the other version of the image on the wall? That surely would reproduce the 3D effect, right?
We spent hours making it work, on several occasions at his house and mine. It never happened. The image on the wall never went 3D.
Yet that must have been the origin of what seems to have turned into a lifelong obsession to become, in essence, my own "view master." By that I mean watching movies in my home and having them look "just like they look in the theater."
And so do a lot of others, as the huge success of DVDs bears out. (I've lost touch with Jimmy S., but I imagine he must have the world's largest DVD collection.)
So what is "just like in the theater," then? What is it that we "view masters" are striving for?
Well, we want size. If not in the league of the local cineplex, where the screen can be bigger than any single wall in our house, at least big enough that we have to turn our heads to take in both sides of the image.
And we want quality, a commodious notion which subsumes things like high brightness, deep darkness, and vivid color, as well as overall clarity and freedom from distracting noise and artifacts.
Then there's the holy grail of grails: filmlike resolution, right? That's what high-definition video basically is: the ability to boost the accustomed resolution of our TV screens.
So, how high is high?
How much detail must the HDTV be able to resolve to be "filmlike"?
A little research reveals to me that it's not as much, not as high, as one might think. For example, The Quantel Guide to Digital Intermediate says on p. 16, "The image resolution of 35mm original camera negative (OCN) is at least 3,000 pixels horizontally; some would say 4000 ... ." Yet, the guide continues on p. 17, "The traditional chemical process of generating a final film print by copying [specifically, by film-to-film contact printing] changes the spatial resolution of the images from that held on the OCN. Although the OCN resolution is up to 4K digital pixels, the resolution reduces every time it is copied – to interpositive, internegative and finally to print, where the resolution is closer to 1K pixels."
Here, the designation "K" represents not 1,000 but 2^10, or 1,024. Either way, depending on the number of intermediate stages between the original camera negative and the projected print, the horizontal resolution of the typical pre-digital-age 35-mm movie print could be less than the horizontal resolution of a 720p HDTV, which is 1,280 pixels!
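To make the comparison concrete, here is the arithmetic, using the figures quoted above from the Quantel guide:

```python
# Rough horizontal-resolution comparison (figures from the Quantel guide).
K = 1024  # "K" here means 2**10, not 1,000

ocn_pixels = 4 * K    # original camera negative: up to "4K"
print_pixels = 1 * K  # contact-printed release print: roughly "1K"
hdtv_720p = 1280      # horizontal pixels in a 720p HDTV frame

# The release print resolves less horizontal detail than 720p HDTV.
assert print_pixels < hdtv_720p
print(print_pixels, hdtv_720p)  # 1024 1280
```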
"George Lucas’s Star Wars prequels were predominantly shot in high definition at 1920 x 1080," the guide says (p. 16). And, "while grading Pinocchio for digital projection and working at 1280 x 1024 resolution, the director and main actor, Roberto Benigni, was surprised to find his frown lines visible in the digital projected final when they were not in the film print. As a result these had to have their focus softened on the digital final!" (p. 17).
(What's "grading"? It's "individually adjusting the contrast of the [red, green, and blue] content of pictures to alter the colour and look of the film." A synonym is "timing" or "color timing.")
So even 720p HDTV can offer as much spatial resolution as we usually get at the movie theater, and 1080i can offer as much as George Lucas considered necessary for his latest Star Wars epics.
Merely gaining access to HDTV will accordingly play a key role in making each of us a "view master" in our own home domain!
Thursday, July 14, 2005
- The Lord of the Rings trilogy (2001, 2002, 2003)
- Spider-Man (2002) and Spider-Man 2 (2004)
- Seabiscuit (2003)
- The Aviator (2004)
- A Very Long Engagement (2004)
- The Passion of the Christ (2004)
- Collateral (2004)
- Ray (2004)
- The Village (2004)
According to "The Color-Space Conundrum," a Douglas Bankston two-parter for American Cinematographer, online here and here, "all went through a digital-intermediate (DI) process." (Some of the above information also came from the Internet Movie Database.)
DI is a way of doing post-production on a film after scanning its negative, frame by frame, into the digital-video realm. Once converted to bits and bytes, the image can be manipulated in numerous ways. Visual effects such as computer-generated imagery (CGI) can be added during the DI process. DI is also useful for adjusting the overall look and feel of the film, manipulating its darkness or lightness, contrast, color saturation, and hue palette via "color grading" or "color timing."
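As an illustration of the kind of manipulation grading makes possible, here is a minimal sketch. It assumes a toy lift/gamma/gain model with made-up parameter values, not any real colorist's tools:

```python
import numpy as np

def grade(channel, lift=0.0, gamma=1.0, gain=1.0):
    """Toy per-channel color grade: lift raises blacks, gamma bends
    midtones, gain scales highlights. Values are in [0, 1]."""
    out = channel * gain + lift     # linear scale plus offset
    out = np.clip(out, 0.0, 1.0)    # keep values in the legal range
    return out ** (1.0 / gamma)     # gamma of 1 leaves midtones alone

# "Warm up" an image slightly: boost only the red channel's gain.
frame = np.full((2, 2, 3), 0.5)    # flat mid-gray test patch
graded = frame.copy()
graded[..., 0] = grade(frame[..., 0], gain=1.2)
```

The same three knobs, applied per channel, are enough to shift a shot's darkness, contrast, and color cast, which is the essence of what the colorist adjusts at vastly greater sophistication.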
Once DI has been completed, the result is typically returned to the film domain via laser recorders that beam light in three separate colors onto the emulsion of a color print filmstock.
CGI and DI today go hand in hand. CGI antedates the digital-intermediate era, though; it has been around at least since "the science-fiction thriller Westworld (1973) and its sequel, Futureworld (1976)." Other CGI groundbreakers included Tron (1982) and The Last Starfighter (1984). Of course, one of the most famous early CGI pioneers was the first Star Wars film (1977).
In those days CGI, after being created in a computer, was usually projected on a screen and filmed, and optical tricks were used to combine the projected images with live-action shots. One of the first movies to place digitally composited images directly on film, in some of its sequences, was Flash Gordon (1980).
A "turning point" happened in 1998, with Pleasantville. The idea of the film was to place full-color modern-day characters in scenes that evoked black-and-white 1950s television. After the film was shot in color, a computer was used to selectively desaturate just the background on nearly 1,700 individual shots — not the whole film. This proto-DI work was not CGI; it amounted to an extreme form of color grading or timing.
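Pleasantville's selective desaturation can be sketched in miniature. Assuming we already have a matte (here, a boolean mask) marking the color foreground, the background is replaced by its grayscale luma; the mask and test image below are illustrative only:

```python
import numpy as np

def desaturate(rgb):
    """Replace each pixel with its luma (Rec. 601 weights) on all channels."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])
    return np.repeat(luma[..., None], 3, axis=-1)

# Keep the foreground (mask == True) in color; gray out the background.
frame = np.random.rand(4, 4, 3)        # stand-in for a scanned film frame
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                  # pretend this marks a character
result = np.where(mask[..., None], frame, desaturate(frame))
```

In the real film this masking had to be done shot by shot, often with hand-drawn mattes, which is why nearly 1,700 shots represented such a laborious feat.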
|O Brother, Where Art Thou?|
Cinematographer Robert Richardson shot The Aviator (2004) and then worked with visual effects supervisor Robert Legato during a post-production DI process to match the look of old two-strip and three-strip Technicolor films from the personal library of the film's director, Martin Scorsese. (The film was a biography of Howard Hughes, who was a Hollywood honcho during Technicolor's heyday in the 1930s.) This was a case of the increasingly common "data-processing DP" striking again!
In the future, it looks as if most big-budget Hollywood films will be doing "the digital-intermediate thing" — even those films with little or no CGI. Crucial to the look of what we will see in theaters will be, of course, the "colorist": the post-production technician who arranges for all those color-grading or color-timing palettes and densities to appear. But looking over the colorist's shoulder, guiding his or her decisions, will inevitably be the "data-processing DP," no longer simply a master of light and lens.
The payoff will be an increasing number of films whose look can only be described as "stunning." But there will be a second advantage for us end users as well. Transfers of movies to HDTV and DVD can now be derived directly from the finished digital intermediate, rather than by scanning back into the digital domain a finished piece of film with all of its analog imperfections. That could result in movie images being nearly as stunning in the home as they are in the cineplex.
Tuesday, July 12, 2005
In the prior installments, I introduced the idea that different color spaces have different characteristics that grossly or subtly affect the look of an image on film or video. Every type of color film stock, whether for use as a camera negative or as a positive print, has its own associated color space, meaning the gamut of colors it can actually render. The same is true of high-definition video as opposed to standard-def video; each of the two uses a (very slightly) different color space.
|The CIE XYZ gamut|
In the diagram, according to its caption, "the innermost cube represents the colors that can be reproduced by a particular RGB display device." A television or video projector, whether high-def or standard-def, is just such an "RGB display device."
Furthermore, the AC caption continues, "the curved surface illustrated by colored contour lines represents the boundary of colors visible to human beings; that it lies outside the RGB cube shows that many [visible] colors cannot be reproduced by an RGB device." That much is fairly self-explanatory, if disappointing.
As for "the outermost figure, the white wireframe rhombohedron," it "represents the colors that can be specified by the CIE XYZ primaries." The X, Y, and Z primaries, as defined by international standards of the CIE (Commission Internationale de l'Eclairage), define an all-encompassing color space. Any "lesser" color space, such as RGB, can be mapped to some subset of XYZ. XYZ's compass is so large that it even includes "colors" beyond those the eye can actually see.
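To make the "mapped to some subset of XYZ" idea concrete, the standard matrix that converts CIE XYZ values to linear Rec. 709/sRGB primaries can be used to test whether a given XYZ color lands inside the RGB display cube. (This threshold test is a simplification; real gamut checks are more involved.)

```python
import numpy as np

# Standard XYZ -> linear Rec. 709/sRGB conversion matrix (D65 white point).
XYZ_TO_RGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def in_rgb_gamut(xyz):
    """True if an XYZ color falls inside the RGB display cube [0, 1]^3."""
    rgb = XYZ_TO_RGB @ np.asarray(xyz, dtype=float)
    return bool(np.all((rgb >= 0.0) & (rgb <= 1.0)))

print(in_rgb_gamut([0.4, 0.4, 0.4]))  # a mild grayish color: True
print(in_rgb_gamut([0.1, 0.6, 0.1]))  # a very saturated green: False
```

The second color maps to a negative red component, which is exactly what "lies outside the RGB cube" means numerically.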
So the XYZ rhombohedron has many more colors (visible and otherwise) than the RGB cube has. (A "color" is some combination of "hue," such as orange or teal; "saturation," such as vivid orange or muted teal; and "value," such as light vivid orange or dark muted teal.) In particular, a range of extremely saturated colors that the eye is capable of seeing and responding to lies outside the RGB cube.
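Python's standard colorsys module can illustrate that hue/saturation/value decomposition; the specific RGB triples below are just plausible stand-ins for "vivid," "muted," and "dark" orange:

```python
import colorsys

# The same orange hue at different saturations and values (all in [0, 1]).
vivid_orange = colorsys.rgb_to_hsv(1.0, 0.5, 0.0)
muted_orange = colorsys.rgb_to_hsv(0.8, 0.6, 0.4)
dark_orange = colorsys.rgb_to_hsv(0.5, 0.25, 0.0)

h_vivid, s_vivid, v_vivid = vivid_orange
h_dark, s_dark, v_dark = dark_orange
assert abs(h_vivid - h_dark) < 1e-6  # same hue...
assert v_dark < v_vivid              # ...but a lower value (darker)
```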
Color negative and print film uses a differently positioned cube, for a different color space. Film's color space is represented by a cube whose axes are C, M, and Y, not R, G, and B. C (for cyan), M (for magenta), and Y (for yellow) are the secondary colors or subtractive primaries. Each can be made by adding equal amounts of exactly two of the additive primaries R (for red), G (for green), and B (for blue). For example, adding equal amounts of R (red) and B (blue) produces M (magenta).
Color film uses a strategy of subtracting CMY secondary colors from white light, whereas TV and video use a strategy of adding RGB primaries to make white light. That's why film uses a CMY color space whose cube does not necessarily match RGB's in the diagram above.
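The complementary relationship between the additive and subtractive primaries can be written out directly, with channel values normalized to the range 0 to 1:

```python
# Complementary relationship between additive (RGB) and subtractive (CMY)
# primaries, with channel values normalized to [0, 1].
def rgb_to_cmy(r, g, b):
    """Each subtractive primary absorbs exactly one additive primary."""
    return (1 - r, 1 - g, 1 - b)  # C blocks R, M blocks G, Y blocks B

# Equal parts red and blue light (no green) corresponds to magenta dye alone.
assert rgb_to_cmy(1, 0, 1) == (0, 1, 0)
# Black requires all three dyes at full strength (hence the K helper in practice).
assert rgb_to_cmy(0, 0, 0) == (1, 1, 1)
```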
(Printing on paper actually uses a CMYK color space, where K stands for "blacK." By itself, CMY does not produce convincing blacks, so an independent black component, K, is added to make them more solid and convincing. In what follows, I will simply ignore K.)
The CMY cube, were it to be drawn into the diagram, would for one thing have a wholly different orientation than the RGB cube, with its corners in different places. For another, the primary-color chromaticities (the x and y coordinates of the specific cyan, magenta, and yellow hues used to define the CMY cube) give the CMY cube a different color gamut than the RGB cube's.
If the CMY gamut were wider than RGB's, as it apparently is, that would mean that some of the colors available in CMY as used in film don't "make the cut" when film is transferred to video. Those colors would instead be represented by approximations drawn from within the RGB gamut.
Furthermore, for like reasons having to do with the two color-space gamuts not overlapping precisely at their edges, there may be colors available in RGB that can't be accurately reproduced in CMY.
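The crudest way a transfer could handle such out-of-gamut colors is simple clamping. Real film-to-video transfers use far more sophisticated gamut mapping, so this is only a sketch of the "approximation" idea:

```python
import numpy as np

def clip_to_gamut(rgb):
    """Crudest possible gamut mapping: each out-of-range channel is
    clamped, replacing an out-of-gamut color with a nearby in-gamut one."""
    return np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0)

# A hypothetical film color too saturated for the RGB cube (negative green):
film_color = [1.1, -0.2, 0.3]
approximation = clip_to_gamut(film_color)  # clamps to [1.0, 0.0, 0.3]
```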
In the view of Ed Milbourn at HDTV Magazine, as stated in "Ed's View - The 'Film Look'," today "the video color gamut is somewhat greater than that of film." (Here, Milbourn is not referring to the old, highly saturated look of classic Technicolor.) "This," he continues, "gives video the capability of reproducing a slightly wider range of colors than film, adding to the color 'snap' of HDTV video. All of these factors (and a few others) combine to give film color a more muted, realistic look than video."
Still, if the CMY gamut of film really is the wider one, then HDTV color is not, and cannot be, as cinematic as it would be if video were standardized on the CMY color space common to color film stocks rather than on the narrower-gamut RGB color space!
More on HDTV color in Part 4 (which I may not get to for a while).