Wednesday, November 02, 2005

More on 1080p TVs

In Sony's Groundbreaking New SXRD RPTVs I said some good things about two new HDTVs from Sony, both offering full 1080p screen resolution (but neither, unfortunately, allowing actual 1080p input). Both use Sony's version of Liquid Crystal on Silicon technology, which Sony calls SXRD.

I saw the 60" version today at Best Buy, and also a larger Toshiba DLP that likewise is 1080p-native (for more on what that means, see Eyes on the 1080p Prize). Both produced a picture that was noticeably better than the HDTVs with lesser resolution nearby. There were also some excellent Samsung 1080p DLPs (such as this one) on display.

Then I looked at the cream of the current crop: a Mitsubishi 1080p DLP-based rear projector, the WD-73927 pictured here. It was a huge 73-inch slice of video heaven, for a price in the six-grand range.

Unlike some of the others, it was in the dimly lit room where Best Buy displays its most desirable TVs. And unlike most of the other models, this one had a place in front of it to sit and ogle. Best of all, thanks to the good folks at Mitsubishi, this TV was connected to a hard-drive-based HDTV recorder feeding it an actual 1080p input signal!

The visual result was better than that of the other 1080p models, limited as they were to (presumably vertically filtered) 1080i inputs, by roughly the same margin as those models outstripped the lesser, non-1080i/p HDTVs.

All of which confirms my suspicion that TVs that do 1080p input and display will one day be the way to go.

(BTW, I was told by a salesperson at Best Buy that Samsung and several other manufacturers now offer 1080p flat panels that use LCD technology. Not LCoS, but LCD. It would seem that every non-CRT HDTV technology but plasma has jumped on the 1080p bandwagon. Can plasma be far behind?)

Monday, October 31, 2005

Eyes on the 1080p Prize

In the best of all possible HDTV worlds, 1080p would be a firm reality.

We would, that is, have TVs to which true 1080p source material could be input and displayed as such: video frames having 1,080 rows of pixels, each horizontal row containing 1,080 × 16/9 = 1,920 pixels.

A video frame? It's what you see when you hit pause on a DVD player, a single frozen image, with no movement. Moving video is really a succession of such frames, displayed one after another so fast that your eye perceives motion.

For 1080p reception, each 1,920 x 1,080 input frame of video would be received and presented fully intact, which is what the "p" (for "progressive scan") means. As transmitted, it would not be subdivided into two "i"-for-"interlaced" fields, with only the odd-numbered pixel rows present in field #1 and only the even-numbered rows, 1/60 second later, in field #2.

A pixel row is also called a "scan line." When the latter type of 1080-line scanning is done, with each video field containing only alternate scan lines, not the whole image "raster" at once, the transmission method is 1080i, not 1080p. 1080i scanning typically produces 60 fields per second, which comes to 30 frames per second.

Over-the-air high-definition television broadcasts use either 1080i or 720p: 720 rows of pixels, 720 × 16/9 = 1,280 pixels per row, progressive scan, at up to 60 non-subdivided frames per second. 1080p is not supported over the air; compared with 1080i, it would double the number of pixels transmitted per second, hogging more bandwidth than a broadcast channel can reasonably carry.

Unless ... a more efficient digital video compression scheme is used. At the time the standards for HDTV broadcasting were set, MPEG-2 was the best compression method around. Now we have MPEG-4, which squeezes digital video down far more compactly. We also have VC-1, based on Windows Media Video (version 9). Both are supported by the two emerging competitors for high-definition blue-laser DVDs: Blu-ray and HD DVD. Either format could offer us true 1080p discs without paying a huge price in data rate or storage capacity.


The blue-laser DVD gurus, however, have as yet committed to no more than 1080i. For one thing, there are few if any TVs that can input 1080p — even though a number of new models can display it. (See, for example, Sony's Groundbreaking New SXRD RPTVs.) There seems to be no agreement on how to copy-protect 1080p, for another thing.

There are yet subtler problems. For example, true 1080p promises greater visual detail than merely "deinterlacing" 1080i would imply. But how to get there?

Most newfangled TVs need to deinterlace 1080i to some progressively scanned form — unless they are based on old-fashioned cathode-ray tube technology with actual scan lines traced out by an electron beam, that is. One conceptually simple form of deinterlacing is to "borrow" the pixel rows that are not present in a given field from the other field in the same frame. Another is to interpolate the pixels between those which are actually present: to "guess" what they would contain, if they were really there.
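
To make those two simple approaches concrete, here is a minimal Python sketch (using NumPy, with hypothetical field arrays) of "weave" deinterlacing, which borrows rows from the other field, and "bob" deinterlacing, which interpolates the missing rows. It is an illustration only, not how any actual TV's video processor works:

```python
import numpy as np

def split_into_fields(frame):
    """Split a progressive frame (rows x columns) into its two interlaced fields."""
    return frame[0::2, :], frame[1::2, :]          # top field, bottom field

def weave(top_field, bottom_field):
    """'Borrow' the missing rows from the other field of the same frame."""
    rows = top_field.shape[0] + bottom_field.shape[0]
    frame = np.empty((rows, top_field.shape[1]), dtype=top_field.dtype)
    frame[0::2, :] = top_field
    frame[1::2, :] = bottom_field
    return frame

def bob(field):
    """Interpolate ('guess') the missing rows from the rows that are present."""
    rows, cols = field.shape
    frame = np.empty((rows * 2, cols), dtype=float)
    frame[0::2, :] = field
    # Each missing row is the average of the lines above and below it.
    # (The last row simply wraps around; a real scaler would clamp instead.)
    frame[1::2, :] = (field + np.roll(field, -1, axis=0)) / 2.0
    return frame
```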

There are more sophisticated approaches. One of them is to notice that the video program originated in film, and use so-called 3:2 pulldown compensation. The TV figures out which other fields of which other frames really came from the same film frame — owing to the fact that film unwinds at a mere 24 frames per second, not 30 — so it knows how to fill in the missing pixel rows in a given field with total precision.

Unless deinterlacing can leverage such special knowledge as that, the interpolated or borrowed or calculated pixels constitute "false" information. What's worse, they don't really count as "new" information at all, for they don't contain any extra detail. For that reason alone, true 1080p can improve on the "false" detail present in deinterlaced 1080i quite considerably.


But there's another reason as well, yet more subtle, why we ought to prefer 1080p. In interlaced scanning, vertical resolution — the number of horizontal lines actually visible in the image — is reduced to avoid interline flicker.

So if there are 1,080 rows of pixels, there may be only 70% of that number of visibly distinguishable lines of image content: say, 756 lines of vertical resolution.

"When adjacent lines in the frame (which are transmitted and displayed one field-time apart) are not identical," says this web page, interlaced scanning will produce a flickering effect. If the field-time is 1/60 second, you can get a tiny detail (say, white) in the first 1/60 second that disappears (turns black) in the second 1/60 second. Enough of those details coming and going, and the display as a whole will have an annoying 30-Hz flicker.

That's bad, so the camera simply filters out the smallest details present in the vertical dimension of the image. 1,080 rows of pixels then produce only, say, 756 lines of vertical resolution.

But if you know the image is going to be captured, recorded, transmitted, received, and displayed at 1080p, you don't have to filter to avoid interline flicker.


Another thing 1080p offers is a way to avoid the 3:2-pulldown problem entirely.

When film at 24 frames per second is transferred to video at 30 frames per second, the film is "pulled down" — advanced — in a herky-jerky way such that some film frames contribute their visual information to three video fields, while other film frames contribute to only two video fields. That's 3:2 pulldown, also called 2:3 pulldown.
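
Here is a small Python sketch of that cadence, with letters standing in for film frames; it is purely illustrative:

```python
def three_two_pulldown(film_frames):
    """Map 24-fps film frames onto 60-Hz interlaced fields with a 3:2 cadence:
    each group of four film frames (A, B, C, D) yields ten video fields,
    which pair up into five 30-fps video frames."""
    fields = []
    for i, frame in enumerate(film_frames):
        repeats = 3 if i % 2 == 0 else 2     # alternate three fields, then two
        fields.extend([frame] * repeats)
    return fields

print(three_two_pulldown(list("ABCD")))
# -> ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```

Paired off in order, those ten fields form the video frames AA, AB, BC, CC, and DD; the two mixed frames (AB and BC) are the ones that cause trouble downstream if a deinterlacer doesn't recognize the cadence.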

If such film-transferred video is not deinterlaced with totally accurate 3:2-pulldown compensation — TV user menus often call it "film mode" — then those video frames whose two fields come from two different film frames can produce "jaggies," or worse, on the screen. Fie!

1080p can avoid that problem entirely by recording video at the same frame rate as film uses: 24 fps. That way, there's a 1:1 correspondence between film frames and video frames. (To avoid the flicker sometimes associated with such a low frame rate, the TV can display each video frame three times in each 1/24 second frame time, for an effective 72 frames per second.)


In the real world, we don't even know as of now whether there will ever be any true 1080p DVDs, much less whether they will be made with all the vertical resolution 1080p "ought to have." They might be filtered in deference to CRT-based interlaced HDTV displays.

But we can always hope true 1080p DVDs are on the way, with no vertical filtering!

Monday, October 24, 2005

Waiting to Buy an HDTV? Smart Move!

Are you biding your time before investing in an HDTV? If so, you may be smart. As I mentioned in Sony's Groundbreaking New SXRD RPTVs, HDTV makers are improving the breed at a rapid clip just now. Not only has Sony just introduced a pair of attractive new rear projectors employing the version of Liquid Crystal on Silicon (LCoS) technology Sony calls SXRD, for Silicon X-tal Reflective Display, but JVC is also just entering the LCoS race with a vengeance.

Its 70FH96 rear projector is just about to hit the stores, says the Nov. 2005 Sound & Vision magazine (p. 26). This 70" 1080p behemoth, which sells for $6,000, is also an LCoS implementation, though JVC calls it D-ILA, for Direct-drive Image Light Amplifier. When it's high-def — and 1080p is the highest of high-def — JVC dubs it HD-ILA.

The same magazine also contains a blurb (p. 28) about Pioneer's latest plasma flat panels, among them the 43" PDP-4360HD for $4,500. These plasmas "boast better contrast than ever before," says S&V, mediocre contrast being one of the principal banes of plasma up to now, due to grayish black levels. The PDP-4360HD also has "multiple HDMI connectors" — so you can use more than one digital signal source without falling back on analog connections — "in a separate media box" — so you can make all external connections not to the flat panel itself, mounted on a wall, but to a separate box that connects to the flat panel by just two or three wires.

And if you realize you'll be needing a TV with a digital tuner one fine day but don't give a hoot about high definition, check out the blurb on page 20 about RCA's new digital standard-definition TVs. The 27" 27F634T, listing for $359, is shown. It has the "old-fashioned" squarish screen that we're so accustomed to and downconverts signals (including HD signals) to good old 480i, the "standard resolution for all analog signals." Translation: you can now spend what you've long been expecting to spend for a new TV — three figures — and get one that looks just like TVs of yore, in terms of picture shape and quality. But it's future-proof. It receives off-the-air digital broadcasts, so when analog transmissions bite the dust in the next year or four, you'll be ready even if you don't subscribe to cable or satellite TV.

Sunday, October 23, 2005

Looking Forward to High-Def DVD?

One of the hot topics these days in HDTV land is the imminent arrival of high-definition DVDs. There's a format war a-brewin', with two mutually incompatible versions on tap: Blu-ray and HD DVD.

These two camps have apparently stopped negotiating for a compromise solution to the forthcoming so-called "blue laser" DVD rollout, which will allow DVD discs to hold far more programming than they do today. The blue laser's shorter wavelength lets it focus on smaller, more densely packed pits on the disc's surface. Since high-def content uses up a lot more data, changing the laser from red to blue is what enables DVDs to go high-definition.

You can get the full scoop in the just-published "Winter 2006 Buyer's Guide to Home Theater" from The Perfect Vision magazine. The article "High-Definition DVD is Coming" gives a rundown on the two technologies and lists the first titles that will be available when HD DVD launches next spring. (The launch is now expected to come in March '06, followed in April by the Blu-ray premiere.)

Meanwhile, the Nov. 2005 Sound & Vision (p. 17) says, "So HD DVD and Blu-ray [with negotiations shelved] remain headed for a showdown. But wait, soap fans, there's a new cliffhanger: Samsung has just announced that sometime next year, it will market a machine that can play both formats."

(Note that both kinds of machines will play all standard-def DVDs, new and legacy, just fine. Standard-def DVDs will continue to come out in droves into the foreseeable future, no matter what happens in the blue-laser format wars.)


If there is indeed a dual-format combo player in the works, that will soften the blow of a format war considerably. And it will concomitantly undercut the Nov. '05 Widescreen Review effort to boost the High-Definition Disc Consumer Advocacy Alliance. That loose coalition of movers and shakers in the world of DVD and HDTV consumers would like to burden video equipment manufacturers and Hollywood studios with a greater sense of responsibility than heretofore shown. Not only is the looming format war over blue-laser DVD a travesty, but other choices the two camps may be making might wind up "punishing the innocent for the sins of the guilty."

The "High-Definition Disc Consumer Advocacy Alliance FAQ" piece (pp. 80ff.) informs us that the analog component video outputs of the new DVD players may down-rez HD discs to 480p, the resolution now obtained with most so-called "progressive scan" DVD players. Blocking HD on analog would be done — if it is done — to foil piracy, since analog output signals are not copy-protected the way digital signals are.

The new players can also "revoke" their own ability to play the new discs. If the powers-that-be learn that a certain player (or is it player model?) has been modified to thwart digital copy protection, it will be identified as beyond the pale by having its "encryption key ... placed on a revoked list and transmitted to players on newly released discs."

The article is unclear about this. On the one hand, "that player would then be unable to play all high-definition discs." On the other hand, "every single player of that manufacture and model number could find itself on the revoked list."

The article also mentions other ways in which the actual implementation of blue-laser DVD may punish early adopters of HDTV technology, whose gear might not work optimally with the new players and discs.

But future buyers of HDTV sets are probably fine, be it noted. As long as their sets support either a DVI or, preferably, an HDMI digital video input connection with so-called HDCP copy protection — assuming their DVD players don't get "revoked" — no problem. (All the same, it probably behooves one to have at least two such inputs: one for DVD and one for, say, an HD cable or satellite box.)

Thursday, October 20, 2005

Sony's Groundbreaking New SXRD RPTVs

The November 2005 issue of Widescreen Review contains a review (download it in PDF form here, if you are logged on to the site) of the hot new KDS-R60XBR1 HDTV from Sony. Sony's first SXRD rear projector, it produces true 1920 x 1080p resolution on a nice, big 60" screen for $5,000. Its little brother, the KDS-R50XBR1, offers 50" of widescreen 1080p resolution for $4,000. There is also a review, not yet online, of the KDS-R60XBR1 in the same month's issue of Home Theater magazine.

SXRD stands for Silicon X-tal (for "Crystal") Reflective Display. It's Sony's own version of LCoS: Liquid Crystal on Silicon, a type of microdisplay which has recently been seen in pricey front projectors but until now not in rear projectors.

As a microdisplay, SXRD creates its image on a small rectangular chip (or actually three of them, one for each primary color) about the size of a postage stamp. This little panel of pixels is made of a liquid crystal material, just as an LCD (liquid crystal display) chip is. But unlike an LCD chip, an LCoS/SXRD chip doesn't transmit light shining through it from behind. Instead, light reflects off an aluminized layer behind the liquid crystal, and because the drive circuitry sits behind that mirror layer rather than between the pixels, the pixels can be packed more tightly together, eliminating the so-called "screen door" effect wherein the LCD pixel structure presents itself to the eye.

Because the light passes through the crystal twice, not once, the contrast ratio is vastly improved, resulting in a bright picture with truly dark blacks. The structure of the chip also allows Sony to brag of lickety-split pixel response time, so moving elements of the image don't develop ghost trails.

Sony's LCoS/SXRD rear projectors compete directly with various manufacturers' (Samsung, HP, Mitsubishi) 1080p DLP-based RPs. Those sets use a trick to coax true 1920 x 1080p resolution out of a 960 x 1080p microdisplay chip. Texas Instruments, the originator of DLP, calls it SmoothPicture. Other folks call it "wobulation." A mirror in the projection path is pivoted ever so slightly to shift the image by half a pixel horizontally as each video frame is being projected. (For more on this, see An Eye on DLP, No. 3.) Critics say that produces eye fatigue. Sony's SXRD chips are natively 1920 x 1080, so no such trick is needed.

Bill Cruce's review of the KDS-R60XBR1 in Widescreen Review is a truly glowing one. In terms of picture quality, about the only minor flaw he cites is oversaturated green. Geoffrey Morrison in Home Theater is more guarded with his accolades, mentioning that both red and green were too strong in the unit he reviewed — though they are the correct shades of the respective hues. "So objects may look really green," he says, "but they're not greenish-yellow or greenish-blue like the colors that many digital displays can produce." But: "One side of the screen had a bluish-green tint [on test patterns], while the other side had a reddish-orange tint." The review by Cruce in WR mentions no such anomaly.

However, Morrison also says that the unit he was reviewing was a preproduction model, and that Sony claims the color accuracy will improve in units shipped to stores.


Morrison also found the improvement in picture detail with the KDS-R60XBR1's 1920 x 1080 resolution, compared with a 720p display, no better than "subtle but noticeable." You would expect a bigger jump in apparent resolution, since 1080p more than doubles the number of pixels of 720p. But, though this set has a 1080p display, it can't input a 1080p signal!

That's right — the best signal it can take in is 1080i. Now, I am given to understand that 1080i fare is intentionally filtered to remove about 30% of its vertical resolution, because any "i"-for-interlaced scan will produce an artifact called "line twitter." When a video detail is small enough to occupy just one scan line, every second video "field" will omit it. The detail will accordingly blink on and off! Vertical filtering obviates that, making sure that no detail is so small as to completely disappear every 1/30 second.

So I'll bet Morrison was watching vertically filtered 1080i material (from a D-VHS tape player) scaled to 1080p by the KDS-R60XBR1. That's really not a good way to judge the ultimate picture quality of a 1080p-native display.

But Morrison had little choice, since (a) as yet there exists no 1080p consumer-video source hardware, other than a high-def PC hookup; (b) there is little if any 1080p source material, a situation that will probably change when high-definition DVDs arrive; and (c) the KDS-R60XBR1 can't accept a 1080p input signal anyway. Both reviews mention Sony's reason for omitting the capability: the present lack of an industry standard for copy protecting 1080p input.

I mention this because I think it a good idea for potential buyers of the KDS-R60XBR1 to wait until there is a successor model that does accept 1080p.


The KDS-R60XBR1 has stunning black levels and snappy contrast renditions, both reviewers say. Most HDTVs other than direct-view CRTs have problems rendering deep blacks, and CRTs are usually limited in how bright they can get. The KDS-R60XBR1 can do blacks nearly as well as a CRT (0.006 or 0.007 foot-Lamberts, per HT) and dazzle you with its brightness (93.31 ft-L). That gives a whopping full-on/full-off contrast ratio of fully 13,330:1.
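
The arithmetic behind that figure is simple; here is a quick Python sketch, using the 0.007 ft-L black reading, which is the one that yields the published ratio:

```python
def full_on_full_off_contrast(white_ftl, black_ftl):
    """Full-on/full-off contrast ratio: peak-white luminance over black-field luminance."""
    return white_ftl / black_ftl

print(round(full_on_full_off_contrast(93.31, 0.007)))   # -> 13330, i.e. 13,330:1
```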

"The best plasma we've measured had a black level of 0.023 ft-L; the best RPTV ... 0.080 ft-L," writes Morrison. "Most of the front projectors we've measured have a higher black level than this 60-inch RPTV. Suffice it to say, I was impressed." Clearly, the Sony SXRD RPTV "out-blacks" any existing DLP RPTV.

Part of the reason for these outstanding numbers is the KDS-R60XBR1's Advanced Iris, which automatically contracts, reducing the amount of light reaching the screen for dark scenes, and then opens up for bright scenes. Morrison says you can just barely detect it working, and if it bothers you, you can defeat it by selecting one of six preset iris levels manually. Even the full-open setting produced an impressive 3,100:1 contrast ratio.

The two reviews seem to disagree about how well the KDS-R60XBR1 does video processing such as deinterlacing and resolution scaling, with the WR reviewer being wholly thumbs-up about it and the HT reviewer noting some flaws. I have no idea why there was a difference in this department.

But on the whole, it seems clear that the initial Sony KDS-RnnXBR1 SXRDs come right out of the starting gate as the RPTVs to compare all others to ... and that's saying quite a lot.

***

After I wrote the above the Nov. 2005 Sound & Vision arrived, with a review of the Sony 50" SXRD model, the $4,000 KDS-R50XBR1. Al Griffin's opinion was much like those of Bill Cruce and Geoffrey Morrison: "gorgeous picture ... natural color ... deep, CRT-like blacks ... fine resolution."

Griffin also complimented the set's "extensive feature set and picture tweaks, which go well beyond many other televisions." As the lab results and discussion online here mention, "Only minor tweaks using the red, green, and blue gain and bias controls in the set’s Advanced Video menu were needed to get [color temperature] perfect — no service-menu adjustments needed."

Color temperature is supposed to be the same value, 6500K, at all brightness levels. Most TVs don't give you that desired degree of uniformity, giving the picture a blue or red cast instead. Sometimes, there is a different false tint at different brightness levels. Such false tints are most apparent in a black-and-white picture, but also affect color pictures.

The fact that the Sony puts red, green, and blue gain and bias controls in a user-accessible menu is also unusual. RGB gain and bias are the main controls used by a professional calibrator to calibrate color temperature and grayscale. (The latter includes avoiding any green tint, not just red or blue, at the various levels of brightness.)
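
To give a feel for what those controls do, here is a deliberately oversimplified Python sketch: "bias" nudges the dark end of each color's drive and "gain" scales the bright end. It's a toy model, not Sony's actual signal path, and the 4 percent blue-gain trim is an invented number:

```python
def drive(level, gain=1.0, bias=0.0):
    """Toy model of one color channel's drive. `level` is the input video level
    from 0.0 (black) to 1.0 (white); bias shifts the dark end, gain scales the
    bright end."""
    return max(0.0, min(1.0, level * gain + bias))

# A set that runs slightly blue in the highlights might be tamed by trimming
# blue gain; a red cast in the shadows, by trimming red bias.
for level in (0.1, 0.5, 1.0):
    rgb = (drive(level), drive(level), drive(level, gain=0.96))
    print(level, tuple(round(c, 3) for c in rgb))
```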

So the Sony SXRDs let you calibrate them on your own!

The S&V review, like the other two, noted that the Sony doesn't support 1080p HDTV input signals, even though the screen output is always 1080p. The only other minus factor the reviewer cited was the lack of a signal-strength meter to help you tune in over-the-air digital channels. Even so, the Sony pulled in all the reviewer's local stations, including the most troublesome one, without any fuss.

Oh, and the review also chides Sony's remote for not having a backlit keypad. When that's the extent of the negatives, while the positives are so prepossessing, the TV is clearly a big winner.

Sunday, October 16, 2005

Eye on HDTV, October '05

It's been a while since I posted to this blog — since July, in fact. Recently I've been enjoying the baseball playoffs in HDTV and wondering what it might take to get more homes HD-capable, sooner, so fewer of us will miss the fun!

The first big thing is to get prices down, obviously ... and that's been happening. About a year and a half ago I paid $3700 for the 32" Hitachi plasma I'm watching baseball on in my basement rec room. Today you can't buy a plasma that small — the smallest is 37" and most are 42" and 50". The small flat panels are all LCDs now. In this weekend's Circuit City flyer, there's a 32" Panasonic LCD listed for $1900, "before $190 savings."

Almost two years ago I paid $4900 for the 61" Samsung DLP rear-projection TV that I have in my living room. In the same weekend flyer, a Panasonic 61" LCD rear-projection TV lists for $3000, "before $300 savings." Or, go to the Circuit City website to find a Samsung 61" DLP rear projector for $3325 after $175 savings.

So prices have come down up to 50%, if you're willing to switch technologies, or up to 33% if you're not. Y-e-e-ss-sss!


The second needed thing is to do something about all the many sources of confusion which make potential HDTV buyers leery. That hasn't really happened yet. If anything, the marketplace is more confusing now than it was in 2004.

In the Circuit City flyer we find plasma and LCD flat panels; CRT, DLP, and LCD rear projectors; and even a direct-view CRT set with a traditional picture tube ... and those are just the HDTVs. We also find a slew of just regular TVs of the type we've had for years and years ... now on the way out, of course. When you can buy a Sony 26" HD-ready LCD flat panel for $1350 after savings, do you really want to buy a near-obsolete SDTV?

Remember, you'll need something like a 32" SDTV to be able to view a widescreen picture at the same overall size. Circuit City presently sells a Sony 32" SDTV for $550. The Sony 26" HD-ready LCD flat panel is double that in cost, admittedly ... but it is HDTV. And you can watch widescreen fare on it without annoying black bars appearing above and below the image.

Still and all, such considerations are confusing. In a few years, there will only be widescreen TVs, and virtually all of them will be HDTVs.


Not literally all, you ask? Why won't every TV in (say) 2009 be an HDTV?

Well, they may be ... but there is something in between SDTV and HDTV today, and it may well persist into tomorrow. It's EDTV, for extended-definition television.

EDTVs have, usually, just 480 lines of resolution: 480 rows of pixels, stacked up and down on the screen, each row containing ... well, uh, containing some even larger number of pixels; it depends. HDTVs have at least 720 lines or rows of pixels, and that number can go as high as 1,080 lines/pixel rows.

Any TV that has fewer than 720 lines is either SDTV or EDTV. As I say, usually the exact number of lines is precisely 480. The TV is SDTV if it shows the odd-numbered lines first, then the even-numbered lines a fraction of a second later. That's called interlaced scan. If all the lines are shown together each time the screen is refreshed, it's progressive scan. Progressive scan gives a more solid image with no visible scan lines. And there is twice as much detail presented over time; thus, the designation "extended definition."

You can buy a Magnavox 42" widescreen plasma EDTV this week at Circuit City for $1700. Compare that to a Philips 42" progressive-scan widescreen plasma HDTV for $2250 after savings, or a Samsung 42" progressive-scan widescreen plasma HDTV for $2700 after savings.

Yes, it's decidedly confusing ... way too confusing.


Another source of confusion is the distinction between "HD-ready" sets and "HD built-in" sets. The former don't have an internal "ATSC tuner," while the latter do.

The ATSC tuner pulls in digital over-the-air (OTA) signals, using an antenna. (Remember those?) All OTA HDTV broadcasts are digital. So are some OTA SDTV broadcasts, on the same channels — that is, some shows on these channels are HD, some SD.

Someday in the next few years, though, all OTA broadcasts will go digital. Right now, most OTA SDTV broadcasts are analog, expecting a different kind of tuner: NTSC.

The question is, do you need a built-in ATSC, or digital, tuner right now? You don't if you get all your digital channels — HD and otherwise — from cable or satellite. You do if you want OTA reception via antenna.

What if you buy an HDTV that lacks an ATSC tuner — an "HD-ready" monitor — and later regret it? You can buy a standalone "set-top box" with an ATSC tuner and hook it to your monitor at that time. Or you can get cable or satellite.

But, yes, it's another source of confusion. That's why Uncle Sam is mandating that all HDTVs built from now on be built with internal ATSC tuners. We're on the cusp of that mandate taking effect. Meanwhile, Circuit City is still offering several "HD-ready" models that presumably represent bargains — even if they're "last year's models" — since internal ATSC tuners are by no means cheap.


Another mandate of the Feds is putting the ability to receive "digital cable" right in every HDTV built from now on.

"Digital cable" involves a slew of cable channels that are being transmitted over the cable-TV wire in bits and bytes. They're typically the channels numbered 100 and above, and they include all the HD channels. Instead of renting an external cable box to pick up these channels, you can pick them up directly with a TV that (the advertising says) has "CableCARD."

In other words, the TV is "digital cable-ready."

Actually, a TV that is digital cable-ready provides just a slot into which a credit card-sized CableCARD may be inserted. You rent the card from your cable company in lieu of a box. With the card inserted in the slot, the TV suddenly can pick up digital cable channels.

Technically, the CableCARD is not needed if you just want to pick up non-scrambled digital cable channels. For example, your cable company may choose not to scramble your local stations' over-the-air digital signals when retransmitting them to you, which would let you skip the CableCARD for those channels. Unfortunately, I have yet to encounter any reports of this theoretical possibility actually working in real life.

The CableCARD costs fewer dollars per month than a digital cable box. But the box can do more. It provides an interactive program guide to make it easier for you to find and access (or set reminders for) scheduled programs up to several days in advance. It allows you to order video-on-demand and pay-per-view programming with your remote, rather than by telephone.

Also, you can often option up the external box to one containing a digital video recorder — for more bucks per month and a lot more viewing flexibility. I find my DVR-equipped HD boxes (I have two) a godsend.

Many cable companies are gradually, over the next few years, switching to all-digital channels. Then all their customers will need either a box or a CableCARD. Soon, a new generation of CableCARD will let the card do everything a box can do ... but the new card won't work in current slots. If you're set on CableCARD, hate the box, but want all the features, this might be a reason for you to postpone buying an HDTV.

More confusion, no?


Another thing you need to consider is whether the HDTV you're eyeing offers an HDMI (or DVI) high-definition digital interface.

It's a jack on the back that lets you connect a cable box or DVD player with an equivalent jack. The signal travels over the wire digitally, as bits and bytes, which means it can be displayed cleaner and crisper than if it traveled as an analog signal.

DVI is good, HDMI is better. The latter incorporates digital audio signals and some other goodies that DVI doesn't support. But DVI is good enough, and if you have a cable box with a DVI output and a TV with an HDMI input, the one can be converted to the other by means of an adapter cable.

If your HDTV has neither DVI nor HDMI — I'm totally ignoring yet another digital option, Firewire — then the only way to route a high-definition signal to it from external gear is via the three-headed monster called "component video" — if it's "wideband," that is, and has enough "bandwidth" to carry a full HD signal in analog form.

True, wideband component video can give you a perfectly respectable HD picture. The all-digital HDMI or DVI picture can beat it only marginally. You don't absolutely need HDMI/DVI.

Now, that is. Rumor has it that the high-def DVD players slated to hit the market next year may not output HD at full resolution over wideband component video. Why not? An analog component-video connection can't be copy-protected. HDMI/DVI is typically copy-protected. Hollywood studios don't want you to make unlimited copies of movies in high-def.

So it is not, in my humble opinion, a good idea to buy an HDTV that lacks HDMI/DVI entirely. (Both of mine have DVI.) In fact, it won't be long before most buyers start insisting on two or more HDMI (or DVI) inputs, one for a cable or satellite box, one for a DVD player, and so forth.

And things just get more confusing, right?


Sorry about that. Because watching TV in high-definition is really great. Baseball is much more interesting, with a clean, wide, sharp, colorful picture, augmented with room-filling digital sound to give you the crack of the bat and the roar of the crowd.

And when you can record the game in glorious high-definition and then watch it when you want to, so much the better. With an HD cable box with built-in DVR, you can zap the commercials, hit pause, play with your cats, take a nap ... and miss nothing.

It's definitely habit-forming ... you've been warned!

Tuesday, July 26, 2005

An Eye on DLP, No. 3

In An Eye on DLP, No. 2, a follow-up to (ahem) An Eye on DLP, No. 1, I tried to give some idea how front projectors and RPTVs that use Texas Instruments' Digital Light Processing technology produce images with (reasonably) smooth gradients. A minimum of banding and false contouring can be had with even a consumer-level display using just one digital micromirror device and one color wheel.

Even though the "bit depth" of the "grayscale resolution" is nominally just a barely sufficient 8 bits, that depth can effectively be extended to 10 or 11 bits, using tricks. An early, crude way to do that bit-depth boost is "dithering." "Dark video enhancement," involving a redesigned color wheel, now replaces dithering as a more subtle and successful method of keeping false edges out of darker portions of images.


Now I'd like to talk about another subtle and successful trick which the DLP mavens at TI have dreamed up. It's SmoothPicture, a way to get a DMD with 1080 rows and 960 (not 1920!) columns of micromirrors/pixels to display a 1920 x 1080, maximally high-definition picture that is progressively scanned — i.e., 1080p!

"A DMD that supports SmoothPicture contains an array of pixels aligned on a 45-degree angle," writes Alen Koebel in the August 2005 issue of Widescreen Review magazine. "TI calls this an offset-diamond pixel layout." The revised layout contrasts with the ordinary pixel arrangement, in which all the micromirrors' edges line up with the frame of the DMD.

With this new alignment, from each micromirror can be derived not one but two pixels on the screen. How? Koebel writes, "The incoming image is split into two subframes — the first containing half of the image’s pixels in a checkerboard pattern (odd pixels on odd scanlines and even pixels on even scanlines), and the second containing the remaining half of the pixels."

"During the first half of a video frame," Koebel continues, "the first subframe is displayed on the full [micromirror] array, addressed as 1080 rows of 960 columns. Just before the start of the second half of the video frame, a mirror in the projection path is moved [slightly] to shift the image of the DMD on the screen by exactly half a pixel horizontally. The second subframe is then displayed on the DMD during the second half of the video frame."


The illustration to the right may help in visualizing what is going on. (Click on it to see a larger version.) At the bottom is a representation of part of a mirror. This, I believe, is a separate mirror, one that is "in the projection path," not one of the micromirrors. It swivels slightly such that it forms first one pixel on the screen, and then a second, slightly offset pixel next to it.

Accordingly, the pixels are diamond-shaped, not rectangular. It's too bad Mitsubishi owns the name DiamondVision, no?

Cheating, you say? Not true 1080p? Two points. One, there is a DLP chip that produces full-fledged, honest-to-goodness 1080p — or, actually, it does even better. It produces 2K horizontal resolution, fully 2,048 pixels across by 1,080 up and down. This digital micromirror device is called the DC2K, for "digital cinema 2K," and it's used in groups of three in (you guessed it) expensive digital-cinema projectors.

The DC2K is way too pricey for consumer-level gear, even in single-chip configurations. This is because it's physically larger than other DMDs. The larger the DMD, the harder it is to produce in quantity without flaws. The manufacturing yield is low, making the price correspondingly high.


The second point in favor of SmoothPicture is that the "slight loss of resolution, mostly in the diagonal direction," due to the overlapping pixels on the screen, "has an intended beneficial effect for which the feature is named: it hides the pixel structure."

"This is not unlike the natural filtering that occurs in a CRT projector, due to the Gaussian nature of the CRT’s flying spot," Koebel goes on to say. (By that he means that the moving electron beam in a CRT lights up a tiny circular area of phosphors: a spot. The luminance of the spot rises and then falls again in a bell-shaped ("Gaussian") pattern, proceeding from edge to edge.

So, says Koebel, "TI’s SmoothPicture is probably the closest a pixel-based display has yet come to the smooth, film-like performance of a projection CRT display."


"Comparative measurements by TI show that a 1080p SmoothPicture rear-projection display exhibits higher contrast at almost all spatial frequencies (in other words, more detail) than two popular competing LCoS rear-projection displays," writes Koebel. I'm guessing that he's referring here to something experts in imaging science call the "modulation transfer function," or MTF.

My understanding of the MTF, which is admittedly quite limited, is that it quantifies how well a display preserves contrast, or modulation, at each level of fine detail. It matters because of a quirk of human vision: an image appears much sharper when it has higher modulation (that is, a higher contrast ratio in its fine detail). At a lower modulation/contrast ratio, the image appears less sharp, even though, technically, it has the same resolution!

So I assume that TI's SmoothPicture display gives better contrast ratios at fine levels of detail than its LCoS competitors do, and that the higher modulation results in subjectively sharper pictures with subjectively more detail.


Another topic I'd like to touch on briefly is also contrast ratio-related: DynamicBlack.

DynamicBlack is TI's name for automatic iris control. In front projectors there is often a user-adjustable circular iris in the light path which may be widened or contracted much like the iris in the eye. When you buy a front projector and screen, you have many options to choose from as to how large the screen is, how far away from the projector it is, and what the "gain" of the screen is. (Gain has to do with how much of the projector's light the screen directs back toward the audience's retinas. The higher the gain, the brighter the picture.)

With all these variables, it's easy to wind up with a picture that is too bright. One possible remedy: lower the contrast control on the projector. A better remedy: close down the projector's iris somewhat.

But then dark scenes may appear washed out, lacking in detail. You'd like to open the iris all the way just for those scenes.

Under other installation conditions, you might want to do just the opposite. You'd prefer to open the iris wide for bright scenes to get the maximum dazzle effect, then close the iris as much as it can be closed to make dark scenes really dark.

This kind of thing, in fact, is what TI's DynamicBlack is designed to do, automatically.

In so doing, it increases the on-off or scene-to-scene contrast ratio of a DLP display to, says Koebel, "well in excess of 5,000:1; some [displays] may reach 10,000:1."

Not that DynamicBlack actually lowers the display's black level, the way DarkChip technology does. It accordingly has no effect on the “ANSI” contrast ratio measurement for the display, which compares light and dark areas in a single scene. (It is this single-scene or "simultaneous" contrast ratio that the modulation transfer function is concerned with; see above.) DynamicBlack merely makes transitions from high-brightness scenes to low-brightness scenes seem more dramatic.
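
Here is a toy Python model of the idea; it is not TI's actual algorithm, and the luminance figures are invented:

```python
def iris_gain(average_picture_level, min_gain=0.2):
    """Toy auto iris: close down (reduce light output) as the scene darkens,
    never dropping below `min_gain` of full output."""
    return min_gain + (1.0 - min_gain) * average_picture_level

native_white, native_black = 100.0, 0.05      # hypothetical ft-L figures
for apl in (1.0, 0.1):                        # a bright scene, then a dark one
    g = iris_gain(apl)
    print(f"APL {apl}: white {native_white * g:.1f} ft-L, "
          f"black {native_black * g:.4f} ft-L")
```

Within any one scene the whites and the blacks scale together, which is why only the scene-to-scene contrast ratio benefits.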

And that, for now, ends my investigation of the state of the DLP art. I may, however, come back to the subject in future installments. Stay tuned!

Sunday, July 24, 2005

An Eye on DLP, No. 2

In An Eye on DLP, No. 1, I showed how a postage-stamp-sized array of 1,280 micromirrors across by 720 micromirrors vertically could make a tiny high-definition image that can then be projected onto a large screen. The technology involved is Texas Instruments' Digital Light Processing, or DLP.

The DLP "light engine" uses either three digital micromirror device (DMD) chips, or one:


Either way, red, green and blue images — in the three primary colors, that is — are formed separately and meet at the screen or at the retina of the eye. Each video frame is either spatially (with three DMDs) or temporally (using one DMD and a color wheel) divided into three subframes, one for each color primary. White source light is changed into the appropriate color for each subframe by means of wavelength-subtracting prisms or transparent colored segments of the spinning color wheel.

But, aside from the hue thus optically afforded it, each subframe is in effect a mosaic in black, white, and shades of gray.


An important question is, how many shades of gray?

The simple (much too simple; see below) answer is that consumer DLP displays and projectors divide the luminance range from black to white (ignoring hue) into 2^8, or 256, steps.

This means that real-world luminance, which actually falls anywhere along the continuous tonal range from black to white, is summarized as 256 discrete shades of gray. Alternatively stated, the "grayscale resolution" used for consumer-level DLP displays is 256. Yet another way of putting it is that the grayscale's "bit depth" is 8 bits per primary color component: 256 is 2 to the 8th power, and 8 bits are exactly what it takes to represent all 256 possible integer values from 0 to 255.
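
Put in code (a trivial Python illustration):

```python
def quantize_8bit(luminance):
    """Map a continuous luminance value in [0.0, 1.0] onto one of 2**8 = 256 levels."""
    return round(luminance * 255)

print(quantize_8bit(0.5))    # -> 128: the nearest of the 256 available shades
```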

An 8-bit bit depth is not ideal; it's an engineering compromise. The problem is that the grayscale steps at lower luminance levels are visually too far apart. Instead of seeming to grade smoothly into one another, they form "bands" or "false contours" on the screen.

The banding or false contouring problem tends to go away as the luminance level rises, because each one-step increment becomes a smaller and smaller percentage of the level it is added to. But in darker scenes and in dimly lit areas of ordinary scenes, banding or false contouring can be objectionable.
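
A quick way to see why, ignoring the gamma encoding that complicates the real picture:

```python
# Relative size of a one-step increment at various points on an 8-bit scale.
for level in (4, 16, 64, 200):
    print(f"step from {level} to {level + 1}: a {100 / level:.1f}% jump in luminance")
```

Near black a single step is a large fractional change, so the eye can spot the boundary; near white the same step is a fraction of a percent and blends invisibly.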


According to "DLP from A to Z," an excellent article by Alen Koebel in the August 2005 issue of Widescreen Review magazine, "Professional [DLP] video sources use 10 bits per component; professional displays should be able to reproduce at least 8,192 (213) gray levels. DLP Cinema projectors, arguably the most advanced manifestations of DLP technology, are currently able to reproduce more than 32,000 levels." (Magazine subscribers may download the WR article in PDF form by clicking on the appropriate hotlink on this page.)

"More than 32,000" equates to 215, or 32,768 levels of gray.

Koebel doesn't make it crystal clear, but I'm guessing that the main reason for the lower grayscale bit depth on consumer (i.e., non-professional) DLPs is that they're 1-chip, not 3-chip. Because of the need to use a color wheel, the time of each video frame has to be divided into three parts — or, rather, at least three parts, depending on how many segments there are on the color wheel. In one place in his article, Koebel does say that "having full gray scale resolution for all three colors ... is not feasible in the time available." I think this is what he means by that.

This problem is only exacerbated by the fact that the color wheel is rotated at a speed high enough to have "four (called 4X), five (5X), or six (6X) sets of RGB filters pass in front of the DMD chip during a [single] video frame." This is done to minimize the "rainbow" artifacts I discussed in An Eye on DLP, No. 1. But it reduces the duration of each primary-color subframe and thereby lowers the number of micromirror on-off cycles that can be squeezed into it. That in turn makes for an upper limit on the gray-scale resolution.


Several strategies are used to offset the banding/contouring problem at low luminance levels which results from not really having enough bit depth or gray-scale resolution. One of these strategies is "dithering."

In spatial dithering, if a pixel needs an "in-between" shade of gray that is not available due to bit-depth limitations, the pixel is formed into an ad hoc unit with adjacent pixels so that, among all the pixels of the unit, the average gray shade is correct.

Temporal dithering averages different shades of gray assigned to a particular pixel from one video frame to the next to get what the eye thinks is an in-between shade.

"Combined, spatial and temporal dithering typically add two or three bits of 'effective' resolution to the gray scale," says the WR article, "at the cost of increased noise and flicker in the image." If we assume an actual gray-scale resolution of 8 bits, dithering can nominally simulate a bit depth of 10 or 11 bits.


The algorithm actually used for spatial and temporal dithering is much more complex than my description suggests. Suffice it to say that dithering is an only semi-intelligent attempt to modify the gray levels assigned to an image's pixels in order to smooth over false contours. (And remember that gray shades translate into colors: each pixel of each red, green, or blue subframe, before the optics of the DLP light engine tint it, is simply its own "shade of gray.")

On my Samsung 61" rear projector, the dithering algorithm is apparently responsible for a "stippling" or "pebbling" effect I can see in black areas of the screen when looked at up close. For example, when there are black letterboxing bars on the screen, I can see patternless, busily moving "noise" in the renering of black, if I get within inches of the screen — assuming I don't turn brightness down low enough to "swallow" it, that is.

Other people have complained in online forums about single-chip DLPs' temporal dithering in particular. They say it blurs or "pixellates" the trailing edges of brightly colored objects moving rapidly across the screen — sort of like an artificial "tail," where the object is the "comet." I myself have never noticed this effect.


Still, because dithering does produce artifacts, TI has come up with a way to minimize the need for it. It's called dark video enhancement, or DVE.

DVE puts one or two additional segments on the color wheel. In addition to two red, two green, and two blue segments, now a seventh and (depending on the implementation) possibly an eighth segment are introduced for the express purpose of creating what might be called a dark-video subframe (or two subframes) of the entire video frame.

When a red color-wheel segment moves between the light source and the DMD chip, the DMD creates a red subframe. Just the intensity of red is taken into account. When a green segment swings around, the DMD switches to creating a green subframe. When a blue segment is in the proper position, the DMD switches again, this time taking just the intensity of blue into account.

So, logically, when a dark-video segment is active, the DMD produces pixels whose intensity corresponds only to the gray-scale information at the low end of the luminance range. The higher luminance levels are ignored so that the gray-scale resolution at the low end can be given a bit depth that has been increased by one or two bits.

I'm not sure what color the dark-video segment actually is, or whether it is clear or possibly a neutral shade of dark gray. I do know (because the WR article tells me so) that the DMD is "driven only by green-channel data during these segments. Since green contributes nearly 60 percent to the [overall] luminance of each pixel ... this gives a reasonable approximation to having full gray scale resolution for all three colors."

As best I can tell, DVE has been introduced in front projectors only, as of now. I have yet to find a rear projector that has it. (Of course, really expensive front projectors with three DMD chips don't need it, since they lack a color wheel in the first place.)

More on DLP technology in the next installment.

Saturday, July 23, 2005

An Eye on DLP, No. 1

Samsung HLN617W
61" DLP HDTV

Digital Light Processing (DLP) is one of the big "players" in HDTV technology. Texas Instruments invented it in 1987 and introduced it commercially in 1996. In 2002, TI introduced the first DLP implementation that boasted hi-def resolution: 1280 x 720 pixels. (720 rows of pixels is the minimum vertical resolution considered high definition.)

This year, 2005, inaugurates DLP technology that (using a clever trick) simulates full-fledged 1920 x 1080 resolution, for 1080i and 1080p support. Full-fledged 1080p resolution (or, actually, at 2048 x 1080 pixels, the slightly better "2K") is now used in digital cinema projectors in commercial theaters.

A 61" Samsung HLN617W DLP-based rear projector, a 2003 model, sits in the living room of yours truly. It has 1280 x 720p resolution.

HD2 "chip"
from
Texas Instruments

TI makes the integrated circuits that make DLP images; other companies such as Samsung, LG, and RCA build these chips into actual TV products. TI's so-called HD2 or "Mustang" chip, an early incarnation thereof, is inside my Samsung. This chip has been improved more than once since 2003 and is now called HD2+.

In the middle of the chip, as you can see, is a silvery window the size of a small postage stamp. The window reproduces the 16:9 aspect ratio of the TV screen. Inside this window is the "guts" of DLP technology: the Digital Micromirror Device, or DMD. It's not one mirror; it's 921,600 tiny mirrors arranged in 720 rows of 1,280 micromirrors each.


How a DLP rear-projection TV works
The diagram to the right shows how the DMD chip makes a picture on the screen of a rear-projection TV. (Click on it to see a larger version.)

Each individual micromirror is either on or off at any given moment. If on, light from the light source strikes it and bounces through the projection lens onto the screen. If off, the micromirror swivels off-axis on its tiny hinge, and light reflected from it strikes only a light-absorbent material, ideally never reaching the screen.

At any given instant, every micromirror is assigned a primary color, red or green or blue, by virtue of the white light from the light source passing through a filter of that color built into a rapidly spinning wheel. As the wheel spins, the three colors alternate so rapidly that the eye (usually) cannot tell that the TV is producing a red, then a green, then a blue picture on the screen. Instead, the eye sees a full-color image.

A revised image is flashed on the screen numerous times during each video frame. A video frame lasts 1/30 second, but this time period is divided into (at least) three subframes of red, green, or blue. Actually, in most DLP implementations these days, the color wheel has two red segments, two green segments, and two blue segments — plus one or two extra segments I'll get to later — so each video frame's time period is subdivided even further than that.
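
To put rough numbers on it, here is a back-of-the-envelope Python sketch. The figures are illustrative only; in particular, the assumption of four complete red-green-blue passes per frame is just an example:

```python
frame_time = 1 / 30                    # seconds per video frame
rgb_sets_per_frame = 4                 # assumed: four full R-G-B passes per frame
subframes_per_frame = rgb_sets_per_frame * 3
subframe_time = frame_time / subframes_per_frame

print(f"{subframes_per_frame} primary-color subframes per frame, "
      f"about {subframe_time * 1e6:.0f} microseconds apiece")
# -> 12 primary-color subframes per frame, about 2778 microseconds apiece
```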

What's more, each primary-color subframe is further split into multiple time slivers in order to produce a grayscale. More on that below.


Question: how do you render a broad enough range of grays (mentally factoring out hue entirely) when the micromirrors can be only on or off — nothing in between?

Answer: you subdivide each primary-color subframe into several time slices or intervals. Then (let's assume you want the individual micromirror to produce a 50% gray, ignoring hue) you arrange for the micromirror to be on during half of those time slices and off for the other half.

Basically, a technique called binary pulse width modulation (PWM) is used to decide during which intervals, and for how many of them, the mirrors will be on. For instance, for 50% gray the interval whose duration is exactly half that of the subframe is "on," the other intervals "off." For 25% gray, the interval which is one-quarter that of the subframe is the only interval that is "on." For 37.5% gray, the 1/4-subframe (25%) and 1/8-subframe (12.5%) intervals are both activated. And so on.
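
Here is a minimal Python sketch of that binary weighting; the 8-bit depth and the interval ordering are assumptions for illustration, not TI's exact scheme:

```python
def pwm_intervals(gray, bits=8):
    """Return the binary-weighted fractions of the subframe during which the
    mirror is 'on' for the requested gray level (0.0 to 1.0)."""
    code = round(gray * (2 ** bits - 1))        # e.g. 0.375 -> 96 out of 255
    on = []
    for k in range(bits):                       # k = 0 is the longest interval
        if code & (1 << (bits - 1 - k)):
            on.append(1 / 2 ** (k + 1))         # 1/2, 1/4, 1/8, ... of the subframe
    return on

print(pwm_intervals(0.5))      # [0.5]          -> on for half the subframe
print(pwm_intervals(0.375))    # [0.25, 0.125]  -> the quarter and eighth intervals
```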

The intervals that are activated for a particular level of gray are added to get their total duration. Then that duration is divided into many, many ultra-wee time slices which are parceled out evenly over the entire period of the primary-color subframe. This technique is called "bit-splitting."

Bit-splitting minimizes the effects of certain objectionable grayscale artifacts which some people are able to see when their eyes move to follow an object traveling across the DLP image. (I'm getting much of this information, by the way, from "DLP from A to Z," an excellent article by Alen Koebel in the August 2005 issue of Widescreen Review magazine. Magazine subscribers may download the article in PDF form by clicking on the appropriate hotlink on this page.)


One of the few drawbacks of DLP technology is its "rainbows." Just as swift eye movement can introduce grayscale oddities, it can cause the seemingly full-color picture to fractionate, just for an instant, into distinct red, green, and blue images. What is happening is that the primary-colored images flashed to the screen in rapid succession are getting fanned out, as the eye moves, to different areas of the retina. The neural pathways of human vision can't integrate the images' colors into, say, the intended teal or lemon or mauve.

Some people are more sensitive to "rainbows" than others. For some people, they completely ruin the experience of watching a DLP display. For others — I'm lucky enough to be in this group — they don't happen all that often, if at all.

To minimize the "rainbow" effect, TI learned two tricks early on. First, put two wedges of each primary, red, green, and blue, on the color wheel, not just one. For each rotation of the color wheel, there are accordingly two subframes of each primary color. Second, make the color wheel spin real fast, so that each primary-color subframe is real brief, and there are lots of primary-color subframes within each video frame's timespan. (Those micromirrors will have to pivot a lot faster, of course, but what the hey?)


Using 3 DMDs
There's even a way to eliminate the "rainbow" artifact entirely: use three DMDs, not one.

You use beam-splitting prisms to route white light from a light source to three DMD chips operating in parallel. One beam contains red light for the red image, one green-for-green, and one blue-for-blue.

Thus, three images are formed independently by their respective DMDs; each is given its own primary hue. The three primary-color images are optically combined and shepherded together to the screen.

There's no color wheel in 3-chip DLP. Nor does the video-frame time interval have to be as finely chopped to allow for multiple primary-color subframe intervals — which makes things a lot simpler and allows a more smoothly gradated grayscale.


The problem with 3-DMD DLP projectors is that they're much more expensive than 1-DMD projectors. As a guess, I'd say that the optics alone cost more — imagine having to make sure that three beams carrying three images line up just right.

The real cost boost is in adding two more chips, though. The cost of making DMD chips is fairly high because the manufacturing yield — the percentage of chips that aren't rejected as imperfect — isn't all that high, yet. (As expected, the "yield curve" rises as TI's experience with making the chips accumulates over time.) So when you triple the number of chips, you add quite a lot to the cost of the projector.

As far as I know, there are no 3-chip rear projectors. Only front projectors (including those for digital cinema) are expected by customers to be pricey enough to warrant going to three DMDs.


The Achilles' heel of DLP is its inability to render the kind of deep, satisfying blacks that CRTs give.

No fixed-pixel technology — not DLP, not plasma, not LCD or its variants, LCoS, D-ILA, and SXRD — does black well. As the Widescreen Review article says, "The evolution of DLP technology, since its commercialization in 1996, could be described, as much as anything, as a search for better blacks."

Each fixed-pixel technology has a different reason for weak blacks. In the case of DLP, poor blacks are due to light from the light source that reaches the screen when it shouldn't.

Imagine an all-black image. The uniform-black signal causes every micromirror to turn off, meaning that it swivels away from the direction that carries reflected light to the screen. Instead, all the light is supposed to be beamed into a light absorber.

Well, what happens if the light absorber doesn't absorb all the light? The remaining light from the light source ends up bouncing around ("scattering") inside the RPTV or front projector ... and eventually arrives at the screen, polluting the image by washing it out just slightly.

A second problem has to do with the spaces between the micromirrors. Light reaching these gaps can easily end up bouncing off the DMD's underlying structure and eventually reaching the screen.

A third problem concerns the tiny "dimple" or "via" in the middle of each micromirror where it attaches to its hinge. The larger the via, the greater the amount of light that bounces crazily off it and reaches the screen when it shouldn't.

Problem One, the scatter problem, was addressed early on, at the time of the first commercial DLP displays, by providing the best light absorber possible. Those early DLP displays had an "on-off contrast ratio" of about 500:1. That is, an all-white field produced 500 times more luminance at the screen than an all-black field.
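In case the ratio talk seems abstract, here's what 500:1 works out to in plain luminance terms; the white-field brightness is a number I've made up for illustration:

# Hypothetical luminance figures, just to unpack what "on-off contrast" means.
white_field_nits = 500.0   # assumed luminance of an all-white field
early_dlp_ratio = 500      # early DLP on-off contrast, per the article

print(f"All-black field still glows at {white_field_nits / early_dlp_ratio:.2f} nit")   # 1.00 nit
print(f"At DarkChip1's 1,000:1 it drops to {white_field_nits / 1000:.2f} nit")          # 0.50 nit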

Problem Two, the gap/infrastructure problem, was addressed in two stages. The first, says WR, "was to coat the hinges and surrounding structure under the mirrors with a low-reflectance (light absorbing) material." That "dark metal" technology was dubbed DarkChip1, and it boosted on-off contrast ratio to 1,000:1. Later on, DarkChip3 reduced the size of the gaps between the mirrors, and contrast ratio went up even further.

In between DarkChip1 and DarkChip3 came DarkChip2, which addressed Problem Three, the dimple/via problem. The dimple or via was simply made smaller. That boosted the contrast ratio by giving less area of irregularity on each micromirror off of which light could bounce askew. That change alone deepened black renditions, improving contrast. And it further boosted contrast ratio by offering more area on each micromirror that could properly reflect the source light, making the overall picture brighter.

DarkChip3, when it came along, made the dimple or via yet smaller than DarkChip2 did, in addition to shrinking the inter-mirror gaps. Now, with DarkChip3, DLP black levels are much more respectable than in earlier versions. In fact, they're way better than those on any other non-CRT display.

One of these DarkChip revisions — I'm not quite sure which — made the micromirrors swivel further off-axis in their off position than they had before, reducing image-polluting light scatter within the display or projector. In the jargon, the "micromirror tilting angle" was increased from 10° to 12°. My guess is that this happened at the time of DarkChip1, since the tilting-angle increase surely exposed more of the structure under the mirrors.

More on DLP technology in An Eye on DLP, No. 2.

Friday, July 22, 2005

Why We Watch

Martin Scorsese in My Voyage to Italy
Why We Fight was the umbrella title of a series of documentaries made by director Frank Capra as home-front propaganda during World War II. Martin Scorsese could have subtitled his 1999 documentary My Voyage to Italy (Il mio viaggio in Italia) Why We Watch.

Specifically, why we watch movies.

Mr. Scorsese, whose films as a director have gathered awards and acclaim, from Mean Streets (1973) to The Aviator (2004), takes us by the hand into the world of the Italian films he grew up with as an American son of transplanted Sicilian immigrants. Coming of age in New York City in the late 1940s and early 1950s, he saw the Neorealist renaissance of post-WWII Italian cinema through a tiny glowing screen, as filler programming in the early days of television. Even if he didn't fully understand the films of the likes of Roberto Rossellini at such a tender age, they moved him deeply.

They still do, because more than anything else, they demand of us our full and honest humanity and compassion for all God's children.

Scorsese's two-part documentary originally aired on Turner Classic Movies, on cable, in 2003, a follow-on to his 1995 A Personal Journey with Martin Scorsese Through American Movies. As far as I know, My Voyage to Italy has not aired since, nor is it scheduled to air again — much less in high-definition. (The documentary is available as a two-disc DVD set, which is how I'm viewing it.) So I am cheating a bit to include this post in my What's on HDTV? blog. I admit that.

Yet I think it a worthy inclusion because, in this day and age — as was mentioned in Parade magazine's "Personality Parade" column in the July 17, 2005, issue — movies are no longer made for adults. They're made for adolescents. And the souls of adolescents are not the most notable in the world for their compassion.


Today, whether adolescents or grownups, we are not accustomed to wearing our hearts on our sleeves. After watching My Voyage to Italy, you may want to have your tailor get in touch with your surgeon.

Not that there's an ounce of sentimentality, schmaltz, or false pathos in the films Scorsese presents to us, or the way he presents them — just the opposite. For example, at the outset of the journey he leads us on (I admit I haven't yet gone further than Neorealism), he gives us his own personal tour of the films that were called Neorealist because they took an un-glamorized view of life as it was lived in a torn-up country, Italy, which had long been prey to fascism and then been overrun by German Nazis and American liberators in turn.

Under the circumstances, there wasn't much basis for glamorization. Which meant there was all the more basis for compassion, humanity, and fellow-feeling. Rossellini and other Neorealists like Vittorio De Sica showed everything, mean and noble, in Italian and European life at the time, and made the recovering world know that no man is an island.

What's more, they showed the world that movies, a once-fluffy art form, were uniquely able to reveal to us the totality of the human condition, warts and all, and thereby stir our tenderest feelings.


I have to admit, when it comes to tender feelings, or even to being open to seeing beauty amid ugliness and flowers amid dirt, I personally can't hold a candle to Martin Scorsese. My personal inclination is to try to kayo the ugliness and clean up the dirt, and only then — just maybe — appreciate the beauty.

Which is why I feel so glad to have this man of the gentlest possible voice and of the widest possible mercy sit beside me in the movie theater, in effect, and whisper in my ear what to look for and what to think about as the extended clips of old B&W movies in a foreign tongue (with subtitles) unreel before my eyes.

The movies Scorsese extols are powerful, but it's all too easy for an ugliness-kayoer/dirt-cleaner-upper from way back like me to resist their power at a pre-conscious level. Which is the reason, I suppose, that when Rossellini made The Miracle, a segment of his film L'Amore (1948), in which a woman gives birth to a child she believes is the Christ child, Catholic officials denounced it.

The Miracle's main character, Nanni, portrayed by Anna Magnani, has more religious faith than sense or sanity. She's taken advantage of by a stranger, a wanderer she thinks is St. Joseph, who she imagines has come along in answer to her prayer. (The stranger is played by Federico Fellini.) Pregnant, she winds up an outcast. As we watch the tender scene in which Nanni has her baby, alone, without prospects, redeemed by the simple fact that she has brought forth the "miracle" of new life, Scorsese tells us what it all means to him:

Rossellini is communicating something very elemental about the nature of sin. It's a part of who we are, and it can never be eliminated. For him, Christianity is meaningless if it can't accept sin and allow for redemption. He tried to show us that this woman's sin, like her madness, is nothing in comparison to her humanity.

***


"It's odd to remember," Scorsese continues, "that this film was the cause of one of the greatest scandals in American movie history. When The Miracle opened at the Paris Theater in Manhattan, Cardinal Spellman, who was the cardinal of New York at the time, and the Legion of Decency — that was the Catholic watchdog organization — called the movie a blasphemous parody, and they mounted a campaign to have it pulled from the theaters."

The case wound up in the Supreme Court, which in 1952 struck down movie censorship based on allegations of blasphemy. That's important because it paved the way for the frankness of Scorsese's and other filmmakers' films in coming decades.

But more important to me than the legal precedent is what The Miracle says about compassion and humanity, sin and redemption. It says that the true Christian attitude is not to be such an ugliness-kayoer and dirt-cleaner-upper as I know I tend to be and as Cardinal Spellman and the Legion of Decency were back in their day. It is rather to see the beauty in each of God's creatures and to view their flawed humanity, not only with the utmost of honesty, but also with the utmost of compassion.

And to be reminded of that is, ultimately, why we watch movies.

Tuesday, July 19, 2005

Restoring Movie Images Digitally

It's probably no secret that this blogger thinks of movies as reason number one to climb on the HDTV bandwagon. To look their best in hi-def form, classic films, middle-aged films, and even fairly recent ones often need restoration work. In Robert A. Harris, Film Restorer Extraordinaire I discussed the work of one of film restoration's leading lights, a man whose contributions (usually along with partner James Katz) include preserved versions of Vertigo, My Fair Lady, Spartacus, Lawrence of Arabia, and Rear Window.

Now I'd like to take a look at the contributions to film restoration that are arriving from an entirely different quarter: John Lowry's Lowry Digital Images.

LDI has been responsible for returning the first three Star Wars films "to their original glory for DVD," says "Creating the Video Future," an article in the November 2004 edition of Sound & Vision. Ditto, the Indiana Jones collection. Ditto, all twenty of the James Bond films. Ditto, Snow White and the Seven Dwarfs; Singin' in the Rain; North by Northwest; Gone with the Wind; Now, Voyager; Mildred Pierce; Roman Holiday; Sunset Boulevard; and Citizen Kane.


The Lowry process is not unlike that used for digital intermediate work (see The Case of the Data-Processing DP). The first step is to scan into the digital domain each frame of the film negative, often at 4K resolution (see 2K, 4K, Who Do We Appreciate?). At four seconds per frame, I calculate it must take 691,200 seconds, or 192 hours, to scan in a two-hour film at 4K. That's a ratio of 96 scan hours per one running hour, excluding coffee breaks!
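Here is that arithmetic spelled out in a few lines of Python, assuming the usual 24 frames per second for film:

fps = 24                       # standard film frame rate
running_hours = 2
seconds_per_frame_scanned = 4  # the stated 4K scanning speed

frames = fps * 3600 * running_hours          # 172,800 frames in a two-hour film
scan_seconds = frames * seconds_per_frame_scanned
scan_hours = scan_seconds / 3600

print(f"{frames:,} frames take {scan_seconds:,} seconds, or {scan_hours:.0f} hours, to scan")
print(f"That's {scan_hours / running_hours:.0f} scan hours per running hour")
# 172,800 frames take 691,200 seconds, or 192 hours, to scan
# That's 96 scan hours per running hour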

Lowry stores the entire film as data on servers that hold 6 terabytes apiece. A terabyte is 2 to the 40th power, or 1,099,511,627,776 bytes. Think of it as 1,000 gigabytes. Lowry has a total of 378 terabytes of storage hooked to his high-speed network. Perhaps the Pentagon would like to send a task force to check out how it all works.

That data network at LDI also interconnects 600 dual-processor Macintosh computers whose task it is to process all that stored data, using complex algorithms selected and parameterized according to the needs of the particular movie, in order to tweak the look of each of the 172,800 frames that make up a two-hour flick.
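As for how much disk all those frames take up, here's a rough Python estimate. The per-frame size assumes something like a 10-bit RGB DPX file at 4 bytes per pixel at full-aperture 4K; that's my guess at the file format, not a figure from the Sound & Vision article:

# Back-of-the-envelope storage estimate; the frame format is an assumption.
width, height = 4096, 3112     # assumed full-aperture 4K scan dimensions
bytes_per_pixel = 4            # 10-bit RGB packed into 32 bits, DPX-style
frames = 172_800               # two-hour film at 24 fps

bytes_per_frame = width * height * bytes_per_pixel
total_tb = frames * bytes_per_frame / 2**40    # terabyte = 2 to the 40th power bytes

print(f"About {bytes_per_frame / 1e6:.0f} MB per frame, about {total_tb:.1f} TB per film")
print(f"That would fill roughly {total_tb / 6:.1f} of those 6-terabyte servers")
# About 51 MB per frame, about 8.0 TB per film
# That would fill roughly 1.3 of those 6-terabyte servers

On those assumptions, a single 4K feature eats more than one whole server, which makes 378 terabytes of networked storage sound a lot less extravagant.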

After a visual review of each scene on a video monitor and a possible re-parameterization and rerun of the automatic process, exceptionally problematic material is touched up "by hand."

When all that is done, the result is a "digital negative." The digital negative is said by Sound & Vision's Josef Krebs to be "every bit as good as the camera negative — but without the wear, tear, and deterioration." The digital negative can be used to spin off versions for DVD, HDTV, standard-def TV, or digital cinema, a filmless methodology which involves using digital video projectors to throw the image onto the screen in commercial theaters. Or it can be output on film for optical projection.


Darth Vader in Star Wars
For the story on how this process gave the original Star Wars films new life, check out "John Lowry: Three Months and 600 Macs" on the Apple Computer website. Job One was to obliterate dirt and scratches, Lowry says. The next challenge was to conceal the telltale changes in contrast, graininess, and definition that accompanied the optical special-effects scenes in those pre-CGI days. Lowry says he also "sharpened it end-to-end, reduced the granularity and got rid of the flicker."

"When you use a computer," says Lowry, "if you can understand the problem, it's always solvable." Torn film, fading, chemical deterioration — they all succumb to digital remedies.


4K-resolution scanning is not always used as the basis for his digital restorations, Lowry says ... after having extolled 4K as uniquely giving the ability "to capture everything that is on that negative, which probably has a limit somewhere in the 3- to 4K range."

The "information on a film," he says, "
usually rolls off between 3 and 4K. We've experimented at 6K, but, frankly, it's pointless on a standard 35mm film frame until there are better camera lenses and film stocks." So, 4K looks like a winner when it comes to really, really capturing the whole image present on the original camera negative. Also, Lowry says, "with 4K the colors are more vibrant, subtle, and sharper."

Even so, Lowry did standard-def (!) transfers for North by Northwest; Now, Voyager; and Citizen Kane. Snow White and the Seven Dwarfs and Giant were his first digital restorations at high definition (1080p, I assume). Next came the step up to 2K, for Roman Holiday and Sunset Boulevard, which had to be scanned from duplicates of their original negatives and so had lower resolution to begin with.

Lowry's current maximum resolution, 4K, is newer yet. "Now we do most of our transfers at high-def and 2K," he says, "but we also do a whole range of movies in 4K." I'll bet the demand for 4K will, despite the extra cost, soon outstrip that for 2K and HD. I'll furthermore bet that there's already little remaining demand for SD restorations.

Along those lines, Lowry says that he is doing nine of the twenty James Bonds in 4K, the rest in HD. I'll wager the day will soon come when the films' owners wish they'd scanned them all in 4K.


Film preservationist Robert Harris, in his "Yellow Layer Failure, Vinegar Syndrome and Miscellaneous Musings" column at TheDigitalBits.com, has sometimes been critical of the Lowry process. One reason is that Harris describes himself as a "proponent" of film grain. "Grain is our friend," he has written. The Lowry process can remove most or all of the grain in a film image. Not just the excessive grain which builds up whenever film is duplicated from camera negative to interpositive to internegative ("dupe" negative) to final print — all the grain, period.

Lowry's S&V response: "We generally try to even out the grain so changes don't interfere with the storytelling. I don't recommend taking all the grain out of anything. I used to do more of that. We took out too much grain on Citizen Kane, but on Casablanca we left more in and it looks better. There's something comforting about grain for the viewer." It sounds like Lowry has become a late convert to the grain-is-our-friend school of film preservation.

Another complaint Robert Harris has made is that the Lowry process doesn't really restore or preserve the film per se. It leaves the photographic images which were originally recorded on celluloid strictly alone. It does not yield a replicated negative or fine-grain positive separation masters for archival use. It does not result in any film elements whatever, other than (optionally) a new generation of release prints derived from the "digital negative."

But release prints are not archival. They're the end of the line, photo-optically speaking. You can't use them as sources for further duping, since they lack the negative's high level of detail, low level of grain, wide dynamic range/contrast ratio, and so forth. And, due to "blocking," the shadows and highlights in a final print are drained of nuance.

So the Lowry process places the entire burden for archiving film's images squarely in the digital-video domain. The film isn't preserved for posterity; at best, the data is.

Though I don't know that I can put my finger on the source — I believe it to have been an interview I read with prime digital-cinema mover George Lucas — I've read that Lowry's precious data eventually is remanded to the studio to which the film belongs. Or else the studio provides Lowry with the (empty) storage media in the first place, and then receives the (filled) media back at the end of the process. Something like that.

That means the studio winds up in charge of archiving the data. In the past, studios have been (shall we say) notably remiss in taking proper care of their archives.

What's more, data-file formats are notorious for their quick obsolescence. The file formats used for digital intermediates have yet to be standardized, and I'm sure those Lowry uses for "digital negatives" are just as idiosyncratic. In twenty years, will machines still be able to read these files? That's Robert Harris' main concern, I gather.


Still, the process John Lowry uses to restore not film per se but film's images is an impressive one. It's clearly the wave of the future. Hollywood is already beating a path to his door.

Monday, July 18, 2005

2K, 4K, Who Do We Appreciate?

In The Case of the Data-Processing DP I talked about how film production is being revolutionized by the digital intermediate: transferring movies to the domain of digital video for sprucing up the final image and adding computer-generated imagery (CGI). DI would be tough to do were it not for newfangled film scanners and data-capable telecines which can accomplish so-called 2K and 4K scans.

In the highest-definition digital TV format available over the air, 1080i, the width of the 16:9-aspect-ratio image holds fully 1,920 pixels. This number is accordingly the upper limit on "horizontal spatial resolution," which in turn means that no detail whose width is tinier than 1/1920 the screen's width — a single pixel — can be seen.

In a 2K scan, the number of pixels across the width of the scanned film frame goes up to at most 2 to the 11th power, or 2,048. In a 4K scan, that upper limit is doubled, to 4,096.

The actual count of pixels produced horizontally in a 2K or 4K scan can be less than the stated upper limit — for example, when the aperture used by the film camera to enclose the film frame doesn't allow the image to span the whole 35-mm frame from "perforation to perforation," or when room is left along one edge of the film for a soundtrack. Furthermore, a lesser number of pixels will be generated vertically than horizontally, since the image is wider than it is tall.

Even so, 2K and 4K scans generally betoken levels of visual detail higher (though, in the case of 2K, only somewhat higher) than a so-called "HD scan" at 1920 x 1080.
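For a side-by-side sense of scale, here's a quick pixel-count comparison in Python. The 4096 x 3112 full-aperture size is the one quoted from American Cinematographer further down in this post; the 2K height of 1,556 is simply half of that, which is my own assumption:

formats = {
    "HD scan (1920 x 1080)": (1920, 1080),
    "2K full aperture (2048 x 1556)": (2048, 1556),   # height assumed
    "4K full aperture (4096 x 3112)": (4096, 3112),
}

for name, (width, height) in formats.items():
    print(f"{name}: {width * height / 1e6:.1f} megapixels")
# HD scan (1920 x 1080): 2.1 megapixels
# 2K full aperture (2048 x 1556): 3.2 megapixels
# 4K full aperture (4096 x 3112): 12.7 megapixels

In other words, a 2K scan is only about half again as many pixels as an HD scan, while a 4K scan is roughly six times as many.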


With the advent of high-definition 1080i DVDs expected imminently (see HDTV ... on DVD Real Soon Now?), do yet-more-detailed film scans at sampling rates such as 2K and 4K have any relevance to us now?

Apparently, yes. Experts seem to agree that it's a good idea to digitize anything — audio, video, or what-have-you — at a sampling frequency a good deal higher than the eventual target rate. The term for this is "oversampling."

For example, The Quantel Guide to Digital Intermediate says on pp. 17-18, "Better results are obtained from ‘over sampling’ the OCN [original camera negative]: using greater-than-2K scans, say 4K. All the 4K information is then used to produce a down converted 2K image. The results are sharper and contain more detail than those of straight 2K scans. Same OCN, same picture size, but better-looking images."

Furthermore, according to "The Color-Space Conundrum," a Douglas Bankston two-part technical backgrounder for American Cinematographer online here and here:

A 2K image is a 2K image is a 2K image, right? Depends. One 2K image may appear better in quality than another 2K image. For instance, you have a 1.85 [i.e., 1.85:1 aspect ratio] frame scanned on a Spirit DataCine [film scanner] at its supposed 2K resolution. What the DataCine really does is scan 1714 pixels across the Academy [Ratio] frame (1920 from [perforation to perforation]), then digitally up-res it to 1828 pixels, which is the Cineon Academy camera aperture width (or 2048 if scanning from perf to perf, including soundtrack area). ... Now take that same frame and scan it on a new 4K Spirit at 4K resolution, 3656x2664 over Academy aperture, then downsize to an 1828x1332 2K file. Sure, the end resolution of the 4K-originated file is the same as the 2K-originated file, but the image from 4K origination looks better to the discerning eye. The 4096x3112 resolution file contains a tremendous amount of extra image information from which to downsample to 1828x1332. That has the same effect as oversampling does in audio.

In 1927, Harry Nyquist, Ph.D., a Swedish immigrant working for AT&T, determined that an analog signal should be sampled at twice the frequency of its highest frequency component at regular intervals over time to create an adequate representation of that signal in a digital form. The minimum sample frequency needed to reconstruct the original signal is called the Nyquist frequency. Failure to heed this theory and littering your image with artifacts is known as the Nyquist annoyance – it comes with a pink slip. The problem with Nyquist sampling is that it requires perfect reconstruction of the digital information back to analog to avoid artifacts. Because real display devices are not capable of this, the wave must be sampled at well above the Nyquist limit – oversampling – in order to minimize artifacts.

So the way to avoid artifact creation — "the Nyquist annoyance" — is to oversample, scanning at a rate well above the intended target (as with a 4K scan destined for a 2K or HD result), and then filter and downconvert to the target rate.
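If you'd like to see the effect in miniature, here's a toy Python/NumPy sketch. It's one-dimensional and grossly simplified, with a mere two-sample averaging filter standing in for a real downconversion filter, but it shows why decimating without filtering manufactures false detail while oversampling and filtering first suppresses most of it:

import numpy as np

# A toy 1-D version of oversample-then-downconvert. The "detail" is a sine too
# fine for the target rate to represent: straight 2:1 decimation turns it into a
# bogus lower-frequency pattern (aliasing), while averaging neighboring samples
# first (a crude low-pass filter) knocks most of that false detail down.
n_fine = 64                                      # the oversampled grid (think 4K)
x = np.arange(n_fine)
detail = np.sin(2 * np.pi * 24 * x / n_fine)     # 24 cycles: fine at 64 samples,
                                                 # unrepresentable at 32 (Nyquist there is 16)

naive = detail[::2]                              # straight decimation to the target rate
filtered = (detail[0::2] + detail[1::2]) / 2     # average each pair, keep one value per pair

print(f"RMS of the aliased false detail (naive decimation): {naive.std():.2f}")    # ~0.71
print(f"RMS after crude pre-filtering:                      {filtered.std():.2f}") # ~0.27

A real 4K-to-2K downconversion uses a far better filter than averaging adjacent samples, but the principle is the same: give the filter extra samples to chew on and most of the would-be aliases never reach the final image.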


This would seem to imply that it would be wise to transfer a film intended for Blu-ray or HD DVD at a scan rate of at least 2K — but 4K would be better!

It's hard to know how much the "Nyquist annoyance" would mar a typical 2K-to-HD transfer, since (see View Masters) most projected 35-mm movies have way less than 1920 x 1080 resolution to begin with. Original camera negatives generally have up to 4K resolution, or even more, but by the time release prints are made from internegatives which were made from interpositives, you're lucky to have 1K resolution, much less HD or 2K.

So if the "Nyquist annoyance" became an issue with an OCN-to-2K-to-HD transfer, the transfer could be intentionally filtered below full 1920 x 1080 HD resolution and still provide more detail than we're used to seeing at the movies.


All the same, video perfectionists will surely howl if filtering is done to suppress the "Nyquist annoyance." They'd presumably be able to see a real difference between a filtered OCN-to-2K-to-HD transfer and an unfiltered OCN-to-4K-to-HD transfer, given the opportunity.

All of which tells me the handwriting is on the wall. Someday in the not-far-distant future, aficionados are going to be appeased only by HD DVD or Blu-ray discs that are specifically "4K digitally scanned," or whatever the operative jargon will be. All of the farsightedness ostensibly displayed today by studios using "HD scans" for creating standard-def DVDs — intending one day to reuse those same transfers for Blu-ray or HD DVD — will turn out to be rank nearsightedness. Sometimes the march of progress turns into a raging gallop!