To review briefly: because color perception is so personal and idiosyncratic, color scientists needed a way to standardize the millions of possible combinations of hue, intensity/saturation, and light-to-dark "value" in an objective way. Enter, accordingly, the CIE chromaticity diagram. It mapped all possible real-world colors into one large, abstract three-dimensional solid, which could for convenience be represented on a flat, two-dimensional graph as a shark-fin-shaped region containing every conceivable hue and saturation.
Every real-world application of color imaging, however, uses a narrowed gamut or palette which, for technical reasons, typically excludes one or more ranges of the purer, more intense colors lying around the outer regions of the diagram. This narrowed gamut is the application's "color space." The outer limits of any particular application's gamut stem from many factors, one of which is whether the application uses an additive color process or a subtractive color process.
For example, you can make pure brown by mixing two small dollops of red with one small dollop of green. Make the two dollops big instead of small, and you get pure orange, not brown. You can think of brown as "dark orange" and of orange as "light brown." But both are pure, maximally saturated colors, since the third primary, blue, is wholly absent. Start adding in a little blue, though, and the brown/orange starts verging away from a pure, maximally saturated hue and toward a (dark, for brown; light, for orange) gray. Before it becomes fully gray, it's any of a huge number of muted, desaturated versions of brown or orange.
If you add R, G, and B in equal amounts, that's when you get an actual gray — or, if you use large enough dollops, white. White and the various shades of gray can be thought of as maximally desaturated colors.
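The brown/orange/gray behavior above can be sketched in a few lines of code. This is a minimal illustration using 8-bit RGB triples; the particular values are my own illustrative choices, not from any standard.

```python
def mix_rgb(r, g, b):
    """Additively mix red, green, and blue, clamped to the 0-255 range."""
    return tuple(max(0, min(255, c)) for c in (r, g, b))

# Two parts red to one part green, small dollops: brown.
brown = mix_rgb(120, 60, 0)
# The same 2:1 ratio with big dollops: the same hue, now reading as orange.
orange = mix_rgb(240, 120, 0)
# Add a little blue and the color desaturates toward gray.
muted = mix_rgb(120, 60, 40)
# Equal amounts of all three give gray; maximal equal amounts give white.
gray = mix_rgb(128, 128, 128)
white = mix_rgb(255, 255, 255)

print(brown, orange, muted, gray, white)
```

Note that brown and orange share the same red-to-green ratio; only the overall intensity differs, which is exactly the "dark orange"/"light brown" point made above.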
In terms of the CIE chromaticity diagram, TV imaging systems typically select, in defining their color space, a "red," a "green," and a "blue" that are not as pure as those at the "corners" of the CIE diagram. In the real world, it is hard to find sources of light that will produce absolutely pure primary colors and can be used in direct-view TV displays or in front and rear projectors' "light engines."
Consequently, the real-world primaries — the colors of the actual phosphors (or whatever) that are used to light up TV images — impose a gamut restriction on the color space of the TV. The palette of colors that are actually able to be reproduced by the TV or the signal transmission system which feeds it can be diagrammed as a subsidiary triangle within the larger CIE diagram.
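That "subsidiary triangle" idea can be made concrete: given a color space's three primaries as (x, y) chromaticity coordinates, you can test whether any chromaticity falls inside the gamut with a standard point-in-triangle test. The SMPTE 170M ("SMPTE C") primaries below are the commonly published values; treat this as an illustrative sketch rather than a reference implementation.

```python
def sign(p, a, b):
    """Signed-area test: which side of the line a-b the point p lies on."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_gamut(p, r, g, b):
    """True if chromaticity p lies inside (or on) the triangle r-g-b."""
    d1, d2, d3 = sign(p, r, g), sign(p, g, b), sign(p, b, r)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

# Commonly published SMPTE 170M ("SMPTE C") primaries in CIE (x, y).
R, G, B = (0.630, 0.340), (0.310, 0.595), (0.155, 0.070)

print(in_gamut((0.3127, 0.3290), R, G, B))  # D65 white point: True (inside)
print(in_gamut((0.08, 0.84), R, G, B))      # a spectral green: False (outside)
```

Any chromaticity outside the triangle, such as that spectral green, simply cannot be reproduced by this set of primaries, no matter how they are mixed.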
(Actually, the x and y coordinates of D65 were determined before physicists moved the so-called Planckian locus, the curve representing the various colors of white produced by black-body radiators when heated to different absolute temperatures, such as 6504 K. They decided a key numerical constant, Planck's constant, needed to be changed slightly, which shifted the Planckian locus. So D65 wound up slightly off the locus. 6504 K is the color temperature associated with the point on the locus nearest D65.)
Within the perimeter of the diagram above (the outer "shark fin," formed by the spectral locus and the line of purples) are two triangles, one solid and one dashed. The solid one represents the SMPTE 170M color space, defined by the Society of Motion Picture and Television Engineers for standard-definition TV. It is sometimes (somewhat inaccurately) spoken of as the International Telecommunication Union's "Rec. 601" color space. (Not shown is the ITU's "Rec. 709" color space for HDTV. Its corners, however, are quite close to those of SMPTE 170M/Rec. 601.)
The dashed triangle encloses the long-obsolete color space originally defined in 1953 for NTSC color TV transmissions in the United States. Notice that it has a wider gamut of colors (since it's a bigger triangle) than SMPTE 170M/Rec. 601. That wide gamut from 1953 was narrowed over the next few years in response to consumer preferences for brighter color TV displays. With the phosphors available, you could either have a dim-but-wide color gamut or a bright-but-narrow color gamut, and people preferred the latter.
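The size difference between the two triangles can be quantified with the shoelace formula for triangle area in the (x, y) plane. The primaries below are the commonly published 1953 NTSC and SMPTE C values; the comparison is a back-of-the-envelope sketch (area in the chromaticity diagram is only a rough proxy for gamut size, since the diagram is not perceptually uniform).

```python
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle in the (x, y) plane."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# 1953 NTSC primaries (the dashed triangle).
ntsc_1953 = triangle_area((0.67, 0.33), (0.21, 0.71), (0.14, 0.08))
# SMPTE 170M / "SMPTE C" primaries (the solid triangle).
smpte_170m = triangle_area((0.630, 0.340), (0.310, 0.595), (0.155, 0.070))

print(f"1953 NTSC area:  {ntsc_1953:.4f}")   # about 0.158
print(f"SMPTE 170M area: {smpte_170m:.4f}")  # about 0.104
print(f"ratio: {ntsc_1953 / smpte_170m:.2f}")
```

By this crude measure, the 1953 gamut is roughly half again as large as the one that replaced it, which is the trade-off the paragraph above describes.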
If you hold a strip of film up to a white light and it looks yellow, for example, that's because a yellow dye in the film has subtracted all the blue from the initially white light. Only the red and green wavelengths of light get through; they combine to make yellow.
If the strip looks green, then yellow and cyan dye layers working together have subtracted, respectively, blue and red from the white light, leaving green.
If the strip looks black, then yellow, cyan, and magenta dye layers have all been at work. They have subtracted, respectively, blue, red, and green wavelengths, leaving no light whatever, or black. (In practice, though, color films employ ways of making black look solider and more convincing than simply subtracting red, blue, and green from white.)
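The dye behavior described above can be modeled as multiplication: each dye layer is a per-channel filter on the light passing through it. The values here are idealized (a perfect dye passes all of two channels and none of the third), which is an assumption for illustration; real film dyes are far less tidy.

```python
WHITE   = (1.0, 1.0, 1.0)
YELLOW  = (1.0, 1.0, 0.0)   # subtracts blue
CYAN    = (0.0, 1.0, 1.0)   # subtracts red
MAGENTA = (1.0, 0.0, 1.0)   # subtracts green

def filter_light(light, *dyes):
    """Pass light through a stack of dye layers, channel by channel."""
    r, g, b = light
    for dr, dg, db in dyes:
        r, g, b = r * dr, g * dg, b * db
    return (r, g, b)

print(filter_light(WHITE, YELLOW))                 # yellow: blue removed
print(filter_light(WHITE, YELLOW, CYAN))           # green: blue and red removed
print(filter_light(WHITE, YELLOW, CYAN, MAGENTA))  # black: nothing left
```

The multiplication makes the "subtractive" label concrete: each layer can only remove light that the layers above it have let through, never add any back.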
Additive color systems' color spaces, unfortunately, do not map perfectly to those of subtractive color systems, and vice versa. That's one reason why transferring film to video can create color mismatches.
More on HDTV color later, in Part 3.