INTERLACE/PROGRESSIVE SCANNING: COMPUTER VS. VIDEO

[originally posted to comp.graphics]

Charles Poynton
Copyright (c) 1993-10-23

It seems to me that the "computer" interests in the ATV debate are unnecessarily insistent on progressive scan. Computer people tend to treat this as an ideological issue with a simple right answer (progressive) or wrong answer (interlace). Television system designers treat it as an engineering issue that involves compromise, tradeoff and optimization.

A computer expert generally works to maximize data rate (or loosely, bandwidth). A signal processing expert deals not only with bandwidth but also with signal-to-noise ratio, which lies at the opposite end of the same scale. A television engineer, or a communication systems engineer, works not to maximize bandwidth at all costs but to optimize a system at the "sweet spot" along the bandwidth-to-SNR curve for his application. Computer experts generally do not understand the signal-to-noise aspect, and so do not see that there is a tradeoff involved in adopting progressive scan.

There are three important, separate limits on the bandwidth of HDTV systems today: one at the camera, one at the studio VTR, and one in the terrestrial and cable broadcast channel. Technology in each of these three areas imposes, at present, a distinct bandwidth limit of about sixty megapixels per second. None of the limits is improving very quickly.

The camera limit relates to sensitivity and signal-to-noise ratio. A CCD image sensor built in today's silicon technology, of a size suitable for today's lenses, and having sensitivity and noise performance similar to 35 mm film, imposes a limit of about sixty megapixels per second (60 Mpx/s). There are several ways to improve this: double the area of the CCD (which exacts a huge chip-yield penalty and requires new lens designs), or halve the camera sensitivity (which is unacceptable for productions that do not have the benefit of studio lighting levels).

The VTR limit is imposed by the data rate at the interface between the head and the magnetic tape, a rate constrained by several laws of physics. Sony's digital HDTV studio recorder achieves one gigabit per second (actually 250 Mb/s on each of four simultaneous heads) for an hour and a half: roughly two-thirds of a terabyte per tape (the arithmetic is sketched at the end of this section). Of course, non-studio recording can be digitally compressed so as not to be subject to this constraint, but compression at ratios higher than about four-to-one is unacceptable for studio use.

The third limit is associated with the 6 MHz bandwidth of the analog transmission channel. A six megahertz channel with an SNR typical of broadcast VHF/UHF or cable systems, say 30 dB, accommodates a data rate of about 20 Mb/s. This is about the lower limit for MPEG-2 compression of 60 Mpx/s material. The same limit is representative of consumer VCRs, whether analog VCRs or consumer digital VCRs adapted from Hi-8-class technology. (This is not surprising, considering that VCRs have been optimized for recording 6 MHz analog broadcast signals.)

The 60 Mpx/s rate imposed by these three constraints can be utilized for either interlaced or progressive pictures. You can transmit a one-megapixel picture (e.g. 1280x720) at 60 Hz progressive, or a two-megapixel picture (e.g. 1920x1080) at 30 Hz, 2:1 interlaced. Admittedly, an interlaced system has potential vertical and temporal artifacts. But the "purity" of the progressive approach comes at a cost of roughly half the achievable spatial resolution.
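To make the budget concrete, here is a quick check of the two example formats (a sketch in Python; the frame sizes and rates are the ones quoted above):

    # Both candidate formats land near the 60 Mpx/s budget.
    progressive = 1280 * 720 * 60    # one-megapixel frames, 60 frames/s
    interlaced = 1920 * 1080 * 30    # two-megapixel frames, 30 frames/s (60 fields/s)

    print(progressive / 1e6)    # 55.296 Mpx/s
    print(interlaced / 1e6)     # 62.208 Mpx/s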
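The VTR and channel figures quoted above also reduce to simple arithmetic. In the following Python sketch, treating 30 dB and 6 MHz as exact values is my simplification; the Shannon formula gives only a theoretical ceiling, and practical modulation schemes of the day deliver roughly a third of it, which is where the 20 Mb/s figure comes from:

    from math import log2

    # Studio VTR: four heads at 250 Mb/s each, running for ninety minutes.
    tape_bits = 4 * 250e6 * 90 * 60
    print(tape_bits / 8 / 1e9)       # about 675 gigabytes per tape

    # Broadcast channel: Shannon capacity of 6 MHz at 30 dB SNR.
    snr = 10 ** (30 / 10)            # 30 dB is a power ratio of 1000
    capacity = 6e6 * log2(1 + snr)   # about 60 Mb/s, the theoretical ceiling
    print(capacity / 1e6)            # practical systems achieve about 20 Mb/s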
Television system designers and users have, over the course of the last four decades, employed a large variety of techniques to minimize the intrusion of interlace artifacts. We employ all of these techniques today in 1125/60/2:1 HDTV. There is absolutely no doubt among television engineers and viewers that interlaced systems deliver higher perceived spatial resolution: the interlaced systems tested at the ATTC demonstrated substantially better resolution than the progressive systems. It is for this reason that broadcasting organizations such as the National Association of Broadcasters, the Association for Maximum Service Television, CableLabs, and so on have been issuing press releases and position papers declaring their view that, at introduction, advanced television must be interlaced. (By which you should read: capable of interlace.)

In the long term, everyone in both the television and computing industries is working towards fully progressive systems. But I am in favour of achieving the maximum possible spatial resolution at the outset of advanced television: in the early stages, doubling the spatial resolution is, in my opinion, more important than eliminating the last vestiges of interlace artifacts. Interlaced systems do not have objectionable artifacts today, and the situation will only improve as deinterlacing technology and encoder technology improve (a minimal illustration of deinterlacing appears at the end of this note).

Many people in the industry now believe that the issue can be resolved for domestic ATV by mandating that consumer decoders be capable of handling both interlaced and progressive signals "in the channel". The transmission standard would include both options. Origination could then be interlaced or progressive at the option of the originator: initially, interlaced for broadcast users and progressive for computer users. A "dual-mode" decoder would impose a cost penalty of less than five percent; I call this the "interoperability tax". This approach would accommodate a shift to fully progressive systems as ATV technology develops, without creating a reverse-compatibility problem.

By the way, I appreciate Professor Schreiber's comments about "quincunx" (offset) scanning being used to obtain the temporal performance of a progressive system at the data rate and SNR of an interlaced system. But isn't offset sampling just interlace in the V-H plane instead of V-T? (A toy sketch of the two lattices appears below.)
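To give a concrete sense of what a deinterlacer does, the simplest intra-field approach just interpolates the missing scan lines. The following Python sketch is my own minimal illustration, not the algorithm of any particular product; real deinterlacers are motion-adaptive and far more sophisticated:

    # Intra-field deinterlacing: rebuild a full frame from one field by
    # averaging the scan lines above and below each missing line.
    def deinterlace_field(field):
        frame = []
        for i, line in enumerate(field):
            frame.append(line)    # keep the real scan line
            below = field[i + 1] if i + 1 < len(field) else line
            frame.append([(a + b) / 2 for a, b in zip(line, below)])
        return frame

    field = [[10, 20], [30, 40]]        # two real scan lines, two pixels each
    print(deinterlace_field(field))     # [[10, 20], [20.0, 30.0], [30, 40], [30.0, 40.0]]

And to make the offset-sampling analogy concrete, here is a toy model of the two lattices (again Python, purely illustrative): interlace offsets alternate lines in time, while quincunx offsets alternate lines in space.

    # Interlace: a line carries samples only on alternate ticks of time
    # (the offset lives in the vertical-temporal plane).
    def interlace_sampled(line, t):
        return (line + t) % 2 == 0

    # Quincunx: alternate lines are shifted half a sample horizontally
    # (the same offset, but in the vertical-horizontal plane).
    def quincunx_x(line, x):
        return x + 0.5 * (line % 2)

    for line in range(4):
        times = [t for t in range(4) if interlace_sampled(line, t)]
        xs = [quincunx_x(line, x) for x in range(4)]
        print(line, times, xs)

Charles Poynton
vox: +1 416 486 3271
fax: +1 416 486 3657
poynton@poynton.com    [preferred; Mac Eudora, MIME, BinHex]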