Understanding and Using High-Definition Video
plan to play back the video on a computer. Often times, 720 60p is called 720p and 1080 60i is called 1080i. However, those labels are ambiguous as to frame rate: 720 can run at 24p, and 1080 can run at 50i in its European version.

Video formats and sizes
[Figure: the relative sizes of different video formats' frames, with a count of total pixels. These aren't aspect-ratio corrected; for the actual shapes of video frames in different formats, see the next image.]

Format     Frame size     Total pixels
QCIF       176 x 128      22,528
CIF        352 x 288      101,376
NTSC DV    720 x 480      345,600
720 HD     1280 x 720     921,600
1080 HD    1920 x 1080    2,073,600
Types of HD

Type      Size          Frames per Second
720 24p   1280 x 720    23.976 fps progressive
720 25p   1280 x 720    25 fps progressive
720 30p   1280 x 720    29.97 fps progressive
720 50p   1280 x 720    50 fps progressive
720 60p   1280 x 720    59.94 fps progressive

The most common types of HD.
NTSC normally reduces the frame rate by 0.1%; thus, for 30 frames per second you would get
29.97 frames per second to match the frame rate of NTSC color broadcasts. While this reduc-
tion is optional in most HD products, most people use the lower rates.
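In other words, the reduced rates are just the nominal rates divided by 1.001. A quick sketch in plain Python (the function name is illustrative):

```python
def ntsc_rate(nominal_fps):
    """Apply the NTSC 0.1% reduction: divide the nominal rate by 1.001."""
    return nominal_fps / 1.001

for nominal in (24, 30, 60):
    print(f"{nominal} fps -> {ntsc_rate(nominal):.3f} fps")
```

Running this shows 30 fps becoming 29.970 fps, matching the NTSC color broadcast rate, and 24 and 60 fps becoming the familiar 23.976 and 59.94.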
The original HD-broadcasting standard for consumers was Muse, created by Japan’s NHK.
Muse was an analog HD system intended for satellite broadcast. While it produced very good
image quality, market realities kept Muse from getting much traction.
At the time, there was a drive in the U.S. Congress to reallocate some unused UHF channels
from broadcasters to emergency communications and other uses. As an excuse to keep the
channels, the National Association of Broadcasters (NAB) proclaimed it needed the channels
for HD broadcasts in the future. This proclamation succeeded in the short term, but it commit-
ted the NAB to broadcasting in HD and (in theory) eventually to giving up analog channels for
other uses.
The Federal Communications Commission (FCC) set up the Advanced Television Systems
Committee (ATSC) to define the specification for HD broadcasts. The process took years more
than expected, but the standard was finally created, and broadcasters began HD broadcasts.
As of 2004, most prime-time shows are broadcast in HD, and both cable and satellite systems are introducing HD channels.
Even today, only a minority of consumers in the U.S., and a much smaller minority in other
industrialized nations, actually have access to HDTV systems. But HD-capable displays are selling very well now, even if many displays are only used to view standard-definition content, like DVD.
The next big advancement for HD is likely to be HD DVD, for which the standard is currently in development.
The modern interpretation of Moore’s Law is that computing power, at a given price point,
doubles about every 18 months. Consequently, a new U.S. $3000 editing system is twice as fast
as a U.S. $3000 system from a year and a half ago. A system three years ago was a quarter of the
speed, and a system four and half years ago was one-eighth of the speed. Similar predictions
hold true for other measurements, such as the amount of RAM available in a computer, and the
speed and size of hard drives.
A 1920 x 1080 60i frame contains six times as many pixels as the NTSC SD standard of 720 x 480 60i (2,073,600 versus 345,600). The idea of a computer having to do six times more work may seem daunting, but think about Moore's Law: if current trends continue, we can expect that type of improvement in about four years!
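Treating Moore's Law as a doubling every 18 months, the arithmetic behind that four-year figure can be sketched like this:

```python
import math

hd_pixels = 1920 * 1080        # 2,073,600
sd_pixels = 720 * 480          # 345,600
ratio = hd_pixels / sd_pixels  # 6.0

# Years of Moore's Law doublings (one every 1.5 years) needed
# to absorb a 6x increase in workload:
years = math.log2(ratio) * 1.5
print(f"{ratio:.1f}x the pixels, ~{years:.1f} years of Moore's Law")
```

The result is roughly 3.9 years, which is where the "about four years" estimate comes from.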
And Moore’s Law is just a measurement of performance. In the last four years, we’ve seen great
gains in terms of usability of video and audio-authoring tools.
The experience of watching HD at home is still not as refined as it is for SD. For example, the
amount of available content is still relatively small, and HD video recorders and personal video
recorders (PVRs) are much more expensive than their SD counterparts. But vendors view HD as
a growth area and are rapidly enhancing products for use with HD. In some ways, HD is easier
and cheaper to work with than analog SD because you can avoid analog-to-digital conversion or
compression.
While much of the rest of the world is moving toward digital broadcasting, only the U.S. and
Japan have seen the broad adoption of HD as a broadcast technology. Developments in HD lag
somewhat behind digital broadcasting, but some areas of Europe have begun HD broadcasting;
Japan has announced it, and it should continue to be adopted worldwide.
Most DBS vendors use ATSC data to create their HD broadcasts. However, Cablevision’s satellite
service will use MPEG-4. Because new equipment (a set-top box) is required for HD anyway, and because HD's bandwidth demands are so high, there is interest in the possible gains from a more modern compression system than ATSC's MPEG-2. The two most discussed compression systems are MPEG-4
AVC and Microsoft’s VC-9.
HD in post production often uses the 2K and 4K standards—respectively 2048 and 4096 pixels
wide. Most people feel that 2K is more than good enough for typical productions. An increasing
number of companies are using 1080 24p HD video cameras and equipment.
There is hope at the low end of the market as well. JVC’s HDV-based cameras provide great
results at a minuscule price. Similar cameras with higher resolution and native support for 1080 24p are coming. These new cameras will achieve better image quality than 16mm film, will cost less, and will be available from a variety of vendors.
In both cases, it’s easier to go back to film with a film recorder from HD 24p content than from
any type of NTSC content. Six times as many pixels makes a huge difference, and authoring in film's native 24p frame rate makes motion much smoother.
In the post industry, HD video has proven to be an effective method for distributing intermedi-
ates, proofs, and clips for review. A downloadable HD file looks substantially better and can be
available much sooner than a Beta SP tape sent by mail or even overnight express.
The definition of digital cinema is still evolving. The quality requirements are much higher than
they are for consumer HD, and Hollywood likes it this way—they want the theatrical experi-
ence to have an advantage over the home experience that goes beyond popcorn. Expect whatever
standard is adopted for theatrical HD projection to be well beyond the standard that will be
affordable for the mass market.
The good news is that it is already possible for digital projection to equal or exceed the qual-
ity of film projection. It’s just a question of getting those projectors cheap enough to put in the
theaters.
One interesting possibility, after digital cinema is standard, is a transition to higher frame rates.
For many decades, 24p has been the film standard because updating cameras and projectors is very expensive, as is the additional film stock that higher frame rates would consume. But with digital projection, 60p isn't that much more
expensive to create, distribute, or project than 24p. And 60p can provide a vivid immediacy that
is currently impossible with film.
Microsoft has been pioneering this HD-playback market. They are working with content com-
panies to create a line of two-disc sets: a conventional DVD version of the movie and a DVD-
ROM that contains an HD Windows Media Series 9 (WM9) version of the movie. The first title
releases had some problems with playback performance, Digital Rights Management (DRM),
and software players, but content vendors have clearly learned from those problems and the
second generation of titles provided a more seamless experience. The technology is not exclusive
to Microsoft; any content creator can make similar discs.
The goal is an HD DVD format, and it’s coming. However, there are at least four different com-
peting technologies in development. The leading proponents are the DVD Forum and Blu-Ray.
The DVD Forum’s efforts are the furthest along. They’ve already announced a plan for video
codecs supporting MPEG-2, MPEG-4 AVC, and Microsoft’s VC-9.
A writable format is in development for the DVD-Forum format, and it should become a viable
target for one-off media in a product-introduction cycle that is much faster than DVD-Video
was.
HD in production
Options for HD production have exploded in the last few years. As with SD, there is a broad range of products with wildly different price points and features. The HD experience is functionally very similar to SD, except for the additional bits and pixels. Equivalent cameras, monitors, and other workflow components are available for both, though at somewhat higher price points for HD. Even the formats are similar, with derivatives of DVCAM and D5 as the dominant high-end production formats.
HD's 16:9 aspect ratio is another difference. The combination of 16:9 and HD resolution can be a boon for sports because much less panning is required to see the details. Well-shot HD hockey and basketball can be a true revelation; HD sports are one of the major drivers for consumers upgrading to HD.
HD tape formats
While there are some live broadcasts of HD, most HD is edited nonlinearly on personal comput-
ers and workstations, similarly to SD.
Currently, there are a variety of digital tape formats in use for professional HD production, but
the industry seems to lean toward the Sony HDCAM and Panasonic D5 formats. All formats use
the existing physical tape formats originally designed to record SD video but with new com-
pressed bitstream techniques to store the additional data HD requires. Fortunately, there isn’t a
significant archive of analog HD tape content.
HDV's interframe encoding allows it to achieve high quality at lower bit rates, which means there is more content per tape, but that content is more difficult to edit. Solutions like CineForm's Aspect HD are required to convert from HDV to an editable format.
Note that HDV uses the same bitstream as DVHS, but on a smaller tape.
For anything other than an HDV capture, a single drive does not provide sufficient bandwidth
for capture, and you will need a Redundant Array of Independent Disks (RAID) system (de-
scribed in the next section).
Beyond capture, real-time effects require enough read performance to play back two streams off
of the drive in real time.
Even a single 300-GB drive, now readily available, provides only minutes of storage for anything other than HDV, which again suggests that a RAID solution is required for working with larger HD formats.
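As a rough illustration of why (assuming 8-bit 4:2:2 sampling, 2 bytes per pixel, for the uncompressed case; real formats vary):

```python
DRIVE_BYTES = 300e9  # one 300-GB drive (decimal gigabytes)

def minutes_on_drive(bytes_per_second):
    """How many minutes of video at a given data rate fit on the drive."""
    return DRIVE_BYTES / bytes_per_second / 60

hdv_bps = 25e6 / 8                         # HDV: 25 Mbps
uncompressed_bps = 1920 * 1080 * 2 * 29.97 # 8-bit 4:2:2 at 29.97 fps

print(f"HDV: ~{minutes_on_drive(hdv_bps):.0f} minutes")
print(f"Uncompressed 1080: ~{minutes_on_drive(uncompressed_bps):.0f} minutes")
```

Uncompressed 1080 works out to roughly 40 minutes on the drive, while HDV fits over 26 hours.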
RAID 0 uses two or more drives and splits the data between them. RAID 0 has no redun-
dancy and offers no protection from data loss—if one drive fails, all of the data in the set is
lost. However, all drives are used for storage, and performance is blazing fast. If the source
is backed up, RAID 0’s performance and price can be worth its fragility. For editing with
CineForm’s Aspect HD, a two-drive RAID 0 is sufficient for 720 30p. More drives are needed
as speed requirements go up, and the more drives you add, the greater the chance of failure.
RAID 3 uses three or more drives combined and dedicates one to be used for redundancy. The
volume can survive any one drive failing. RAID 3 doesn’t perform quite as well as RAID 0 on
total throughput, but the fault tolerance is well worth it. Because the parity is dedicated to one
drive in RAID 3, there is no effective limit to how you size your blocks for transfer.
RAID 5 uses three or more drives in a set, and the redundancy is shared across all of the drives
by dedicating equal space on each drive to parity, or redundant, data. RAID 5 provides aggre-
gate transfer rates that are better than RAID 3.
RAID 5+0 uses two or more RAID 5 sets combined into one RAID 0. Typically, each of the
RAID 5 sets is on its own drive controller, allowing bandwidth to be split over two control-
lers. This division of bandwidth provides better performance than RAID 5 and still provides
redundancy (one drive per RAID 5 set can fail without data loss). RAID 5+0 is the optimal
mode for most HD use.
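The capacity trade-offs among these levels can be sketched with a simplified model (a hypothetical helper; real RAID implementations add formatting and controller overhead):

```python
def raid_usable(level, drives, size_gb):
    """Usable capacity for the RAID levels discussed above (simplified)."""
    if level == "0":
        return drives * size_gb        # striping only, no redundancy
    if level in ("3", "5"):
        return (drives - 1) * size_gb  # one drive's worth of parity
    if level == "5+0":
        # two RAID 5 sets striped together; 'drives' is drives per set
        return 2 * (drives - 1) * size_gb
    raise ValueError(f"unknown level: {level}")

print(raid_usable("0", 2, 300))    # 600 GB, fast but fragile
print(raid_usable("5", 4, 300))    # 900 GB, survives one failure
print(raid_usable("5+0", 4, 300))  # 1800 GB across two controllers
```

Note how RAID 5 and 5+0 give up one drive's capacity per set in exchange for surviving a single-drive failure.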
Either way, HD performance requires a stand-alone controller card (potentially several for
throughput). The RAID controllers on many motherboards aren’t up to the task of 1080 un-
compressed video yet. For the purposes of reliably handling video, it is not recommended to use
software RAID controllers. Hardware RAID controllers for video work often contain onboard caches that are essential for efficient management of the data transfers.
HD and connectivity
RAID systems work well for storing files locally. But when the work needs to be split among
multiple workstations, for example, between an Adobe Premiere Pro editor and an After Effects
compositor, you need a system that can handle high bandwidth and high speed.
Ethernet speeds vary widely. For HD work, use Gigabit Ethernet, which can typically sustain several hundred Mbps with good cables and a fast switch.
You can use one of two dominant protocols to capture HD content. The high-end formats use
High Definition Serial Digital Interface (HDSDI), and HDV uses FireWire.
The drawback when using HDSDI is that HDSDI is always uncompressed. Consequently, the
bit rate of a relatively compressed format, like HDCAM or DV100, balloons. Ideally, all vendors
would provide a way for you to capture their compressed bitstreams and natively edit them in-
stead of forcing a conversion to uncompressed. This conversion increases storage requirements
and requires another lossy generation of compression every time the signal goes back to tape.
In addition, HDSDI doesn’t directly support device control; it requires traditional serial device
control.
You can embed audio in the HDSDI bitstream; however, not all capture cards support embed-
ded audio. It’s more common to use an Audio Engineering Society/European Broadcast Union
(AES/EBU) connection for transporting audio.
The disadvantage of using AES/EBU instead of embedded HDSDI audio is that using AES/EBU
requires more cables.
Currently, the only major HD format supporting native FireWire transport is HDV, which tops out at 25 Mbps. However, the compressed bitstreams of many of the professional formats could easily fit in the original 400 Mbps FireWire, and all formats could fit in 800 Mbps. Hopefully, vendors will rise to the challenge and provide a professional HD workflow with the ease of use of DV.
The 720 formats are all progressive, but 1080 has a mixture of progressive and interlaced frame
types. Computers and computer monitors are inherently progressive, but television broadcast-
ing is based on interlaced techniques and standards. For computer playback, progressive offers
faster decoding and better compression than interlaced and should be used if possible.
Most HD authoring formats are 10 bit, but most HD delivery formats are 8 bit. Converting
from one to another can be tricky. Some tools might truncate the least significant 2 bits. This
process normally works fine, but in content with very smooth gradients, some banding might
occur. Some capture cards can dither on-the-fly to 8 bit while capturing, which reduces storage
requirements and provides a better dither than many software exporters.
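The difference between truncation and dithering can be sketched as follows (illustrative Python; real capture cards use more sophisticated error-diffusion dithers):

```python
import random

def truncate_10_to_8(v10):
    """Drop the two least significant bits, as some tools do."""
    return v10 >> 2

def dither_10_to_8(v10, rng=random):
    """Add sub-LSB noise before quantizing, trading banding for grain."""
    return min(255, max(0, round((v10 + rng.uniform(-2, 2)) / 4)))

# On a smooth 10-bit gradient, truncation maps four adjacent input
# levels to each output level, which can appear as visible bands:
grad = list(range(512, 528))
print([truncate_10_to_8(v) for v in grad])
```

Dithering breaks up those flat runs with noise, which the eye tolerates far better than stair-stepped gradients.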
Audio for HD is normally mastered in a multichannel format, such as 5.1 surround sound (five
speakers plus a subwoofer) or 7.1 surround sound (seven speakers plus a subwoofer). Most HD
tape formats support at least four channels of audio, and many support eight. When delivering HD content, AC-3 can support 5.1 audio at 48 kHz, and Windows Media Audio 9 Professional technology can deliver 5.1 and 7.1 at 96 kHz.
• Canopus (www.canopus.com)
• CineForm (www.cineform.com)
• Matrox (www.matrox.com/video/home.cfm)
Post-production for HD content
After you’ve acquired the content, post-production begins. Post-production is the domain of
Adobe Premiere Pro for editing and Adobe After Effects for compositing and finishing.
Post-production for HD is similar to that for SD. One difference is that with HD you’re dealing
with significantly more data and consequently increasing the load on the CPU and video card.
But if you worked in Adobe Premiere Pro and After Effects for SD, you can use the same tools
and enjoy the same workflow in HD today.
Choosing an HD monitor
HD video monitors can be quite expensive, especially models with HDSDI input. Someone authoring video for SD broadcast always uses a professional-grade video monitor; similarly, anyone making HD for broadcast should have a true HD video monitor.
For some tasks, a computer monitor is a less expensive way to view the actual HD output, espe-
cially for progressive projects. Computer monitors use a different color temperature from true
video monitors, so it’s beneficial to use a video monitor to ensure that you see the true repre-
sentation of the video as it will be delivered. A number of vendors provide HDSDI to Digital
Video Interface (DVI) converters to drive computer monitors. Because high-quality 1920 x 1200
LCD monitors are now available for around U.S. $2000, this converter is cost effective. Because
computer monitors are natively progressive, these converters are not a good solution for natively
interlaced projects in Adobe Premiere Pro (the preview in After Effects is progressive, so this
isn’t an issue for that application).
Another advantage for using broadcast monitors is calibration. These monitors provide features
that easily set them to the canonical 709 color space of HD. While this setting is possible with a
computer monitor as well, correctly setting the calibration can be difficult. Even systems with a colorimeter for automatic color configuration don't always include 709 presets (although this should change rapidly).
Adobe Premiere Pro, with the proper hardware, can also directly send previews out to a video
or computer monitor. Sending previews directly is the optimum way to work if hardware that
supports this feature is available.
HD can be slow!
Moore’s Law has many formulations, one of which is: a computer at a given price point doubles
in performance every 18 months or so. Because 1080 60i is 6.5 times as many pixels as 480 60i,
rendering time will be similar to how it was four to five years ago, all things being equal. HD is
much slower than SD, but on a modern computer, it’s about as fast as a turn of the millennium
system.
The good news is that modern tools give us a number of techniques to get around these prob-
lems.
High-quality rendering is especially important for content that is going to be compressed for
final delivery. When you use lower-quality rendering modes, the errors introduced in the video
are difficult to compress and result in more artifacts in the output.
Using 16-bit rendering in After Effects
16-bit per channel rendering was introduced in the After Effects Production Bundle and is
now available in both After Effects 6.0 Standard and After Effects 6.0 Pro. The 16-bit rendering
process doubles the bits of each channel, which increases the precision of video rendering by 256
times. That extra precision is quite useful, especially in reducing the appearance of banding in
an image with subtle gradients. The drawback of 16-bit rendering is that it takes twice the time and twice the RAM of 8-bit rendering.
Rendering in 16-bit isn’t necessary when encoding to an 8-bit delivery format like Windows
Media or MPEG-2. It does provide a very welcome quality boost when going back out to 10-bit
formats like D5.
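A toy illustration of why the extra precision matters: darkening an image and restoring it destroys far more tonal levels in 8-bit integer math than in 16-bit math. The helper below is a simplified model, not After Effects' actual pipeline:

```python
def round_trip(levels, scale=0.1):
    """Darken an 8-bit ramp to 10% and restore it, doing the integer
    math at the given working bit depth; return surviving 8-bit levels."""
    out = set()
    for v in range(256):                       # 8-bit source gradient
        x = round(v / 255 * (levels - 1))      # promote to working depth
        x = round(x * scale)                   # darken
        x = min(levels - 1, round(x / scale))  # restore brightness
        out.add(round(x / (levels - 1) * 255)) # back to 8-bit delivery
    return len(out)

print("levels surviving in 8-bit: ", round_trip(256))    # heavy banding
print("levels surviving in 16-bit:", round_trip(65536))  # smooth
```

In the 8-bit case, only a few dozen distinct levels survive the round trip, producing visible banding; at 16 bits, essentially the full gradient is preserved.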
If uncompressed video streams are not a firm requirement, the use of advanced compression
techniques in MPEG-2 and Wavelet technologies can produce visually lossless media at data
rates that allow various levels of real-time editing, including multiple streams, effects, filters and
graphics. Both hardware- and software-based solutions are available to configure these real-
time HD solutions.
Working offline in SD
Due to the intense storage and rendering requirements of HD, it is a good idea to capture and
edit a rough cut in SD, and then conform to HD. Adobe Premiere Pro makes this process quite
easy. When you use this technique, the standard real-time effects of Adobe Premiere Pro are
available when you’re editing the offline version. This allows work to proceed quickly with real
time visual feedback.
Adobe Premiere Pro 1.5 includes a project trimmer that you can use to edit in a lower-resolution
format, such as SD or DV, using any assets with little concern for storage space. You can then
save the project offline including the critical data about the assets, edit points, effects, and filters
used. Then you can recapture only the HD footage that is necessary to reassemble the project for
quick finishing and delivery.
In After Effects, using low-resolution compositions while roughing out a piece provides similar
advantages in speed. You can apply the finished low-resolution work to the identical media in
HD.
You can set up a four-node render farm for less than the cost of a single high-end HD capture
card, and it can pay off enormously in a more efficient workflow.
Choosing progressive or interlaced output
Unlike broadcast TV, which is natively interlaced, HD is natively progressive in many cases. This
was a contentious issue during ATSC’s development because the computer industry was lob-
bying very strongly to drop interlaced entirely from the specification, and the traditional video
engineers were fighting hard for interlaced to remain. In the end, the 720 frame sizes are always
progressive, and the 1080 frame sizes can be either progressive or interlaced.
It is always best to perform post-production in the same format that will be used for delivery. Consistency in formats prevents the need for transcoding, which can add artifacts and require long rendering sessions.
If you’re creating content to be used in a variety of environments, working in 24p is a great uni-
versal mastering format. The 24p format easily converts to both NTSC and PAL, as well as back
to film. And its lower frame rate reduces hardware requirements.
The good news is that these differences are transparent in most cases. For example, when After
Effects converts from its internal RGB to Y'CbCr in a video codec, the codec internally handles
the transformation. Therefore, exporting from the same After Effects project to an SD and an
HD output should work fine.
Complexity is added when you work in a native color space tool, such as Adobe Premiere Pro.
If you capture, edit, and deliver in the same color space (for example, 709), color accuracy and
fidelity are maintained. The native filters in Adobe Premiere Pro assume that the 601 color space
is used, but this assumption rarely has much effect on the math used in video processing. Still,
it’s a good idea to use a 709-calibrated monitor for previews when using Adobe Premiere Pro.
SD DVD-video distribution
Most HD content is distributed in SD at some point in its lifecycle. DVD-video is still the de
facto standard for mass-distribution of video content, so it’s important to understand how you
can achieve this.
The MPEG-2 engine in both Adobe Premiere Pro and After Effects can output DVD-compliant
MPEG-2 content from HD content. It accomplishes this output by scaling and, in some cases,
some frame-rate trickery.
By using the Adobe Media Encoder, you can specify a target frame size for the video. Before the
HD video frames are compressed to MPEG-2, the encoder scales them down to the target frame
size. Because this scaling occurs at the end of the production process, it can achieve the highest
quality compression. In the case of 23.976 fps content, the Adobe Media Encoder can insert data
in the MPEG-2 stream so that it plays back interlaced data on a standard TV and DVD player
and progressive data on a progressive scan DVD player and TV. This is the same technique that
is used in most Hollywood DVDs, which are produced from a 24-fps film master.
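That frame-rate trickery is the classic 2-3 (telecine) pulldown, which spreads four film frames across ten video fields; one common cadence can be sketched as:

```python
def pulldown_32(frames):
    """Map 24p film frames onto 60i fields using a repeating 2-3 cadence."""
    fields = []
    for i, frame in enumerate(frames):
        copies = 2 if i % 2 == 0 else 3  # alternate 2 fields, then 3
        fields.extend([frame] * copies)
    return fields

# Four film frames become ten video fields (24 fps x 2.5 = 60 fields/s):
print(pulldown_32(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```

A progressive-scan player simply ignores the repeat-field flags and displays the original 24 frames directly.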
After the MPEG-2 content is created, you can output to DVD by using Adobe Encore® DVD
software. You can use the video in the DVD-authoring application in the same way as any other
video.
The MPEG-2 engine built into both Adobe Premiere Pro and After Effects is capable of making DVHS-compatible MPEG-2 bitstreams. The CineForm software previously described includes
profiles for DVHS authoring and a utility for copying the MPEG-2 data back to the tape.
As mentioned earlier, HDV uses the same bitstream as DVHS, just on a smaller cassette. While
HDV decks aren’t yet available, they should be available on the market soon and might provide
an alternative to DVHS. Still, the physically bigger tape of DVHS should translate into greater
durability, so kiosks and other environments where multiple viewings are intended should stick
with DVHS.
DVD-ROM delivery
Another major method of distributing HD content is on DVD-ROM, be it high-volume com-
mercial titles, like the Windows Media titles released so far, or one-off DVD-Rs. The installed base of computers that can play HD video is far larger than that of any other HD device, and you can build an HD-capable system for an estimated U.S. $1000.
MPEG-2 and Windows Media are the most popular formats for DVD-ROM in use today, and
both can work well. Windows Media format is the first to be used in commercial products, and
it has the compression efficiency to get a 2.5 hour movie on a dual-layer DVD in high quality.
MPEG-2 requires much higher bit rates for equivalent quality.
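The arithmetic behind that capacity claim is straightforward (assuming roughly 8.5 GB of usable dual-layer capacity):

```python
DISC_BYTES = 8.5e9         # dual-layer DVD, roughly 8.5 GB usable
MOVIE_SECONDS = 2.5 * 3600 # a 2.5-hour movie

# Average bit-rate budget for video plus audio:
avg_bps = DISC_BYTES * 8 / MOVIE_SECONDS
print(f"average budget: {avg_bps / 1e6:.1f} Mbps for video + audio")
```

That works out to roughly 7.5 Mbps on average, a rate at which Windows Media can hold HD quality but MPEG-2 generally cannot.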
One drawback to Windows Media is that it doesn’t work at HD frame sizes on Macs. If you
want playback on anything other than a computer that runs Windows, another format, such
as MPEG-2, is appropriate. Note that there are issues with the availability of MPEG-2 playback
software—this topic is discussed later.
Because you can play back the files on a computer, you can build various levels of complexity
into the user interface. However, it’s a good idea to allow users to pick their own player for the
raw files because it is common for users to have their own preferences for players.
For delivery of compressed content, any modern hard drive is more than good enough.
Still, there have been impressive demonstrations of HD streaming over the academic Internet2
network. The fastest consumer connections are only 3 Mbps at best today, but the combination
of improving codecs and increasing bandwidth should make HD streaming to users a feasible
market later this decade.
Broadcast delivery
Last, but not least, you can broadcast HD as ATSC. In most cases, you provide broadcasters with
master tapes of the programs, and then they handle the encoding themselves. However, local
broadcasters may be able to accept programming as MPEG-2 transport streams.
Most HD broadcasting that occurs during the day is in the form of loops of compressed MPEG-
2 content—only prime-time programs are broadcast in HD, in most cases. Enterprising HD pro-
ducers might be able to get some airtime by providing free, high-quality HD MPEG-2 content
for local broadcasters to include as part of their loop.
The 1080 WMV format normally uses a frame size of 1440 x 1080 with a 4:3 horizontal com-
pression, which makes the files somewhat easier to decode and improves compression efficiency.
Currently, very few HD displays can resolve more than 1440 pixels horizontally. WMV encodes 720 content at its full 1280 x 720. Windows Media has no requirement for frame size, so it is preferable to crop out letterbox bars instead of encoding them; these elements waste bits and CPU cycles.
Thus, you might encode a 2.35:1 movie from 1080 source at 1440 x 816.
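The 816 in that figure follows from the display raster: compute the active height of a 2.35:1 picture in a 1920-wide frame and round to a codec-friendly multiple of 16, then store it at the anamorphic 1440 width. A sketch (the mod-16 rounding is a common convention, not a WMV requirement):

```python
def crop_height(width_full, aspect, mod=16):
    """Active-picture height for a display aspect ratio, rounded to a
    multiple of 16 as most codecs prefer."""
    h = width_full / aspect
    return int(round(h / mod)) * mod

# A 2.35:1 film frame inside a 1920-wide raster keeps ~816 active lines;
# stored anamorphically, that becomes a 1440 x 816 encode.
print(crop_height(1920, 2.35))  # 816
```

The same helper gives 720 for a full-height 16:9 frame at 1280 wide, confirming the 720 formats need no cropping.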
Microsoft’s HD efforts go beyond their own products. They’ve submitted the WMV9 codec
specification (as VC-9) to the Society of Motion Picture and Television Engineers (SMPTE), and
they’re one of the three proposed mandatory codecs for HD DVD.
WMV9 codec

[Figure: Audience settings for the Anamorphic preset in the WMV9 codec.]
MPEG-2 format
The standard MPEG-2 used for HD is called Main Profile @ High Level or MP@HL. This mode
uses the standard techniques for MPEG-2 that are used on DVD and other media but with much
higher limits for frame size and data rate.
One drawback to MPEG-2 is that HD compatible decoders don’t ship by default with most
computers. These decoders are commercially available for U.S. $30 or less, but you have to either
distribute the HD MPEG-2 content with a licensed player or ask the end-users to acquire a
player themselves. There are a number of freeware HD-compatible MPEG-2 players. One of the
most successful is the VideoLAN Client (VLC). However, it is unclear if it is legal to distribute
those players on a DVD-ROM without paying an MPEG-2 licensing fee.
The MPEG-2 specification has Layer II as the standard audio codec, but most HD MPEG-2
products, such as ATSC, use the Dolby Digital (AC-3) audio codec. An advantage to using AC-3
is that it natively supports multichannel audio, but most Layer II implementations are stereo
only.
MPEG-2 is another of the three proposed mandatory codecs for HD DVD.
MPEG-4 Part 2 format
The original release of MPEG-4 standard had the video codec defined in Part 2; consequently,
that’s what the original type of MPEG-4 video (as opposed to AVC) is normally called.
MPEG-4 defines the functionality available to a format by means of profiles. The two main profiles available for MPEG-4 HD are Simple and Advanced Simple. Simple uses a smaller
subset of features, which reduces its compression efficiency. Advanced Simple produces smaller
files at the same quality but with higher processor requirements. It isn’t used much for HD.
MPEG-4 HD almost always uses the AAC-LC audio codec. Most implementations are two-
channel, but 5.1 does exist.
The DivX format, which is being touted as an HD format, uses MPEG-4 part 2 video in an AVI
file. While the profiles for lower-frame-size DivX allow Advanced Simple, the HD profiles are
only Simple profile for decoder performance reasons.
Other than DivX, there isn’t much momentum behind MPEG-4 Part 2 HD implementations.
One open question with AVC is how well quality holds up at HD and high data rates. The initial
implementations have some difficulty preserving fine details, like film grain. There aren’t yet any
real-time HD decoders for AVC, software or hardware, so it’s difficult to judge the real-world
quality of mature products. Initial indications suggest that AVC doesn't do as well as WMV9 at HD, largely due to the detail issue, but a definitive answer will have to wait for more mature implementations.
Because CRTs use the analog VGA connector, they can be sensitive to analog noise, especially
at high resolutions like 1920 x 1440. By using short runs of high-quality cable, you can avoid
this problem. Look for cables with ferrites (barrels near each end). Running through a keyboard-video-mouse (KVM) switch or using long runs of cables can result in soft images, ghosting, or other
artifacts. If you need long runs or switching, you can use digital DVI connections and a DVI-
to-VGA converter at the display to keep the image digital as long as possible. The cheap DVI-to-
VGA adaptors won’t help—they simply put the analog VGA signal on the DVI cable instead of
converting the signal into true DVI.
When authoring content for a particular flat panel, you should try to encode content for that
panel’s native resolution to avoid scaling and achieve the crispest image.
When authoring content specifically for playback on a laptop, try to encode at the native resolu-
tion of the screen.
One issue with DVI-based televisions, as opposed to DVI-based computer monitors, is that many have DVI ports that don't properly communicate their supported operating modes. In some cases, directly connecting a computer to an HDTV by means of DVI can confuse the computer and lead it to deactivate its video display altogether. DVI dongles that solve these issues, such as the DVI Detective, are available, although in-the-field success with these devices has yet to be quantified.
If a dongle isn’t an option, you can use a utility like PowerStrip to tightly control all of the ele-
ments of the VGA signal going out to the projector. Using a utility can be a lengthy and frustrat-
ing process and doesn’t always produce the best results. If possible, use a card with DVI output
and a display with proper DVI support.
Currently, the fastest Intel- and AMD-based systems are more than capable of great HD play-
back, and many users report successful playback of Microsoft's 1080p content with as little as an
Athlon XP 2100+. However, given the many other variables in an HD playback system, be liberal
with your requirement estimates.
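To get a rough sense of why HD playback is so much more demanding than SD, compare raw pixel throughput. This is a back-of-the-envelope measure only; actual decoder load also depends heavily on the codec and data rate:

```python
def pixels_per_second(width, height, fps):
    """Raw decoded-pixel throughput for a given video format."""
    return width * height * fps

sd = pixels_per_second(720, 480, 29.97)    # NTSC DV
hd = pixels_per_second(1920, 1080, 29.97)  # 1080 30p

print(f"SD: {sd / 1e6:.1f} Mpixels/s, HD: {hd / 1e6:.1f} Mpixels/s")
print(f"1080 30p pushes {hd / sd:.0f}x the pixel rate of NTSC DV")
```

A 1080-line stream carries six times the pixel rate of NTSC DV at the same frame rate, which is why a system that plays SD comfortably can still choke on HD.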
For laptop applications, note that the GHz rating of Pentium® M computers may understate their
actual performance. Usable 720 playback results have been reported with 1.7 GHz Pentium M
systems. The faster the system, the better the performance experience. If you intend to work
with HD content of any type, get the most powerful system your budget will allow.
Modern laptops, especially those using the ATI Radeon Mobility 9700 chipset, are also capable
of quite good HD performance. A few laptops with DVI outputs are also now entering the mar-
ket.
Playing HD audio
Playing back the audio from an HD source isn't particularly different from audio playback for
SD video; stereo audio files tend to take the same bandwidth wherever they're used. One com-
plication is that HD is often accompanied by multichannel audio, such as 5.1 surround sound.
Multichannel audio is a great feature, but it makes playback more complicated.
There is a massive library of matrix-encoded titles available, and they work on even the oldest
computers. Unfortunately, there is no good way to detect whether content is matrixed short of
playing it through a Pro Logic decoder and listening for directional effects. Plain stereo played
in Pro Logic mode can sound audibly altered, though rarely in a way that annoys the casual
listener.
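The principle behind matrix encoding can be sketched in a few lines. The snippet below folds four channels into a stereo Lt/Rt pair; it omits the 90-degree phase shift a real Dolby Surround encoder applies to the surround channel, so treat it as an illustration of the idea, not a working encoder:

```python
import math

K = 1 / math.sqrt(2)  # -3 dB attenuation applied to center and surround

def matrix_encode(left, right, center, surround):
    """Fold one sample each of L/R/C/S into a stereo Lt/Rt pair.
    Center goes equally into both channels; surround goes in with
    opposite polarity, which is what a Pro Logic decoder later
    recovers as the difference signal Lt - Rt."""
    lt = left + K * center + K * surround
    rt = right + K * center - K * surround
    return lt, rt

# A center-only signal lands identically in both channels...
print(matrix_encode(0.0, 0.0, 1.0, 0.0))
# ...while a surround-only signal lands out of phase between them.
print(matrix_encode(0.0, 0.0, 0.0, 1.0))
```

This also shows why matrixed content is hard to detect automatically: the output is just a stereo pair, and only the phase relationships between the channels carry the directional information.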
Depending on the solution, output is either stereo pairs or one output per channel, which can
be connected to an amplifier or directly to powered speakers. These days, 5.1 speaker setups
for gaming are becoming popular, and although 7.1 isn't too significant for computers yet, solu-
tions do exist at the high end.
The three significant formats for digital audio are the professional AES/EBU, the consumer-
focused (but still used by professionals) SPDIF, and Dolby Digital. All of these formats run over
either TOSLink optical cable or coax. For playback, both AES/EBU and SPDIF offer perfect
multichannel output.
Normally, the AC-3 solutions only work with AC-3 encoded audio tracks. One feature of some of
Nvidia’s Nforce motherboards is that they support real-time transcoding from any multichannel
audio source into AC-3 for easy integration with existing AC-3 decoders. This option is not as
ideal as having multichannel audio out, but it works well in practice.
Conclusion
By understanding the basics of the HD formats and the equipment that supports them, you
can make an informed decision about which HD solution meets your post-production needs,
whether you're a hobbyist or a feature-film editor.
The advances in technology and the growth in the number of manufacturers providing HD so-
lutions continue to drive prices down to the point where choosing to edit in HD is a real option
for video professionals at almost any level.
Adobe Systems Incorporated • 345 Park Avenue, San Jose, CA 95110-2704 USA • www.adobe.com
Adobe, the Adobe logo, Adobe Premiere, Adobe Encore DVD, and After Effects are either registered trademarks or trademarks of Adobe Systems Incorpo-
rated in the United States and/or other countries. Mac and Macintosh are trademarks of Apple Computer, Inc., registered in the United States and other
countries. Intel and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. All other
trademarks are the property of their respective owners.
© 2004 Adobe Systems Incorporated. All rights reserved. Printed in the USA.