
Theory Notes

Unit I Chapter II - Digital Editing & Motion Graphics, T. Y. B. Sc. Animation


THE EVOLUTION OF MODERN NON-LINEAR EDITING
THE DIGITAL REVOLUTION
In the first part we looked at the accomplishments of the video engineers who made it possible to record and edit video signals. Now the story turns from engineers to programmers and computer scientists, as we look at the explosion of digital and how it made filmmaking accessible to practically everyone.
ANALOG VS DIGITAL

What is the difference between analog and digital? To explain, let's imagine an audio recording of a tone. An analog recording would look like the original wave with all the details intact; it's a copy, an analog to the original. A digital recording, on the other hand (pun intended), breaks the wave into chunks called samples, measures the amplitude of the wave at each sample, and stores those measurements in a stream of binary code, a square wave of 0s and 1s. A digital player reconstructs the wave from these measurements.
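To make sampling concrete, here is a minimal Python sketch of digitizing a tone; the 440 Hz frequency, 48 kHz sample rate, and 8-bit depth are illustrative assumptions, not values from these notes:

import numpy as np

# Digitize an "analog" tone: sample it, then quantize each measurement.
sample_rate = 48000                          # samples per second (assumed)
t = np.arange(0, 0.01, 1.0 / sample_rate)    # sample instants over 10 ms
wave = np.sin(2 * np.pi * 440.0 * t)         # the continuous tone, 440 Hz

# Measure the amplitude at each sample and store it as an 8-bit number...
samples = np.round((wave + 1.0) / 2.0 * 255).astype(np.uint8)

# ...then write the measurements out as a stream of 0s and 1s.
bitstream = ''.join(format(int(s), '08b') for s in samples)
print(bitstream[:64])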
Right off the bat you may think that analog is the better of the two formats, and you aren't alone; plenty of people swear that analog audio recordings are the best. But digital comes with some great advantages that analog simply doesn't have.
The first is resistance to noise. Introduce noise into an analog signal and you degrade the signal itself. Digital signals, because they're either 0 or 1 and nothing in between, can withstand some noise without losing any quality at all.
Digital is also easier to copy: there is no generation loss, whereas analog loses a bit of quality every time it's copied, like a game of telephone. Digital signals can also be synced up and read by computers, which analog signals can't. And, very importantly for video, patterns can be found in the sequence of 1s and 0s, so digital can be compressed, and that is key to making video as ubiquitous as it is today.
THE FIRST DIGITAL TAPES
By the late 1970s and into the 80s, electronics manufacturers were experimenting with digital recording. The first commercially available digital video tape format arrived in 1986 with the Sony D1 video tape recorder. This machine recorded an uncompressed standard-definition component signal onto tape at a whopping 173 million bits per second. That's a lot of zeros and ones in a single second!
In comparison, a typical HD web video streams at a bit rate of only about 5 million bits per second.
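As a rough sanity check on that 173 million figure, here is the arithmetic under assumed ITU-R BT.601 component sampling (13.5 MHz luma, two 6.75 MHz chroma channels, 8 bits per sample); the active-picture fraction below is an approximation, since exact line counts vary by source:

# Gross data rate of uncompressed SD component video (assumed BT.601 figures).
luma_rate = 13.5e6      # luma samples per second
chroma_rate = 6.75e6    # samples per second for each of Cb and Cr
bits = 8                # bits per sample on D1

gross = (luma_rate + 2 * chroma_rate) * bits    # 216 Mbit/s in total
active = gross * (720 / 858) * (500 / 525)      # keep only the active picture
print(f"{gross / 1e6:.0f} Mbit/s gross, roughly {active / 1e6:.0f} Mbit/s active")

That lands at roughly 173 Mbit/s, right where the quoted D1 figure sits.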
The D1 was expensive and only large networks could afford it, but it soon proved its worth as a rugged format. The Sony D1 was challenged by Ampex with D2 in 1988 and by Panasonic with D3 in 1991. Sony's follow-up to the D1 was the Digital Betacam format in 1993. DigiBeta was cheaper than D1 and used tapes similar to Betacam SP, a standard television-industry tape at the time. It recorded component digital video and used roughly 2:1 Discrete Cosine Transform video compression to get the bit rate down to 90 million bits per second.
CHROMA SUBSAMPLING
Before we dive too deeply into how data is compressed, let's talk about chroma subsampling, a type of compression that was used even on the "uncompressed" D1 digital video recorder.
The human eye is made up of light-sensitive cells called rods and cones. Rods are sensitive to changes in brightness only and provide images to the brain in black and white. Cones are sensitive to either red, green, or blue and provide our brains with the information to see in color. But we have a lot more rods in our eyes than cones: 120 million rods to only 6 million cones.
Because of this we're more responsive to changes in brightness, which means you can take an image and throw away some of the color information while keeping the brightness, and it will still look about as crisp and bright as fully colored video.
So to compress color images, first we have to pull the brightness information out of the signal. Video is made of the primary colors red, green, and blue, but storing signals as RGB leads to a lot of redundancy. So the RGB signal is converted to what's called a YCbCr color space: Y stands for luma, or brightness, Cb is the difference in the blue channel, and Cr is the difference in the red channel.
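As a point of reference, here is a minimal Python sketch of that RGB-to-YCbCr idea using the standard BT.601 luma weights; real video pipelines also apply offsets and range scaling that this sketch omits:

# Convert one RGB pixel to YCbCr (BT.601 weights; offsets/ranges omitted).
def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b    # luma: weighted brightness
    cb = (b - y) * 0.564                     # blue-difference channel
    cr = (r - y) * 0.713                     # red-difference channel
    return y, cb, cr

print(rgb_to_ycbcr(255, 128, 0))             # a bright orange pixel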
[Figure: a 3D model of the YCbCr color space]
Now, by separating the color from the brightness, we can start to compress the color information by reducing the resolution of the Cb and Cr channels.
The amount of subsampling, that is, how much we reduce the color resolution, is expressed as a ratio J:a:b, where
J is the number of horizontal pixels in the sample region, usually 4,
a is the number of Cb and Cr samples in the first row of those J pixels, and
b is the number of Cb and Cr samples that change between the first and second rows.
Let's illustrate what this means.
A 4:4:4 signal is said to have NO chroma subsampling. There are 4 pixels in our sample, so four pixels of Y. Each of those 4 pixels has its own Cb and Cr values, so 4 Cb and Cr samples, and in the next line there are 4 more Cb and Cr samples.
Now let's start subsampling.
In a 4:2:2 subsample we again have 4 pixels in the sample, four pixels of Y; we never throw away the Y value. But now we have only 2 samples of Cb and Cr, so pairs of pixels share the same values, and in the next line we again have 2 samples of Cb and Cr.
The information needed to construct a 4:2:2 image is a third smaller than 4:4:4 and is considered good enough for most professional uses.
Another common one is 4:1:1: 4 pixels in the sample, and this time only 1 sample of Cb and Cr in the row, and one on the following line.
Here's 4:2:0: four in the sample, 2 chroma samples in the sample row, and zero in the next line; essentially the 2 get carried over to the next line.
Both 4:1:1 and 4:2:0 need half as much data as 4:4:4.
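Here is a minimal Python sketch of the 4:2:0 case on a hypothetical 4x4 image: every Y value is kept, while each 2x2 block of pixels shares one Cb and one Cr value:

import numpy as np

h, w = 4, 4
Y = np.random.rand(h, w)     # full-resolution luma: never thrown away
Cb = np.random.rand(h, w)    # full-resolution chroma, about to be reduced
Cr = np.random.rand(h, w)

# Average each 2x2 block of chroma down to a single sample.
Cb_sub = Cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
Cr_sub = Cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# 16 Y + 4 Cb + 4 Cr = 24 samples, versus 48 for 4:4:4 -- half the data.
print(Y.size + Cb_sub.size + Cr_sub.size, "samples vs", 3 * Y.size)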
THE COMPRESSION REVOLUTION
Chroma subsampling is a good start, but we have ways to make the video data even smaller. One of the most important is the Discrete Cosine Transform. DCT is a seriously brilliant mathematical achievement; in essence, it approximates a signal, even a square wave, as a weighted sum of different cosine waves.


[Figure: creating a square wave out of cosine waves]


The mathematics is nothing short of amazing and well beyond the scope of these notes. In the simplest terms, the more cosine waves you use, the more accurately you can describe the original square wave of the binary code. And since binary is resistant to noise, you don't need that many waves to get a pretty accurate result.
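The figure's idea is easy to reproduce. This Python sketch sums odd-harmonic cosine waves (the standard Fourier construction, with an illustrative 10 terms) and watches the result settle toward a square wave:

import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)
approx = np.zeros_like(x)
for k in range(10):                          # more terms = better approximation
    n = 2 * k + 1                            # only odd harmonics contribute
    approx += (-1) ** k * np.cos(n * x) / n
approx *= 4 / np.pi

# The sum hovers near +1 or -1 almost everywhere, like a square wave.
print(np.round(approx[::100], 2))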
The first compression widely used for editing video was Motion JPEG, in the early 90s. Motion JPEG is an intraframe compression: it uses the DCT to break individual frames down into macroblocks. It basically looks at the frame, finds chunks of the image that are similar, and simplifies them. Now, it didn't look that great; the first Avid editing systems in the early 90s used an early form of Motion JPEG compression and the quality was about that of VHS tape. But since the compression was done frame by frame, the codec wasn't too taxing for the computer hardware of the time, and it was good enough for offline editing.
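As a sketch of what intraframe compression does to one macroblock, this Python snippet (using SciPy's DCT, with an arbitrary threshold standing in for a real quantization table) transforms an 8x8 block, drops the small coefficients, and rebuilds a close approximation:

import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):
    return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(b):
    return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

block = np.random.rand(8, 8) * 255       # a hypothetical 8x8 image block
coeffs = dct2(block)
coeffs[np.abs(coeffs) < 20] = 0          # crude quantization: drop fine detail
print(np.count_nonzero(coeffs), "of 64 coefficients kept")
approx = idct2(coeffs)                   # a close, cheaper-to-store block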
Major breakthroughs came in 1995 with two important technological releases.
On the distribution side, 1995 saw the introduction of DVD optical discs. These discs used a new kind of compression called MPEG-2 (not to be confused with Motion JPEG). MPEG-2 was developed by the Moving Picture Experts Group, who had a rather novel approach to handling compression. Instead of standardizing the way video signals were encoded, they standardized the way video was decoded from a digital stream. The way MPEG-2 was decoded stayed the same no matter where it was done: on a DVD player, on your computer, or even on a modern-day DVR.
How that digital stream was encoded, that is, what algorithms were used to compress the original data, was left open, so that media companies could fight it out and develop ever more efficient encoders.
MPEG-2 uses interframe compression. Unlike intraframe compression, which compresses frames individually, interframe compression puts frames into GOPs, groups of pictures. A GOP starts with an I-frame, a reference frame containing a full image. The encoder then waits a few frames and records a P-frame, a predictive frame, which contains only the information that is DIFFERENT from the I-frame. Then the encoder goes back, calculates the differences between the I- and P-frames, and records them as B-frames, bidirectional predictive frames.
Describing the process almost sounds like magic. Building frames from reference frames was rather computationally taxing, and it would take a while before computers had the processing power to edit this type of compression.
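A toy Python sketch of the GOP idea, on a hypothetical three-frame "video": the I-frame is stored whole, the P-frame as a difference from it, and the B-frame as a correction to a guess made from the frames on both sides:

import numpy as np

frames = [np.random.randint(0, 256, (4, 4)) for _ in range(3)]

i_frame = frames[0]              # I-frame: a full reference image
p_delta = frames[2] - i_frame    # P-frame: only what differs from the I-frame

b_guess = (i_frame + frames[2]) // 2   # B-frame: predict from both neighbors...
b_residual = frames[1] - b_guess       # ...and store only the prediction error

# Decoder side: every frame is rebuilt exactly from the reference plus deltas.
assert (i_frame + p_delta == frames[2]).all()
assert (b_guess + b_residual == frames[1]).all()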
But in 1995 they didn't have to, because that was the same year the DV format was introduced. Intended as a consumer-grade video tape format, DV recorded video with 4:1:1 chroma subsampling and intraframe DCT compression, giving 25 million bits per second, quite an improvement in size over the original D1.
DV wasn't considered a professional-quality standard, but it was a huge step up from consumer analog formats like VHS or Hi8. And all DV cameras had IEEE 1394 (FireWire) connections, which meant people could get a perfect digital copy of their video onto their computer without needing specialized hardware to encode the file. The tapes themselves were extremely cheap, $3-5 per hour.
Armed with relatively inexpensive cameras, digital video production began to take off.
PRESSURE FROM BELOW
In Hollywood during the 90s, Avid was the king of nonlinear editing systems, but it was still a fairly expensive system, and several companies competed for a share of the video production market. Beginning in 1990, NewTek released the first Video Toaster on the Amiga platform. Though it was technically more of a video switcher, with only limited linear editing capabilities until the Flyer was added, the Video Toaster brought video production to lots of small television studios, production shops, and schools. Costing only a few thousand dollars but loaded with effects, a character generator, and even a 3D package called LightWave 3D, the Video Toaster proved there was a market for small-scale media production.
As computers kept getting more powerful and storage cheaper and cheaper, software-based nonlinear editors like Adobe Premiere and Media 100 kept nipping at the heels of Avid, forcing the company to constantly lower the price of its system.
A media company called Macromedia wanted in on the game. They hired the lead developer of Adobe Premiere, Randy Ubillos, to create a program called KeyGrip, based on Apple's QuickTime codecs. The product was fairly well developed when Macromedia realized it would be unable to release it, as it interfered with licensing agreements with its partners Truevision and Microsoft. So Macromedia sought a company to buy the product it had developed, and found one at a private demonstration at NAB in 1998. The buyer's name was Steve Jobs, and his company, Apple, released the software the following year as Final Cut Pro.
HIGH-DEFINITION VIDEO AND FILM SCANNING
The divide between television/video production and film production began to close with the adoption of high-definition video production. Engineering commissions had been working on the standardization of high-definition video since the 70s, and experimental HD broadcasts were being conducted in Japan by the late 80s. The first public HD broadcast in the United States occurred on July 23, 1996.
DIGITAL INTERMEDIATE
Around this same time, the mid-to-late 90s, Hollywood studios were beginning to use DIs, or digital intermediates, to create special effects. A DI was created by sending 35mm celluloid film through a telecine, which scanned the film to create digital files. These could be manipulated and composited in the computer, and when the artists were satisfied, the final shot was sent to an optical printer which put the digital images back on film. Hence the term intermediate.
In 1992, visual effects supervisor/producer Chris Woods overcame several technological barriers with telecines to create the visual effects for 1993's release of Super Mario Bros. Over 700 visual effects plates were created at 2K resolution, that's roughly 2,000 pixels across.
Chris Watts further revolutionized the DI process with 1998's Pleasantville. Pleasantville held the record for the most visual effects shots in a single film, as almost every shot after the characters visit the fictional idyllic 1950s town of Pleasantville required some kind of color effect.
That title would not last long, as it was overtaken by Star Wars Episode I in 1999.

The first Hollywood film to utilize the DI process for the ENTIRE length of the movie was the Coen brothers' O Brother, Where Art Thou? in 2000. After trying bleach processes but never quite getting the right look, cinematographer Roger Deakins suggested doing it digitally with a DI. He spent 11 weeks pushing the color of the scanned 2K DI, fine-tuning the look of the old-timey American South.
The thing is, HD video and 2K film scans share roughly the same resolution: HD is 1920×1080 whereas 2K is 2048×1080. So it wasn't long before Hollywood started asking: can we just skip the whole 35mm film step altogether?
The first major motion picture shot entirely on digital was Star Wars Episode II: Attack of the Clones, on a pre-production Sony HDW-F900.
And by the latter half of the 2000s, with faster computers and storage, better cameras, and even 4K resolution, it became conceivable to capture straight onto a digital format, edit online (which means working with the original full-quality files rather than a low-quality working file), and even project digital files, all without celluloid film.
Moving into the second decade of the 21st century, we're adding even faster computer and video processors, incredibly efficient compression techniques like MPEG-4 and H.265, and a powerful network of data distribution with broadband internet.
The journey to modern-day film/video editing traces all the way back to TV networks needing to delay the broadcast of their shows. Everything we have now is built on the sparks of genius that electronics engineers, software engineers, and mathematicians have had over the past 60 years, coming up with incredibly brilliant solutions to problems that hounded electronics from the start. Each step, each advancement, added more and more tools for us filmmakers to realize our dreams. How can you look at the momentum of history, at how we got here, and not wonder in awe that so much has changed in so little time, and all so we can just tell stories to each other? Filmmaking is the technological fulfillment of our most basic human need: the need to communicate. So go out there and communicate! Use these tools that are available. Be part of the next chapter in filmmaking history.

July 5, 2016/ Notes by Prof. Pravin Karle
