3 or 4 Point Edit in Premiere Pro
The Source and Program Monitors provide controls to perform three-point and four-point edits—standard
techniques in traditional video editing.
In a three-point edit, you mark either two In points and one Out point, or two Out points and one In point.
You don’t have to actively set the fourth point; it’s inferred by the other three. For example, in a typical
three-point edit you would specify the starting and ending frames of the source clip (the source In and Out
points), and when you want the clip to begin in the sequence (the sequence In point). Where the clip ends
in the sequence—the unspecified sequence Out point—is automatically determined by the three points
you defined. However, any combination of three points accomplishes an edit. For example, sometimes
the point where a clip ends in a sequence is more critical than where it begins. In this case, the three
points include source In and Out points, and a sequence Out point. On the other hand, if you need the clip
to begin and end at particular points in the sequence—say, perfectly over a line of voice-over narration—
you could set two points in the sequence, and only one point in the source.
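The arithmetic behind the inferred fourth point is straightforward: the duration marked on one side fixes the unmarked point on the other. The following Python sketch illustrates that rule with hypothetical frame numbers and an invented function name; it is only an illustration of the idea, not Premiere Pro's actual behaviour or API.

    # Illustrative sketch of how the unspecified fourth point in a three-point
    # edit can be inferred. Frame numbers and the function name are hypothetical.
    def infer_fourth_point(src_in=None, src_out=None, seq_in=None, seq_out=None):
        """Given exactly three of the four points, return all four."""
        points = [src_in, src_out, seq_in, seq_out]
        if sum(p is not None for p in points) != 3:
            raise ValueError("a three-point edit needs exactly three points")
        if src_out is None:        # source Out inferred from the sequence duration
            src_out = src_in + (seq_out - seq_in)
        elif src_in is None:       # source In inferred from the sequence duration
            src_in = src_out - (seq_out - seq_in)
        elif seq_out is None:      # sequence Out inferred (the most common case)
            seq_out = seq_in + (src_out - src_in)
        else:                      # sequence In inferred
            seq_in = seq_out - (src_out - src_in)
        return src_in, src_out, seq_in, seq_out

    # Typical case: source In/Out plus a sequence In point.
    print(infer_fourth_point(src_in=100, src_out=220, seq_in=1500))
    # -> (100, 220, 1500, 1620): the clip ends 120 frames after the sequence In.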
In a four-point edit, you mark source In and Out points and sequence In and Out points. A four-point edit
is useful when the starting and ending frames in both the source clip and sequence are critical. If the
marked source and sequence durations are different, Premiere Pro alerts you to the discrepancy and
provides alternatives to resolve it.
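One common way to resolve the mismatch is a fit-to-fill speed change, in which the clip's playback speed is scaled so the marked source material exactly fills the marked sequence gap; other typical options simply ignore one of the four points. The Python sketch below shows only the speed calculation, under hypothetical frame numbers and an invented function name, and is not a description of Premiere Pro's dialog.

    # Hypothetical sketch of a fit-to-fill resolution for a four-point edit:
    # the clip speed is scaled so the marked source duration fills the marked
    # sequence duration exactly.
    def fit_to_fill_speed(src_in, src_out, seq_in, seq_out):
        src_duration = src_out - src_in
        seq_duration = seq_out - seq_in
        # Above 100% the source plays faster to fit a shorter gap;
        # below 100% it is slowed down to fill a longer one.
        return 100.0 * src_duration / seq_duration

    # 150 source frames squeezed into a 120-frame gap -> 125.0 (per cent).
    print(fit_to_fill_speed(0, 150, 1000, 1120))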
Linear Editing
Linear video editing is the process of selecting, arranging and modifying the images and sound
recorded on videotape, whether captured by a video camera, generated by a computer graphics program
or recorded in a studio. Until the advent of computer-based non-linear editing in the early 1990s, "linear
video editing" was simply called "video editing."
The first widely accepted videotape in the United States was two inches wide and travelled at 15 inches
per second. To gain enough head-to-tape speed, four video recording and playback heads were spun on
a head wheel across most of the two-inch width of the tape. (Audio and synchronization tracks were
recorded along the sides of the tape with stationary heads.) This system was known as Quad, for
quadruplex recording.
Originally videotape was edited by physically cutting and splicing the tape, in a manner similar to film
editing. This was an arduous process and was avoided where possible. When it was used, the two pieces of
tape to be joined were painted with a solution of extremely fine iron filings suspended in carbon
tetrachloride. This 'developed' the magnetic tracks, making them visible when viewed through
a microscope so that they could be aligned in a splicer designed for this task. The tracks had to be cut
during a vertical retrace, without disturbing the odd-field/even-field ordering. The cut also had to be at the
same angle that the video tracks were laid down on the tape. Also, since the video and audio read heads
were several inches apart, it was not possible to make a physical edit that would appear correct in both
video and audio. The cut was made for video, and a portion of the audio was then re-copied into the correct
relationship (the same technique used for editing 16mm film with a combined magnetic audio track).
While computer-based non-linear editing has been adopted throughout most of the commercial, film,
industrial and consumer video industries, linear videotape editing is still commonplace in television
station newsrooms and in medium-sized production facilities which haven't made the capital investment in
newer technologies. News departments often still use linear editing because they can start cutting tape
and feeds from the field as soon as they are received, since no additional time is spent capturing material
as is necessary in non-linear editing systems; systems that can digitally record and edit simultaneously
have only recently become affordable for small operations.
When helical scan video recorders became the standard, it was no longer possible to physically cut the
tape. At this point video editing became a process of using two video tape machines, playing back the
source tape (or raw footage) from one machine and copying just the portions desired onto a second tape
(the edit master).
The bulk of linear editing is done simply, with two machines and a device to control them. Many video
tape machines are capable of controlling a second machine, eliminating the need for an external editing
control device.
This process is 'linear', rather than non-linear editing, as the nature of the tape-to-tape copying requires
that all shots be laid out in the final edited order. Once a shot is on tape, nothing can be placed ahead of it
without overwriting whatever is there already. If absolutely necessary, material can be inserted by copying
the edited content onto another tape; however, as each copy introduces a generation of image
degradation, this is not desirable.
One drawback of early video editing techniques was that it was impractical to produce a rough cut for
presentation to an executive producer. Since executive producers are never familiar enough with the
material to be able to visualise the finished product from inspection of a decision list, they were deprived
of the opportunity to voice their opinions at a time when those opinions could be easily acted upon. Thus,
particularly in documentary television, video was resisted for quite a long time.
Non-linear editing
Non-linear editing for films and television postproduction is a modern editing method which enables
direct access to any frame in a digital video clip, without needing to play or scrub/shuttle through adjacent
footage to reach it, as was necessary with historical videotape editing systems. This method is similar in
concept to the "cut and paste" technique used in film editing. With the adoption of non-linear editing
systems, the destructive act of cutting film negatives is eliminated. Non-linear, non-destructive methods
began to appear with the introduction of digital video technology. It can also be viewed as the audio/video
equivalent of word processing, which is why it is called desktop editing in the consumer space.
Video and audio data are first captured to hard disks or other digital storage devices. The data is either
recorded directly to the storage device or is imported from another source. Once imported, the files can be
edited on a computer using any of a wide range of software. For a comprehensive list of available
software, see List of video editing software; Comparison of video editing software gives more detail
on features and functionality.
In non-linear editing, the original source files are not lost or modified during editing. Professional editing
software records the decisions of the editor in an edit decision list (EDL) which can be interchanged with
other editing tools. Many generations and variations of the original source files can exist without needing
to store many different copies, allowing for very flexible editing. It also makes it easy to change cuts and
undo previous decisions simply by editing the edit decision list (without having to have the actual film data
duplicated). Loss of quality is also avoided due to not having to repeatedly re-encode the data when
different effects are applied.
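The non-destructive model is easy to picture as a data structure: the source media stays untouched on disk, and the project is just an ordered list of references into it. The Python sketch below is a deliberately minimal, hypothetical stand-in for an edit decision list; real EDL formats and project files carry far more information (reel names, timecodes, transitions and so on).

    # Minimal, hypothetical model of non-destructive editing: the "edit" is just
    # a list of events pointing into untouched source files. Changing a cut means
    # changing the list, never the media.
    from dataclasses import dataclass

    @dataclass
    class EditEvent:
        source_file: str   # path to the untouched source clip
        src_in: int        # first frame used from the source
        src_out: int       # frame after the last frame used

    edl = [
        EditEvent("interview_a.mov", src_in=240, src_out=480),
        EditEvent("cutaway_03.mov", src_in=0, src_out=96),
        EditEvent("interview_a.mov", src_in=900, src_out=1100),
    ]

    # Undoing or changing a decision edits the list only; the files on disk are
    # never rewritten, so any number of versions can coexist cheaply.
    edl[1] = EditEvent("cutaway_07.mov", src_in=12, src_out=108)

    total_frames = sum(e.src_out - e.src_in for e in edl)
    print(f"{len(edl)} events, {total_frames} frames in the cut")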
Compared to the linear method of tape-to-tape editing, non-linear editing offers the flexibility of film
editing, with random access and easy project organization. With the edit decision lists, the editor can work
on low-resolution copies of the video. This makes it possible to edit both standard-definition and
high-definition broadcast-quality material very quickly on ordinary PCs that do not have the power to
process the huge full-quality, high-resolution data in real time.
Another leap came in the late 1990s with the launch of DV-based video formats for consumer and
professional use. With DV came IEEE 1394 (FireWire/i.LINK), a simple and inexpensive way of getting
video into and out of computers. The video no longer had to be converted from an analog signal to digital
data, since it was recorded as digital to start with, and FireWire offered a straightforward way of transferring
that data without the need for additional hardware or compression. With this innovation, editing became a
more realistic proposition for standard computers with software-only packages. It enabled true desktop
editing, producing high-quality results at a fraction of the cost of other systems.
More recently the introduction of highly compressed HD formats such as HDV has continued this trend,
making it possible to edit HD material on a standard computer running a software-only editing application.
Avid is still considered the industry standard, with the majority of major feature films, television programs,
and commercials created with its NLE systems. Final Cut Pro continues to develop a strong following, and
the software received a Technology & Engineering Emmy Award in 2002.[3]
Avid has held on to its market-leading position, but faces growing competition from other, cheaper
software packages, notably Adobe Premiere, released in 1992, and later Final Cut Pro, released in 1999.
These three competing products from Avid, Adobe, and Apple are the foremost NLEs, often referred to as
the A-Team.[4]
With advances in raw computer processing power, a number of new NLE software solutions have
appeared on the market. An example of this is NewTek's software application SpeedEdit. Billed as the
world's fastest video editing program, this application is an example of the continual streamlining and
refinement of video editing software interfaces.
Since 2000 many personal computers have come with basic nonlinear editing systems free of charge, as
in the case of Apple iMovie for the Macintosh platform and Kdenlive, Openshot and Cinelerra on
the Linux platform, and Windows Movie Maker for the Windows platform. This has brought non-linear editing
to consumers.
Non-linear editing was not welcomed by everyone and many editors resisted the new wave. In addition,
early digital video was plagued with performance issues and uncertainty. However, the advantages of
non-linear video eventually became so overwhelming that they could not be ignored.
In the 21st century, non-linear editing is king and linear editing is widely considered to be obsolete, or at
least primitive. This is an understandable attitude considering the advantages of non-linear editing, but we
urge you not to be too judgemental. Linear editing still has some advantages:
1. It is simple and inexpensive. There are very few complications with formats, hardware conflicts,
etc.
2. For some jobs linear editing is better. For example, if all you want to do is add two sections of
video together, it is a lot quicker and easier to edit tape-to-tape than to capture and edit on a hard
drive.
3. Learning linear editing skills increases your knowledge base and versatility. According to many
professional editors, those who learn linear editing first tend to become better all-round editors.
Although the "linear vs non-linear" argument is often subjective and some editors will disagree with the
statements above, there can be little doubt that increasing your skill base is a good thing. There is nothing
to be gained by completely rejecting linear editing, and much to be gained by adding it to your repertoire.
Offline editing
Offline editing is the film and television post-production process in which raw footage is copied and
edited, without affecting the camera-original film or tape. Once a programme has been completed in
offline, the original media will be conformed, or on-lined, in the online editing stage.
Modern offline editing is conducted in a non-linear editing suite. The digital revolution has made the offline
editing process immeasurably quicker, as practitioners moved from time-consuming linear (tape to tape)
suites, to computer hardware and software such as Adobe Premiere, Final Cut Pro, Avid, Sony
Vegas and Lightworks. Typically, all the original footage (often tens or hundreds of hours) is digitized into
the suite at a low resolution. The editor and director are then free to work with all the options to create the
final cut.
Online editing
Online editing is generally the final stage of video editing.
When the offline edit is complete, the pictures are re-assembled at full or 'online' resolution. An edit
decision list or equivalent is used to carry over the cuts and dissolves from the offline. Projects may be re-
captured at the lowest level of compression possible, ideally with no compression at all. This conform is
checked against a video copy of the offline edit to verify that the edits are correct and frame-accurate.
This cutting copy also provides a reference for any video effects that need to be added.
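In terms of the minimal event-list model sketched earlier, a conform can be pictured as relinking every event from its low-resolution proxy file to the corresponding full-quality master before the cut is re-assembled. The Python sketch below is hypothetical: the file names and the "_proxy" naming convention are invented, and real conforms relink by timecode and reel or clip metadata rather than by file-name matching.

    # Hypothetical sketch of a conform: an offline cut built against low-res
    # proxies is relinked, event by event, to the full-resolution masters.
    def conform(offline_edl, proxy_suffix="_proxy"):
        """Map each (proxy_file, src_in, src_out) event onto its full-res master."""
        online_edl = []
        for proxy_file, src_in, src_out in offline_edl:
            # e.g. "interview_a_proxy.mov" -> "interview_a.mov"
            master_file = proxy_file.replace(proxy_suffix, "")
            online_edl.append((master_file, src_in, src_out))
        return online_edl

    offline = [("interview_a_proxy.mov", 240, 480), ("cutaway_03_proxy.mov", 0, 96)]
    print(conform(offline))
    # The conformed cut keeps the same frame ranges, so it should match the
    # offline reference copy frame for frame.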
After conforming the project, the online editor will add visual effects and titles, and apply color correction,
a process known as grading. This process is typically supervised by the client(s). The editor will also
ensure that the program meets the technical delivery specs of the broadcaster, ensuring proper video
levels, aspect ratio, and blanking width.
Sometimes the online editor will package the show, putting together each version. Each version may have
different requirements for the formatting (i.e. closed captioning), use of bumpers, different credits, etc.