Introduction To Video Conferencing

Video conferencing allows people in different locations to see and communicate with each other in real time. It uses technology like codecs to compress audio and video streams and transmit them digitally over networks. There are two main types of video conferencing systems - dedicated systems that package all components into one device, and desktop systems that use add-ons to transform PCs into conferencing devices. Multipoint conferencing involving three or more locations is enabled through a Multipoint Control Unit bridge that interconnects all calls.

Uploaded by Anubhav Narwal

Introduction to Video Conferencing

Why is a Video Conferencing System required?


A video conferencing system is required to see and talk to people in real time. It is like a conference call, except that participants can see one another as well as talk to them. In addition, participants can show a presentation to the other end while talking. This is very useful for conducting business meetings, educational seminars, tele-medicine, multi-location discussions, etc., without the people involved having to travel the distance.

In the above architecture diagram, there is a head office with three departments (IT Dept, Dept-1 and Dept-2) connected to each other by a LAN. There are also a branch office and a remote location (for a tele-commuter). All these locations have some form of connectivity between them: for example, they may all be connected over Internet leased lines, or have an MPLS network set up between them. Using VC-1, VC-2 and the IP network, the people from the Head Office IT department can see and talk to the people from the branch office. The person at the remote office (a home worker) can also dial in to the conference by running a software VC client on a laptop and connecting to the Internet over broadband, making it a three-party video conference. Another monitor could be connected in each location to show a presentation (e.g., slides from a computer) being presented by the remote user.


Technology

The core technology used in a videoconferencing system is digital compression of audio and video streams in real time. The hardware or software that performs the compression is called a codec (coder/decoder), and compression ratios of up to 1:500 can be achieved. The resulting digital stream of 1s and 0s is subdivided into labeled packets, which are then transmitted over a digital network of some kind (usually ISDN or IP). The use of audio modems on the transmission line allows for the use of POTS (the Plain Old Telephone System) in some low-speed applications, such as video telephony, because they convert the digital pulses to/from analog waves in the audio spectrum range.
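To put the 1:500 ratio in perspective, a rough back-of-the-envelope calculation shows why real-time compression is indispensable. The resolution, frame rate and ratio below are illustrative figures only, not the parameters of any particular codec:

```python
# Illustrative bandwidth arithmetic; resolution, frame rate and the 1:500
# ratio are example figures, not a specification of any particular codec.

def raw_video_bitrate(width, height, fps, bits_per_pixel=24):
    """Uncompressed video bitrate in bits per second."""
    return width * height * bits_per_pixel * fps

raw = raw_video_bitrate(640, 480, 30)   # 221,184,000 bit/s uncompressed
compressed = raw // 500                 # at a 1:500 compression ratio
print(f"raw: {raw / 1e6:.1f} Mbit/s, compressed: {compressed / 1e3:.0f} kbit/s")
```

Uncompressed standard-definition video would need over 200 Mbit/s; after 1:500 compression it fits in a few hundred kbit/s, which ISDN or an ordinary IP link can carry.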
The other components required for a videoconferencing system include:
Video input: video camera or webcam.
Video output: computer monitor, television or projector.
Audio input: microphones, CD/DVD player, cassette player, or any other source of preamp audio output.
Audio output: usually loudspeakers associated with the display device, or a telephone.
Data transfer: analog or digital telephone network, LAN or Internet.
Computer: a data processing unit that ties the other components together, performs the compression and decompression, and initiates and maintains the data link over the network.
There are basically two kinds of videoconferencing systems:
1. Dedicated systems have all required components packaged into a single piece of equipment, usually a console with a high-quality remote-controlled video camera. These cameras can be controlled at a distance to pan left and right, tilt up and down, and zoom; they became known as PTZ cameras. The console contains all electrical interfaces, the control computer, and the software- or hardware-based codec. Omnidirectional microphones are connected to the console, as well as a TV monitor with loudspeakers and/or a video projector. There are several types of dedicated videoconferencing devices:
1. Large group videoconferencing systems are non-portable, large, more expensive devices used for large rooms and auditoriums.
2. Small group videoconferencing systems are non-portable or portable, smaller, less expensive devices used for small meeting rooms.
3. Individual videoconferencing systems are usually portable devices meant for single users, with fixed cameras, microphones and loudspeakers integrated into the console.
2. Desktop systems are add-ons (usually hardware boards) to normal PCs, transforming them into videoconferencing devices. A range of different cameras and microphones can be used with the board, which contains the necessary codec and transmission interfaces. Most desktop systems work with the H.323 standard. Videoconferences carried out via dispersed PCs are also known as e-meetings.
Conferencing layers
The components within a Conferencing System can be divided up into several different
layers: User Interface, Conference Control, Control or Signal Plane, and Media Plane.
Videoconferencing User Interfaces (VUIs) can be either graphical or voice-responsive. Many in the industry have encountered both types, and graphical interfaces are normally encountered on a computer. User interfaces for conferencing have a number of different uses: they can be used for scheduling, setup, and making a video call. Through the user interface, the administrator is able to control the other three layers of the system.
Conference Control performs resource allocation, management and routing. This layer
along with the User Interface creates meetings (scheduled or unscheduled) or adds and
removes participants from a conference.
Control (Signaling) Plane contains the stacks that signal different endpoints to create a call and/or a conference. Signals can be, but aren't limited to, the H.323 and Session Initiation Protocol (SIP) protocols. These signals control incoming and outgoing connections as well as session parameters.
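As a concrete illustration of what travels on the signaling plane, the sketch below parses the start line and headers of a SIP INVITE, the request an endpoint sends to create a call. The addresses and Call-ID are made-up example values:

```python
def parse_sip_request(message: str) -> dict:
    """Parse the start line and headers of a SIP request (RFC 3261 framing)."""
    head, _, _body = message.partition("\r\n\r\n")
    lines = head.split("\r\n")
    method, uri, version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return {"method": method, "uri": uri, "version": version, "headers": headers}

# Example INVITE with fictitious addresses, just to exercise the parser.
invite = (
    "INVITE sip:room1@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP pc1.example.com\r\n"
    "From: <sip:alice@example.com>\r\n"
    "To: <sip:room1@example.com>\r\n"
    "Call-ID: 42@pc1.example.com\r\n"
    "CSeq: 1 INVITE\r\n"
    "\r\n"
)
req = parse_sip_request(invite)
print(req["method"], req["uri"])  # INVITE sip:room1@example.com
```

A real SIP stack also negotiates media parameters through an SDP body carried after the headers; only the framing is shown here.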
The Media Plane controls the audio and video mixing and streaming. This layer manages the Real-time Transport Protocol (RTP), the User Datagram Protocol (UDP) and the Real-time Transport Control Protocol (RTCP). RTP over UDP normally carries information such as the payload type (which identifies the codec), frame rate, video size and many others. RTCP, on the other hand, acts as a quality-control protocol for detecting errors during streaming.
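The RTP packets mentioned above begin with a well-defined 12-byte fixed header (RFC 3550). This minimal sketch decodes that header, including the payload-type field that identifies the codec in use:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Decode the 12-byte fixed RTP header (RFC 3550)."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # always 2 for RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,     # identifies the codec of the payload
        "sequence": seq,               # detects loss and reordering
        "timestamp": ts,               # media clock for playout timing
        "ssrc": ssrc,                  # identifies the sending source
    }

# Hand-built sample packet: version 2, payload type 96, sequence 1, SSRC 0x1234.
sample = bytes([0x80, 0x60, 0x00, 0x01]) + (0).to_bytes(4, "big") + (0x1234).to_bytes(4, "big")
hdr = parse_rtp_header(sample)
print(hdr["version"], hdr["payload_type"], hdr["sequence"])  # 2 96 1
```

The sequence number and timestamp are what let the receiver detect lost packets and reconstruct timing, which is exactly the information RTCP reports feed back to the sender.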
Multipoint videoconferencing
Simultaneous videoconferencing among three or more remote points is possible by
means of a Multipoint Control Unit (MCU). This is a bridge that interconnects calls from
several sources (in a similar way to an audio conference call). All parties call the MCU, or the MCU can call the parties that are going to participate, in sequence. There
are MCU bridges for IP and ISDN-based videoconferencing. There are MCUs which are
pure software, and others which are a combination of hardware and software. An MCU
is characterized according to the number of simultaneous calls it can handle, its ability
to conduct transposing of data rates and protocols, and features such as Continuous
Presence, in which multiple parties can be seen on-screen at once. MCUs can be
stand-alone hardware devices, or they can be embedded into dedicated
videoconferencing units.
The MCU consists of two logical components:
1. A single multipoint controller (MC), and
2. Multipoint Processors (MP), sometimes referred to as the mixer.
The MC controls the conference while it is active on the signaling plane, which is simply where the system manages conference creation, endpoint signaling and in-conference controls. This component negotiates parameters with every endpoint in
the network and controls conferencing resources. While the MC controls resources and
signaling negotiations, the MP operates on the media plane and receives media from
each endpoint. The MP generates output streams from each endpoint and redirects the
information to other endpoints in the conference.
Some systems are capable of multipoint conferencing with no MCU, stand-alone,
embedded or otherwise. These use a standards-based H.323 technique known as
"decentralized multipoint", where each station in a multipoint call exchanges video and
audio directly with the other stations with no central "manager" or other bottleneck. The
advantages of this technique are that the video and audio will generally be of higher
quality because they don't have to be relayed through a central point. Also, users can
make ad-hoc multipoint calls without any concern for the availability or control of an
MCU. This added convenience and quality comes at the expense of some increased
network bandwidth, because every station must transmit to every other station directly.
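The bandwidth trade-off described above can be quantified: with an MCU, each of the n endpoints sends a single uplink stream to the bridge, while in decentralized multipoint every endpoint sends to every other one, so the total stream count grows as n(n-1). A small sketch:

```python
def mcu_streams(n):
    """Star topology: each of the n endpoints sends one stream to the MCU."""
    return n

def mesh_streams(n):
    """Decentralized multipoint: every endpoint sends to every other endpoint."""
    return n * (n - 1)

for n in (3, 4, 6, 10):
    print(f"{n} sites: {mcu_streams(n)} uplinks via MCU, {mesh_streams(n)} streams in a full mesh")
```

The quadratic growth is why decentralized multipoint is attractive for small ad-hoc calls but becomes expensive for large conferences.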
Videoconferencing modes
Videoconferencing systems use several common operating modes:
1. Voice-Activated Switch (VAS);
2. Continuous Presence.
In VAS mode, the MCU switches which endpoint can be seen by the other endpoints based on voice levels. If there are four people in a conference, the only site seen in the conference is the one that is talking; the location with the loudest voice is seen by the other participants.
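At its core, VAS amounts to selecting the endpoint with the highest measured audio level. Real MCUs add hold timers and thresholds to avoid rapid flipping between sites; that refinement is omitted from this sketch, and the site names are invented:

```python
def active_speaker(levels):
    """Voice-activated switching: pick the endpoint with the loudest audio level."""
    return max(levels, key=levels.get)

# Hypothetical per-site audio levels (normalized 0..1).
levels = {"head-office": 0.12, "branch": 0.55, "home-worker": 0.30}
print(active_speaker(levels))  # branch
```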
Continuous Presence mode displays multiple participants at the same time. The MP in
this mode takes the streams from the different endpoints and puts them all together into
a single video image. In this mode, the MCU normally sends the same type of images to
all participants. Typically these types of images are called layouts and can vary
depending on the number of participants in a conference.
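One simple way an MP might size a Continuous Presence layout is to pick the smallest near-square grid that fits every participant tile. Actual MCUs typically choose among predefined layouts, so this is only an illustration of the idea:

```python
import math

def layout_grid(participants):
    """Smallest near-square (rows, cols) grid that fits all participant tiles."""
    cols = math.ceil(math.sqrt(participants))
    rows = math.ceil(participants / cols)
    return rows, cols

for n in (2, 4, 5, 9):
    print(n, "participants ->", layout_grid(n))  # e.g. 5 -> (2, 3)
```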

Echo cancellation
A fundamental feature of professional videoconferencing systems is Acoustic Echo Cancellation (AEC). Echo can be defined as the reflected source wave interfering with the new wave created by the source. AEC is an algorithm that detects when sounds or utterances which came from the audio output of a system re-enter the audio input of the same system's videoconferencing codec after some time delay. If unchecked, this can lead to several problems, including:
1. The remote party hearing their own voice coming back at them (usually significantly delayed);
2. Strong reverberation, which makes the voice channel useless; and
3. Howling created by feedback.
Echo cancellation is a processor-intensive task that usually works over a narrow range of sound delays.
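A minimal sketch of the idea behind AEC, using a least-mean-squares (LMS) adaptive filter — one classical approach, though commercial cancellers are far more sophisticated. The filter learns the echo path from the far-end (loudspeaker) signal and subtracts its echo estimate from the microphone signal; the tap count, step size and simulated echo path below are illustrative choices:

```python
import math

def lms_echo_canceller(far_end, mic, taps=8, mu=0.05):
    """Estimate the far-end echo in the mic signal and subtract it (LMS)."""
    w = [0.0] * taps              # adaptive FIR weights (the learned echo path)
    hist = [0.0] * taps           # most recent far-end samples
    out = []
    for x, d in zip(far_end, mic):
        hist = [x] + hist[:-1]
        y = sum(wi * hi for wi, hi in zip(w, hist))   # predicted echo
        e = d - y                                     # echo-cancelled sample
        w = [wi + mu * e * hi for wi, hi in zip(w, hist)]
        out.append(e)
    return out

# Simulated echo path: far-end audio returns delayed by 3 samples at half volume.
far = [math.sin(0.3 * n) for n in range(2000)]
mic = [0.5 * far[n - 3] if n >= 3 else 0.0 for n in range(2000)]
residual = lms_echo_canceller(far, mic)
early = sum(abs(e) for e in residual[:100]) / 100
late = sum(abs(e) for e in residual[-100:]) / 100
print(f"mean |residual|: first 100 samples {early:.3f}, last 100 samples {late:.5f}")
```

The residual shrinks as the filter converges, which is the sense in which the echo is "cancelled"; the narrow tap window is also why real AEC only copes with a limited range of delays.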
Cloud-based video conferencing
Cloud-based video conferencing can be used without the hardware generally required by other video conferencing systems, and can be designed for use by SMEs or by larger international companies such as Facebook. Cloud-based systems can handle either 2D or 3D video broadcasting. They can also support mobile calls, VoIP, and other forms of video calling, and may include a video recording function to archive past meetings. Blue Jeans Network, Dimdim and Fuze are recognized producers of cloud-based systems.


Standards

The International Telecommunication Union (ITU) (formerly the Consultative Committee for International Telegraphy and Telephony, CCITT) has three umbrellas of standards for videoconferencing:
ITU H.320 is known as the standard for videoconferencing over the public switched telephone network (PSTN) or integrated services digital networks (ISDN). While still prevalent in Europe, ISDN was never widely adopted in the United States and Canada.
ITU H.264 Scalable Video Coding (SVC) is a compression standard that enables videoconferencing systems to achieve highly error-resilient Internet Protocol (IP) video transmissions over the public Internet without quality-of-service enhanced lines. This standard has enabled wide-scale deployment of high-definition desktop videoconferencing and made possible new architectures which reduce latency between the transmitting sources and receivers, resulting in more fluid communication without pauses. In addition, an attractive factor for IP videoconferencing is that it is easier to set up for use along with web conferencing and data collaboration. These combined technologies enable users to have a richer multimedia environment for live meetings, collaboration and presentations.
ITU V.80: videoconferencing is generally made compatible with the H.324 standard for point-to-point video telephony over regular plain old telephone service (POTS) phone lines.
The Unified Communications Interoperability Forum (UCIF), a non-profit alliance
between communications vendors, launched in May 2010. The organization's vision is
to maximize the interoperability of UC based on existing standards. Founding members
of UCIF include HP, Microsoft, Polycom, Logitech/LifeSize Communications and Juniper Networks.
ATM
Asynchronous Transfer Mode (ATM) is, according to the ATM Forum, "a
telecommunications concept defined by ANSI and ITU standards for carriage of a
complete range of user traffic, including voice, data, and video signals". It was designed
for a network that must handle both traditional high-throughput data traffic (e.g., file
transfers), and real-time, low-latency content such as voice and video.

A key ATM concept involves the traffic contract. When an ATM circuit is set up each
switch on the circuit is informed of the traffic class of the connection.
ATM traffic contracts form part of the mechanism by which "quality of service" (QoS) is
ensured. There are four basic types (and several variants) which each have a set of
parameters describing the connection.
CBR - Constant bit rate: a Peak Cell Rate (PCR) is specified, which is constant.
VBR - Variable bit rate: an average or Sustainable Cell Rate (SCR) is specified, which can peak at a certain level (the PCR) for a maximum interval before becoming problematic.
ABR - Available bit rate: a minimum guaranteed rate is specified.
UBR - Unspecified bit rate: traffic is allocated to all remaining transmission capacity.

Constant Bit Rate (CBR)
The CBR service category is used for connections that transport traffic at a constant bit
rate, where there is an inherent reliance on time synchronisation between the traffic
source and destination. CBR is tailored for any type of data for which the end-systems
require predictable response time and a static amount of bandwidth continuously
available for the lifetime of the connection. The amount of bandwidth is characterized by a Peak Cell Rate (PCR), the maximum number of cells per second (one ATM cell is 53 bytes). These applications include services such as video conferencing, telephony (voice services) or any type of on-demand service, such as interactive voice and audio.
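The PCR needed for a given conference bit rate follows directly from the cell format: of the 53 bytes, 48 carry payload and 5 are header. A sketch, using an assumed 384 kbit/s conference stream as the example figure:

```python
import math

CELL_BYTES = 53        # 5-byte header + 48-byte payload
PAYLOAD_BITS = 48 * 8  # user data carried per cell

def pcr_for_bitrate(bps):
    """Peak Cell Rate (cells/s) needed to carry a constant bit stream."""
    return math.ceil(bps / PAYLOAD_BITS)

def line_rate(pcr):
    """Line bandwidth actually consumed, including the 5-byte cell headers."""
    return pcr * CELL_BYTES * 8

pcr = pcr_for_bitrate(384_000)  # e.g., a 384 kbit/s conference stream
print(pcr, "cells/s ->", line_rate(pcr), "bit/s on the wire")  # 1000 cells/s -> 424000 bit/s
```

Note the roughly 10% cell-header overhead: a 384 kbit/s stream occupies 424 kbit/s of line capacity, which matters when sizing leased lines.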
FACT can use an ATM service for video conferencing, as it has leased lines for its Internet connection. However, this can be expensive.



Impact on Business of FACT

Videoconferencing can enable employees of FACT in distant regions, namely Andhra Pradesh, Karnataka, Kerala, Tamil Nadu and Pondicherry, to participate in meetings on short notice, with savings in time and money. Technology such as VoIP can be used in conjunction with desktop videoconferencing to enable low-cost face-to-face business meetings without leaving the desk, which is especially valuable for FACT with its widespread offices. The technology is also used for telecommuting, in which employees work from home.

Video conferencing is used for a variety of purposes, including:
Personal communication: informal communication would normally use desktop systems, while more formal meetings with several participants at each site would probably use dedicated studio settings.
Collaborative work between researchers using shared applications.
Presentations.
Training: training usually involves one-to-many connections; the trainee may receive audio and video but only send audio.
Video conferencing is very useful whenever there is a clear communication need, and the benefits described by those using video conferencing systems include:
Reduced travel costs;
Face-to-face rather than telephone meetings;
Better quality teaching;
Easier collaborative working.
