HCIA-Intelligent Vision V1.0 Lab Guide
Copyright © Huawei Technologies Co., Ltd. 2021. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any
means without prior written consent of Huawei Technologies Co., Ltd.
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property
of their respective holders.
Notice
The purchased products, services and features are stipulated by the contract made
between Huawei and the customer. All or part of the products, services and features
described in this document may not be within the purchase scope or the usage
scope. Unless otherwise specified in the contract, all statements, information, and
recommendations in this document are provided "AS IS" without warranties,
guarantees or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort
has been made in the preparation of this document to ensure accuracy of the
contents, but all statements, information, and recommendations in this document
do not constitute a warranty of any kind, express or implied.
Website: http://e.huawei.com
Overview
This document uses the software-defined camera (SDC) as an example to describe common
configurations and typical applications of Huawei IP cameras (IPCs), such as intelligent
target analysis and behavior analysis. Upon completion of this course, you will be able
to configure the basic features of Huawei IPCs.
Description
This experiment guide describes how to configure common parameters and typical
application scenarios through the following eight experiments:
Experiment 1: basic configuration. This experiment helps you master basic
operations, including setting the IP address, time, pan-tilt-zoom (PTZ), image,
video, and audio parameters.
Experiment 2: recording configuration. This experiment helps you understand
common methods of recording.
Experiment 3: snapshot configuration. Snapshots are used to store visual evidence
and provide a basis for various application scenarios.
Experiment 4: configuration for intelligent behavior analysis. This experiment
helps you master the configuration of eight common behavior detection methods
and application scenarios.
Experiment 5: configuration for crowd density detection. Crowd density detection
is mainly used to check population density and generate alarms.
Experiment 6: configuration for 1+N mode. This experiment helps you understand
how to use intelligent cameras to provide legacy cameras with intelligent
analysis capabilities.
Experiment 7: configuration for target/person snapshot. This experiment helps
you master the target/person detection methods and their effects.
Experiment 8: configuration for intelligent vehicle analysis. This experiment helps
you master the configuration methods for license plate recognition, object
classification, and red-light running detection.
Device Introduction
It is recommended that each experiment environment suite be configured according
to the instructions in the following table to meet the experiment requirements.
Laptop or desktop computer: 1 (one for each person)
Experiment topology
In this experiment, the cameras are connected to the LAN through a
switch. The cameras can be configured by using a laptop or desktop
computer.
1.1 Introduction
1.1.1 About This Experiment
This experiment provides guidance on how to configure the IP address, time, PTZ,
on-screen display (OSD), image, and video parameters of cameras.
1.1.2 Objectives
Upon completion of this task, you will be able to understand how to operate cameras
and set basic parameters for them.
Networking description: Connect the Huawei camera to the LAN and ensure that it can
communicate with the computer.
You must change the user name and password during the first login.
Time zone: Set the local time zone. For China, select (UTC+08:00) Beijing.
Synchronization: You can select either Manual or NTP. To ensure the accuracy of
the clock, NTP is adopted in this experiment.
After the IP address is changed, use the new IP address to log in. The figure shown
below is displayed.
----End
You can click the buttons on the toolbar at the bottom of the live video pane.
Stop live video viewing. Click again to enable live video viewing.
Start or stop local recording. Click this button to start local recording;
click it again to stop local recording.
1. Click .
2. Draw a box on the video image. The area inside the box is
automatically displayed at the center of the video image.
Zooming in: Drag a box from left to right on a video image.
Zooming out: Drag a box from right to left on a video image.
1. Click .
2. Click a location on a video image to display that point at the center of
the video image.
The electronic PTZ (ePTZ) adjusts the focal length or shooting angle
through software, instead of by using lens zooming or PTZ rotation. The
primary use of the ePTZ is to zoom in on high-resolution images in order
to show more details. Zooming in too far on low-resolution images may
result in artifacts due to the limited pixels.
The ePTZ function can be used only after the area cropping function is
enabled and live video is switched to the required secondary stream
type.
A shorter distance between the arrow and the cross indicates a slower
rotation speed.
You can scroll the mouse wheel to control the zoom. The image is
zoomed in when the mouse wheel is scrolled up and zoomed out when it
is scrolled down.
Display the video stream resolution. The icon changes based on the video
resolution.
Toggle among Full Screen, Actual Size, 16:9, 4:3, and 1:1.
----End
Set the PTZ rotation mode. In Continuous mode, you can hold
down a directional button to rotate the PTZ device continuously
and release to stop PTZ rotation.
In Step mode, if you click a directional button, the PTZ will rotate
for a small angle and then stop.
Display the current PTZ rotation speed, where P refers to the pan
speed and T refers to the tilt speed. The speed varies depending
on the level. Level 10 refers to the highest speed. PTZ dome
cameras support this function.
Function buttons displayed on the PTZ page vary depending on the camera model.
For an external PTZ device, if you click the buttons for adjusting the direction, aperture,
focus, zoom, or wiper multiple times within 2s, the camera responds only to the first
operation.
2. Select Enable to enable the parking action function. Set Action to Preset position,
select a preset position from the Select preset position drop-down list box, and click
Save. The selected preset position is set as the home position. On the Preset Position
page, check the modification result. If the modification is successful, the home position
name contains the preset position information.
2. Set Tour track name. Click Add Tour Point. Select specified preset positions from the
Preset Position drop-down list box and set Duration (s). The recommended value of
Duration (s) is at least 120s. Click Save. On the Tour Track page, click to enable
the tour track.
Only one tour track can be enabled at a time. If a tour track is already enabled and you
enable another one, the previously enabled tour track automatically stops.
To modify or delete a tour track, select it and click or .
To modify a tour track, stop it first.
2. Open the Pattern Scan tab page again, and click to enable pattern scan.
----End
The following table lists the parameters used for image settings.
Contrast: Ratio of the luminance of the brightest color (white) to that of the
darkest color (black) in the image. A larger contrast value indicates a clearer
and brighter image; a smaller contrast value indicates a dusky image.
This parameter is available only when Bit rate type is set to Variable bit rate.
Max. bit rate: The bit rate range depends on the selected resolution.
The image format and resolution requirements vary depending on the device model.
After all privacy areas are drawn, the View page is displayed. Check whether the
location and size of the privacy protection areas meet the requirements. The following
figure shows the privacy protection effect.
Privacy protection takes effect only in full-screen mode. Even when area cropping is
enabled, the privacy masks are still overlaid in full-screen mode.
----End
1.3 Verification
1.3.1 Viewing a Live Video
Log in to the camera web system and view the live video on the View page. The effect
is shown below.
2.1 Introduction
2.1.1 About This Experiment
This experiment provides guidance on how to configure, view, play back, and download
videos.
2.1.2 Objectives
Upon completion of this task, you will be able to understand how to configure video
recording for cameras.
Step 2 Click Add. The Add dialog box is displayed. Set Start time and End
time, and click OK.
----End
You need to download a recording only when the recording file is stored on an SD card or on
the video security platform.
You can click Stop Downloading to stop downloading recordings.
If you leave this page, recording download will be terminated.
The number of recording download channels depends on the camera model. A maximum of
four channels of recordings can be downloaded at a time.
You need to use the download function provided by the browser to download recordings.
----End
2.3 Verification
Check that the recording plan can be executed properly and a recording can be
downloaded and played back properly.
3.1 Introduction
3.1.1 About This Experiment
This experiment provides guidance on how to set snapshot parameters on the camera
web system.
3.1.2 Objectives
Upon completion of this task, you will be able to understand how to configure the
snapshot function, including SD card space, image quality, and scheduled snapshot.
Networking description: Connect the Huawei camera to the LAN and ensure that it can
communicate with the computer.
Step 1 Choose Snapshot > Configuration > Snapshot Upload. The Snapshot
Upload area is displayed.
Ensure that the target server has sufficient storage space. Otherwise, the image will fail to be
uploaded. You are advised to periodically back up and delete images on the target server.
To ensure the integrity of uploaded data, ensure that the upload rate is greater than the image
storage rate of the camera.
----End
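The note above says the upload rate must exceed the camera's image storage rate. As a rough sanity check, that relationship can be sketched in Python (a hypothetical helper with assumed units, not part of the camera software):

```python
def upload_keeps_up(upload_rate_mbps, image_size_mb, images_per_second):
    """Return True if the upload link can keep pace with snapshot generation.

    upload_rate_mbps:  link rate in megabits per second (assumed unit)
    image_size_mb:     average snapshot size in megabytes (assumed unit)
    images_per_second: how many snapshots the camera produces per second
    """
    # Convert the link rate from megabits to megabytes per second,
    # then compare it with the camera's image generation rate.
    return upload_rate_mbps / 8.0 > image_size_mb * images_per_second
```

For example, a 100 Mbit/s link comfortably sustains ten 0.5 MB snapshots per second, while an 8 Mbit/s link does not.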
Select the snapshot to download and click Download. After downloading, you can
double-click the snapshot to open it or obtain the snapshot on the local computer.
3.3 Verification
You can view the captured images on the local computer.
4.1 Introduction
4.1.1 About This Experiment
This experiment provides guidance on how to configure intelligent behavior analysis on
the camera web system.
Behavior analysis is mutually exclusive with other intelligent analysis functions. The exclusive
relationship varies depending on the camera model. Before enabling behavior analysis, ensure that
the mutually exclusive functions are disabled.
4.1.2 Objectives
Upon completion of this task, you will be able to understand how to configure
intelligent behavior analysis, including configuration of global parameters and behavior
detection functions, and verification of the configuration.
Networking description: Connect the Huawei camera to the LAN and ensure that it can
communicate with the computer. The cameras used in this experiment must support
intelligent behavior analysis. This experiment uses Huawei cameras as an example.
Step 1 Choose Settings > Intelligent Analysis > Intelligent Alarms > Global
Configuration. On the Global Configuration tab page, select Enable
and set the parameters.
Send metadata: Indicates whether to send metadata. After this function is enabled,
the camera sends metadata such as intelligent analysis rules and results, together
with live video images, to the video security platform. Enable this function if the
platform requests metadata streams.
Min. object size: Minimum size of an object that can be detected on video images at
the CIF (352 x 288) resolution. No alarm is generated when an object is smaller than
this value.
Max. object size: Maximum size of an object that can be detected on video images at
the CIF (352 x 288) resolution. No alarm is generated when an object is larger than
this value.
Measure Pixels: Click Measure Pixels. The pixel measurement box is displayed in the
preview image. You can drag the measurement box to view the minimum and maximum
object sizes.
Note
The object sizes above apply to intelligent analysis of live video at a resolution
of 352 x 288 pixels. For live video at other resolutions, convert the horizontal
resolution as follows: Horizontal resolution = Required horizontal resolution (for
video at 352 x 288 resolution) x Horizontal resolution of the actual live video/352.
Convert the vertical resolution measured by the pixel measurement box the same way,
dividing by 288.
For example, if the actual live video resolution is 1920 x 1080 pixels and the
required minimum object size is 5 x 5 pixels, the pixel measurement box should be
adjusted to 28 x 19 pixels to check the actual minimum object size.
The object resolution in the video directly affects the recognition accuracy. If the
object resolution is insufficient, readjust the camera angle and focal length until
the object resolution meets the requirements.
Max. targets: Maximum number of objects for intelligent analysis. Set this parameter
based on the value range displayed on the web page.
Shadow removal mode: Mode in which shadows are removed. Shadows cast by moving
objects affect the detected object size. Set this parameter based on site
requirements.
Note
In a scenario with no shadows or small shadows, Low is recommended to ensure
complete detection of objects.
In a scenario with obvious large shadows, High is recommended to separate objects
from their shadows, so that bounding rectangles are closer to the actual object
sizes and objects overlapped with shadows are displayed separately.
----End
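The resolution conversion described above can be sketched as a small Python helper (a hypothetical function for this lab, not part of the camera software; it rounds up so the measurement box is never smaller than required):

```python
import math

def scaled_box(req_w, req_h, live_w, live_h, ref_w=352, ref_h=288):
    """Convert an object size defined at the CIF reference resolution
    (352 x 288 by default) to the equivalent pixel-measurement-box size
    at the actual live video resolution."""
    # Scale each dimension by the ratio of actual to reference resolution,
    # rounding up to the next whole pixel.
    return (math.ceil(req_w * live_w / ref_w),
            math.ceil(req_h * live_h / ref_h))
```

Using the guide's own example, a required 5 x 5 px minimum at CIF corresponds to a 28 x 19 px measurement box at 1920 x 1080.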
Parameter descriptions:
Draw: Button for drawing a detection area.
Note
A detection area can be a triangle, rectangle, or polygon.
To redraw a detection area that has not been saved, click Redraw.
A detection area supports a maximum of 10 vertices.
A maximum of four detection areas can be configured for analyzing one type of
behavior, and a maximum of 10 detection areas for all types of behavior.
You can operate on only one detection area at a time.
Measure Pixels: Click Measure Pixels to display the pixel measurement box in the
preview image and drag it to view the minimum and maximum object sizes (see the
description under Global Configuration).
Alarm check interval: Interval at which the camera checks for new alarms. If any
alarm is detected, the camera reports it to the video security platform.
Note
To prevent a flood of alarms, the camera reports only one alarm even if it detects
multiple alarms within the specified interval.
2. The Add Plan dialog box is displayed. Set Start time, End time, Schedule, and Select
days. Click OK and then Save.
If the object cannot be framed, check the configuration against the following items:
Check whether the detection area is too large. If it is, an object entering the
detection area may be too small to be detected. Solution: adjust the focal length
and reconfigure the detection area size so that the object can be detected.
Check whether the detection area is enabled under Intrusion > Detection Areas.
Solution: click the status toggle button to enable the detection area.
----End
Measure Pixels and Alarm check interval: the same as described earlier in this chapter.
Alarm tolerance time: Time during which an object may remain in the detection area.
If an object remains in the detection area longer than the specified time, an alarm
is generated.
2. The Add Plan dialog box is displayed. Set Start time, End time, Schedule, and Select
days. Click OK and then Save.
To modify or delete an alert plan, select it and click or in the Operation column.
----End
Measure Pixels and Alarm check interval: the same as described earlier in this chapter.
2. The Add Plan dialog box is displayed. Set Start time, End time, Schedule, and Select
days. Click OK and then Save.
To modify or delete an alert plan, select it and click or in the Operation column.
----End
Measure Pixels, Alarm check interval, and Alarm tolerance time: the same as described
earlier in this chapter.
2. The Add Plan dialog box is displayed. Set Start time, End time, Schedule, and Select
days. Click OK and then Save.
To modify or delete an alert plan, select it and click or in the Operation column.
----End
Measure Pixels and Alarm check interval: the same as described earlier in this chapter.
2. The Add Plan dialog box is displayed. Set Start time, End time, Schedule, and Select
days. Click OK and then Save.
To modify or delete an alert plan, select it and click or in the Operation column.
----End
Measure Pixels and Alarm check interval: the same as described earlier in this chapter.
2. The Add Plan dialog box is displayed. Set Start time, End time, Schedule, and Select
days. Click OK and then Save.
To modify or delete an alert plan, select it and click or in the Operation column.
----End
Measure Pixels and Alarm check interval: the same as described earlier in this chapter.
2. The Add Plan dialog box is displayed. Set Start time, End time, Schedule, and Select
days. Click OK and then Save.
To modify or delete an alert plan, select it and click or in the Operation column.
----End
Measure Pixels and Alarm check interval: the same as described earlier in this chapter.
To modify or delete an alert plan, select it and click or in the Operation column.
----End
4.3 Quiz
If a bank requires that cameras generate alarms upon detecting people who loiter in
the banking hall for longer than 20s, what should you do?
Step 1 Choose Settings > Intelligent Analysis > Intelligent Alarms > Loitering. The
Loitering tab page is displayed.
Step 2 Draw a detection area in the center of the image of the banking hall. Set Alarm
tolerance time to 20s.
Step 4 Set an alarm linkage policy on the Linkage Policy tab page.
5.1 Introduction
5.1.1 About This Experiment
A camera can detect the headcount and crowd density in an image and generate alarms
based on the preset threshold. This facilitates emergency handling.
The crowd density detection and the other following functions are mutually exclusive: target
detection, license plate recognition, intelligent behavior analysis, head counting, heat map, parking
detection, scene change detection, object classification, intelligent transportation security, and
parking violation detection. Before enabling the crowd density detection function, ensure that the
conflicting functions are disabled.
5.1.2 Objectives
Upon completion of this task, you will be able to understand the application scenario
and master the configuration method of crowd density detection.
Networking description: Connect the Huawei camera to the LAN and ensure that it can
communicate with the computer. The cameras used in this experiment must support
crowd density analysis. This experiment uses Huawei cameras as an example.
Step 2 Based on the current camera status, draw an area for crowd density
detection and adjust related parameters. The following table
describes the parameters.
Table 5-1 Detection area parameters
Draw/Redraw: Button for drawing a detection area. After you click Draw, the cursor
changes to a pen-shaped icon. Draw a polygon-shaped detection area.
Alarm check interval: Interval at which the camera checks for new alarms. If any
alarm is detected, the camera reports it to the video security platform. The default
value is 3. You can change the value based on the site requirements.
Note
To prevent a flood of alarms, the camera reports only one alarm even if it detects
multiple alarms within the specified interval.
Alarm threshold: An alarm is generated when the head count in the detection area
reaches the alarm threshold.
----End
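The interaction between the alarm threshold and the alarm check interval described above can be sketched as follows (hypothetical Python logic illustrating the one-alarm-per-interval suppression rule, not the actual camera firmware):

```python
import time

class AlarmReporter:
    """Sketch of crowd-density alarm reporting: an alarm fires when the
    head count reaches the threshold, but at most one alarm is reported
    per check interval."""

    def __init__(self, check_interval_s=3, threshold=10):
        self.check_interval_s = check_interval_s
        self.threshold = threshold
        self._last_report = None  # time of the last reported alarm

    def observe(self, head_count, now=None):
        """Return True if an alarm should be reported for this observation."""
        now = time.monotonic() if now is None else now
        if head_count < self.threshold:
            return False  # below the alarm threshold: nothing to report
        if (self._last_report is not None
                and now - self._last_report < self.check_interval_s):
            return False  # suppress: one alarm per check interval
        self._last_report = now
        return True
```

For example, with a 3s interval and a threshold of 10, a second over-threshold observation 1s after the first is suppressed, while one 3.5s later is reported again.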
Step 1 Click Alert Plan. The Alert Plan tab page is displayed. Click Add.
Step 2 The Add Plan dialog box is displayed. Set Start time, End time,
Schedule, and Select days. Click OK and then Save.
To modify or delete an alert plan, select it and click or in the Operation column.
----End
5.3 Verification
When an alarm is generated, you can choose Maintenance > Log and check whether
any alarm log is recorded.
After the configuration, you can view the headcount and crowd density in real time on
the Crowd page. See the following figure.
6.1 Introduction
6.1.1 About This Experiment
The 1+N mode enables a smart camera to not only provide intelligent analysis services
for its detection area but also perform video stream connection, decoding, intelligent
analysis, and intelligent analysis result output for one or more cameras on the live
network through access protocols. In this way, legacy common cameras can obtain
some intelligent analysis capabilities, reducing the cost of upgrading the legacy
system. This experiment describes how to configure the 1+N mode.
In the 1+N mode, the newly added high-performance camera that supports 1+N mode is called
the primary camera or device, and a common camera on the live network is called a secondary
camera or device.
The total resolution of the primary and secondary cameras must not exceed 10 megapixels. The
resolution of a secondary camera must not be higher than that of the primary camera or lower
than 2 megapixels. Secondary cameras do not need to support target detection or object
classification. The primary camera can work with the secondary cameras to implement the
functions.
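The sizing rules above can be checked before deploying a 1+N plan. The sketch below is a planning aid with assumed function and message wording, not part of any camera tooling.

```python
def validate_1n_plan(primary_mp, secondary_mps):
    """Check the 1+N sizing rules described above (all values in megapixels).

    Rules: the total of primary + secondary resolutions must not exceed
    10 MP; each secondary must be >= 2 MP and <= the primary resolution.
    Returns a list of human-readable violations (empty if the plan is valid).
    """
    problems = []
    total = primary_mp + sum(secondary_mps)
    if total > 10:
        problems.append(f"total resolution {total} MP exceeds the 10 MP limit")
    for i, mp in enumerate(secondary_mps, start=1):
        if mp < 2:
            problems.append(f"secondary camera {i}: {mp} MP is below 2 MP")
        if mp > primary_mp:
            problems.append(
                f"secondary camera {i}: {mp} MP exceeds the primary "
                f"({primary_mp} MP)")
    return problems


print(validate_1n_plan(4, [2, 2]))      # [] -- a valid plan
print(validate_1n_plan(4, [2, 2, 3]))   # total 11 MP: one violation reported
```

An empty list means the primary and secondary cameras satisfy all three constraints stated above.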
6.1.2 Objectives
Upon completion of this task, you will be able to understand the application scenarios
and master the configuration methods of the 1+N mode.
Networking description: Connect the Huawei smart camera and the legacy camera to the
LAN and ensure that they can communicate with the computer. The smart camera used in
this experiment must support the 1+N mode. This experiment uses Huawei cameras as
an example.
Step 2 After the restart, set the parameters for the secondary camera.
To check the ONVIF port number, log in to the secondary camera web system and choose
Settings > Network > Platform Connection > Second Protocol Parameters > ONVIF.
To check the ONVIF password, log in to the secondary camera web system and choose
Settings > Network > Platform Connection > Password Management > ONVIF Password. The
default user name and password of the ONVIF user are admin and HuaWei123, respectively.
To check the RTSP port number, log in to the secondary camera web system and choose
Settings > Network > Port Settings.
The values of RTSP user name and RTSP password are the same as those of ONVIF user name
and ONVIF password, respectively.
----End
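Before adding a secondary camera, it can help to verify that its ONVIF and RTSP ports are reachable from the management host. The sketch below uses plain TCP connections; the IP address is a placeholder, and the port numbers shown are common defaults. Substitute the values read from the secondary camera's web pages as described above.

```python
import socket


def port_reachable(host, port, timeout_s=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False


# Placeholder address and commonly used default ports; replace them with the
# values shown on the secondary camera's web system.
camera_ip = "192.168.1.64"
for name, port in [("ONVIF", 80), ("RTSP", 554)]:
    state = "reachable" if port_reachable(camera_ip, port) else "unreachable"
    print(f"{name} port {port}: {state}")
```

A TCP check only proves the port is open; authentication still uses the ONVIF user name and password configured on the secondary camera.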
2. Click .
3. Click . You can view the intelligent analysis result of the primary camera.
7.1 Introduction
7.1.1 About This Experiment
This experiment provides guidance on how to configure target and person detection to
capture facial images in real time and transmit data of target and person attributes.
Target and person detection are mutually exclusive with some other intelligent analysis functions.
Disable those functions before enabling target and person detection.
7.1.2 Objectives
Upon completion of this experiment, you will be able to:
Understand the application scenarios of target and person detection
Master the configuration of target and person detection
Networking description: Connect the Huawei camera to the LAN and ensure that it can
communicate with the computer. The cameras used in this experiment must support
intelligent target analysis. This experiment uses Huawei cameras as an example.
Snapshot mode: You can select Optimization or Ultra-fast from the drop-down list.
Optimization: When an object is in the view, the camera takes snapshots of the
object and evaluates each snapshot. After the object leaves the view, the
snapshot with the optimal quality is selected as the snapshot result.
Ultra-fast: When an object is in the view, the camera takes snapshots of the
object and evaluates each snapshot. Once a snapshot meets the requirement
specified by Capture sensitivity, the snapshot is selected as the snapshot
result. The camera keeps detecting the object but does not take further
snapshots of it.
Alarm check interval (s): Interval at which the camera checks for new alarms.
If any alarm is detected, the camera reports the alarm to the video security
platform. The default value is 5. You can change the value based on site
requirements.
Note: To prevent a flood of alarms, the camera reports only one alarm even if
it detects multiple alarms within the specified interval.
Quality filter switch: When the switch is on, the camera does not recognize a
target or person whose image quality is lower than the threshold.
Target filtering sensitivity: This parameter is available only when Quality
filter switch is turned on. A larger value indicates a higher sensitivity and
a higher filtering rate. The default value is 1. After Target filtering
sensitivity is set, the camera filters out poor target images, for example,
target images with an improper target angle, size, or brightness. A red cross
(X) is displayed in the bottom right corner of the target images that are
filtered out. If FTP is enabled, the names of the filtered-out target images
end with _0.
Person filtering sensitivity: This parameter is available only when Quality
filter switch is turned on. A larger value indicates a higher sensitivity and
a higher filtering rate. The default value is 1. After this parameter is set,
the camera filters out full-person images in which a large area of the person
is blocked. A red cross (X) is displayed in the bottom right corner of the
full-person images that are filtered out. If FTP is enabled, the names of the
filtered-out full-person images end with _0.
Min. target: Minimum target resolution that can be recognized by the camera.
Targets whose width is smaller than the value of this parameter cannot be
detected. You can click Detect Min. Target to view the minimum width in the
image. Set this parameter based on site requirements.
Enable target enhancement: If target enhancement is enabled, the system can
cut out targets and improve the facial brightness and contrast.
Note: Enabling target enhancement in a scene with sufficient brightness and
contrast will result in over-bright images. Therefore, you are advised to
enable this function only when the brightness and contrast are insufficient.
Send target cutout: After Send target cutout is selected, the camera sends
only extracted target images, not snapshots with background, to the video
security platform. The number of snapshots taken for a single object can be
set.
Send person cutout: After Send person cutout is selected, the camera sends
only full-person images, not snapshots with background, to the video security
platform. The number of snapshots taken for a single object can be set. The
value ranges from 1 to 10.
Target snapshot quality: There are 10 levels of image quality. A higher level
indicates better image quality but requires more storage space. Set this
parameter based on site requirements.
Person snapshot quality: There are 10 levels of image quality. A higher level
indicates better image quality but requires more storage space. Set this
parameter based on site requirements.
Send full image: After Send full image is selected, the camera sends the
entire captured images (with the background) to the video security platform.
Full image quality: There are 10 levels of image quality. A higher level
indicates better image quality but requires more storage space. Set this
parameter based on site requirements.
Detection area: The system detects targets only in the specified detection
area. You can click Draw to draw a polygon-shaped target detection area.
Exclude area: The system does not detect targets in the specified exclude
area. You can click Draw to draw a polygon-shaped exclude area.
If the target snapshot quality is not satisfactory in backlight scenarios such as entrance and exit
of a hall, it is recommended that you adjust parameters based on the parameter description.
At a high frame rate (50 fps or 60 fps), if target detection and image stabilization are
enabled at the same time, frame loss may occur due to high CPU usage. In other words, the
actual frame rate may be lower than the configured frame rate.
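As noted in the parameter descriptions above, when FTP upload is enabled the names of images rejected by the quality filter end with _0. A post-processing script can use that suffix to separate usable snapshots from filtered-out ones. The sketch below works on a local list of downloaded file names and is not part of the camera software.

```python
from pathlib import Path


def split_snapshots(names):
    """Split snapshot file names into (kept, filtered_out) lists.

    Per the parameter description above, images rejected by the quality
    filter carry a file name (stem) ending in _0.
    """
    kept, filtered_out = [], []
    for name in names:
        # Path.stem drops the extension so the _0 suffix check is reliable.
        (filtered_out if Path(name).stem.endswith("_0") else kept).append(name)
    return kept, filtered_out


names = ["target_20210305_001.jpg", "target_20210305_002_0.jpg"]
kept, dropped = split_snapshots(names)
print(kept)     # ['target_20210305_001.jpg']
print(dropped)  # ['target_20210305_002_0.jpg']
```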
Step 1 Choose Alert Plan. The page shown in the following figure is
displayed. Click Add. The Add Plan dialog box is displayed.
Step 2 In the dialog box, set Start time, End time, Schedule, and Select days.
Click OK and then Save.
To modify or delete an alert plan, select the plan and click the modify or delete icon in the Operation column.
----End
Step 1 Choose Settings > Intelligent Analysis > Target Detection and click
the Text Overlay tab.
Step 2 Set the related parameters, such as the font size, location, and type.
Then, click Save.
7.3 Verification
To check the target snapshots, click the Target tab. On the tab page displayed, click the
capture icon at the bottom of the video panel to enable real-time target capture. If captured target
and person images are shown on the right and the bottom of the tab page, and
identified target and person attributes are displayed on the right, the target and person
detection parameters are set successfully.
7.4 Quiz
If the captured target images are not clear due to insufficient brightness and contrast,
how can you adjust the image clarity?
1. Log in to the camera web system, choose Settings > Intelligent Analysis > Target
Detection. The Target Detection page is displayed.
8.1 Introduction
8.1.1 About This Experiment
This experiment provides guidance on how to configure intelligent vehicle analysis,
including license plate recognition (LPR), traffic behavior detection, and object
classification.
8.1.2 Objectives
Upon completion of this experiment, you will be able to:
Understand the application scenarios of intelligent vehicle analysis
Master the configuration methods in various application scenarios
Networking description: Connect the Huawei camera to the LAN and ensure that it can
communicate with the computer. The cameras used in this experiment must support
intelligent target analysis. This experiment uses Huawei cameras as an example.
HCIA-Intelligent Vision Experiment Guide Page 2
Ensure that target detection, head counting, object classification, and intelligent alarms are
disabled before enabling motor vehicle detection.
To recognize license plates from regions outside Chinese mainland, import the corresponding
algorithm and license, and enable motor vehicle detection again.
Before enabling area entry/exit detection, ensure that vehicle event detection and traffic flow
statistics are disabled.
If the system software is upgraded or downgraded, check whether the lane lines and detection
lines recorded in the system match the actual ones. If they do not match, re-draw them.
Trigger mode: Select a value from the drop-down list box.
Loop: When a vehicle passes through the inductive loop, the loop triggers the
signal to implement vehicle and license plate capture and recognition at the
entrance or exit.
Video: Real-time detection is implemented. The camera captures and recognizes
vehicles and license plates that enter the camera's view.
Hybrid: The loop and video trigger modes are used together. When a vehicle or
license plate is detected in either of the two modes, the vehicle or license
plate is captured and recognized.
Enable data backhaul: Whether vehicle and license plate data needs to be
transferred to the barrier gate system. Select the check box.
Display license plate thumbnail: Only license plates are extracted and
displayed in preview when real-time capture is enabled. Select the check box.
Display panorama: The full image is displayed in preview when real-time
capture is enabled. Select the check box.
Enable target detection: After this function is enabled, the targets of the
driver and front passenger will be captured. First select Enable secondary
feature recognition and then select Enable target detection.
Priority province or city: Vehicles with license plates registered in the
specified province or city will be preferentially detected. Only one priority
province or city can be set. Enter the name of the province or city.
You can double-click the preview image to view it in full-screen mode to draw more accurately.
Press Esc to exit the full-screen mode when you complete drawing.
To relocate a red lane line, move the cursor to the middle of the line, and then click and drag
the line. To change the position of the ends and the length of a red lane line, click and drag
either end of the line to adjust.
Detection line 1 must stay between the two horizontal dotted lines. Otherwise, the
settings will fail to save.
Figure 8-4 Detection line 1 that is not between the two horizontal dotted lines
If no ROI is drawn, the camera will detect all vehicles and license plates that appear in the
camera's view.
In entrance and exit scenarios, you are advised to draw an ROI.
Horizontal resolution directly affects the accuracy of LPR. You need to readjust the camera angle
and focal length to meet the horizontal resolution requirements.
----End
This function is mutually exclusive with motor vehicle detection. Before enabling object
classification, ensure that motor vehicle detection is disabled under Settings > Intelligent
Analysis > Motor Vehicle Settings.
Upload small image via FTP: Whether to upload the captured small images to the
FTP or SFTP server. You need to configure the FTP or SFTP upload service in
advance. Select the check box.
Upload full image via FTP: Whether to upload the captured full images to the
FTP or SFTP server. You need to configure the FTP or SFTP upload service in
advance. Select the check box.
Priority province or city: Vehicles with license plates registered in the
specified province or city will be preferentially detected. Only one priority
province or city can be set. Enter the name of the province or city.
If the snapshot quality is unsatisfactory, you are advised to adjust the backlight compensation.
Choose Settings > Video/Audio/Image > Image > Backlight. Set WDR/HLC/BLC to Backlight
compensation, set BLC area, and then check the captured image quality.
You can double-click the preview image to view it in full-screen mode to draw more accurately.
Press Esc to exit the full-screen mode when you complete drawing.
To relocate a red lane line, move the cursor to the middle of the line, and then click and drag
the line. To change the position of the ends and the length of a red lane line, click and drag
either end of the line to adjust.
To disable real-time object capture, click the capture icon at the bottom of the video panel
After real-time object capture is disabled, the camera stops displaying the captured object
images in real time, but still classifies objects and saves the classification results based on the
object classification plan.
By default, the captured images are stored in the path C:\Users\User
name\IPC_MediaPlayer\CaptureUpload\. Ensure that the available storage space of the system
disk is greater than 200 MB. Otherwise, images will fail to be captured.
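The 200 MB free-space requirement above can be checked before enabling real-time capture. The sketch below uses the standard library's `shutil.disk_usage`; the capture path mirrors the default mentioned above but should be adjusted to the actual user profile, and the helper name is an assumption for this example.

```python
import shutil
from pathlib import Path

MIN_FREE_BYTES = 200 * 1024 * 1024  # 200 MB, per the note above


def capture_dir_ok(path):
    """Return True if the drive containing `path` has at least 200 MB free."""
    # disk_usage needs an existing path, so walk up to an existing ancestor
    # in case the capture directory has not been created yet.
    root = Path(path)
    while not root.exists():
        root = root.parent
    return shutil.disk_usage(root).free >= MIN_FREE_BYTES


# Example: check the default capture location (adjust the user name).
print(capture_dir_ok(Path.home() / "IPC_MediaPlayer" / "CaptureUpload"))
```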
Real-time capture cannot be enabled on multiple tabs in a browser at once due to process
conflict. If you attempt to enable the function on multiple tabs, a message will be displayed,
indicating that the process of reporting object classification information has been started. In
this case, close unnecessary tabs and leave only one tab open or restart the browser and try
again.
Some objects may fail to be captured due to causes such as improper image size or object
angle. If the capture result is not ideal, adjust the camera image quality, angle, and
illumination.
----End
External device: This detection mode can be selected if the camera has been
connected to an external device (such as a loop detector). In this mode, only
traffic behavior detected by external devices can trigger snapshot taking. If
you select this detection mode but no external device is connected or the
connected external device is faulty, the camera cannot perform traffic
detection and capture.
Note: If the external devices involve only traffic light detectors, do not
select this mode. Otherwise, the camera cannot be triggered to take snapshots.
External device priority: When the camera has been connected to an external
device and the external device is working properly, the external device
detects and takes snapshots of the traffic violations that it can identify.
Other traffic violations are detected and captured by the camera. When the
camera is not connected to an external device or the connected external device
is faulty, the camera detects and takes snapshots of traffic violations.
Loop detector: Detects vehicles that pass the induction loop. When multiple
loops are deployed, the driving speed of a vehicle can be estimated based on
the distance between the loops and the time the vehicle takes to pass them.
Applicable detections: passing through checkpoints, speeding, driving too
slow, wrong-way driving, and red-light running.
In external device detection mode, some violation detections require associated settings of external
devices. For example, wrong-way driving detection requires that the fifth DIP switch on the RS-485
loop vehicle detector be set to ON. Speeding or driving-too-slow detection requires the radar or loop
detectors to provide speed data.
Set Capture interval to configure the interval for the camera to take snapshots of
vehicles violating traffic rules or regulations. You can set the capture interval or use an
adaptive interval.
Capture interval
Generally, the first snapshot is triggered when a vehicle reaches a specified position.
You can set the interval between the first and the second snapshots in the first text box,
for example, to 1000 (indicating 1s). Similarly, you can set the interval between the
second and the third snapshots in the second text box. If only two snapshots are
required, set the first interval only.
Adaptive interval
The snapshot interval can be adjusted automatically based on the vehicle speed.
Enter the vehicle speed limit (unit: km/h) in the text boxes following Low speed and
Medium speed. Then, the low speed range, medium speed range, and high speed range
are specified.
− If the speed of the target vehicle is within the low speed range, the snapshot
interval is the value specified by Frame interval at low speed.
− If the speed of the target vehicle is within the medium speed range, the snapshot
interval is the value specified by Frame interval at medium speed.
− If the speed of the target vehicle is within the high speed range, the snapshot
interval is the value specified by Frame interval at high speed.
For some traffic violations (such as marked lanes violation and unsafe lane change), the
detection and capture are performed based on the vehicle location instead of the snapshot
intervals. For example, upon detecting an unsafe lane change, the camera takes a snapshot
when the vehicle is driving on the first lane, crossing the two lanes, and on the second lane
separately.
If Detection mode is set to External device or External device priority and the camera is
connected to a loop detector, the snapshot intervals set here are not applied to detection of
red-light running, passing through checkpoints, wrong-way driving, speeding, or driving too
slow. Instead, the camera uses the delay set under Intelligent Transportation > External
Device > Vehicle Detector as the snapshot interval.
If the camera is connected to an external flashlight, the snapshot interval must be greater than
the recycle time of the flashlight. Otherwise, the flashlight will fail to produce a flash when
snapshots are being taken, resulting in dark images. To obtain the recycle time of a flashlight,
refer to its user manual or contact the vendor.
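The speed-to-interval mapping described above can be expressed as a small function. The parameter names mirror the web UI fields (Low speed, Medium speed, and the three frame intervals), but the function itself is only an illustration of the rule, not camera code.

```python
def snapshot_interval(speed_kmh, low_limit, medium_limit,
                      low_frames, medium_frames, high_frames):
    """Map a vehicle speed to a frame interval, as described above.

    Speeds up to `low_limit` use the low-speed interval, speeds up to
    `medium_limit` use the medium one, and faster vehicles use the high one.
    """
    if speed_kmh <= low_limit:
        return low_frames
    if speed_kmh <= medium_limit:
        return medium_frames
    return high_frames


# Example: limits of 30 and 60 km/h with intervals of 25, 15, and 8 frames.
print(snapshot_interval(20, 30, 60, 25, 15, 8))   # 25 (low speed range)
print(snapshot_interval(45, 30, 60, 25, 15, 8))   # 15 (medium speed range)
print(snapshot_interval(80, 30, 60, 25, 15, 8))   # 8  (high speed range)
```

Faster vehicles get a shorter interval so consecutive snapshots still capture the vehicle inside the camera's view.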
Select the traffic violations on which the detection is to be enabled, set the number of
snapshots to take and other related parameters, and click Save.
Duration (s): You need to set this parameter for detection of the following
violations:
Parking in yellow zone. This parameter specifies the maximum
time for which a vehicle can stay in a yellow zone. If a vehicle stays
in the yellow zone for a period longer than the specified value, the
camera determines that the vehicle is committing a traffic offense.
Bus lane violation. If a non-bus motor vehicle travels or stays on a
bus lane for a period longer than the specified value, the camera
determines that the vehicle is committing a traffic offense.
Motor vehicle driving on non-motor vehicle lanes. If a motor
vehicle travels or stays in a non-motor vehicle lane for a period
longer than the specified value, the camera determines that the
vehicle is committing a traffic offense.
Driving large vehicles on prohibited lanes. If a large vehicle travels
or stays on a lane from which large vehicles are prohibited for a
period longer than the specified value, the camera determines that
the vehicle is committing a traffic offense.
The permitted stay duration varies depending on local traffic rules
and regulations. Set the value accordingly.
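The Duration (s) rule above amounts to a simple dwell-time check: a vehicle that stays in a restricted zone longer than the configured value is treated as committing an offense. The sketch below models this for a single vehicle; the function name and time units are assumptions for illustration.

```python
def is_violation(entry_time_s, exit_time_s, max_stay_s):
    """Return True if the stay in the restricted zone exceeds the limit.

    Mirrors the Duration (s) rule above: a vehicle that stays longer than
    the configured duration is treated as committing a traffic offense.
    """
    return (exit_time_s - entry_time_s) > max_stay_s


# A bus-lane example with a 10 s permitted stay.
print(is_violation(100.0, 108.0, 10))  # False: stayed 8 s
print(is_violation(100.0, 115.5, 10))  # True: stayed 15.5 s
```

As the text notes, the permitted duration depends on local traffic rules, so `max_stay_s` is a site-specific setting.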
Click the Traffic Light Settings tab and set related parameters.
The following figure shows the Traffic Light Settings tab page. The Setting 1, Setting 2,
Setting 3, and Setting 4 tab pages correspond to traffic lights for different lanes at an
intersection. Generally, there are three traffic lights at an intersection, directing traffic
flow on the left-turn lane, right-turn lane, and straight-through lane respectively.
Setting 1, Setting 2, and Setting 3 can correspond to each of the three traffic lights.
After you draw the traffic light area and set the parameters, the camera can analyze
and determine the status of the traffic light in real time based on the live video.
Access mode: Select a value from the drop-down list based on the actual mode
in which the traffic light is connected to the camera:
485 priority: The traffic light signal is reported to the camera by a traffic
light detector. Select this value if the camera has been connected to a
traffic light detector. If this option is selected, the subsequent parameters
on this page will not be available for setting. You need to set the related
parameters under Intelligent Transportation > External Device > Traffic Light.
Video access: The camera captures the traffic light in real time and obtains
its status through video analysis. Select this value if the traffic light
status cannot be obtained through a traffic light detector, for example, when
the camera cannot be connected to a traffic light detector due to onsite
construction. After this value is selected, the subsequent parameters on this
page are available for setting.
Note: When 485 priority is selected (in other words, a traffic light detector
is connected), the traffic light status can be identified more accurately. If
Video access is selected, the identification accuracy of the traffic light
status is greatly affected by the surrounding environment.
Traffic light direction: Direction denoted by the actual traffic lights, such
as left turn or straight through. Select one or more check boxes.
Traffic light color: Colors of the actual traffic lights. Select one or more
check boxes.
Yellow light duration: Duration for which the traffic light stays yellow.
Enter a number, in seconds.
Total number of associated lanes: Total number of lanes associated with the
current video image. Set this parameter based on site requirements.
Violation recording: Whether the camera records traffic violations and stores
the footage on the SD card. Before enabling this function, ensure that an SD
card is installed in the camera and video recording is functioning. You can
search for and download the required footage on the Recording tab page or
under Snapshot > ITS.
Note: After violation recording is enabled, the camera records a video of 20s
by default. When a motor vehicle enters a yellow zone, the camera starts
recording. When the motor vehicle has stayed in the yellow zone for the
specified period, the camera stops recording.
Lane direction: Turning direction of the lane. Select an option based on the
actual situation.
Marked speed for small vehicles: Posted speed limit for small vehicles moving
on the lane. According to some local traffic rules and regulations, a warning
will be issued to a vehicle that exceeds the limit by less than 10%.
Lowest speed for small vehicles: Minimum speed for small vehicles driving on
the lane. Vehicles moving below this speed can impede traffic flow or be
dangerous.
Highest speed for small vehicles: If a small vehicle on this lane moves at a
speed higher than the specified value, the camera determines that the vehicle
is speeding, captures its image, and recognizes its related information.
Traffic rules and regulations may vary in different cities, so set this
parameter based on the actual regulated highest speed. For example, in some
cities, driving at a speed 10% over the posted speed limit is considered a
speeding violation. In this case, set this parameter to 1.1 times the value of
Marked speed for small vehicles.
Marked speed for large vehicles: Posted speed limit for large vehicles moving
on this lane. According to some local traffic rules and regulations, a warning
will be issued to a vehicle that exceeds the limit by less than 10%.
Lowest speed for large vehicles: Minimum speed for large vehicles driving on
the lane. Vehicles moving below this speed can impede traffic flow or be
dangerous.
Highest speed for large vehicles: If a large vehicle on this lane moves at a
speed higher than the specified value, the camera determines that the vehicle
is speeding, captures its image, and recognizes its related information.
Traffic rules and regulations may vary in different cities, so set this
parameter based on the actual regulated highest speed. For example, in some
cities, driving at a speed 10% over the posted speed limit is considered a
speeding violation. In this case, set this parameter to 1.1 times the value of
Marked speed for large vehicles.
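The 10%-over-the-limit example above translates directly into a threshold computation. The function below is illustrative only; the name and the default tolerance are assumptions mirroring the example in the text, and the actual tolerance must follow local regulations.

```python
def speeding_threshold(marked_speed_kmh, tolerance=0.10):
    """Return a Highest speed value for a lane, per the example above.

    In cities where driving 10% over the posted limit counts as speeding,
    the guide suggests setting Highest speed to 1.1x the marked speed.
    Rounded to one decimal to avoid floating-point noise.
    """
    return round(marked_speed_kmh * (1 + tolerance), 1)


print(speeding_threshold(60))   # 66.0 km/h for a 60 km/h lane
print(speeding_threshold(100))  # 110.0 km/h for a 100 km/h lane
```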
After configuring an external device under External Device, you do not need to configure it again
under Extension Interface. It also works the other way around.
Device type: External device type. Set this parameter to Loop detector.
Serial port: Serial port through which the loop detector connects to the
camera.
Snapshot scheme: Scheme for vehicle capture after the detector detects the
vehicle. The options are as follows:
Two_Out of loop 1_Out of loop 2_Out of loop 2 delay: Two loops are deployed.
The camera takes a snapshot at the moment the vehicle leaves loop 1, at the
moment it leaves loop 2, and after a specified delay from the moment it leaves
loop 2. A total of three snapshots are taken.
Two_Into loop 1_Out of loop 2_Out of loop 2 delay: Two loops are deployed. The
camera takes a snapshot at the moment the vehicle enters loop 1, at the moment
it leaves loop 2, and after a specified delay from the moment it leaves
loop 2. A total of three snapshots are taken.
Two_Into loop 2_Out of loop 2_Out of loop 2 delay: Two loops are deployed. The
camera takes a snapshot at the moment the vehicle enters loop 2, at the moment
it leaves loop 2, and after a specified delay from the moment it leaves
loop 2. A total of three snapshots are taken.
Delay (ms): Interval between the last two snapshots when the snapshot scheme
involves a delay. The value is an integer ranging from 0 to 1800, in
milliseconds. This parameter applies only to the snapshot scheme for
red-light-running detection.
Device type: External device type. Set this parameter to Traffic signal
detector.
Serial port: Serial port through which the traffic light detector connects to
the camera.
Port count: Number of ports through which the traffic light detector connects
to external traffic lights.
Left-turn light: ID of the port that connects the left-turn traffic light to
the detector.
Right-turn light: ID of the port that connects the right-turn traffic light to
the detector.
U-turn light: ID of the port that connects the U-turn traffic light to the
detector.
In the Windows 10 operating system, if the live video resolution exceeds 5 megapixels (3072 x 1728),
drawing lines may fail to be displayed on the live video image. To address this issue, log in to the
camera web system, choose Settings > System > Local Settings, and set Rendering mode to D3D.
Alternatively, choose Settings > Video/Audio/Image > Video and decrease the resolution of the
primary stream.
1. Choose Intelligent Transportation > Scene Configuration. Select the province or city
to which the local license plates belong from the Local license plate drop-down list box.
2. Select lane lines and set the lane line types based on site requirements.
3. Draw the lane lines, stop lines, left-turn boundary, right-turn boundary, straight-
through trigger line, and right lane boundary.
Figure 8-23 Drawing lane lines, boundary lines, and trigger lines
In the image, lane lines are in green, turning boundaries are in purple, stop lines and
the straight-through trigger line are in red, and the right lane boundary is in blue.
You can move the cursor to the middle of a line and hold down the left mouse button
to drag the lane line. You can also drag either end of a line to shorten, extend, or change
the direction of the line.
The following are requirements for drawing lines:
The lane lines and stop lines should match the actual ones. A stop line should be
as long as the width of the corresponding lane and cap the lane lines. At the
same time, the lane lines should not intrude into the crosswalk.
Turning boundaries should cover all possible turning paths while avoiding paths
of straight-through vehicles. In addition, draw the left-turn and right-turn trigger
lines where vehicles are about to turn left or right.
The lower end of the lane lines and right lane boundary should reach the lower
edge of the image. Otherwise, the traffic flow statistics (such as lane occupancy)
will be affected.
Straight-through trigger lines are used to detect straight-through vehicles and
straight-through violations. These lines must be set appropriately to meet the
following requirements:
− All vehicles passing straight through the intersection can reach the trigger line so
that they can be detected.
− The trigger line should be out of reach for left-turn and right-turn vehicles.
− When a vehicle reaches the straight-through trigger line, the horizontal resolution
of its license plate in the video is at least 120 pixels so that LPR can be performed.
You can set the front detection line based on site requirements. If the camera is
installed far away from the stop line, the resolution of the license plate image
captured when the vehicle is close to the stop line may be small. When the
vehicle rear passes the front detection line, a license plate image is captured to
ensure that the captured image is clear. To enable the front detection line, ensure
that the front detection line meets the following requirements:
− The front detection line should be below all stop lines. The distance from the front
detection line to the lower edge of the image is half of the vehicle length.
− The front detection line must be higher than the lowest point of all lane lines and
lower than the highest point of all lane lines. In other words, ensure that the front
detection line is within the lane lines.
− When the vehicle rear passes the front detection line, the license plate is captured.
4. Click Draw LPR Area to draw an LPR area for each lane. The camera will recognize
license plates in this area. Click in the preview image to specify points of the LPR area.
To finish the drawing, click the first point again. All the points will be connected to form
an LPR area. After the drawing is complete, click Save.
It is recommended that the upper and lower edges of an LPR area be one or two vehicle-
lengths away from the stop lines, and the LPR area cover the lane area.
The LPR area must be a convex polygon. Otherwise, the LPR will not take effect.
LPR areas cannot overlap each other.
An LPR area must include the stop line and its side edges must be as close as possible to the
lane lines or on the lane lines. Ensure that a vehicle is in the LPR area when it reaches the stop
line.
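The convex-polygon requirement above can be checked programmatically before the LPR area is saved. This is a minimal sketch using cross products of consecutive edges (a generic geometry check, not part of the camera web system):

```python
def is_convex(points):
    """Return True if the closed polygon given by (x, y) vertices is convex.

    A polygon is convex when all cross products of consecutive edge vectors
    share the same sign (collinear vertices are tolerated).
    """
    n = len(points)
    if n < 3:
        return False
    sign = 0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        x3, y3 = points[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # sign flip: a reflex vertex exists
    return True

# A rectangular LPR area is convex; an L-shaped area is not.
print(is_convex([(0, 0), (4, 0), (4, 3), (0, 3)]))                   # True
print(is_convex([(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (0, 3)]))   # False
```

An L-shaped area would be rejected by the camera, so drawing tools can run a check like this before submitting the points.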
5. (Optional) Draw a yellow area. When the detection of parking in yellow zones is
enabled, you need to draw a yellow area in the preview image. The method for drawing
a yellow area is the same as that for drawing an LPR area. The yellow area drawn in
the preview image should cover the actual yellow zone. After the drawing is complete,
click Save.
The distance between the yellow area and the four edges of the video image must be greater than
or equal to the length of one vehicle.
6. (Optional) Draw a pedestrian area. When the detection of failure to give right-of-
way to pedestrians is enabled, you need to draw a pedestrian area to mark the
crosswalk in the video image. The method for drawing a pedestrian area is the same as
that for drawing an LPR area. The pedestrian area should cover only the sidewalk and
must not cover the non-motor vehicle lane. Otherwise, the detection may be affected
by pedestrians and non-motor vehicles on the non-motor vehicle lane. Generally, there
are two cases when drawing a pedestrian area:
There is a green belt between the motor vehicle lane and the non-motor vehicle
lane. In this case, it is recommended that the pedestrian area include the
pedestrian waiting area between the two green belt segments, and not cover the
non-motor vehicle lane.
When there is no green belt between the motor vehicle lane and the non-motor
vehicle lane, it is recommended that the pedestrian area cover only the crosswalk,
and not cover the non-motor vehicle lane.
Enable target synthesis: After this function is enabled, target images of the driver
and front passenger will be superimposed on snapshots.
Violation Type: Image synthesis is performed during detection of the selected
violation types.
Note: The violation types displayed here are available to choose from in ePolice and
checkpoint modes.
Parameter Description
Number of Synthesized Images: Rule for arranging multiple images for synthesis. The
specific image in a corresponding position is configured under Arrangement. In a
synthesized image, 1 corresponds to Image 1, 2 corresponds to Image 2, 3 corresponds
to Image 3, and so forth.
1: rule for synthesizing one snapshot
2: rule for synthesizing two snapshots
3: rule for synthesizing three snapshots
Enable target synthesis: If this option is selected, the in-vehicle front-row target
image will be synthesized with the original snapshot.
If one target is detected, the target image will be displayed in the upper right
corner of the original snapshot.
If two targets are detected, the image of the target on the left will be displayed in
the upper left corner of the original snapshot, and the image of the target on the
right will be displayed in the upper right corner.
If more than two targets are detected, the image of the leftmost target will be
displayed in the upper left corner of the original snapshot, the image of the
rightmost target will be displayed in the upper right corner, and the images of the
targets in other positions will not be displayed.
In general, ePolice cameras capture images of the rear part of vehicles, so there are
no target images. Target images are captured only when vehicles drive against the
direction of traffic.
3. Click Save.
Image type: Type of the image on which text will be superimposed. The options are as
follows:
Original: Text is superimposed on the original snapshots.
Synthesis: Text is superimposed on the synthesized images.
Note: After text overlay is enabled, the superimposed image is displayed in Panorama
on the View page, and the saved snapshots are superimposed images.
3. Click Save.
In night scenarios, the traffic light color in the image is too saturated to be accurately
recognized by video analysis. In this case, it is recommended that a traffic light detector
be connected to the camera for better recognition of the traffic light status.
2. Set related parameters and click Save. The following table describes the parameters.
Red light type: Type of the red light. Select an option from the drop-down list:
Circle: red circular light that prohibits driving in all directions
Arrow: red arrow light that prohibits driving in a specific direction
Countdown: traffic light that has a timer displaying the time left before the signal
changes
Electronic Zoom In: After you click this button, you can zoom in on the video image
to view details. The image is restored after you click this button again.
It is recommended that each traffic light be set as a traffic light group and that
each frame cover only one light. The frame should match the size of the light as
closely as possible. The following figure shows an example: the left-turn light is a
traffic light group, and a frame is drawn to completely cover the light.
Violation detection
Simulate the red light to verify traffic violation detection functions such as
red-light-running detection.
Log in to the camera web system and choose Intelligent Transportation > Application >
Traffic Light Settings > Simulate traffic light. Then select a direction and click Enable.
The camera will consider that the traffic light in the specified direction is always red.
Simulating the red light improves function verification and commissioning efficiency.
If the LPR accuracy is lower than 90%, check the configurations, including whether the related
functions are enabled and whether the detection lines are correctly set. If the image quality is
unsatisfactory, for example, the license plate is underexposed or overexposed, adjust the image or
illuminator parameters.
----End
Huawei Intelligent Vision Certification Training
HCIA-Intelligent Vision
System Lab Guide
ISSUE:1.0
Overview
This document is applicable to the candidates who are preparing for the HCIA-
Intelligent Vision exam and the readers who want to understand the Intelligent Vision
basics, Intelligent Vision networking, Huawei Intelligent Vision product features, and
security configuration.
Description
This document introduces multiple experiments, including basic configurations, video
features, and intelligent analysis functions.
HoloSens Intelligent Vision System: 2 (shared by all groups)
Laptop or desktop computer: 1 for each group
1.1 Introduction
1.1.1 About This Experiment
The IVS1800 supports two configuration scenarios. One is the Local Display Unit (LDU)
scenario, where a single IVS1800 is deployed and a monitor is required. The other is
the iClient scenario, where multiple IVS1800s are deployed; in this case, a PC and a
monitor are required.
This document uses the iClient scenario as an example to describe how to configure
services on the Huawei Intelligent Vision platform.
1.1.2 Objectives
Upon completion of this task, you will be able to:
Master how to download and install the iClient.
Learn how to initialize the IVS1800.
Network port: One or more network ports with bandwidth of at least 1000 Mbit/s.
2. Check the number of live video channels supported by the client in different
configurations.
The following table lists the number of live video channels supported in different
configurations.
•Configuration A: i5-2400 CPU @ 3.10 GHz; 32-bit operating system; memory: 4 GB;
Windows 7 Professional Edition
•Configuration B: i7-6600 CPU @ 3.2 GHz; 64-bit operating system; memory: 16 GB;
Windows 10 Professional Edition (CPU software decoding)
•Configuration C: i7-8700 CPU @ 3.2 GHz; 64-bit operating system; memory: 16 GB;
Windows 10 Professional Edition (hardware decoding based on the integrated graphics
card)
CIF (512 Kbit/s): 38, 38, 60, 42, 64, 60
4CIF/D1 (2 Mbit/s): 22, 22, 45, 36, 50, 45
720p (2 Mbit/s @30 fps): 12, 10, 16, 14, 34, 30
1080p (4 Mbit/s @30 fps): 8, 6, 14, 10, 20, 14
3840 x 2160 (12 Mbit/s @25 fps): 1, 1, 3, 1, 4, 4
3840 x 2160 (12 Mbit/s @30 fps): 1, 1, 2, 1, 4, 3
3. Install the iClient. The iClient installation package can be obtained in two ways.
Option 1: Obtain the iClient installation package from the Huawei enterprise support
website.
(1) Visit http://support.huawei.com/enterprise.
(2) Search for and select IVS1800. The IVS1800 product page is displayed, as shown in
the following figure.
• The versions listed in the table are for reference only. In actual projects, please
select the corresponding software version based on the scenario.
Option 2: Obtain the iClient installation program from the OMU portal.
(1) Open Internet Explorer, enter https://<OMU portal IP address>:8443 in the address
box, and press Enter.
(2) Enter the user name and password and click Log In, as shown in the following
figure.
(3) Click Client Download in the upper right corner, and save the installation
program to a local directory, for example, D:\test.
4. Install the client. Find the iClient software package obtained in step 3. There are
two types of iClient software packages:
•The HoloSens_iClient_x64.exe program is used for installing the iClient on a 64-bit
Windows operating system.
Select the installation package that matches the operating system of the client,
double-click the client installation program, and complete the installation as
prompted.
----End
Security certificate: Select this check box only when Security Protocol is selected.
If you select this check box, the iClient performs security verification for the
IVS1800 during login.
If you do not select this check box, the system displays a message indicating that
the system is prone to attacks. You are advised to select this check box.
•At first login to the IVS1800, change the password as prompted. You are advised to
use a highly complex password.
•A maximum of four IVS1800s can be added.
•If you want to add more IVS1800s after login, add them under System
Management > Device Management. You can edit or delete IVS1800s as required.
----End
RAID 5: All hard disks form a RAID 5 group. Each recording file is stored on all hard
disks. RAID 5 provides higher data storage reliability, but its disk usage is lower
than that in non-RAID mode.
Standard:
•Four or more hard disks are required.
•One hard disk is used as a hot spare disk. The hot spare disk does not store data.
When a hard disk in the RAID group is faulty, the hot spare disk will replace the
faulty one and function as a member disk of the RAID group.
•The disk usage is lower than that in economical configuration mode, but the data
storage reliability is higher than that in economical configuration mode.
Economical:
•Three or more hard disks are required.
•No hot spare disk is configured.
•The disk usage is higher than that in standard configuration mode, but the data
storage reliability is lower than that in standard configuration mode.
Non-RAID: Each recording file is stored on only one hard disk. The disk usage is
high, but the data storage reliability is lower than that in RAID 5 mode.
5. Click OK.
6. Choose Local Configuration > Local Disk. Check the RAID group status under RAID
Groups. This section uses the RAID 5 mode as an example. In this example, RAID 1 is
created for the system partition that stores system data and RAID 5 is created for the
data partition that stores recordings.
2.1 Overview
2.1.1 About This Experiment
You can configure a system administrator user, a common operator user, or a
role_interface user on the OMU portal.
•A system administrator can log in to the OMU portal to perform routine maintenance
and log in to the iClient to perform operations on the IVS1800 connected to the
iClient.
•A common operator can log in to the OMU portal to view the CPU, memory, and
hard disk usage, and log in to the iClient to perform basic operations such as live
video viewing, recording playback, and alarm handling on the IVS1800 connected to
the iClient.
•A role_interface user is dedicated to logging in to the SDK Service.
2.1.2 Objectives
Upon completion of this task, you will be able to:
Understand the differences between the OMU portal rights and iClient rights
of different roles.
Master how to add a user and assign different rights to the user.
The table below describes the OMU portal permissions of users with different roles.
Login/Logout: √ √ √ √
Password change: √ √ √ √
Client download: √ √ √ ×
Home page: √ √ √ ×
System configuration: √ √ × ×
Maintenance and management: √ √ × ×
Storage management: √ √ × ×
Inspection: √ √ × ×
Fault information collection: √ √ × ×
NVR upgrade: √ × × ×
User management: √ × × ×
The table below describes the iClient permissions of users with different roles.
Device management: √ √ × ×
Recording management: √ √ × ×
Alarm management: √ √ × ×
NMS: √ √ × ×
Recording bookmark management: √ √ × ×
Log management: √ √ × ×
Alarm handling: √ √ √ ×
Protection zone alert deployment and withdrawal: √ √ × ×
Watermark verification: √ √ √ ×
Snapshot management: √ √ √ ×
Live video viewing: √ √ √ ×
Recording playback: √ √ √ ×
Recording download: √ √ √ ×
Voice: √ √ √ ×
Basic PTZ controls: √ √ √ ×
Manual recording: √ √ √ ×
Snapshot taking: √ √ √ ×
Checkpoint: √ √ √ ×
Advanced PTZ controls: √ √ √ ×
Recording lock: √ √ √ ×
System configuration management: √ √ × ×
Intelligent analysis task viewing: √ √ √ ×
Intelligent analysis task search: √ √ √ ×
Static library viewing: √ √ √ ×
Static library management: √ √ √ ×
Redlist viewing: √ √ × ×
Redlist management: √ √ × ×
Select a role to create a user based on the site requirements. The procedures for
configuring users are similar. The following describes how to configure the system
administrator.
1. Log in to the OMU portal as the admin user.
2. Choose User Management > Add User.
3. Configure a system administrator, as shown in the following figure.
User Name/Password/Confirm password: User name and password of the new user. You are
advised to create a complex password that meets the password complexity requirements.
Enable account validity: If you select Enable account validity, the user account is
valid only within the configured validity period.
Enable all domain to look: If selected, enables you to view all cameras on the
IVS1800.
----End
User Name/Password: The user name and password are those of the new user.
Security Certificate: Select this check box only when Security Protocol is selected.
If you select this check box, the iClient will perform security verification for the
IVS1800 during login. You are advised to select this check box.
----End
3 Connecting Cameras
3.1 Overview
3.1.1 About This Experiment
This experiment provides guidance on how to centrally manage cameras on the
intelligent vision platform after they are registered with the platform, use them to
view live and recorded video, and store and forward video streams. In this way, video
sharing and networking between departments or organizations at all levels can be
achieved.
There are several camera registration modes. The following table describes the
application scenario of each camera registration mode.
Manual Batch Access: If you have planned IP addresses for a large number of cameras
and want to connect them to the video security platform, you can add the cameras in
batches. Supported protocols: HWSDK, ONVIF, and GB/T 28181.
Manual Single Access: If only a few cameras need to connect to the platform, you can
add the cameras one by one. Supported protocols: HWSDK, ONVIF, and GB/T 28181.
3.1.2 Objectives
Upon completion of this task, you will be able to:
Understand the methods used in different camera access scenarios.
Master how to add and configure basic camera information.
Parameter Description
Port: If the cameras use the encryption transmission protocol TLS, set the port
number to 6061. If the cameras use a non-encryption transmission protocol, set the
port number to 6060.
7. Click Finish.
8. Click Finish.
Device Name: Device name displayed on the iClient. You are advised to enter the
installation location or detection area.
User Name/Password/Confirm Password: User name and password used to register a camera
with the IVS1800. Choose Settings > Network > Platform Connection > Password
Management. On the SDK Password tab page, the value of Current password is the
registration password. The default value is HuaWei123.
Device Login Port: If the cameras use the encryption transmission protocol TLS, set
the port number to 6061. If the cameras use a non-encryption transmission protocol,
set the port number to 6060. Non-encryption transmission protocols may have security
risks. You are advised to use an encryption transmission protocol.
6. Click Finish.
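The port rule in the table above (6061 for TLS, 6060 for plain transport) is simple enough to capture in a helper when scripting bulk camera configuration. This is an illustrative sketch, not a platform API; the function name is hypothetical:

```python
def device_login_port(use_tls: bool) -> int:
    """Return the IVS1800 device login port for the chosen transport.

    TLS-encrypted registration uses port 6061; plain transport uses 6060
    (not recommended, since unencrypted protocols carry security risks).
    """
    return 6061 if use_tls else 6060

print(device_login_port(True))   # 6061
print(device_login_port(False))  # 6060
```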
Parameter Description
Enable SDK Select this check box when the camera is passively registered
with the IVS1800 through the HWSDK protocol. You do not need
to set all the preceding parameters.
Add You can click Add to access the camera connection page.
Delete You can click Delete to delete a camera. You can restore the
deleted camera on the recycle bin page.
After a camera is deleted, the recordings from the camera will
be deleted as well. Exercise caution when performing this
operation.
Recycle Bin You can click Recycle Bin to access the recycle bin page. On this
page, you can click Restore to restore a deleted camera or click
Delete to permanently delete a deleted camera.
Only permanently deleted cameras can be connected to the
IVS1800 again.
Camera Login You can click Camera Login to access the page for logging in to
the web system of a camera.
View PU Logs You can click View PU Logs to view logs of the current camera.
View Platform You can click View Platform Logs to view platform logs of the
Logs iClient.
The following table describes the key parameters. Use the default values for
parameters that are not listed in the table.
Parameter Setting
Name: Set the main device name as required. The name will be displayed on the Main
Devices tab page. You are advised to set this parameter to the installation position
of a camera.
User Name/New Password/Confirm Password: Change the values of these parameters to the
user name and password for registering a camera with the IVS1800. The change will be
synchronized to the camera.
Camera Name Set the camera name as required. The name will be displayed
on the Cameras tab page and can be the same as the main
device name.
Display time If you select Display time, the current time of a camera is
displayed in the upper left corner of the video pane by default.
You can drag the time display position.
Display text Set the OSD text as required. By default, all text records overlay
each other. You can drag a red box shown in the preceding
figure to display an OSD text record in the required position.
The system supports a maximum of eight OSD text records.
Enable text blinking: If you select Enable text blinking for an OSD text record, the
text blinks.
The following table describes the key parameters. You are advised to use default
values for the parameters that are not listed in the table.
Name/User Name/PU Password: The values are the same as those of Camera Name, User
Name, and PU Password on the Common Parameters tab page.
Connection Code: Code for connecting a camera to the IVS1800. After you modify this
parameter, its new value is synchronized to the camera.
network quality.
The default value is No. Set this parameter as required.
b. Configure WebUI login information about the camera, as shown in the following
figure.
Camera Web Port: Communication port for the camera web system access protocol.
The following table describes key parameters. You only need to set the parameters
that are listed here.
7. Set Report Synthesized Image and Report Vehicle Coordinates. These parameters
need to be set only when Intelligent Attribute is set to Plate analysis.
•To enable a checkpoint camera to upload synthesized images, set Report Synthesized
Image to Yes.
3.2.4.1 HWSDK/ONVIF
1. Log in to the OMU portal as the admin user.
2. Choose Maintenance > Unified Configuration.
3. Configure the camera NTP time synchronization function, as shown in the following
figure.
Synchronization interval (min): Set the interval for a camera to synchronize time
with the IVS1800 as required. The camera then automatically synchronizes time with
the IVS1800 at the specified interval.
----End
4 Live Video
4.1 Overview
4.1.1 About This Experiment
This section aims to help you master operations related to live video viewing on the
IVS1800 platform, including previewing live video, performing camera sequencing,
setting snapshots, setting preset positions and home positions, and setting tour.
4.1.2 Objectives
Upon completion of this task, you will be able to:
Understand the functions and application scenarios of live video.
Understand how to configure live video viewing and camera cycling.
2. If the common layouts cannot meet your requirements, you can customize a layout.
In the dialog box displayed, click to customize the layout based on the current
one. For example, click a customized layout ID C1 and set the number of video panes
to 8x8.
4. Click Save. The customized window layout is displayed on the Live tab page.
3. Click in the lower right corner of the page. To add a cycle group for the first
time, click +. Otherwise, click New Group.
4. Add a cycle group, as shown in the following figure.
Pane Quantity The number of cycling panes must be less than the number of
cameras to cycle.
Dwell Time (s) If you do not select Adjustable, you can set the dwell duration
for cameras in all groups at the same time.
Dwell Time (s) If you select Adjustable, you can set the dwell duration for
cameras in a specific cycle group.
Preset Position Select a preset position to view live video at the position during
cycling.
Live Stream Select the live video stream based on the site requirements.
Locally set: The stream is the same as that set under System
Management > System Management > Network Settings > Live
Stream.
Automatic: No stream is specified. After this option is selected,
the stream that the camera has transmitted to the platform is
preferentially used. If the camera has not transmitted any
stream to the platform, the primary stream of the camera is
used by default.
Primary: The primary stream of the camera is used. The primary
stream features a high bit rate, definition, and bandwidth
usage.
Secondary 1: Secondary stream 1 of the camera is used. The
secondary stream features low definition and bandwidth usage.
Secondary 2: Secondary stream 2 of the camera is used. The
secondary stream features low definition and bandwidth usage.
Video Wall Select the video stream to be displayed on the video wall based
Stream on the site requirements.
Locally set: The stream is the same as that set under System
Management > System Management > Network Settings >
Video Wall Stream.
Automatic: No stream is specified. After this option is selected,
the stream that the camera has transmitted to the platform is
preferentially used. If the camera has not transmitted any
stream to the platform, the primary stream of the camera is
used by default.
Primary: The primary stream of the camera is used. The primary
stream features a high bit rate, definition, and bandwidth
usage.
Secondary 1: Secondary stream 1 of the camera is used. The
secondary stream features low definition and bandwidth usage.
Secondary 2: Secondary stream 2 of the camera is used. The
secondary stream features low definition and bandwidth usage.
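The Automatic option described above amounts to a fallback rule: prefer whatever stream the camera is already transmitting to the platform, otherwise use the primary stream. The following Python sketch models that selection logic for a pane; the function and value names are hypothetical, not part of the iClient:

```python
def select_stream(setting, transmitted_stream=None, local_setting="primary"):
    """Resolve which camera stream to display for a pane.

    setting            -- "locally-set", "automatic", "primary",
                          "secondary1", or "secondary2"
    transmitted_stream -- stream the camera already sends to the platform,
                          or None if it is not currently streaming
    local_setting      -- value configured under Network Settings > Live Stream
    """
    if setting == "locally-set":
        return local_setting
    if setting == "automatic":
        # Prefer the stream already in transit; fall back to the primary one.
        return transmitted_stream if transmitted_stream else "primary"
    return setting  # an explicit stream was requested

print(select_stream("automatic", transmitted_stream="secondary1"))  # secondary1
print(select_stream("automatic"))                                   # primary
```

The same rule applies to both Live Stream and Video Wall Stream, only the local default differs.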
5. After the configuration is complete, choose Basic Operations > Live > Cameras.
Enable camera cycling, as shown in the following figure.
Regards the selected pane as the first cycle pane. Live video
from cameras to cycle is played in sequence.
Live video from cameras in the cycle group is played on live video viewing page.
D. Click to lock the PTZ device, preventing other users at the same level or
lower levels from controlling the PTZ device.
When the PTZ device is locked, you can preempt the PTZ control permission based on
user levels. There are 32 user levels, and level 1 is the highest. A higher user level
indicates a higher PTZ control priority.
When the PTZ device is unlocked, the user who performed the last operation can
control it regardless of priority.
E. Click Advanced and use the advanced PTZ control functions.
The following table lists the advanced PTZ control functions of buttons on the
toolbar.
Horizontal tour: View live video from the camera that is performing a horizontal
tour, rotating in the horizontal direction at a specified degree (including 360°).
Vertical tour: View live video from the camera that is performing a vertical tour.
Aperture reducing: Reduce the aperture size so that less light travels through the
hole.
Aperture increasing: Increase the aperture size so that more light travels through
the hole.
4. Select the live video pane of the PTZ dome camera and click Expand in the lower
left corner.
5. Control the PTZ direction and zoom ratio, and rotate the PTZ to an appropriate
position.
6. Expand PTZ Control and choose Advanced.
7. Configure a preset position, as shown in the following figure.
9. Invoke a preset position, as shown in the following figure. The preset position is
successfully invoked, and the camera rotates to the preset position.
Parameter/Button Setting
Dwell Time (s) Set the tour duration for each preset position.
A short tour duration may shorten the service life of the
camera motor and belt. If the preset positions do not need to
be frequently switched onsite, you are advised to set the tour
duration to be longer (for example, over 60 seconds).
Speed Level Set the tour speed for each preset position. The value ranges
from 1 to 10.
Delete Select a preset position and click Delete to delete the preset
position.
4. Select the pane of the live video displayed. Expand PTZ Control and choose
Advanced.
5. Execute a tour, as shown in the following figure. The tour is successfully executed.
5 Video Wall
5.1 Overview
5.1.1 About This Experiment
In this experiment, you need to configure the decoder service password, add the
decoder to the video wall client, and create a video wall layout on the video wall
client to play live video from a camera on the video wall. This experiment uses
Huawei DEC6108 decoder as an example to describe the configuration process. In
actual projects, obtain the product documentation based on the decoder model.
5.1.2 Objectives
Upon completion of this task, you will be able to:
Understand the hardware devices required for pushing video to the video wall.
Master the configuration of Huawei decoders.
Master how to add decoders to the iClient.
Master how to create the video wall layout on the iClient.
Master the operations of pushing live video to the video wall.
4. Add decoders.
•Add decoders one by one, as shown in the following figure.
User Name/Password: If the system is connected to the DEC6108, enter admin as the
user name and the DEC6108 business password as the password. If the system is
connected to the DEC6501L, enter the user name and password for decoder login.
Parameter Setting
Pane Quantity The number of cycling panes must be less than the number of
cameras to cycle.
Dwell Time (s) If you do not select Adjustable, you can set the dwell duration
for cameras in all groups at the same time.
Dwell Time (s) If you select Adjustable, you can set the dwell duration for
cameras in a specific cycle group.
Preset Position Select a preset position to view live video at the position
during cycling.
Live Stream Select the live video stream based on the site requirements.
Locally set: The stream is the same as that set under System
Management > System Management > Network Settings >
Live Stream.
Video Wall Stream Select the video stream to be displayed on the video wall
based on the site requirements.
Locally set: The stream is the same as that set under System
Management > System Management > Network Settings >
Video Wall Stream.
Automatic: No stream is specified. After this option is selected,
the stream that the camera has transmitted to the platform is
preferentially used. If the camera has not transmitted any
stream to the platform, the primary stream of the camera is
used by default.
Primary: The primary stream of the camera is used. The
primary stream features a high bit rate, definition, and
bandwidth usage.
Secondary 1: Secondary stream 1 of the camera is used. The
secondary stream features low definition and bandwidth
usage.
Secondary 2: Secondary stream 2 of the camera is used. The
secondary stream features low definition and bandwidth
usage.
4. Choose Basic Operations > Video Wall. Start cycling on the video wall, as shown in
the following figure.
Parameter Setting
Cycle Select a cycle for a view schedule as required. The options are
as follows:
- Weekly
- Daily
Period picker Drag the mouse to select a time segment and double-click the
time segment to set a precise time point. Right-click the time
segment and choose a view.
4. Favorite a layout as a view, as shown in the following figure. You can overwrite an
existing view when favoriting a layout as a new view.
Cycle Select a cycle for a view schedule as required. The options are
as follows:
- Weekly
- Daily
Period picker Drag the mouse to select a time segment and double-click the
time segment to set a precise time point. Right-click the time
segment and choose a view.
6 Snapshot Taking
6.1 Overview
6.1.1 About This Experiment
After completing this experiment, you will understand how to save a key image to the
local PC or server for subsequent analysis and determination.
6.1.2 Objectives
Upon completion of this task, you will be able to:
Understand the application scenarios of snapshot taking.
Master the method of configuring snapshot taking.
•If Snapshot Storage Location is set to Server, the message "Server-based snapshot
successful" is displayed after snapshots are successfully taken.
7 Video Recording
7.1 Overview
7.1.1 About This Experiment
After completing this experiment, you will understand how to record videos in a
specified time segment in key areas and store valid video clips to provide basic
materials for video analysis and video evidence collection.
7.1.2 Objectives
Upon completion of this task, you will be able to:
Master the method of configuring server-based recording.
Learn how to play back, download, and lock recordings.
Insufficient Recording Space Policy: Policy used when the recording space is
insufficient. The options are as follows:
Stop if insufficient: Only recordings whose retention period expires are recycled.
The system stops recording when the storage space is used up.
Overwrite if insufficient: Recordings are recycled in the following order until the
available storage space is greater than the threshold: recordings whose retention
period expires > recordings whose retention period is not configured > earliest
recordings in the retention period.
Video Recycle Policy: Policy for recycling video. The options are as follows:
- Recycle by storage space: Video is recycled only when the storage space is less than the threshold.
- Recycle by expiration time: Video is recycled when the storage space is less than the threshold or the retention period expires.
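The "Overwrite if insufficient" order above can be sketched in code. The following is a minimal, illustrative Python sketch (not the IVS1800 implementation); the recording tuple shape and function name are assumptions made for the example:

```python
from datetime import datetime, timedelta

# Hypothetical recording record: (name, recorded_at, retention_days or None).
def recycle_order(recordings, now):
    """Return recordings in the order they would be recycled:
    expired retention > no retention configured > earliest within retention."""
    def key(rec):
        name, recorded_at, retention_days = rec
        if retention_days is not None and now - recorded_at > timedelta(days=retention_days):
            group = 0  # retention period has expired: recycled first
        elif retention_days is None:
            group = 1  # no retention period configured: recycled second
        else:
            group = 2  # still within retention: earliest recordings first
        return (group, recorded_at)
    return sorted(recordings, key=key)

now = datetime(2021, 6, 1)
recs = [
    ("a.mp4", datetime(2021, 5, 1), 7),     # retention expired
    ("b.mp4", datetime(2021, 5, 20), None), # no retention configured
    ("c.mp4", datetime(2021, 5, 25), 30),   # within retention, older
    ("d.mp4", datetime(2021, 5, 28), 30),   # within retention, newer
]
print([r[0] for r in recycle_order(recs, now)])  # ['a.mp4', 'b.mp4', 'c.mp4', 'd.mp4']
```

In a real system, recordings would be deleted in this order only until the available space rises back above the threshold.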
Server Recording Stream: Stream type used for scheduled server recordings. Select a stream type based on the bandwidth and network quality. The options are as follows:
- Automatic: No stream is specified; the primary stream is used by default.
- Primary: The primary stream of the camera is used. The primary stream features a high bit rate, high definition, and high bandwidth usage.
- Secondary: A secondary stream of the camera is used. Secondary streams feature low bit rates, definition, and bandwidth usage. If the camera has multiple secondary streams, multiple options such as Secondary Stream 1 and Secondary Stream 2 are displayed.
Image Storage (days): The default value is 365. Set this parameter based on the site requirements.
Apply to Others: Button for applying the current recording policy to other cameras.
Daily: After this option is selected, you set the recording time segment on a daily basis.
Weekly: After this option is selected, you set the recording time segment on a weekly basis.
Continuous: After this option is selected, the system records video around the clock.
Apply to Others: Button for applying the server recording policy of this camera to other cameras.
Add to Restore Priority List: Button for restoring video and images from the preferred cameras first based on data in the Data Safe. You can set these cameras on the iClient when the system is running properly.
Delete Plan:
- To delete the server recording plan of the selected camera, click Delete Plan under Server Recording Plan.
- To delete the server recording plans of multiple cameras, click Delete Plan under the device tree.
Parameter Description
Reserved Space (MB): Minimum disk space reserved for storing recordings. To ensure that recordings can be stored correctly, the available disk space must be greater than this value.
Space Limit Warning (MB): Disk space threshold for triggering an alarm. When the available disk space falls below this threshold, an alarm is triggered.
You can also drag an entire camera group to the live video layout to view the live video from all cameras in the group. When you add an IVS1800, a camera group is automatically generated. You can also create a customized camera group under Device Management > Cameras.
•Cameras: Displays all camera groups and cameras in the groups. You can view live
video from a camera or a group of cameras.
•Micro Cloud: Displays all IVS1800s and allows users to view live video from cameras
connected to each IVS1800.
•Favorites: Displays favorited cameras and views. You can view live video from a
favorited camera and invoke a favorited view.
4. Click the recording icon on the live video pane of the camera to start manual recording.
The recording icon is displayed in red in the live video pane.
5. Click the recording icon on the live video pane of the camera again to stop manual recording.
•If Recording Storage Location is set to Local, the message "Recording saved" is
displayed. You can click Click here to open folder and view the recordings in the local
path.
•If Recording Storage Location is set to Device, the message "Recording stopped for
camera *****" is displayed.
7.2.5 Playback
1. Log in to the iClient as a local user.
2. Choose Basic Operations > Playback > Cameras.
3. Configure a recording playback, as shown in the following figure.
Storage Location: Location of the video files to be searched for. The options are as follows:
- Server: Search for video files stored on the server, including scheduled and manual server recordings.
- PU: Search for video files stored on the PU. If a camera has an SD card and is directly connected to the system, recordings are stored on the SD card of the camera.
- Local: Search for local video files recorded through manual operations on the live video page.
Lock Period: Set the recording lock period. The default period is 30 days. You can select Permanently to lock the recording permanently.
8 Alarms
8.1 Overview
8.1.1 About This Experiment
After an alarm is configured, if a camera reports an exception to the platform, the
platform can link the camera or peripheral cameras to perform some actions, such as
recording and displaying the live video page. This experiment describes how to
manage alarms and configure alarm linkage and protection zones.
8.1.2 Objectives
Upon completion of this task, you will be able to:
Learn how to set alarm parameters on the platform.
Learn how to configure alarm linkage.
Understand the concept of protection zones and how to configure protection
zones.
Play Non-triggered Live Video in Alarm Pane: Indicates whether an alarm pane of a video wall can play common live video.
- On: yes
- Off: no
Alarm Pop-up: Click and select an alarm type or severity. When an alarm of the corresponding type or severity is received, an alarm dialog box is displayed.
Auto Close After (s): Period during which the alarm dialog box is displayed.
Parameter Setting
Parameter Description
Severity Name: Name of the alarm severity, which must be different from the name of any existing alarm severity. The default alarm severities are Critical, Major, Minor, and Warning.
Weight: Weight of the alarm severity, which must be different from the weight of any existing alarm severity. The default alarm weights are as follows:
- Critical: 100
- Major: 80
- Minor: 50
- Warning: 1
Camera List: List of cameras for which you need to configure an alarm linkage. In this example, select cameras.
Alarm Source List: Alarm source list. You can select alarm devices, for example, alarm bells, to configure an alarm linkage.
Protection Zone List: Protection zone list. You can select protection zones to configure an alarm linkage.
Parameter Description
Add Device: Select devices in Camera List and Alarm Source List.
Disable Observation: Click Disable Observation to withdraw an alert that has been deployed in a detection area.
Plan: Click Plan and set an alert deployment plan for the protection zone.
8.2.4 Verification
8.2.4.1 Service Alarms
1. Log in to the iClient as a local user.
2. Choose Basic Operations > Alarm Center.
Historical target alert alarms, vehicle alert alarms, and behavior analysis alarms are
displayed. The following describes how to query and handle vehicle alert alarms as an
example.
3. Choose Vehicle Alerts.
4. Query an alarm.
Alarm Query
1. Log in to the iClient as a local user.
2. Choose System Management > Device Management.
3. Right-click an IVS1800 and choose O&M Center.
4. Choose O&M Alarms > Alarm Query.
5. Search for current alarms.
b. Click Search. The alarms that meet the search criteria are displayed.
Alarms can be exported. Select alarms and click Export to export the selected alarms
in an Excel file to the local computer.
9.1 Overview
9.1.1 About This Experiment
You can configure intelligent behavior analysis on Huawei SDCs or the IVS1800
platform. This experiment describes how to configure behavior analysis on the
IVS1800 platform.
9.1.2 Objectives
Upon completion of this task, you will be able to:
Learn how to load intelligent algorithm plug-ins on the platform.
Master how to configure and verify intelligent behavior analysis.
- IVS1800_V100R019C50SPC100_Plugin_Sensitive.zip: SENSITIVE
- IVS1800_V100R019C50SPC100_Plugin_Huawei_Target_MCS.zip: MCS
- IVS1800_V100R019C50SPC100_Plugin_Huawei_Multifunc_A_VA.zip: VA
- IVS1800_V100R019C50SPC100_Plugin_Huawei_Behavior_D_VA.zip
- IVS1800_V100R019C50SPC100_Plugin_Huawei_Multifunc_D_VA.zip
Parameter Setting
Video Type: Video type. The options are Live and Recorded.
Analysis Period: Analysis period. You can set a specific analysis period based on the site requirements. Set this parameter only when Video Type is set to Recorded.
Work Mode: Work mode. You can set the overall sensitivity of an analysis task. The options are as follows:
- 0: low
- 1: medium
- 2: high
- 3: lower
- 4: lowest
Select Behavior Analysis Rule: Select a behavior analysis rule. In this example, select Intrusion detection.
Min. Object/Max. Object: A target object entering the detection area triggers an alarm only when its size is within the range defined by these two parameters.
9.2.3 Verification
1. Log in to the iClient in local mode.
2. Choose Basic Operations > Alarm Center > Behavior Analysis Alarms.
3. Search for behavior analysis alarms, as shown in the following figure.
Parameter Description
Cameras: Source. Click the Cameras text box and select cameras from the IVS1800. If there are multiple IVS1800s, you can select cameras from only one IVS1800 for search.
Period: Time segment in which alarms are generated. The options are as follows:
- Past 3 Days
- Past Week
- Past Month
4. Click an alarm in the search result to view the corresponding camera's live video
and alarm-triggered recording.
5. Click , enter the alarm handling suggestion, and confirm the alarm.
10.1 Overview
10.1.1 About This Experiment
This experiment describes how to create an intelligent target analysis task on the
platform for target search and alert deployment.
10.1.2 Objectives
Upon completion of this task, you will be able to:
Understand the application scenarios of intelligent target analysis.
Master how to configure intelligent target analysis tasks on the platform for
target search and alert deployment.
List File: List file. Select the directory where the generated CSV file is located, for example, D:\target\target_template_en_US\template.csv.
If the name and sex of a person can be identified, the target image can be named in Name___Sex format, for example, Zhang San___0.
The following table describes the enumerated values for some naming fields. Set other fields based on the site requirements.
Date of Birth: The value is in the format YYYY-MM-DD, for example, 2019-09-30.
List File: Select the directory where the target images are located, for example, D:\photos.
7. Click OK.
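The Name___Sex naming convention above can be parsed mechanically when preparing a list file. The following is an assumption-based Python sketch: the function name and the CSV column headers are invented for illustration and may differ from the actual template fields.

```python
import csv
import io

def parse_target_filename(filename):
    """Split a target image file name in Name___Sex format
    (e.g. 'Zhang San___0.jpg') into name and sex fields."""
    stem = filename.rsplit(".", 1)[0]       # drop the file extension
    if "___" in stem:
        name, sex = stem.split("___", 1)
    else:
        name, sex = stem, ""                # sex not identified
    return {"Name": name, "Sex": sex}

# Build CSV rows from a folder's worth of file names.
rows = [parse_target_filename(f) for f in ["Zhang San___0.jpg", "Tom.png"]]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Name", "Sex"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

In practice the generated CSV would be saved to the list-file directory (for example, D:\target\target_template_en_US\template.csv) before import.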
5. Save the target images to be imported to the specified folder, for example, D:\test.
•Images in JPG, JPEG, JPE, DIB, BMP, and PNG formats can be uploaded. The size of each image cannot exceed 5 MB.
•Each image must be named after the target's name, for example, Tom.
List File: Select the directory where the target images are located, for example, D:\test.
7. Click OK.
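The format and size constraints above can be pre-checked before import. This is a minimal illustrative sketch (not part of the iClient); it only validates the extension and file size:

```python
import os

ALLOWED_EXTS = {".jpg", ".jpeg", ".jpe", ".dib", ".bmp", ".png"}
MAX_SIZE = 5 * 1024 * 1024  # 5 MB limit described above

def check_target_image(filename, size_bytes):
    """Return 'ok' if the image meets the upload constraints,
    otherwise a short reason string."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTS:
        return "unsupported format"
    if size_bytes > MAX_SIZE:
        return "larger than 5 MB"
    return "ok"

print(check_target_image("Tom.jpg", 200_000))      # ok
print(check_target_image("Tom.gif", 200_000))      # unsupported format
print(check_target_image("Tom.png", 6 * 1024**2))  # larger than 5 MB
```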
Parameter Description
Video Type: Video type. The options are Live and Recorded.
Analysis Period: Analysis period. You can set a specific analysis period based on the site requirements. Set this parameter only when Video Type is set to Recorded.
Source: Click the Cameras text box and select cameras from the IVS1800. If there are multiple IVS1800s, you can select cameras from only one IVS1800 for search.
Parameter Description
Source: Click the Cameras text box and select cameras from the IVS1800. If there are multiple IVS1800s, you can select cameras from only one IVS1800 for search.
Active: Time range for executing a target alert task. Use either of the following methods to set this parameter:
- Select One Week, One Month, or One Year.
- Customize the time range.
Click Set Hours to set the alert time segment for each day.
Algorithm: Algorithm. Select For all algorithms and set Match Threshold (%). When the similarity between a captured passer-by target and a listed target is greater than or equal to the threshold, the two targets match.
Alert Level: Alert level. The options are Critical, Major, Minor, and Warning. The alert level is the same as the severity of alarms generated by the alert task.
Alert Scope: Alert scope. Click the text box under Camera and select cameras from the camera list. A maximum of 64 cameras can be selected.
Send alarm in order: When a user selects multiple alert objects, the IVS1800 matches captured images with the images in the lists based on list creation time. If a target person is hit in a list, the IVS1800 does not match it with the images in other lists. Set this parameter based on the site requirements.
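The combination of Match Threshold (%) and Send alarm in order can be sketched as follows. This is an illustrative Python example under stated assumptions: the similarity scores are taken as given inputs, and the function name and data shapes are invented for the sketch, not taken from the product.

```python
def match_in_order(similarities_by_list, threshold):
    """similarities_by_list: [(list_name, similarity_percent), ...] ordered
    by list creation time. Return the first list whose similarity meets the
    Match Threshold (%), or None if no list matches."""
    for list_name, similarity in similarities_by_list:
        if similarity >= threshold:
            return list_name  # hit: later lists are not checked
    return None

# Lists ordered by creation time, with example similarity scores.
lists = [("Blocklist 2020", 72.0), ("VIP 2021", 95.0)]
print(match_in_order(lists, 80))  # VIP 2021
print(match_in_order(lists, 70))  # Blocklist 2020 (first hit wins)
```

A lower threshold catches more candidates but increases false matches; the first-hit rule is why list creation order matters when several lists are deployed.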
10.2.6.3 Verification
1. Log in to the iClient in local mode.
2. Choose Basic Operations > Alarm Center > Target Alert Alarms.
3. Search for target alerts, as shown in the following figure.
Operation Description
Export alarm information: Export the target alerts to an Excel file and save the file to the local computer. The options are as follows:
- Export Selected: Export the selected alarms.
- Export This Page: Export all alarms on the current page.
- Export All: Export all the found alarms.
11.1 Overview
11.1.1 About This Experiment
This experiment describes how to create an intelligent vehicle analysis task on the
platform for vehicle search and alert deployment.
11.1.2 Objectives
Upon completion of this task, you will be able to:
Understand the application scenarios of intelligent vehicle analysis.
Master how to configure intelligent vehicle analysis tasks on the platform for
vehicle search and alert deployment.
Analysis Period: Time range. The default value is the past 24 hours. Set this parameter only when Video Type is set to Recorded.
Task Type: Task type. Select Pedestrian and vehicle data structuring and select an algorithm.
11.2.2.2 Verification
1. Log in to the iClient in local mode.
2. Choose Intelligent Applications > Person Search > Filter by Criteria.
3. Configure search criteria, as shown in the following figure.
Parameter Description
Source: Click the Cameras text box and select cameras from the IVS1800. If there are multiple IVS1800s, you can select cameras from only one IVS1800 for search.
f. Fill in the parameter values by referring to the enumerated values in the template.
g. Click Generate CSV File.
The message "CSV file generated successfully" is displayed, and the template.csv file is
generated in D:\vehicles\car_template_en_US.
6. Click Import.
a. Click , select an image from the local host, and take a snapshot of the image.
•You can upload images in JPG, JPEG, JPE, DIB, BMP, and PNG formats.
Analysis Period: Time range. The default value is the past 24 hours. Set this parameter only when Video Type is set to Recorded.
Task Type: Task type. Select Pedestrian and vehicle data structuring and select an algorithm.
Parameter Description
Source: Click the Cameras text box and select cameras from the IVS1800. If there are multiple IVS1800s, you can select cameras from only one IVS1800 for search.
6. Choose Basic Operations > Alarm Center > Vehicle Alerts. Search for vehicle alerts,
as shown in the following figure.
alarm is generated.
Vehicle Search: Search for images of the vehicle that triggers the alarm.
Task View: View information about the alert task that generates the alarm.
Export alarm information: Export vehicle alerts to an Excel file and save the file to the local computer. The options are as follows:
- Export Selected
- Export This Page
- Export All