
Lecture Notes 11-14 Unit 1

The document discusses different types of 3D scanners, including how they work and the technologies used. It covers contact scanners like coordinate measuring machines and articulating arms, as well as non-contact scanners like 3D laser scanners, white light scanners, and time-of-flight LiDAR. It also compares 2D and 3D scanners.

3D Scanners

 3D scanning is the process of analyzing a real-world object or environment to collect three-dimensional data of its shape and possibly its appearance (e.g. colour). The collected data can
then be used to construct digital 3D models.
 3D scanners are devices designed to capture the physical world in three dimensions by
collecting detailed information about the shape and surface characteristics of objects. These
scanners employ various technologies, such as lasers, structured light, or photogrammetry, to
measure distances and create accurate 3D models of real-world objects or environments.
 Each technology has its own limitations, advantages, and costs. Many limitations in the kinds of
objects that can be digitised are still present. For example, optical technology may encounter
difficulties with dark, shiny, reflective, or transparent objects. Technologies such as industrial
computed tomography scanning, structured-light 3D scanners, LiDAR, and time-of-flight 3D
scanners can be used to construct digital 3D models without destructive testing.

Functionality

The purpose of a 3D scanner is usually to create a 3D model. This 3D model consists of a polygon mesh
or point cloud of geometric samples on the surface of the subject. These points can then be used to
extrapolate the shape of the subject (a process called reconstruction). If colour information is
collected at each point, then the colours or textures on the surface of the subject can also be
determined.

3D scanners share several traits with cameras. Like most cameras, they have a cone-like field of view,
and like cameras, they can only collect information about surfaces that are not obscured. While a
camera collects colour information about surfaces within its field of view, a 3D scanner collects
distance information about surfaces within its field of view. The "picture" produced by a 3D scanner
describes the distance to a surface at each point in the picture. This allows the three dimensional
position of each point in the picture to be identified.
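The distance "picture" described above can be turned into 3D positions with the pinhole camera model; the following is a minimal sketch, with illustrative function and parameter names rather than any particular scanner's API:

```python
def depth_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into a 3D point.

    (fx, fy) are focal lengths in pixels, (cx, cy) is the principal
    point; depth is the distance along the optical axis (Z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```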

In some situations, a single scan will not produce a complete model of the subject. Multiple scans,
from different directions are usually helpful to obtain information about all sides of the subject. These
scans have to be brought into a common reference system, a process that is usually called alignment
or registration, and then merged to create a complete 3D model. This whole process, going from the
single range map to the whole model, is usually known as the 3D scanning pipeline.
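Registration in the pipeline above amounts to applying a per-scan rigid transform (a rotation plus a translation) and then merging the aligned point clouds; a minimal NumPy sketch, assuming the transforms have already been estimated by a registration algorithm:

```python
import numpy as np

def align_scan(points, R, t):
    """Map an N x 3 range map into the common reference frame using the
    rigid transform (R, t) found during registration."""
    return points @ R.T + t

def merge_scans(aligned_scans):
    """Merging aligned scans is, at its simplest, concatenation of the
    point clouds (real pipelines also deduplicate overlapping points)."""
    return np.vstack(aligned_scans)
```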

Technology based 3D Scanners

There is a variety of technologies for digitally acquiring the shape of a 3D object. The techniques
work with most or all sensor types, including optical, acoustic, laser, radar, thermal, and seismic. A
well-established classification divides them into two types: contact and non-contact.

Contact Scanner

One method for collecting measurement data involves physically scanning the object with a device
that comes into contact with every point on the surface. Contact scanners are available in multiple
types that can be used for various applications.

 Coordinate Measuring Machines

Coordinate Measuring Machines (CMMs) are mechanical systems that use a measuring probe and
transducer technology to convert physical measurements of an object’s surface into electrical signals
that are then analyzed by specialized metrology software. There are many different types of CMMs;
the most basic systems use hard probes and XYZ read-outs, while the most complex employ fully
automated continuous contact probing.

Articulating Arms

An articulating arm is a type of CMM that uses rotary encoders on multiple rotation axes instead of
linear scales to determine the position of the probe. These manual systems are not automated, but
they are portable and can reach around or into objects in a way that cannot be accomplished with a
conventional CMM.

Portable Optical CMM

Some applications call for a portable solution, for example, taking measurements on a shop floor or
in the field. In these cases a portable CMM can be used to gather measurement data for areas that
are difficult to reach. The hand-held device transmits data wirelessly and allows the operator to move
both the part and the scanner during the measuring process.

Form and Contour Tracers

Form and contour tracers are purpose-specific devices that use extremely accurate continuous
contact sensors and styli to obtain small-part geometry. These devices are especially useful for
scanning objects that include threaded, cylindrical, or round features.

Non-Contact Scanners

The main reason to utilize non-contact scanners is the immense amount of data that can be collected
quickly. Also, in many cases using a contact sensor is not appropriate, because the act of touching the
object during measurement will alter its geometry, creating an inaccurate 3D model. Objects that
are fragile, flexible, or otherwise sensitive are more suitable for the following types of 3D scanning
technologies:

3D Laser Triangulation

With this type of 3D scanning system, a laser is projected onto the surface of an object and a camera
captures the reflection. The laser can be in the form of a single point, a line, or an entire field of view.
When the reflection is captured, each point is triangulated, measured, and recorded, resulting in a 3D
rendering of the shape and surface measurements of the object. Laser scanning tends to work better
with more reflective surfaces than structured light scanners.
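The triangulation itself reduces to similar triangles. A minimal sketch, assuming the simplest geometry in which the laser beam is parallel to the camera's optical axis at a known baseline, so the spot's pixel offset is inversely proportional to range:

```python
def triangulated_range(baseline_m, focal_px, offset_px):
    """Range from the laser spot's pixel offset under a simple
    triangulation geometry: closer surfaces shift the spot further
    across the image, so range falls as offset grows."""
    if offset_px <= 0:
        raise ValueError("spot offset must be positive")
    return baseline_m * focal_px / offset_px
```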

White Light Scanners

White light scanners, also referred to as structured light scanners, use halogen or LED lights to project
a pattern of pixels onto an object. The distortion of the pattern created by the object's surface and the
resulting light pattern can be measured and used to reconstruct a 3D image. Such scanners may also
use other colours of the light spectrum, such as blue or red light, though the effect or improvement in
results is small.
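One common way such scanners encode the projected pattern is a sequence of binary Gray-code stripe images; decoding the per-pixel bit sequence recovers which projector stripe illuminated that pixel. A sketch of the decoding step (bits assumed most-significant first):

```python
def gray_to_index(bits):
    """Decode a per-pixel Gray-code bit sequence into the projector
    stripe index it encodes, using b[i] = b[i-1] XOR g[i]."""
    index = 0
    binary_bit = 0
    for g in bits:
        binary_bit ^= g            # Gray -> binary, one bit at a time
        index = (index << 1) | binary_bit
    return index
```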

Conoscopic Holography

Another type of 3D laser scanning technology is conoscopic holography. A single laser is projected
onto the object, and the reflection is returned along the same path. The reflected beam goes through
a conoscopic crystal and is projected onto a charge-coupled device (CCD). The diffraction pattern is
then analyzed to determine the precise distance to the surface. The most common applications for
this type of device are measuring small features as well as interior surface geometry where
triangulation would not be possible. It is highly precise and commonly found on multi-sensor vision
systems. This technology works fairly well even on surfaces that are highly reflective or absorbent.

Time-of-Flight and LiDAR

This type of laser scanning uses a time-of-flight laser rangefinder based on LiDAR technology to
measure the distance between the laser and the object’s surface. The laser rangefinder sends a pulse
of light to the object and measures the amount of time it takes for the reflection to return in order to
calculate the distance of each point on the surface. Point measurements are taken by aiming the
device at the object and using a series of mirrors to redirect the light from the laser to different areas
on the object. Although the process may seem cumbersome, typical time-of-flight 3D laser scanners
can collect between 10,000 and 100,000 points per second, which is much faster though less accurate
than contact sensors.
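The distance calculation itself is simple: the pulse covers the round trip to the surface and back, so the one-way range is half the path. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_range(round_trip_seconds):
    """Range from a time-of-flight pulse: the light travels to the
    surface and back, so divide the round-trip path by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

A 2 ns round trip, for instance, corresponds to roughly 0.3 m of range, which is why ToF timing electronics must resolve picoseconds to achieve millimetre accuracy.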

Photogrammetry

Perhaps the oldest type of non-contact 3D scanning method, photogrammetry has been in use since
the development of photography. In simple terms, measurements between two points on an image
can be used to determine the distance between two points on an object. Several factors play a role in
the accuracy of this type of system, including knowledge of the scale of the image, the focal length of
the lens, orientation of the camera, and lens distortions. Photogrammetry can be used to measure
discrete points using retro reflective markers which can be highly accurate given the measurement
envelope. More recently, photogrammetry coupled with specialized image-processing software can be
used to obtain complete and dense point clouds. These point clouds are typically less accurate than
other forms of scanning; however, only a camera and software are required, making it one of the
lowest-cost methods of 3D scanning. Photogrammetry is also often used in combination with other
types of 3D scanning technologies that produce point cloud results, primarily to increase the
measurement range by creating a reference frame of discrete points on which to match multiple 3D scans.
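The basic scale relation underlying photogrammetric measurement can be sketched as follows; this is the idealized thin-lens case and deliberately ignores the lens distortions and camera orientation effects mentioned above:

```python
def object_size(image_size_mm, focal_mm, distance_mm):
    """Size of an object from the size of its image, using the thin-lens
    scale relation: scale = distance / focal length (object far from
    the lens)."""
    return image_size_mm * distance_mm / focal_mm
```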

Difference between 2D and 3D Scanner

The primary difference between 2D and 3D scanners lies in their capability to capture and represent
spatial information. Here's a breakdown of the distinctions between 2D and 3D scanners:

1. Dimensionality:

2D Scanners: Capture and reproduce images or documents in two dimensions, typically height and
width, without capturing depth information.

3D Scanners: Capture spatial information in three dimensions, encompassing height, width, and
depth, providing a more comprehensive representation of the object's shape.

2. Type of Information Captured:


2D Scanners: Record flat images or surfaces, suitable for tasks such as document scanning, image
capture, or reading barcodes and QR codes.

3D Scanners: Capture the geometry and spatial structure of objects, allowing for the creation of
detailed 3D models, which can be used in fields like manufacturing, design, and virtual reality.

3. Applications:

2D Scanners: Commonly used in tasks where a flat representation of an object or document is
sufficient, such as in photocopiers, document scanners, or image capture devices.

3D Scanners: Applied in fields requiring detailed spatial information, including reverse engineering,
quality control, medical imaging, animation and gaming, and cultural heritage preservation.

4. Output:

2D Scanners: Produce flat, two-dimensional images, often in formats like JPEG, PNG, or PDF, without
depth information.

3D Scanners: Generate three-dimensional models with information about the object's shape and
structure, often represented as point clouds, mesh models, or CAD files.

5. Technology:

2D Scanners: Use technologies like CCD (Charge-Coupled Device) or CIS (Contact Image Sensor) to
capture images, relying on reflected light from a surface.

3D Scanners: Employ various technologies such as laser triangulation, structured light patterns, time-
of-flight, or photogrammetry to capture spatial information and create detailed 3D representations.

6. Use Cases:

2D Scanners: Suitable for tasks like document scanning, barcode reading, image capture, and optical
character recognition (OCR).

3D Scanners: Applied in fields requiring accurate 3D models, including industrial design, quality
inspection, virtual reality content creation, and archaeology.

In summary, while 2D scanners are focused on capturing flat, two-dimensional representations, 3D
scanners provide the additional dimension of depth, enabling the creation of detailed spatial models
of physical objects. The choice between 2D and 3D scanning depends on the specific requirements of
the task or application.

Applications of 3D Scanners

Space experiments

3D scanning technology has been used to scan space rocks for the European Space Agency.

Construction industry and civil engineering

 Robotic control: e.g. a laser scanner may function as the "eye" of a robot
 As-built drawings of bridges, industrial plants, and monuments
 Documentation of historical sites
 Site modelling and layout
 Quality control
 Quantity surveys
 Payload monitoring
 Freeway redesign
 Establishing a bench mark of pre-existing shape/state in order to detect structural changes
resulting from exposure to extreme loadings such as earthquake, vessel/truck impact or fire.
 Creating GIS (geographic information system) maps and geomatics.
 Subsurface laser scanning in mines and karst voids.
 Forensic documentation

Design process

 Increasing accuracy working with complex parts and shapes,
 Coordinating product design using parts from multiple sources,
 Updating old CAD scans with those from more current technology,
 Replacing missing or older parts,
 Creating cost savings by allowing as-built design services, for example in automotive
manufacturing plants,
 "Bringing the plant to the engineers" with web shared scans, and Saving travel costs.

Entertainment

3D scanners are used by the entertainment industry to create digital 3D models for movies, video
games and leisure purposes. They are heavily utilized in virtual cinematography. In cases where a real-
world equivalent of a model exists, it is much faster to scan the real-world object than to manually
create a model using 3D modelling software. Frequently, artists sculpt physical models of what they
want and scan them into digital form rather than directly creating digital models on a computer.

3D photography

3D scanners are evolving to use cameras to represent 3D objects accurately. Since 2010, companies
have been emerging that create 3D portraits of people (3D figurines or 3D selfies). One example is an
augmented reality menu created for the Madrid restaurant chain 80 Degrees.

Virtual/remote tourism

The environment at a place of interest can be captured and converted into a 3D model. This model
can then be explored by the public, either through a VR interface or a traditional "2D" interface. This
allows the user to explore locations which are inconvenient for travel. A group of history students at
Vancouver iTech Preparatory Middle School created a Virtual Museum by 3D scanning more than 100
artifacts.

How a 3D Scanner Works

Capturing Images

 At each step we take two images of the object: one while the laser is on, and one while the
laser is off. We then rotate the object and repeat these steps until the entire object (360°) is
covered.

Subtraction

 Subtract the image taken without the laser from the lasered image; both have been taken
from the same view of the object.

Threshold

 After subtraction we threshold the image; this step is essential for the next step.

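The subtraction and thresholding steps can be sketched with NumPy; the threshold value here is an illustrative assumption:

```python
import numpy as np

def laser_mask(img_on, img_off, threshold=40):
    """Isolate the laser line: subtract the laser-off image from the
    laser-on image (same viewpoint), then threshold the difference.
    Casting to int16 avoids uint8 wrap-around on dark pixels."""
    diff = img_on.astype(np.int16) - img_off.astype(np.int16)
    return diff > threshold
```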
Skeletonization

 Shrink the laser line in the image to get the core “the middle region” of the line

Get Points

 Read the points from the skeletonized image (by scanning pixels) and calculate their
coordinates.

Equations & Calculations

 From the points of each image we perform a set of complex calculations to find the
coordinates of each point in three dimensions.
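The calculation that places each triangulated point into a single object frame can be sketched as undoing the known rotation at capture time; a minimal sketch, assuming the object sits on a turntable rotating about the vertical (y) axis:

```python
import math

def to_object_frame(x, y, z, turntable_deg):
    """Undo the turntable rotation (about the vertical y axis) so that
    points captured from every view land in one common object frame."""
    a = math.radians(-turntable_deg)
    xo = x * math.cos(a) + z * math.sin(a)
    zo = -x * math.sin(a) + z * math.cos(a)
    return (xo, y, zo)
```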

Surface Reconstruction

 In computer vision and computer graphics, 3D reconstruction is the process of capturing the
shape and appearance of real objects. This process can be accomplished either by active or
passive methods. If the model is allowed to change its shape in time, this is referred to as non-
rigid or spatiotemporal reconstruction.
 Surface reconstruction is the process that reconstructs a surface from its sample points.
 The input can be co-ordinates of the point cloud in 3D and output is a piecewise linear
approximation of the surface.

Output

 The user has the choice to pick either VRML (Virtual Reality Modeling Language) or point cloud output.
 VRML output can be imported using 3ds Max, AutoCAD, and many other graphics software packages.
 Point cloud format: used by AutoCAD and point cloud viewers.
 3D scanners usually create a point cloud of geometric samples on the surface of the subject.
These points can then be used to show the shape of the subject (in a process called surface
reconstruction).

Possible Problems

Speed

 The first and most difficult problem is the speed of the scanning process; it may take
several minutes to make a full scan.
 The time can be reduced by modifying the equations and by the choice of quality.

Noise

 There is a lot of noise in the captured image due to changes in the intensity of the light; it
can be limited by modifying the subtraction process.

Cost

 Most industrial 3D scanners are too expensive, either at the hardware or the software
level. We try to make ours as cheap as possible by building a 3D scanner from very simple tools.

Haptic System

The word haptic comes from the Greek verb haptesthai, which means "to contact or to touch".
Haptic technology adds the sense of touch and feeling to computers. It is a tactile feedback
technology that takes advantage of the sensitivity of touch, applying forces, vibrations, or
motion to the user to recreate actions.

A haptic device gives people a sense of touch within computer-generated environments, so that
when virtual objects are touched they seem tangible and real. This communication, or
interaction, between the haptic device and the control system is referred to as
"haptic feedback".

Haptic Feedback or Haptic Information

Haptic technology is implemented through various kinds of interactions between a haptic device
and the control system. The haptic information, or haptic feedback, provided by the
system is a combination of two types of information:

1. Tactile Feedback

2. Kinesthetic Feedback

Tactile Feedback

Tactile feedback refers to the information obtained by sensors in contact with the skin of
the human body, i.e. with the sense of touch.

Kinesthetic Feedback

Kinesthetic feedback in haptic technology relates to awareness of the position and movement
of the parts of the body via sensory organs. It also refers to the information acquired by
sensors in the joints and muscles. It allows a person to feel the forces and torques exerted
upon contact with a body through these receptors.

Building blocks of a haptic system

A typical Haptic Technology System is an assembly of several sub-blocks namely:

 Touch screen device with capacitive buttons
 Processor
 Driver circuit
 Actuator

The input to the haptic technology system may be a touch, such as a press on the capacitive
buttons on the touchscreen. This serves as the input or trigger signal, which is sent to the
touch screen controller. The sensors in the device sense the change in the amount of force
applied and the change in the angle of the input, and send the information to the processor.

The information is processed further, generating a waveform, which may be either analog or
digital, that acts as input to the driver circuit; specific instructions are then given to the
actuator to generate a pattern that creates a vibration. This output from the actuator, which is
given back to the touchscreen device, acts as force feedback. The user thus feels this force
feedback virtually.
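The waveform the processor hands to the driver circuit can be sketched as a short decaying sine burst; the frequency, duration, and decay values here are illustrative assumptions, not the parameters of any particular actuator:

```python
import math

def click_waveform(freq_hz=175.0, duration_s=0.02, sample_rate=8000):
    """A short decaying sine burst: the kind of drive signal an
    actuator turns into a 'click' vibration."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            * math.exp(-5.0 * i / n)          # exponential decay envelope
            for i in range(n)]
```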

Applications of Haptic Technology

Haptic technology has made an impact in almost every field. Its applications include:

1. Gaming Industry uses this technology widely in video gaming.

2. Medical Applications make use of Haptic interfaces which are designed for medical simulation
which helps in remote surgery and virtual medical training.

3. It is used in Military Applications, where a virtual reality environment is simulated to provide
versatility in the military field, including training in virtual reality environments.

4. It serves as Assistive Technology for the blind and visually impaired, where a visually disabled
person can feel maps that are displayed over the network. Learning mathematics is also made
simpler by tracing touchable mathematical notation.

5. Haptic Technology is extensively used in Museums, where priceless artifacts on display are
visualized in a 3D manner and objects from sculpture and decorative arts collections are made
available through CD-ROM.

6. Haptic Technology finds diverse applications in the fields of Entertainment, Arts and Design,
Robot Design and Control, Neuroscience, Psychophysics, and Mathematical modeling and simulation.

Type of Haptic Devices

In virtual reality (VR), haptic devices can be categorized into two main types based on how they
interact with the user: contact and non-contact haptic devices.

Contact Haptic Devices:

Contact haptic devices physically interact with the user's body, typically through direct contact
with the skin or other body parts. These devices provide tactile sensations by applying force,
vibration, or pressure feedback directly to the user's body.

Examples of contact haptic devices include:

Haptic Gloves/Gauntlets: These devices cover the user's hands and fingers, providing feedback
through actuators, vibration motors, or force feedback mechanisms. They allow users to feel
virtual objects and textures by applying force or vibration to their hands.
Haptic Vests/Suits: Haptic vests or suits cover the user's torso and provide feedback through
pressure, vibration, or motion. They simulate sensations such as impacts, collisions, or
environmental effects by applying pressure or vibrations to different parts of the body.

Haptic Controllers: Handheld controllers equipped with force feedback mechanisms, such as
vibration motors or pneumatic actuators, fall under this category. They provide tactile feedback
during interactions with virtual objects by applying force or vibration to the user's hands.

Applications:

Virtual Reality (VR) Gaming: Contact haptic devices like haptic gloves or controllers are
extensively used in VR gaming to enhance immersion by allowing users to feel the texture, weight,
and interactions with virtual objects.

Training Simulations: These devices find applications in various training simulations such as
medical training, where users can practice procedures like surgery or patient examination by
receiving realistic haptic feedback.

Remote Operations: Contact haptics enable users to remotely control robots or machinery with
precision, as they can feel the forces and feedback exerted by the remote environment.

Advancements:

Improved Sensory Feedback: Advancements in contact haptic devices focus on providing more
realistic tactile sensations, including texture, temperature, and shape recognition.

Enhanced Ergonomics: Developers are making strides in creating lightweight and ergonomic
designs for haptic gloves and controllers to ensure comfort during prolonged use.

Higher Precision Tracking: Innovations in sensor technology and motion tracking algorithms
enable more accurate tracking of hand movements and interactions, leading to more precise
haptic feedback.

Non-Contact Haptic Devices:

Non-contact haptic devices interact with the user without direct physical contact with the body.
Instead, they use technologies such as ultrasound, air vortex rings, or electrostatic fields to create
tactile sensations in mid-air or on the user's skin without requiring physical contact.

Examples of non-contact haptic devices include:

Ultrasound Haptics: These devices use focused ultrasound waves to create tactile sensations in
mid-air. By modulating the ultrasound waves, they can simulate sensations such as texture,
resistance, or even shape, without the need for physical contact.
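Focusing ultrasound in mid-air works by delaying each transducer's firing so that all wavefronts arrive at the focal point together. A minimal sketch, assuming a flat transducer array in the z = 0 plane:

```python
import math

SOUND_SPEED = 343.0  # m/s in air, at roughly room temperature

def focus_delays(transducer_xy, focus_xyz):
    """Per-transducer firing delays (seconds) so every wavefront arrives
    at the focal point simultaneously: the farthest transducer fires
    first (delay 0), nearer ones wait out the path difference."""
    dists = [math.dist((x, y, 0.0), focus_xyz) for x, y in transducer_xy]
    far = max(dists)
    return [(far - d) / SOUND_SPEED for d in dists]
```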

Air Vortex Rings: Air vortex ring devices emit controlled bursts of air to create pressure waves
that users can feel on their skin. By precisely controlling the timing and intensity of the air bursts,
these devices can simulate tactile sensations such as tapping, pushing, or pulling without touching
the user.

Electrostatic Haptics: Electrostatic haptic devices use electric fields to create tactile sensations on
the user's skin. By applying varying electric fields to different parts of the skin, they can simulate
sensations such as tingling, buzzing, or pressure without physical contact.

Applications:
Public Displays and Interfaces: Non-contact haptics can be utilized in public displays or interactive
interfaces where multiple users interact with virtual content without the need for wearables or
physical contact.

Accessibility Tools: These devices can assist individuals with disabilities by providing tactile
feedback through air vortex rings or ultrasound waves, enabling them to interact with digital
interfaces and environments.

Augmented Reality (AR) Experiences: Non-contact haptics can enhance AR experiences by
overlaying virtual tactile sensations onto physical objects or surfaces, creating interactive and
immersive environments.

Advancements:

Advanced Ultrasound Techniques: Advancements in ultrasound haptics focus on improving the
precision and resolution of tactile feedback, allowing for more detailed sensations and
interactions in mid-air.

Miniaturization and Portability: Researchers are developing compact and portable non-contact
haptic devices that can be integrated into smartphones, wearable devices, or AR glasses,
expanding their accessibility and usability.

Cross-Modal Integration: Innovations in non-contact haptics involve integrating tactile feedback
with other sensory modalities such as visual and auditory cues to create multisensory experiences
that enhance immersion and realism.

Advantages of Haptic Technology

The advantages of Haptic Technology are:

Digital world can be experienced and perceived.

Easily accessible and user friendly.

Accuracy and precision are high.

Disadvantages of Haptic Technology

The disadvantages of Haptic Technology include:

Involves complex design, as haptic devices require precision of touch.

High initial cost involved.

Force Feedback Haptic System

Force feedback haptic systems are a crucial technology in virtual reality (VR), adding a powerful
layer of tactile interaction that significantly enhances immersion and realism. This goes beyond
simple vibrations, allowing users to truly "feel" virtual objects and environments.

How it Works:

Force feedback systems use motors, actuators, and other mechanisms to exert controlled forces
on the user's body, typically through gloves, vests, or exoskeletons. These forces can simulate:
Resistance: Feeling the weight and texture of virtual objects as you grab or manipulate them.

Impact: Experiencing the force of collisions, explosions, or being hit in VR games.

Motion: Simulating the feeling of walking on different surfaces, climbing, or interacting with
moving objects.
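A common way such resistance is computed is penalty-based rendering: a virtual spring-damper pushes back once the probe penetrates a virtual surface. A minimal sketch, with illustrative stiffness and damping constants:

```python
def contact_force(penetration_m, velocity_ms, k=500.0, b=5.0):
    """Penalty-based haptic rendering: once the probe penetrates a
    virtual surface, push back with a spring-damper force
    F = k*x + b*v; outside the surface there is no force."""
    if penetration_m <= 0.0:
        return 0.0
    return k * penetration_m + b * velocity_ms
```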

Benefits in VR:

Increased Immersion: Feeling the physicality of the virtual world makes it more believable and
engaging.

Enhanced Interaction: Force feedback allows for more natural and intuitive interactions with
virtual objects, improving manipulation and skill development.

Training and Education: Realistic tactile feedback can enhance learning and skill development in
training simulations for various fields.

Gaming and Entertainment: Adding a layer of touch sensation makes VR games and experiences
more exciting and immersive.

Types of Force Feedback Systems:

Exoskeletons: Full-body systems that provide the most comprehensive feedback but can be bulky
and expensive.

Haptic Gloves: Focus on hand interactions, offering varying levels of complexity and detail.

Haptic Vests: Simulate body sensations like movement, impacts, or even temperature changes.

Output Visual Devices

Head-mounted displays (HMDs) are the most immersive type of VR device. They completely cover
the user's face and ears, and use two screens, one for each eye, to create a stereoscopic 3D image.
This gives users a wide field of view (FOV) and allows them to look around the virtual world naturally
by tracking their head movements. HMDs can be tethered to a computer for more powerful graphics
or standalone, with their own processing power and display.

Types:

1. Head-Mounted Displays (HMDs):

Tethered HMDs: Connect to a computer for powerful graphics and processing. Offer the highest-
quality visuals but are less portable.

Standalone HMDs: Have built-in processing and display, making them portable but with potentially
lower graphics and processing power.

Advantages:

Highly immersive experience with wide field of view (FOV) and precise head tracking.

Can incorporate additional features like eye tracking and haptics for enhanced interaction.

Disadvantages:
Tethered models can be restrictive and expensive.

Standalone models may have lower resolution and processing power.

Can cause discomfort or nausea for some users.

2. VR Glasses/Goggles:

Description: These are lighter and less immersive than HMDs. They sit in front of your eyes and
display a 3D image, but don't completely block out the real world.

Types:

Mobile VR glasses: Use your smartphone as the display, offering affordability and accessibility but
limited performance.

VR arcades: High-end systems used in VR arcades for specific experiences.

Augmented Reality (AR) glasses: Overlay digital elements onto the real world, blurring the line
between VR and AR.

Advantages:

Lightweight and portable.

More affordable than most HMDs.

AR glasses offer unique mixed-reality experiences.

Disadvantages:

Less immersive than HMDs, with limited FOV and potential light leakage.

Tracking may not be as precise as HMDs.

AR glasses are still in early stages of development and can be expensive.

Choosing the right visual device:

Immersion: Prioritize HMDs for gaming, simulation, and entertainment.

Portability: Choose lighter glasses for travel or on-the-go experiences.

Cost: Set a budget and consider features within your range.

Content: Ensure the device is compatible with your desired VR content and platform.

Working of HMD

The working of a Head-Mounted Display (HMD) depends on several key components and processes,
ultimately aiming to create a convincing and immersive experience for the user in VR. Here's a
breakdown of the main steps involved:

1. Image Generation:

Content creation: VR applications or games provide 3D scenes and generate separate images for
each eye, taking into account perspective and depth information.
Processing (Standalone HMDs): Standalone HMDs have built-in processors that handle graphics
processing and rendering these images. Tethered HMDs rely on a connected computer for this task.

2. Display Delivery:

Dual screens: Each eye in the HMD has its own dedicated screen (LCD, OLED, or microLED),
displaying the corresponding image received from the content source.

Stereoscopic 3D: By presenting slightly different images to each eye, the brain perceives depth and
creates a 3D illusion.

Refresh rate: High refresh rates (90Hz or more) ensure smooth image transition and minimize
motion sickness.
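The stereoscopic delivery above comes down to rendering the scene from two camera positions separated by the interpupillary distance; a minimal sketch using a row-major 4x4 view matrix (sign conventions vary between graphics APIs):

```python
def eye_view(view, ipd_m=0.063, eye="left"):
    """Copy a 4x4 row-major view matrix and shift it sideways by half
    the interpupillary distance, giving the per-eye camera used for
    stereoscopic 3D."""
    half = ipd_m / 2.0
    offset = -half if eye == "left" else half
    out = [row[:] for row in view]        # shallow copy of each row
    out[0][3] += offset                   # translate along the x axis
    return out
```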

3. Image Manipulation (Optics):

Lenses: Fresnel or pancake lenses magnify and focus the individual images from each screen onto
the user's eyes.

Field of View (FOV): Wider FOV lenses encompass a larger portion of the user's vision, enhancing
immersion.

IPD Adjustment (optional): Some HMDs allow adjusting the distance between lenses to match the
user's interpupillary distance for optimal clarity.

4. Head Tracking:

Sensors: Gyroscopes, accelerometers, and magnetometers within the HMD detect head movements
(orientation and rotation).

Tracking system: Inside-out tracking uses these sensors, while outside-in tracking relies on external
cameras tracking markers on the HMD.

Real-time updates: Based on head tracking data, the virtual scene updates accordingly, creating a
natural feeling of looking around the virtual world.
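The inside-out tracking loop integrates the gyroscope's angular velocity into an orientation estimate; a minimal quaternion sketch, omitting the accelerometer/magnetometer fusion a real HMD uses to correct drift:

```python
import math

def integrate_gyro(q, omega, dt):
    """One integration step: advance quaternion q = (w, x, y, z) by the
    angular velocity omega = (wx, wy, wz) in rad/s over dt seconds."""
    w, x, y, z = q
    wx, wy, wz = omega
    # Quaternion derivative: q' = 0.5 * q * (0, omega)
    dw = 0.5 * (-x * wx - y * wy - z * wz)
    dx = 0.5 * (w * wx + y * wz - z * wy)
    dy = 0.5 * (w * wy - x * wz + z * wx)
    dz = 0.5 * (w * wz + x * wy - y * wx)
    w, x, y, z = w + dw * dt, x + dx * dt, y + dy * dt, z + dz * dt
    n = math.sqrt(w * w + x * x + y * y + z * z)   # re-normalize
    return (w / n, x / n, y / n, z / n)
```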

5. Additional Systems:

Audio: Integrated speakers or headphone jacks deliver spatial audio, mimicking sounds from specific
directions within VR.

User interaction: Buttons, joysticks, or hand tracking (advanced models) allow users to interact with
virtual objects and navigate the environment.
