Design Thinking Assignment 1

The document discusses the challenges faced by visually impaired individuals in navigation, digital access, and social inclusion, highlighting the need for affordable assistive technologies. It introduces an AI-based portable smart spectacles model designed to enhance navigation, object recognition, and digital access through various integrated technologies. The document also outlines the technical, financial, and user adoption challenges in developing these smart glasses, emphasizing the importance of making them accessible and secure for users.


Introduction

Despite rapid technological progress, accessibility remains a persistent challenge for blind people. Vision plays a role in nearly every daily activity, from navigating the environment to accessing information digitally. Independent mobility, reading digital content, and affording assistive aids are the three broad challenges affecting people without eyesight across continents. The few assistive technologies available, such as screen readers, smart canes, and guide dogs, provide only limited assistance and tend to be expensive, narrow in function, and dependent on other people.
This document therefore proposes an AI-based portable smart spectacles model specifically designed to empower the blind by enhancing their abilities in navigation, object recognition, and digital access. The device, comprising a speaker, motion sensor, camera, Bluetooth connection, and AI assistant, delivers a complete user experience through real-time audio feedback for obstacle detection, object recognition, and voice narration of printed or digitally stored text.
Daily living for visually impaired persons is impeded by numerous barriers to independence, mobility, and access to digital information. These pain points can be classified into four basic areas:
 Navigation Challenges: Mobility is a crucial challenge for blind and visually impaired persons. They must navigate their environments safely, yet frequently struggle to detect obstacles such as poles, stairs, potholes, and moving vehicles, which leads to a greater risk of accidents. In new places they cannot navigate independently because they lack spatial awareness, and their inability to recognize traffic signals, crosswalks, and road directions makes walking in urban areas hazardous.
 Limited Digital Access: Given that technology is now part and parcel of daily life, these users face serious difficulties accessing anything digital. Books, newspapers, menus, and packaging are not always available in braille or audio. Screen readers help make applications accessible, but they work poorly on many apps, websites, and handwritten content. Smartphones and ATMs are also difficult to use, as such interfaces are rarely designed for users who depend on them for independence.
 Social and Economic Barriers: Beyond navigation and digital access, visual impairment also limits social inclusion and economic opportunity. Smart devices that assist visually impaired persons can easily cost thousands of dollars, putting them out of reach for most low-income individuals. Jobs can be hard to find due to a lack of workplace accessibility and bias from many employers.
 Dependency on Others: Many visually impaired people find that dependence on others for routine activities undermines their confidence and independence. For simple daily tasks, such as shopping, crossing roads, or using technology, they may require assistance. While guide dogs and mobility canes help people walk, they provide neither information from digital screens nor detailed insight into the environment. Asking strangers for help can be inconvenient or unreliable.

Empathy Map:
An Empathy Map is used in the design thinking process to build an understanding of visually impaired people's thoughts, emotions, and behaviors, so that solutions can be designed that effectively respond to their challenges and needs.

Says:
 "I always find it a challenge to make my way through crowds."
 "Most assistive devices are expensive."
 "I can't tell what is around me when I walk alone."
Thinks:
 "I want to be independent and move around without relying on others."
 "It would be good to easily read digital and printed content."
 "I want something that closes the gap between progress and affordability."
Feels:
 Frustrated that assistive technology is becoming more capable but not more affordable.
 Anxious about traveling alone in unfamiliar places.
 Hopeful that AI and smart technology can make daily life better.
Does:
 Uses a cane or guide dog to navigate.
 Relies on voice assistants and screen readers to access digital information.
 Asks for help from family, friends, or strangers when traveling.
Technologies Used Inside the Smart Glasses Prototype:
The AI-augmented smart glasses prototype improves mobility, safety, and access to technology for the visually impaired. It integrates several technologies that together provide real-time assistance: a speaker, a motion sensor, a camera, Bluetooth connectivity, and an AI assistant. Below is a list of the hardware and software technologies harnessed for the development of this prototype.
Hardware Components: The hardware is designed to be lightweight and durable to ensure usability and economic feasibility for blind users.
 Smart Glasses Frame: A lightweight frame is built from durable materials such as carbon fiber or reinforced plastics to ensure comfort and long-term use. The frame accommodates all built-in sensors, a speaker, and a small AI processing unit without compromising aesthetics.
 Camera Module: Captures real-time images of the surroundings to support object detection, text recognition, and face identification. A mini HD camera with a 120-degree field of view covers the user's perimeter, and AI-powered image processing spots objects, reads signs, and recognizes facial features.
 Motion Sensors: Detect obstacles and movement to prevent collisions. A LiDAR sensor provides precise distance measurement to assist navigation, while ultrasonic sensors detect obstacles within 2-3 meters and relay alerts to the user through vibrations or sounds.
 Bone Conduction Speaker: Delivers audio without blocking the ears. Bone conduction technology transmits sound through the skull bones instead of the ear canal, enabling the user to hear environmental sounds while receiving instructions.
 Bluetooth Connectivity: Enables wireless interaction with a smartphone and other devices. Bluetooth Low Energy (BLE) minimizes power consumption while maintaining a strong connection, allowing the smart glasses to pair and sync with mobile apps for software upgrades and additional functionality.
 Battery and Power Management: Provides long-lasting all-day performance. A lithium-ion battery with optimized power-saving modes sustains 8 to 12 hours of continuous use, and USB-C fast charging allows the device to recharge quickly.
Software & AI Integration: The software is the brain of the smart glasses, processing visual, audio, and spatial data in real time to help users.
 Computer Vision for Object & Text Recognition: Helps users identify objects, detect faces, and read printed text.
OpenCV: An open-source computer vision library for image processing and object recognition.
TensorFlow Lite: Runs AI models on-device to identify objects, read text, and recognize familiar people.
Optical Character Recognition (OCR) with Tesseract: Converts printed or handwritten text into speech-ready text.
 AI-Powered Voice Assistant: Gives interactive voice support for navigation and access to information. Google Assistant / OpenAI Whisper handles speech recognition, and Natural Language Processing (NLP) interprets and replies to voice commands such as "What is in front of me?", "Read this story.", or "Where is platform 2?"
 Real-Time GPS Navigation: Guides users along streets, through neighborhoods, and inside public buildings and spaces. The Google Maps API plus OpenStreetMap handle outdoor navigation, while indoors Bluetooth beacons guide users through spaces such as shopping malls and airports.
 Machine Learning for Personalized Assistance: Improves the user experience by learning behavior patterns. Adaptive AI models become more efficient over time, and personal settings are individualized in line with past interactions and feedback.
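As a rough illustration of how the voice assistant might map spoken commands to actions, here is a minimal keyword-based intent router in Python. It is a sketch under stated assumptions: the intent names and keyword lists are invented for this example, and a production assistant would use a real NLP model rather than substring matching.

```python
# Minimal keyword-based intent router: a stand-in for the NLP layer.
# Intent names and keyword lists are illustrative assumptions.
INTENT_KEYWORDS = {
    "describe_scene": ["in front of me", "around me", "what do you see"],
    "read_text": ["read this", "read the"],
    "navigate": ["where is", "take me to", "nearest"],
}

def route_command(command: str) -> str:
    """Return the intent name for a spoken command, or 'unknown'."""
    text = command.lower()
    for intent, phrases in INTENT_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"

print(route_command("What is in front of me?"))  # describe_scene
print(route_command("Where is platform 2?"))     # navigate
print(route_command("Read this menu"))           # read_text
```

Each returned intent would then dispatch to the matching subsystem (camera description, OCR readout, or GPS navigation).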

How The Smart Glasses Work


These high-tech smart glasses work on the principle of artificial intelligence, assisting visually impaired persons in real time through guidance, object recognition, and digital access. The device collects visual data, processes the information with AI, and informs the user through audio or haptic cues. The operation of the device can be explained in a few steps, as given below.
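The collect, process, and notify steps described above can be sketched as a simple pipeline. This is an illustrative skeleton only; `capture_frame`, `analyze`, and `notify` are hypothetical placeholders for the camera driver, the AI model, and the speaker output.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One processed result from the AI stage."""
    label: str         # what was recognized, e.g. "staircase"
    distance_m: float  # estimated distance to it

def capture_frame() -> str:
    # Placeholder for the camera: would return raw sensor data.
    return "frame-001"

def analyze(frame: str) -> Observation:
    # Placeholder for the AI model: turns a frame into an observation.
    return Observation(label="staircase", distance_m=4.0)

def notify(obs: Observation) -> str:
    # Placeholder for the speaker: formats the audio cue.
    return f"{obs.label} {obs.distance_m:.0f} meters ahead"

# One iteration of the sense -> process -> notify loop:
print(notify(analyze(capture_frame())))  # staircase 4 meters ahead
```

In the real device this loop would run continuously, with each stage replaced by the corresponding hardware or model component.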
 Navigation Assistance: Supports mobility by detecting obstacles and providing directional guidance. Motion sensors (LiDAR and ultrasonic) continuously scan the environment to detect any obstacle within a 2-3 meter range. The AI system acts on the data to infer whether the object is a static obstacle, such as a wall, or a moving one, such as a pedestrian or vehicle. The user is notified via bone conduction audio alerts or haptic feedback in the form of vibrations felt in the frame. Upon approaching a hazardous object, the device guides the user step by step with voice commands (e.g., "Turn slightly right to avoid an obstacle"). The device also integrates GPS navigation so that users receive immediate directions as they travel.
 Object & Face Recognition: Identifies objects and recognizes known faces in the user's sight. The camera captures a real-time image that is transferred to the onboard AI processor. The computer vision model (OpenCV + TensorFlow Lite) analyzes the images and recognizes objects such as doors, chairs, traffic lights, and vehicles. When a recognized object appears in the field of view, the AI assistant announces a verbal description (e.g., "5 meters ahead there is a bus stop" or "You are approaching the staircase"). The system can store and recognize previously known faces (with user consent), helping the visually impaired user identify people in their social circle. When a stranger approaches, the AI assistant can alert the user (e.g., "There is a person in front of you").
 Text-to-Speech (TTS) for Digital Access: Helps users independently read printed and digital text. The user points at a sign, book, or smartphone screen while the camera captures the text. AI-powered Optical Character Recognition (OCR) converts the captured printed or digital text into speech, and the AI assistant reads it aloud through the bone conduction speaker. The system can recognize multiple languages, and the readout speed can be adjusted to user preference. Examples: reading restaurant menus, newspaper articles, and signboards; reading text messages, emails, or notifications that appear on a smartphone; using ATM or ticket machine screens that lack accessibility features.
 Smart Voice Assistant Integration: Provides hands-free, interactive control. The user gives a spoken command (for example, "Where is the nearest bus stop?" or "Where am I?"), and the AI assistant processes the request and responds with a clear voice reply. The assistant can fetch live information such as weather updates or bus timetables, point out where to find an item, read messages, describe the environment, and translate text.
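The static-versus-moving inference mentioned under Navigation Assistance can be illustrated with successive range readings. This is a sketch, not the prototype's algorithm: the 0.2 m/s speed threshold and the fixed sampling interval are assumptions made for this example.

```python
# Classify an obstacle as static or moving from successive range readings.
SAMPLE_INTERVAL_S = 0.5    # assumed time between sensor readings
SPEED_THRESHOLD_M_S = 0.2  # below this, treat the object as static (assumed)

def classify_obstacle(readings_m: list[float]) -> str:
    """Label an obstacle 'static' or 'moving' from distance readings."""
    if len(readings_m) < 2:
        return "unknown"  # not enough data to estimate speed
    # Average speed of approach/retreat across the reading window.
    total_change = abs(readings_m[-1] - readings_m[0])
    elapsed = SAMPLE_INTERVAL_S * (len(readings_m) - 1)
    speed = total_change / elapsed
    return "moving" if speed > SPEED_THRESHOLD_M_S else "static"

print(classify_obstacle([2.9, 2.9, 2.8]))  # a wall: static
print(classify_obstacle([2.9, 2.3, 1.7]))  # an approaching pedestrian: moving
```

A real implementation would fuse LiDAR and ultrasonic readings and smooth out sensor noise before estimating speed.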
Challenges in the Development of Smart Glasses
Development of AI-enabled smart glasses for visually impaired persons faces numerous technical, financial, and adoption challenges. These must be overcome for the device to become affordable and practical for users.
 Technical Challenges: To make the smart glasses practical in the real world, several cutting-edge technological hurdles must be crossed:
Real-Time Processing: AI image processing, object detection, and feedback must happen near-instantaneously to be useful during navigation; any delay could result in accidents or misinterpretation.
Battery Efficiency: The glasses must run AI computations, GPS, and Bluetooth for a minimum of 8-12 hours on a single charge, so power optimization is critical.
Voice Clarity in Noisy Environments: The AI assistant must distinguish user commands from background noise and respond reliably even in crowded, loud places.
Accurate Object and Face Recognition: The AI must be trained to recognize objects at different times of day, under different weather conditions, and in different lighting environments, to minimize the chance of error.
 Cost & Affordability Challenges: A dominant objective of the project is to make the smart glasses affordable for ordinary buyers without abandoning the features users expect. Possible challenges include:
Features versus Price: Most high-tech assistive devices in this category cost about $2,000-5,000. The question is how to fit a decent AI processor, sensors, and camera into a device costing less than $300.
Cutting Manufacturing Costs: Certain components (LiDAR sensors, AI processors) remain stubbornly expensive. The solution will be either to source cheaper alternatives or to drive costs down through mass manufacturing.
Financial and Production Feasibility: For mass production to work, financial backing from investors, NGOs, or the government will be required to make these devices widely available.
 User Adoption Challenges:
Training and Ease of Use: AI-powered glasses may be new to many visually impaired users, so the glasses must offer a simple interface and a short learning curve.
Customization for Every User: Individual users have different preferences (e.g., voice speed, contrast settings, or vibration intensity), so adaptable customization options are essential.
Cultural and Language Barriers: The AI assistant must support many languages to assist visually impaired people across different regions.
Data Privacy & Security Concerns: Since the glasses use cameras and AI-powered voice assistants, there are privacy risks involved:
Ethics of Facial Scanning: Storing and recognizing faces creates privacy concerns; users must have complete control over this functionality, including the ability to disable it.
Bluetooth and Cybersecurity Threats: Bluetooth connectivity could allow unauthorized access to user data, so strong encryption and security protocols are essential.
Data Privacy Legislation Compliance: The device must conform to global data protection regulations, such as GDPR and HIPAA, when handling personal data.
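The 8-12 hour battery target mentioned above can be sanity-checked with simple arithmetic: estimated runtime is battery capacity divided by average current draw. The capacity and per-subsystem draw figures below are illustrative assumptions, not measured values from the prototype.

```python
# Rough battery-life estimate: hours = capacity (mAh) / average draw (mA).
BATTERY_CAPACITY_MAH = 2000  # assumed lithium-ion cell capacity

def runtime_hours(capacity_mah: float, avg_draw_ma: float) -> float:
    """Estimate runtime in hours from capacity and average current draw."""
    return capacity_mah / avg_draw_ma

# Assumed average draw across subsystems:
# camera 60 mA + AI processor 120 mA + BLE 10 mA + speaker 30 mA
avg_draw_ma = 60 + 120 + 10 + 30  # = 220 mA

hours = runtime_hours(BATTERY_CAPACITY_MAH, avg_draw_ma)
print(f"Estimated runtime: {hours:.1f} hours")  # ~9.1 h, inside the 8-12 h goal
```

This kind of budget makes the power-optimization challenge concrete: every extra 20 mA of average draw costs roughly an hour of runtime at this capacity.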
