Design Thinking Assignment 1
Despite the rapid changes brought about by technology, accessibility remains a major challenge for blind people. Vision plays a role all day long in many activities, from navigating the environment to gaining access to information digitally. Three broad challenges affect people without eyesight across continents: mobility that is practically impossible for millions, severe difficulty reading anything digital, and the struggle to obtain affordable assistive aids. The few assistive technologies available, such as screen readers, smart canes, and guide dogs, provide only minimal assistance and tend to be very expensive, limited in operation, and dependent on other people.
This proposal therefore presents an AI-based portable smart-spectacles model specifically designed to empower blind users by enhancing their abilities in navigation, object recognition, and digital access. The device, comprising a speaker, motion sensor, camera, Bluetooth connectivity, and an AI assistant, delivers a complete user experience: real-time audio feedback for obstacle detection, object recognition, and voice narration of printed or digitally stored text.
Daily living for visually impaired persons is impeded by numerous barriers which stand in the
way of independence, mobility, and access to digital information. These pain points can be
classified into four basic areas:
Navigation Challenges: Safe movement is a crucial challenge for blind and visually impaired persons. They must navigate their environments safely and frequently struggle to detect obstacles such as poles, stairs, potholes, and moving vehicles, which greatly increases the risk of accidents. In new places they cannot navigate independently because they lack spatial awareness, and the inability to recognize pedestrian crossings, traffic signals, and road directions makes walking in urban areas quite hazardous.
Limited Digital Access: Given that technology is part and parcel of modern life, one can imagine the difficulties these people face in accessing anything digitally. Books, newspapers, menus, and packaging are not always available in braille or audio. Screen readers help make applications accessible, but they are inefficient on many apps, websites, and handwritten content. Smartphones and ATMs remain very difficult to use because their interfaces are not accessible to the users who depend on them for independence.
Social and Economic Barriers: Visual impairment brings difficulties that go beyond navigation and access to digital services, including reduced social inclusion and a lack of economic opportunities. Some smart devices that assist visually impaired persons easily cost thousands of dollars, making them almost impossible for most low-income individuals to obtain. Jobs can be tough to find due to a lack of workplace accessibility and bias from many employers.
Dependency on Others: Many visually impaired people find that depending on others for routine activities undermines their confidence and independence. For simple daily tasks, such as shopping, crossing roads, or using technology, they may require assistance from others. While guide dogs and mobility canes help people walk, they do not provide information from digital screens or detailed insight into the environment. Asking strangers for help can also be inconvenient or unreliable.
Empathy Map:
An Empathy Map is used in the design thinking process to understand visually impaired people's thoughts, emotions, and behaviors, so that solutions can be designed that effectively respond to their challenges and needs.
Says: "I always find it a challenge to find my way through crowds." "Most assistive devices are expensive." "I can't tell what is around me when I walk alone."
Thinks: "I want to be independent and move around without needing others." "It would be good to read digital and printed content easily." "I want something that closes the gap between progress and affordability."
Feels: Frustrated that technology is becoming more accessible but not necessarily more affordable. Anxious about traveling alone in unfamiliar places. Hopeful that AI and smart technology can make daily life better.
Does: Uses a cane or guide dog to navigate. Relies on voice assistants and screen readers for accessing digital information. Asks for help from family, friends, or strangers when traveling.
Technologies Used Inside the Smart Glasses Prototype:
The prototype of the AI-Augmented Smart Glasses improves mobility, safety, and technology accessibility for the visually impaired. It integrates several technologies that provide real-time assistance through a speaker, a motion sensor, a camera, Bluetooth connectivity, and an AI assistant. Below is a list of the hardware and software technologies used to develop this prototype.
Hardware Components: The hardware is designed to be lightweight and durable, which ensures usability and economic feasibility for blind users.
Smart Glasses Frame: A lightweight build is achieved with durable materials such as carbon fiber or reinforced plastic to ensure comfort and long-term use. The frame accommodates all built-in sensors, a speaker, and a small AI processing unit without compromising aesthetics.
Camera Module: Captures real-time images of the surroundings to support object detection, text recognition, and face identification. A mini HD camera with a 120-degree field of view covers the user's surroundings, and AI-powered image processing spots objects, reads signs, and recognizes facial features.
Motion Sensors: Detect obstacles and movement to prevent collisions. A LiDAR sensor provides distance measurements that assist navigation, while ultrasonic sensors detect obstacles within about 2-3 meters and relay this information to the user through vibrations or sounds; a simple obstacle-alert loop is sketched below.
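As a rough illustration of this behavior, the Python sketch below turns range readings into spoken warnings. The read_distance_cm and speak functions are hypothetical placeholders standing in for the real sensor driver and the bone conduction speaker, and the 200 cm threshold simply mirrors the 2-3 meter range mentioned above.

import random
import time

ALERT_DISTANCE_CM = 200  # warn when an obstacle is closer than ~2 m (per the 2-3 m range above)

def read_distance_cm():
    # Stand-in for the real ultrasonic driver: here we simply simulate a reading.
    return random.uniform(50, 400)

def speak(message):
    # Stand-in for audio output through the bone conduction speaker.
    print(message)

def obstacle_alert_loop(polls=20):
    # Poll the sensor a few times per second and warn when something is close.
    for _ in range(polls):
        distance = read_distance_cm()
        if distance < ALERT_DISTANCE_CM:
            speak(f"Obstacle ahead, about {distance / 100:.1f} meters away.")
        time.sleep(0.25)

if __name__ == "__main__":
    obstacle_alert_loop()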
Bone Conduction Speaker: Delivers audio feedback without blocking the ears. Bone conduction technology transmits sound through the skull bones instead of through the ear canal, enabling the user to hear environmental sounds while receiving instructions.
Bluetooth Connectivity: Enables wireless interaction with a smartphone and other devices. Bluetooth Low Energy (BLE) keeps power consumption low while maintaining a strong connection, letting the smart glasses pair and sync with a companion mobile app for software updates and additional functionality; a brief discovery sketch follows this item.
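For illustration only, the following Python sketch shows how a companion app could discover the glasses over BLE using the bleak library; the advertised device name "SmartGlasses" is an assumption made for this example.

import asyncio
from bleak import BleakScanner  # pip install bleak

TARGET_NAME = "SmartGlasses"  # assumed advertising name of the glasses

async def find_glasses(timeout=5.0):
    # Scan for nearby BLE devices and return the first one matching the name.
    devices = await BleakScanner.discover(timeout=timeout)
    for device in devices:
        if device.name and TARGET_NAME in device.name:
            return device
    return None

async def main():
    device = await find_glasses()
    if device:
        print(f"Found glasses at {device.address}, ready to pair.")
    else:
        print("Glasses not found; make sure they are powered on.")

if __name__ == "__main__":
    asyncio.run(main())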
Battery and Power Management: Provides long-lasting, all-day performance. A lithium-ion battery with optimized power-saving modes sustains 8 to 12 hours of continuous use, and USB-C fast charging lets the device recharge quickly.
Software & AI Integration: The software is the brain of the smart glasses, processing visual, audio, and spatial data in real time to assist users.
Computer Vision for Object & Text Recognition: Helps users identify objects, detect faces, and read printed text.
OpenCV: An open-source computer vision library used for image processing and object recognition.
TensorFlow Lite: Runs AI models on-device to identify objects, read text, and recognize familiar people; a minimal inference sketch follows this item.
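The sketch below illustrates on-device inference with the TensorFlow Lite interpreter under stated assumptions: the model file name detect.tflite is hypothetical, and the meaning of each output tensor depends on the detection model actually deployed.

import numpy as np
import tensorflow as tf  # the lighter tflite_runtime package can be used on the device itself

def load_detector(model_path="detect.tflite"):
    # "detect.tflite" is an assumed file name for a small quantized detection model.
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    return interpreter

def detect_objects(interpreter, frame):
    # frame: a numpy array already resized and typed to match the model's input tensor.
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    interpreter.set_tensor(input_details[0]["index"], frame)
    interpreter.invoke()
    # Return every output tensor; which one holds boxes, classes, or scores depends on the model.
    return [interpreter.get_tensor(out["index"]) for out in output_details]

if __name__ == "__main__":
    interpreter = load_detector()
    spec = interpreter.get_input_details()[0]
    dummy = np.zeros(spec["shape"], dtype=spec["dtype"])  # placeholder frame
    print(len(detect_objects(interpreter, dummy)), "output tensors")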
Optical Character Recognition (OCR) with Tesseract: Converts printed or handwritten text into speech; an OCR-to-speech sketch follows this item.
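One plausible OCR-to-speech path is sketched below using OpenCV, the pytesseract wrapper around Tesseract, and the pyttsx3 text-to-speech engine; the image file name is an example only, and the production pipeline on the glasses may differ.

import cv2          # pip install opencv-python
import pytesseract  # pip install pytesseract (requires the Tesseract binary)
import pyttsx3      # pip install pyttsx3

def read_text_aloud(image_path):
    # Load the captured frame and convert it to grayscale to help Tesseract.
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Extract printed text from the image.
    text = pytesseract.image_to_string(gray).strip()

    # Narrate the recognized text through the speaker.
    engine = pyttsx3.init()
    engine.say(text if text else "No readable text found.")
    engine.runAndWait()

if __name__ == "__main__":
    read_text_aloud("menu.jpg")  # example image name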
AI-Powered Voice Assistant: Provides interactive voice support for navigation and access to information. Google Assistant or OpenAI Whisper handles speech recognition and responses, while Natural Language Processing (NLP) interprets and replies to voice commands such as "What is in front of me?", "Read this story.", or "Where is platform 2?" A simple command-routing sketch follows this item.
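As a minimal illustration, the sketch below transcribes a recorded command with the open-source openai-whisper package and routes it with simple keyword matching; a production assistant would use richer NLP, and the audio file name and intent names here are assumptions.

import whisper  # pip install openai-whisper

def transcribe(audio_path):
    # Run speech recognition on a short recorded command.
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    return result["text"].lower()

def route_command(text):
    # Very small keyword-based intent router; real NLP would be more robust.
    if "in front of me" in text:
        return "describe_scene"
    if "read" in text:
        return "read_text"
    if "where is" in text or "navigate" in text:
        return "start_navigation"
    return "unknown"

if __name__ == "__main__":
    command = transcribe("command.wav")  # example file name
    print(command, "->", route_command(command))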
Real-Time GPS Navigation: Guides users along streets, through neighborhoods, and inside public buildings and public spaces. The Google Maps API and OpenStreetMap support outdoor navigation, while indoors Bluetooth beacons can guide users across shopping malls, airports, and similar venues; a small bearing-to-instruction sketch follows this item.
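To show how raw GPS fixes could become spoken guidance, the sketch below computes the distance and compass bearing from the user's position to the next route waypoint with the standard haversine formulas; the coordinates are made-up examples, and real routes would come from the mapping APIs above.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two GPS fixes, in meters.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    # Initial compass bearing from the current position to the waypoint.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def spoken_instruction(lat, lon, wp_lat, wp_lon):
    # Turn distance and bearing into a simple audio prompt.
    dist = haversine_m(lat, lon, wp_lat, wp_lon)
    brg = bearing_deg(lat, lon, wp_lat, wp_lon)
    names = ["north", "northeast", "east", "southeast", "south", "southwest", "west", "northwest"]
    return f"Head {names[int((brg + 22.5) // 45) % 8]} for about {int(dist)} meters."

if __name__ == "__main__":
    print(spoken_instruction(12.9716, 77.5946, 12.9726, 77.5950))  # example coordinates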
Machine Learning for Personalized Assistance: Improves the user experience by learning patterns of behavior. Adaptive AI models become more efficient over time, and personal preferences individualize settings based on past interactions; a minimal preference-learning sketch follows this item.
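A deliberately simple sketch of such preference learning is given below: it only counts how often each command is used and shortens announcements for familiar ones. The threshold and command names are assumptions for illustration, not the actual adaptation logic.

from collections import Counter

class PreferenceModel:
    # Tracks how often each voice command is used and adapts response verbosity.
    def __init__(self, familiar_after=10):
        self.usage = Counter()
        self.familiar_after = familiar_after

    def record(self, command):
        self.usage[command] += 1

    def verbosity(self, command):
        # Frequently used commands get shorter, faster responses.
        return "brief" if self.usage[command] >= self.familiar_after else "detailed"

if __name__ == "__main__":
    prefs = PreferenceModel(familiar_after=3)
    for _ in range(4):
        prefs.record("describe_scene")
    print(prefs.verbosity("describe_scene"))  # brief
    print(prefs.verbosity("read_text"))       # detailed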