
LINGUISTIC SIGN COMMUNICATOR

“A SOLUTION FOR DEAF AND DUMB”

Final Year Project Report

Submitted by:
Tabia Rashid 2012/comp/BS(SE)/14067 1214105
Maria Farooq 2012/comp/BS(SE)/14056 1214094
Anousha Khan 2012/comp/BS(SE)/14032 1214070
Maha Shakeel 2012/comp/BS(SE)/14055 1214093

January 2016

DEPARTMENT OF COMPUTER SCIENCE AND INFORMATION TECHNOLOGY


JINNAH UNIVERSITY FOR WOMEN
5-C NAZIMABAD, KARACHI 74600

PROJECT APPROVAL

Project Title: Linguistic Sign Communicator

By:
Tabia Rashid 2012/comp/BS(SE)/14067 1214105
Maria Farooq 2012/comp/BS(SE)/14056 1214094
Anousha Khan 2012/comp/BS(SE)/14032 1214070
Maha Shakeel 2012/comp/BS(SE)/14055 1214093

Approval Committee:

___________________________                ___________________________
Ms. Narmeen Shawoo Bawany                  Mr. Adeel
(Internal Advisor)                         Senior Software Engineer, Synety Groups
                                           (External Advisor)

___________________________
(Head of the Department)
ABSTRACT

Linguistic Sign Communicator is an interpreter for deaf and dumb people in Pakistani
society. The aim of this interpreter is to present a system that can efficiently translate
Pakistani Sign Language gestures into both text and auditory speech in two languages, i.e.,
English and Urdu. The interpreter makes use of a Microsoft Kinect V1 device, which
possesses special sensors that capture depth images and extract human skeleton points,
along with a microphone array for voice recognition. It can track the skeletons of four
people at a time. The interpreter not only translates alphabets but can also form words,
phrases, and sentences from performed gestures. It also provides learning videos so that
special people can learn easily and test their own gestures.

TABLE OF CONTENTS
ABSTRACT i

TABLE OF CONTENTS ii

LIST OF FIGURES v

LIST OF TABLES vi

ACKNOWLEDGEMENTS vii

Chapter 1 INTRODUCTION 1

1.1 Purpose 1
1.2 Project Overview 1
1.3 Related Studies 2
1.4 Project Boundaries 5
1.5 Scope 6
1.5.1 Scope-in 6
1.5.2 Scope-out 6
1.6 Intended Audience & Reading Suggestions 7

Chapter 2 OVERALL DESCRIPTION 8

2.1 Product Perspective 8


2.2 Product Functions 8
2.3 User Classes and Characteristics 9
2.3.1 Exceptional People 9
2.3.2 Deaf People 9
2.3.3 Dumb People 9
2.4 Tools and Technologies 10
2.5 Design and Implementation Constraints 10
2.6 Design Implications 11
2.7 Assumption and Dependencies 12
2.7.1 Assumptions 12
2.7.2 Dependencies 13

Chapter 3 REQUIREMENT ANALYSIS 14

3.1 Determining User Requirement 14


3.2 Problem Space 14
3.2.1 Design Alternatives 15
3.2.1.1 Google Gestures 15
3.2.1.2 Sign Language gesture rings 16

3.2.2 Rationale behind project 16
3.3 Other non-Functional Requirements 16
3.3.1 Performance Requirements 16
3.3.1.1 Response Time 17
3.3.1.2 Workload 17
3.3.2 Safety Requirements 17
3.4 Software Quality Attributes 17
3.4.1 Correctness 17
3.4.2 Flexibility 17
3.4.3 Interoperability 17
3.4.4 Maintainability 18
3.4.5 Reliability 18
3.4.6 Robustness 18
3.4.7 Ease of Use 18
3.5 Business Constraints 18

Chapter 4 SYSTEM FEATURES 19

4.1 Main Features 19


4.1.1 Add Gesture 19
4.1.2 Capture Gesture 19
4.1.3 Store Gesture 19
4.1.4 Load File 19
4.1.5 Clear List 19
4.1.6 Slide up/Slide down 20
4.1.7 Distance from sensor 20
4.1.8 Repeat Text 20
4.1.9 Factor Recognition 20
4.1.10 Seconds to save 20
4.1.11 Skeleton canvas 20
4.1.12 Text Canvas 20
4.1.13 Image canvas 21
4.2 User Interfaces 21
4.2.1 Communication Mode 21
4.2.2 Testing Mode 23
4.2.3 Recording Mode 24
4.2.4 Sign Learner Mode 26
4.3 Web Application 26
4.3.1 Features 26
4.3.1.1 Download LSC 27
4.3.1.2 Working Demo Video 27
4.3.1.3 Contact Us 27
4.4 Hardware Interfaces 27
4.5 Software Interfaces 27
4.5.1 Kinect SDK 27
4.5.2 VS 2013 28

4.5.3 C# 28
4.6 Memory Constraints 28
4.6.1 Hardware Constraints 28
4.6.2 Application Constraints 28

Chapter 5 IMPLEMENTATION 29

5.1 Kinect Library 29


5.1.1 Contour Tracking 29
5.1.2 Curves 29
5.1.3 DTW Gesture Recognition 30
5.1.4 Fingers 30
5.1.5 Range Finder 30
5.2 Microsoft Kinect Toolkit 31
5.3 Bing Translator API 31
5.4 XML 31

Conclusion 32

Appendix A RESEARCH PAPER REFERENCES 33

Appendix B GLOSSARY 34

Appendix C ANALYSIS MODELS 36

C.1 Activity Diagram 36


C.2 Use Case Diagram 37
C.3 Class Diagram 38

LIST OF FIGURES

Figure 1.1  Signs in Urdu and Swedish ................................................ 6

Figure 2.1  Flow of LSC .............................................................. 12

Figure 3.1  Google Gestures .......................................................... 15

Figure 3.2  Gesture rings ............................................................ 16

Figure 4.1  LSC communication mode in idle mode ...................................... 22

Figure 4.2  LSC communication mode while performing gesture .......................... 22

Figure 4.3  LSC communication mode completed ......................................... 23

Figure 4.4  Select gesture to test ................................................... 23

Figure 4.5  Test saved gestures ...................................................... 24

Figure 4.6  Write name and speech to record gesture .................................. 25

Figure 4.7  Perform and save gestures ................................................ 25

Figure 4.8  Select signs and learn ................................................... 26

Figure C.1  LSC Activity Diagram ..................................................... 37

Figure C.2  LSC Use Case Diagram ..................................................... 38

Figure C.3  LSC Class Diagram ........................................................ 39

LIST OF TABLES
TABLE 1.1 PREVIOUS PROJECTS .......................................................................................... 5

ACKNOWLEDGEMENT

We would like to express our gratitude to all those who assisted us and tolerated us
throughout this work. First, we thank the Almighty for His guidance and wisdom.
We then take this opportunity to express our profound appreciation, respect, and
thankfulness to our mentor, Ms. Narmeen Shawoo Bawany, for her valuable time and
guidance, for expressing her confidence in us by letting us work on a project of this
magnitude using the latest technologies, and for her support, help, and encouragement in
the deployment of this project, without which this piece of work would not have been
possible.
Further, we would like to express our gratefulness to Mr. Adeel, who helped and
motivated us through all the difficult times; without his motivation and appreciation at
every small milestone of this stressful period, we would not have been able to achieve
this success.
We are highly indebted to the MIC (Microsoft Innovation Centre) and Jinnah University
for Women for their guidance and constant supervision, for providing the necessary
information regarding the project, and for their support in completing it.
We would also like to offer sincere gratitude to our parents, who suffered sleepless
nights because the lights were on while we worked on the system. Last but not least, we
are grateful to our friends, who were the best source of motivation during this hectic
phase.
Our thanks and appreciation also go to our colleagues who helped in developing the
project and to everyone who willingly helped us with their abilities.


Chapter 1

INTRODUCTION

1.1 PURPOSE
The main idea involves a project for deaf and dumb people living in Pakistan, to ease
their social lives. Sign language is a non-verbal form of communication found amongst
deaf communities around the world. Sign languages do not have a common origin and are
hence difficult to interpret. LSC is an interpreter that translates hand gestures into
auditory speech.
The main aim of this interpreter is to present a system that can efficiently translate
Pakistani Sign Language gestures into both text and auditory speech. The interpreter
makes use of a skeleton-based technique: the Kinect sensor captures depth images and
extracts human skeleton points, and each performed hand gesture is matched against the
pre-stored gestures. The device not only translates alphabets but can also form words
and sentences using performed gestures that have been saved by the user.

1.2 PROJECT OVERVIEW


LSC's main idea involves a solution for the deaf and dumb community to ease their
social lives. Sign language is a non-verbal form of communication found amongst deaf
communities around the world. Sign languages do not have a common origin and are hence
difficult to interpret. LSC is a solution that translates hand gestures into auditory
speech/text and vice versa.
A gesture in a sign language is a particular movement of the hands with a specific
shape made out of them; facial expressions also count towards the gesture. A posture,
on the other hand, is a static shape of the hand that indicates a sign. Gesture
recognition is classified into two main categories, i.e., vision based and sensor based.

The disadvantage of vision-based techniques is that they require complex algorithms for
data processing. Further challenges in image and video processing include varying
lighting conditions, backgrounds, field-of-view constraints, and occlusion. Sensor-based
techniques, by contrast, offer greater mobility.
The main aim of this software is to present a system that can efficiently translate
Pakistani Sign Language gestures into both text and auditory speech. The interpreter
makes use of a skeleton-based technique built on the Kinect depth sensor: for each hand
gesture performed, the extracted skeleton sequence is matched against pre-stored
gestures. The software not only translates alphabets but can also form words and
phrases using performed gestures. A training mode is offered in the software to make it
easy to learn the signs.

1.3 RELATED STUDIES


Gesture recognition has become an important research field, with the current focus
on interactive hand gesture recognition. Research on computer vision and hand
detection has established solid groundwork for gesture recognition. Covariance
matching can serve as a robust and computationally feasible approach to action
recognition [1][2]. This approach, which involves computing the covariance matrices of
feature vectors that represent an action, can potentially be useful in our 2-D hand gesture
recognition problem as well.
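
To make the covariance idea concrete, the sketch below computes a covariance descriptor
from a sequence of per-frame feature vectors and compares two descriptors. This is our
own illustrative C# sketch, not code from [1] or [2]; in particular, the Frobenius
distance used here is a simplification of the Riemannian metrics those papers use.

using System;

static class CovarianceMatching
{
    // Covariance matrix of T feature vectors of dimension d,
    // where features[t][k] is feature k at frame t.
    public static double[,] Covariance(double[][] features)
    {
        int T = features.Length, d = features[0].Length;
        double[] mean = new double[d];
        foreach (double[] f in features)
            for (int k = 0; k < d; k++) mean[k] += f[k] / T;

        double[,] cov = new double[d, d];
        foreach (double[] f in features)
            for (int i = 0; i < d; i++)
                for (int j = 0; j < d; j++)
                    cov[i, j] += (f[i] - mean[i]) * (f[j] - mean[j]) / (T - 1);
        return cov;
    }

    // Frobenius distance between two covariance descriptors
    // (a simplification; [1][2] use Riemannian metrics).
    public static double Distance(double[,] a, double[,] b)
    {
        int d = a.GetLength(0);
        double sum = 0;
        for (int i = 0; i < d; i++)
            for (int j = 0; j < d; j++)
                sum += (a[i, j] - b[i, j]) * (a[i, j] - b[i, j]);
        return Math.Sqrt(sum);
    }
}
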
In addition to covariance matching, another approach has been proposed in the
literature by Jmaa and Mahdi [3]. Instead of building a dictionary of covariance matrices,
this approach is based on analyzing three primary features extracted from an image: the
location of fingers, the height of fingers, and the distance between each pair of fingers.
This approach requires careful selection of feature vectors to achieve a high
classification rate, and its computational cost is potentially high. On the other hand, it
is limited to hand-digit recognition only. However, the implementation of this approach
is relatively easy and requires less processing.


The method described by Jonathan Hall [9] uses a Markov model, a typical model for a
stochastic sequence of states inferred from observations or data. In this approach, the
observation data are sequential 3D points (x, y, z) of joints. A gesture is recognized
based on the states as well as the transitions between them. These states are hidden, and
hence this type of Markov model is called a Hidden Markov Model (HMM). This method uses
a skeleton-based gesture model and also takes transitions between states into
consideration. The accuracy of the gesture model depends on the initialization of the
states by the user, so erroneous input from the user could lead to poor performance.
Averaging different sets of input states for the same gesture could mitigate this
problem.
The Gesture Service for Kinect project using the Windows SDK [4] considers gestures
to be made up of parts. Each part of a gesture is a specific movement that, combined
with other gesture parts, makes up the whole gesture. This method uses a skeleton-based
gesture model. Recognizing gesture parts alone is not sufficient to recognize a gesture.
The overall system comprises three classes, namely gesture controller, gesture, and
gesture part. The gesture controller controls the transition between gesture parts and
updates the state of each gesture part. Though this method tries to incorporate
transitions, it is not efficient unless a large number of gesture parts that are close to
each other is considered.
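
The sketch below illustrates this gesture-part pattern under our own naming, assuming
the Kinect SDK's Skeleton type; it is not the actual code from [4]. Each part checks one
condition, and the gesture advances through its parts until the last one succeeds.

using System;
using Microsoft.Kinect;

// Result of checking one segment of a composite gesture.
enum GesturePartResult { Fail, Pausing, Succeed }

interface IGesturePart
{
    GesturePartResult Check(Skeleton skeleton);
}

// Illustrative part: right hand raised above the head.
class HandAboveHeadPart : IGesturePart
{
    public GesturePartResult Check(Skeleton skeleton)
    {
        float hand = skeleton.Joints[JointType.HandRight].Position.Y;
        float head = skeleton.Joints[JointType.Head].Position.Y;
        return hand > head ? GesturePartResult.Succeed : GesturePartResult.Fail;
    }
}

// Advances through its parts as each succeeds and raises an event
// once the final part is reached.
class Gesture
{
    readonly IGesturePart[] parts;
    int current;

    public Gesture(IGesturePart[] parts) { this.parts = parts; }

    public event EventHandler Recognized;

    // Called by a gesture controller on every skeleton frame.
    public void Update(Skeleton skeleton)
    {
        if (parts[current].Check(skeleton) == GesturePartResult.Succeed)
        {
            current++;
            if (current == parts.Length)
            {
                current = 0;
                if (Recognized != null) Recognized(this, EventArgs.Empty);
            }
        }
    }
}
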
The Kinetic Space described in [5] provides a tool which allows anybody to record and
automatically recognize customized gestures using the depth images and skeleton data
provided by the Kinect sensor. This method is very similar to the Hidden Markov Model
approach [9] discussed above: it uses a skeleton-based gesture model and also takes
transitions between states into consideration. No code has to be written by the trainer.
The unique analysis routines allow it not only to detect simple gestures such as
pushing, clicking, forming a circle, or waving, but also to recognize more complicated
gestures such as those used in dance performances or sign language. In addition, it
provides visual feedback on how closely individual body parts resemble a given gesture.


This method does not consider breaking a gesture into segments or parts; as a result, a
large amount of data is used to describe a gesture, making it a memory-inefficient
solution. Considering gesture segments and interpolating them would result in a more
memory-efficient solution.
The project described in [6] allows developers to include fast, reliable, and highly
customizable gesture recognition in Microsoft Kinect SDK C# projects. This method uses
a skeleton-based gesture model. It uses the Dynamic Time Warping (DTW) algorithm [7]
for measuring similarity between two sequences which may vary in time or speed. It uses
skeletal tracking, but the drawback of this software is that it currently supports only
2D vectors and not 3D.
The software includes a gesture recorder that records the user's skeleton and trains
the system; the recognizer then recognizes the gestures that have been trained by the
user. It does not track whether a user is correctly following a trajectory between
poses, and it does not give incremental feedback along the way.
A method for contact-less hand gesture recognition using the Microsoft Kinect for
Xbox is described in [8]. The system can detect the presence of gestures, identify
fingers, and recognize the meanings of nine gestures in a pre-defined Popular Gesture
scenario.
The accuracy of the system is from 84 percent to 99 percent for single-hand gestures.
Because the depth sensor of the Kinect is an infrared camera, the lighting conditions,
signers' skin colors and clothing, and background have little impact on the performance
of this system. The method has a good accuracy rate but is limited to hand gestures.
Further past projects are summarized in Table 1.1:


Paper                      Primary method of        No. of     Background to    Additional markers     No. of training   Accuracy  FR
                           recognition              gestures   gesture images   required (such as      images
                                                    recognized                  wrist bands)

[Bauer & Hienz, 2000]      Hidden Markov models     97         General          Multi-colored gloves   7 hours of        91.7%     -
                                                                                                       training

[Starner, Weaver &         Hidden Markov models     40         General          No                     400 training      97.6%     10
Pentland, 1998]                                                                                        sentences

[Bowden & Sarhadi, 2000]   Linear approximation     26         Blue screen      No                     7441 images       -         -
                           to non-linear point
                           distribution models

[Davis & Shah, 1994]       Finite state machine     7          Static           Markers on gloves      10 sequences of   ~98%      10
                                                                                                       200 frames each
Table 1.1 Previous Projects

1.4 PROJECT BOUNDARIES


Deaf people use signs to communicate. When a deaf person wants to express
something, they make signs using their hands and fingers. Each particular sign
means a distinct letter, word, or expression. One can compare a sign to a word in our
(hearing people's) case; e.g., the sign in the picture means "what" ("kiya" in Urdu and
"vad" in Swedish).


Figure 1.1 Signs in Urdu and Swedish

Figure 1.1 shows the sign for "what", first in Pakistani Sign Language and then in
Swedish Sign Language. Other issues could be:
 One semantic concept corresponds to a specific sign.
 Several semantic concepts are mapped onto a unique sign.
 One semantic concept generates several signs.
 Verbs, general and specific nouns.

1.5 SCOPE

1.5.1 SCOPE-IN

 Real-time gesture recording and storing in a database.
 Sign language is interpreted into natural language (English + Urdu) with a limited
vocabulary.
 The interpreter can be used for multi-person communication.
 Sign Learning Mode can help such people in learning and studying PSL gestures.
 Signs can be tested in the Testing module once saved as XML.
 The vocabulary of the book "Pakistan Sign Language" is implemented.
 The interpreter converts the performed signs into a complete sentence.
 Gestures stored in XML are basic-level gestures.
 Sentences made by combining words are of basic level.
 Customizable gestures can be recorded.


1.5.2 SCOPE-OUT

 All languages for translation can be implemented.
 Kinect V2 can be used with the project for more accuracy in gestures.

1.6 INTENDED AUDIENCE AND READING SUGGESTIONS


The LSC is an interpreter that helps persons who are deaf and dumb communicate more
effectively with others. It is assumed that the deaf and dumb person clearly knows sign
language and also understands English or Urdu at least at a very basic level. We have
two types of users: one who is deaf and dumb, and the other a normal person, i.e., one
who can understand English/Urdu.


Chapter 2

OVERALL DESCRIPTION

2.1 PRODUCT PERSPECTIVE


There is always a need for new products and technologies in every country to help
people in their lives. Our perspective for this product is to achieve better
communication between normal people and hearing-impaired persons within Pakistan.
Pakistan has approximately 2 million people facing this problem; they are deprived of
various social activities and underestimated by our society.
Suppose a deaf customer goes to a shop. She tries to express her demands to the
shopkeeper using sign language, but the shopkeeper cannot understand them. LSC can be a
desirable interpreter which can help both the general and the deaf community. LSC's
main functionality is to maintain the flow of communication by recognizing signs
continuously.

2.2 PRODUCT FUNCTIONS


Features included in the interpreter are:
 The LSC will interpret PSL gestures into understandable phrases.
 PSL signs will be interpreted in two different languages, i.e., English and Urdu, as
text and speech.
 2D PSL gesture recognition using all the joints of the upper torso (skeleton frame).
 Fast and customizable sign language gesture recognition through the DTW algorithm.
 The user has the opportunity to record new Pakistani Sign Language gestures within
customized timings.


 The user can learn new signs through the learning mode.
 The user has the opportunity to test recorded signs and to reproduce them.

2.3 USER CLASSES AND CHARACTERISTICS


We consider two types of users: one who is deaf and dumb, and the other a normal
person, i.e., one who can understand English and Urdu. There can be variations among
deaf and dumb users, which can be:

2.3.1 EXCEPTIONAL PEOPLE


Exceptional children are those children who are unable to communicate due to a
disability or impairment:
 Communication Disorder
 Receptive Disorder
 Expressive Disorder

2.3.2 DEAF PEOPLE


A deaf person is one who is deficient in hearing power. Some of the types are:
 Pre-lingual
 Post-lingual
 Hard-of-hearing

2.3.3 DUMB PEOPLE

A dumb person is an individual who lacks the ability to speak. Some types are:
 Articulation Disorder
 Fluency Disorder


2.4 TOOLS AND TECHNOLOGIES


Tools and technologies used in LSC are:
 Visual Studio 2012-2013
 C#, WPF applications, XML
 Kinect V1 for Windows
 Kinect SDK and toolkit, version 1.7
 DTW algorithm
 Kinect Studio

2.5 DESIGN AND IMPLEMENTATION CONSTRAINTS

It is assumed that the deaf or dumb person clearly knows Pakistani Sign Language and
also understands English and Urdu at a basic level. There are some limitations within
LSC:
 Brightness due to sunlight and poor contrast can sometimes prevent the sensor from
detecting the expected skin color.
 When the background color of the tracking environment is similar to the skin color,
it is hard to make a decision: LSC receives unexpected pixels and semantic problems
arise.
 Different skeletons can be detected at the same time.
 The nearest skeleton will be detected.
 The sensor should be placed at a proper height where it can scan the whole body.
 The sensor should not be exposed to sunlight or high beams.
 A person who wants to use the system should record their own gestures.


2.6 DESIGN IMPLICATIONS


The initial design flow is illustrated in Figure 2.1. Our design is segmented into three
major sections: data acquisition, data analysis, and the graphical user interface.

The data acquisition phase begins once a user performs a gesture. Our intent is to
track their motion and capture an individual movement. As movement is detected, we
determine the beginning and end of the motion and pass this data into our data analysis
phase.
Data analysis utilizes recognition algorithms that compare the current movement data
against a predefined library. If a gesture is not found, we return to the beginning of
our flow chart and require the user to re-perform the gesture. If the gesture is found,
the translation is displayed on the user interface.
The user interface provides feedback to the user, illustrating the translated word or
phrase through text output. The intended application of the LSC is to allow those who
use Pakistani Sign Language as their first language to communicate with those who are
unable to sign. A device such as this would allow them to communicate seamlessly
despite the language barrier.
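
A minimal sketch of the data analysis stage of this flow, assuming gestures are stored
as sequences of frame vectors; the class name, the injected distance function, and the
threshold-based matching are illustrative rather than the project's actual classes.

using System;
using System.Collections.Generic;

class LscPipeline
{
    readonly Dictionary<string, double[][]> library;         // gesture name -> recorded frames
    readonly Func<double[][], double[][], double> distance;  // e.g. a DTW distance (see Chapter 5)

    public LscPipeline(Dictionary<string, double[][]> library,
                       Func<double[][], double[][], double> distance)
    {
        this.library = library;
        this.distance = distance;
    }

    // Data analysis: compare the captured movement against the predefined
    // library. Returns the translation to display, or null so the caller
    // can ask the user to re-perform the gesture.
    public string Recognize(double[][] captured, double threshold)
    {
        string best = null;
        double bestCost = threshold;
        foreach (KeyValuePair<string, double[][]> entry in library)
        {
            double cost = distance(entry.Value, captured);
            if (cost < bestCost) { bestCost = cost; best = entry.Key; }
        }
        return best;
    }
}
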


Figure 2.1 Flow of LSC

2.7 ASSUMPTIONS AND DEPENDENCIES


Some of the assumptions and dependencies of LSC are given below:

2.7.1 ASSUMPTIONS

 The user knows Pakistani Sign Language.
 The user has a Kinect V1 sensor.
 The user is capable of understanding English and Urdu.


 The user will perform gestures correctly.
 The user will have the right to record new gestures.

2.7.2 DEPENDENCIES

 Internet access is needed to use the Urdu translation service.
 The user must know how to record new gestures.
 A Kinect V1 sensor should be available with a Windows adapter.
 Kinect SDK 1.7 and its toolkit must be installed.


Chapter 3

REQUIREMENT ANALYSIS

3.1 DETERMINING USER REQUIREMENT


The team members managed some customers who were deaf and dumb and took their
requirements, and after examining other similar systems designed previously, they
organized a report through which they inspected all the problems that could arise
during the project. The existing systems were almost the same, though each offered
different kinds of facilities. When the members talked with users of such systems, they
were asked to keep the system simple: users wanted a system which is easy to use and
works efficiently. The members also studied different research papers on previous
inventions addressing the same requirements and found much guidance in those papers;
some related studies are mentioned in Chapter 1 under the Related Studies section. LSC
implements the book "PAKISTAN SIGN LANGUAGE", which contains gestures for 1000 basic
vocabulary words.

3.2 PROBLEM SPACE


About 2 million people in Pakistan are deaf and dumb. How often does one come across
these people communicating with the hearing world? Communication between a deaf and a
hearing person poses a serious problem compared to communication between normal people.
This leaves very little room for them, communication being a fundamental aspect of
human life.
Suppose a deaf/dumb customer goes to a shop and tries to express their demands to the
shopkeeper using sign language, but the shopkeeper is a normal person and cannot
understand sign language. LSC will help in this scenario: it will capture the video and
convert it into text which can be read and spoken aloud.


The biggest problem arising in real-time interaction with the deaf and dumb has been a
semantic one: for example, a person performs a gesture and now wants to stop the
conversation; how will he express the full stop? This was the most problematic
condition in previously built interpreters.

3.2.1 DESIGN ALTERNATIVES


Many designs involving hand gesture recognition have been made at present. A few of
them are discussed below:
 Google Gesture
 Sign language gesture rings

3.2.1.1 GOOGLE GESTURES


The concept imagines an application called Google Gesture and two arm bands worn near
the middle of one's forearms. Using a variety of technologies, the arm bands analyze
the signer's movements through a process called electromyography. That information is
sent to the Google Gesture app, where it is spoken in real time from one's smartphone
or tablet. This allows others to hear what the individual is signing, enabling
communication with those who don't know sign language.

Figure 3.1 Google gestures


3.2.1.2 SIGN LANGUAGE GESTURE RINGS:


The gesture-to-speech aspect works fine when the hearing-impaired person wants to talk
to someone else, but what about the other direction? The bracelet carries the double
duty of turning sound into text that runs across an LED display. It seems like the only
thing these designers have left to do is actually make people hear again.

Figure 3.2 Gesture Rings

3.2.2 RATIONALE BEHIND PROJECT


Among all the designs discussed above, LSC can be a desirable interpreter which helps
both the general and the deaf community. LSC tries to solve the communication problem
between normal and deaf/dumb people. It can also increase literacy, make education more
efficient, and enlarge employment opportunities that promote independence, providing
accessibility that can help schools and employers comply with federal mandates. LSC
will provide a good interface and a good working device which will be easy for everyone
to understand.

3.3 OTHER NON-FUNCTIONAL REQUIREMENTS


Some of the non-functional requirements are as follows:

3.3.1 PERFORMANCE REQUIREMENT


In order to assess the performance of the system, the following are considered:
 Response Time
 Workload


3.3.1.1 RESPONSE TIME


 Gestures will be saved at 40 frames per second.
 The system will recognize a gesture within 1.0 second after it is completely
performed.

3.3.1.2 WORKLOAD
 Only a single person can perform gestures at a time.
 Performing too many gestures continuously may cause the system to hang.
 The Kinect sensor should be protected; due to continuous running it may stop
showing the skeleton, and it then has to be restarted.

3.3.2 SAFETY REQUIREMENT
 The user should be positioned 6 feet (1.8 m) to 8 feet (2.4 m) away from the sensor.
 Protect the Kinect sensor from sunlight.

3.4 SOFTWARE QUALITY ATTRIBUTES


Some of the major software quality attributes to be considered for the LSC are:

3.4.1 CORRECTNESS
Gestures must be performed correctly according to the implemented PSL book in order to
retrieve correct sentences and phrases from the system.

3.4.2 FLEXIBILITY
The user can store required PSL gestures for future use and can also override
pre-recorded gestures.

3.4.3 INTEROPERABILITY
Gesture data and the phrase dictionary are stored in XML format, which is shareable
with other environments and can be reused in similar projects.


3.4.4 MAINTAINABILITY
If any failure occurs in the hardware, it can be resolved or the hardware replaced.

3.4.5 RELIABILITY
The system must be able to run for as long a duration as the Kinect sensor supports.

3.4.6 ROBUSTNESS
The system must be able to continue operating even if it finds more than one skeleton
or no skeleton.

3.4.7 EASE OF USE


Technical and non-technical users can easily use this system thanks to its visible
interface, or they can follow the guidelines provided with the software.

3.5 BUSINESS CONSTRAINTS


 The system will only recognize Pakistani Sign Language.
 The system will translate sign language into English and Urdu only.
 Only a single user can perform a gesture in front of the skeleton canvas at a time.
 If the user wishes to record a new gesture, he should capture the sign and then
save it to a file.
 The position of the skeleton should be perfect.
 Only shoulder, elbow, wrist, hand, and finger points will be detected by the
system.


Chapter 4

SYSTEM FEATURES

4.1 MAIN FEATURES:


Following are the main features of LSC:

4.1.1 ADD GESTURE:


This feature lets a user add a new gesture:
 When the user clicks this button, a pop-up window appears where the user enters
the name of the gesture and then clicks 'OK'.
 After that, the user performs the gesture and stores it.
4.1.2 CAPTURE GESTURE:
Capture allows the user to record the gesture after adding the gesture name.

4.1.3 STORE GESTURE:


The store button automatically stores gestures into an XML file at 32 frames, taking a
minimum of 10 seconds.

4.1.4 LOAD FILE:


The load file feature loads the XML file of stored gestures, where the user can easily
see the point positions along with the gesture names.

4.1.5 CLEAR LIST:


This feature allows the user to clear all previously loaded files.


4.1.6 SLIDE-UP/SLIDE-DOWN:
This feature tilts the Kinect sensor up and down to fit the skeleton's position into
view. It helps the user fit their skeleton without the user having to move.

4.1.7 DISTANCE FROM SENSOR:


This shows the measured distance between the user and the sensor.

4.1.8 REPEAT TEXT:


The repeat text option repeats the text as many times as the user wants.

4.1.9 FACTOR RECOGNITION:


The recognition factor determines how precise the gestures need to be.

4.1.10 SECONDS TO SAVE:


The user can easily decrease or increase the seconds taken to save the gesture by
sliding this control left or right.

4.1.11 SKELETON CANVAS:


The skeleton canvas extracts the skeleton with body points from head to spine,
including the left and right shoulders, left and right elbows, left and right wrists,
and all finger points of both hands.

4.1.12 TEXT CANVAS:


The text canvas displays the words, phrases, or sentences in both English and Urdu, so
the normal person can understand the sign.


4.1.13 IMAGE CANVAS:


The image canvas appears on the main communication window alongside the skeleton
canvas.

4.2 USER INTERFACES


By using LSC, the user can easily communicate with other people. The database includes
signs for different hand movements, so when the user performs an action, LSC displays
the output in the form of text. Little human effort is needed to use LSC and understand
its functionality. The interface is divided into four modes, namely:
 Communication Mode
 Testing Mode
 Recording Mode
 Sign Learner Mode

4.2.1 COMMUNICATION MODE:


This is the main feature of LSC, in which we aimed to achieve better communication
between the deaf/dumb and the normal person. Communication is two-way.
First, a deaf/dumb person performs gestures and LSC translates them into natural
language (both English and Urdu) with proper sentences. The steps to be performed are:
 Stand in front of the Kinect at some distance.
 Let the skeleton appear on screen.
 Perform some gestures.
 The screen will display the text of the performed gesture in the textbox.
 The translated text will be displayed in the textbox.
 Using these words, a sentence will be made.
 The sentence will be shown when the stop gesture is performed.


 If some gestures are not found or not recorded, the system will take you to the
testing mode.
 A second user can reply by writing text in a box and pressing Enter, which will
show videos based on it.

Figure 4.1 LSC Communication Mode in idle mode

Figure 4.2 LSC communication mode while performing gesture


Figure 4.3 LSC communication mode completed

4.2.2 TESTING MODE:


In this mode, a user can check how previously recorded gestures were performed. We
assume that if a user forgets how the gesture for a word was performed by himself,
herself, or someone else, he or she can easily view that gesture through this feature.
The user simply clicks on this mode and types the word of the gesture, and LSC will
show it as it was performed.

Figure 4.4 Select gesture to test


Figure 4.5 Test saved gestures

4.2.3 RECORDING MODE:


Steps to be followed are:

 Start performing some gestures. You can see the names of the gestures in the
select box, and hopefully most of them are obvious to perform. You will see
matches appear in the results text panel at the top of the skeleton canvas.
 Try recording your own gestures. Make sure your skeleton is being tracked,
select the gesture name you want to record, then click the Capture button. The
gesture is currently hard-coded to look at 32 frames (which is actually every
other frame over 64 captured frames).
 When recording of a gesture is finished, click the Store button. Test your new
gesture a few times to see if you are happy with it. If not, re-record it and try
again.
 When you are happy with your results, save your gestures to file by clicking the
Load File button.
 If you want to clear the list, click the Clear List button to remove the list
shown on your right-hand side.


 Make your own gestures: simply amend or add to the select box items with a
unique name and record your gesture.
 A user can now load the file when the application runs, perform the desired
gestures, and see proper sentences and their Urdu translations in communication
mode.

Figure 4.6 Write the name and speech to record gesture

Figure 4.7 Perform and save gesture


4.2.4 SIGN LEARNER MODE:


Sign Learner mode is made to guide new users who do not know sign language or want to
learn some signs on their own. LSC provides Pakistani Sign Language videos: search for
the video you want to learn in the search bar and play it.

Figure 4.8 Select signs and learn

4.3 WEB APPLICATION


We have also introduced a single-page website where users can easily learn about LSC.

4.3.1 FEATURES:
The following features are available on the website.


4.3.1.1 DOWNLOAD LSC:


 Users can easily download the software from our website.
 All system specifications and information are provided so that anyone can
understand them before downloading the software.

4.3.1.2 WORKING DEMO VIDEO:


 On the main page, a working demo video is available for people who wish to see
the software in action.
 This helps everyone understand how the software actually works.

4.3.1.3 CONTACT US:


 Anyone who has queries regarding our project can fill in the contact form.
 As soon as the e-mail is received, a reply is sent by one of our group members.

4.4 HARDWARE INTERFACES


The main hardware device used in this project is the Kinect V1, a special sensor made
by Microsoft Corp. in the past few years. The user simply steps in front of the sensor,
and it provides complete body skeleton points, which are stored for gesture
recognition.

4.5 SOFTWARE INTERFACES


Following are the software interfaces included in our project.

4.5.1 KINECT SOFTWARE DEVELOPMENT KIT


The Kinect for Windows Software Development Kit (SDK) enables developers to
create applications that support gesture and voice recognition, using Kinect sensor
technology on computers running Windows 7, Windows 8, and Windows Embedded
Standard 7.
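
The sketch below shows the basic shape of SDK usage: find a connected sensor, enable
the skeleton stream, and read joint positions as frames arrive. It is a minimal console
illustration; LSC itself does this inside a WPF application.

using System;
using System.Linq;
using Microsoft.Kinect;

class SkeletonCapture
{
    static void Main()
    {
        // Pick the first connected Kinect sensor.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();                 // enable skeleton tracking
        sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        sensor.Start();
        Console.ReadLine();                             // run until Enter is pressed
        sensor.Stop();
    }

    static void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;
            Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);
            foreach (Skeleton sk in skeletons
                     .Where(k => k.TrackingState == SkeletonTrackingState.Tracked))
            {
                SkeletonPoint hand = sk.Joints[JointType.HandRight].Position;
                Console.WriteLine("Right hand: {0:F2} {1:F2} {2:F2}",
                                  hand.X, hand.Y, hand.Z);
            }
        }
    }
}
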


4.5.2 VISUAL STUDIO 2013


Visual Studio 2013 is used to build this interpreter. WPF is used to create its several
windows.

4.5.3 C#
The C# language is used to code the functionality behind the WPF windows.

4.6 MEMORY CONSTRAINTS


Memory constraints for LSC are as follows:

4.6.1 HARDWARE CONSTRAINTS:


 A minimum of 6 GB RAM is required to run this application.
 500 MB of free hard drive space and a 2.6 GHz processor.

4.6.2 APPLICATION CONSTRAINTS:


 Do not save more than 1000 gesture videos in the project, as they will require
more processing to run the application.
 Before running the application, close all unwanted applications running in the
background so that LSC can run fast and smoothly.


Chapter 5

IMPLEMENTATION
We have implemented different modes in LSC for the ease of users. These modes were
discussed in Chapter 4 under the User Interfaces section. The interfaces have been
implemented using several different classes, some of which are discussed in this
chapter.

5.1 KINECT LIBRARY:


The Kinect library is a set of C# classes used to implement new ideas on top of the
Kinect sensor. Some of the classes present in the library are as follows:

5.1.1 CONTOUR TRACKING:


It contains an interface holding the function declarations and a class that implements
that interface with the function definitions. Contour tracking is a fast and robust
approach to detecting and tracking moving objects.

5.1.2 CURVES:
It contains an interface holding the function declarations and a class that implements
that interface with the function definitions. These functions find curves using the
k-curvature algorithm. Curvature is any of a number of loosely related concepts in
different areas of geometry; intuitively, it is the amount by which a geometric object
deviates from being flat, or straight in the case of a line, though this is defined in
different ways depending on the context.
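
The core of the k-curvature test is the angle at a contour point between the vectors to
the points k steps behind and ahead; small angles mark sharp curves such as fingertips.
A minimal sketch (the names are ours, and the caller must keep i - k and i + k inside
the contour):

using System;

static class KCurvature
{
    // Angle in degrees at contour point i between the vectors toward
    // the points k steps behind and k steps ahead along the contour.
    public static double AngleDegrees(float[] xs, float[] ys, int i, int k)
    {
        double ax = xs[i - k] - xs[i], ay = ys[i - k] - ys[i];
        double bx = xs[i + k] - xs[i], by = ys[i + k] - ys[i];
        double dot = ax * bx + ay * by;
        double mag = Math.Sqrt(ax * ax + ay * ay) * Math.Sqrt(bx * bx + by * by);
        return Math.Acos(dot / mag) * 180.0 / Math.PI;
    }
}
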


5.1.3 DTW GESTURE RECOGNITION:


Dynamic Time Warping (DTW) is an algorithm for measuring similarity between two
temporal sequences which may vary in time or speed, warping them until an optimal match
(according to a suitable metric) between the two sequences is found.
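
A minimal sketch of the DTW recurrence underlying this class, for two gestures stored
as sequences of joint-coordinate vectors. The real library adds refinements such as
slope constraints, so treat this as illustrative rather than the project's exact code:

using System;

static class Dtw
{
    // DTW distance between two sequences of coordinate vectors,
    // e.g. a[t] = flattened 2D joint positions at frame t.
    public static double Distance(double[][] a, double[][] b)
    {
        int n = a.Length, m = b.Length;
        double[,] cost = new double[n + 1, m + 1];
        for (int i = 0; i <= n; i++)
            for (int j = 0; j <= m; j++)
                cost[i, j] = double.PositiveInfinity;
        cost[0, 0] = 0;

        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= m; j++)
            {
                double d = Euclidean(a[i - 1], b[j - 1]);
                // Cheapest way to reach (i, j): match, insertion, or deletion.
                cost[i, j] = d + Math.Min(cost[i - 1, j - 1],
                                Math.Min(cost[i - 1, j], cost[i, j - 1]));
            }
        return cost[n, m];
    }

    static double Euclidean(double[] x, double[] y)
    {
        double sum = 0;
        for (int k = 0; k < x.Length; k++)
            sum += (x[k] - y[k]) * (x[k] - y[k]);
        return Math.Sqrt(sum);
    }
}
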

5.1.4 FINGERS:
This class contains functions to capture finger positions from the depth image. It
tracks all the fingers of both hands.

5.1.5 RANGE FINDER:


This class finds the maximum and minimum sensor range distances, in millimeters. It
scans for a hand and sets its position as the minimum depth distance; the maximum
distance is then the minimum distance plus a specified distance interval.
Alternatively, it can scan for a second hand further away and set that hand's distance
as the maximum depth distance.

5.2 MICROSOFT KINECT TOOLKIT:


The Microsoft Kinect Toolkit is provided by Microsoft and contains several Kinect
accessories and tools that help in making good-looking user interfaces. A few of them
are as follows:
 Kinect Buttons
 Kinect Adapter
 Kinect Cursor
 Kinect Region


5.3 BING TRANSLATOR API:


The Microsoft Translator API, also known as the Bing Translator, is a web service for
machine translation. The interpreter interprets the performed gestures, forms sentences
in English, and then translates them to Urdu through the Bing Translator API. This API
is used in communication mode, where sentences are translated into Urdu.
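
As an illustration, the call below targets the legacy Translator V2 HTTP endpoint that
was current when this report was written (the service has since been retired in favor
of Azure); obtaining the OAuth access token is omitted, and the details should be
treated as a sketch rather than the project's exact code.

using System;
using System.Net;
using System.Xml.Linq;

class UrduTranslator
{
    // Translate an English sentence to Urdu via the legacy V2 HTTP API.
    public static string ToUrdu(string english, string accessToken)
    {
        string url = "https://api.microsofttranslator.com/v2/Http.svc/Translate"
                   + "?text=" + Uri.EscapeDataString(english)
                   + "&from=en&to=ur";
        using (WebClient client = new WebClient())
        {
            client.Headers.Add("Authorization", "Bearer " + accessToken);
            // The response is a small XML document wrapping the translation.
            string xml = client.DownloadString(url);
            return XDocument.Parse(xml).Root.Value;
        }
    }
}
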

5.4 XML:
LSC saves gestures in XML format, with several nodes containing the positions and
directions of the skeleton performing the gesture in front of the Kinect. It stores the
frame, finger distances and positions, Kinect distances and positions from the
skeleton, and joint types as an array list. LSC also uses an XML dictionary to retrieve
sentences. The format for sentences is as follows:

<?xml version="1.0" encoding="utf-8" ?>


<sentences>
<s>how are you?</s>
<s>asslamoalaikum all !!</s>
<s>what about you?</s>
<s>what is your name?</s>
<s>what is your school name?</s>
<s>in which class are you?</s>
<s>how is life?</s>
<s>how is job?</s>
<s>i am fine.</s>
<s>i am waiting for you.</s>
<s>call the police.</s>
<s>best of luck</s>
</sentences>
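
Loading this dictionary with LINQ to XML takes only a few lines; a small sketch (the
file name is illustrative):

using System;
using System.Linq;
using System.Xml.Linq;

class SentenceDictionary
{
    static void Main()
    {
        // Load the sentence dictionary shown above.
        XDocument doc = XDocument.Load("sentences.xml");
        string[] sentences = doc.Root.Elements("s")
                                .Select(e => e.Value)
                                .ToArray();

        // Pick the first stored sentence containing a recognized word.
        string match = sentences.FirstOrDefault(s => s.Contains("name"));
        Console.WriteLine(match ?? "no matching sentence");
    }
}
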


CONCLUSION

Sign language is used all around Pakistan by deaf/dumb people. There is often a barrier
between those who use Pakistani Sign Language as a first language and those who don't
know it, and we have tried to bridge this gap in communication by designing the
Linguistic Sign Communicator. The LSC captures gestures as 2D data using the Microsoft
Kinect hardware. Our program uses the Dynamic Time Warping algorithm to detect signs
very accurately and display them on a computer screen in a user-friendly manner.
The LSC is designed as a proof of concept, and the results of the final product are
very accurate. A library of PSL words and phrases was recorded, and when the LSC was
tested, the desired sign was almost always detected by the software and converted into
two natural languages, English and Urdu. There are, however, improvements that can be
made easily using the core concepts derived in this project.
Although the Linguistic Sign Communicator is a proof of concept, many improvements
could make it better. Pakistani Sign Language also includes facial gestures (e.g.,
raised eyebrows) that could be incorporated into the Linguistic Sign Communicator using
imaging techniques; adding facial gestures would make this project much more robust.
A Pakistani Sign Language linguist would also be needed to get sentence concatenation
right. There are also different dialects in the different regions where Pakistani Sign
Language is used, and these would need to be considered while recording the signs.


Appendix A

RESEARCH PAPER REFERENCES:


[1] K. Guo, P. Ishwar, and J. Konrad, "Action Recognition in Video by Covariance
Matching of Silhouette Tunnels," Proc. 22nd Brazilian Symposium on Computer Graphics
and Image Processing (SIBGRAPI 2009), 2009.
[2] O. Tuzel, F. Porikli, and P. Meer, "Region Covariance: A Fast Descriptor for
Detection and Classification," in Proc. ECCV (2), 2006, pp. 589-600.
[3] A. B. Jmaa and W. Mahdi, "A New Approach For Digit Recognition Based On Hand
Gesture Analysis," International Journal of Computer Science and Information Security
(IJCSIS), Vol. 2, No. 2, 2009.
[4] "Gesture service for the Kinect with the Windows SDK," MCS UK Solution
Development, 2011.
[5] M. Woelfel, "Kinetic Space," Google Code, 2012.
[6] "Kinect SDK Dynamic Time Warping (DTW) Gesture Recognition," CodePlex, 2011.
[7] "Dynamic time warping (DTW)," Wikipedia. [Online]. Available:
https://en.wikipedia.org/wiki/Dynamic_time_warping
[8] H. Du and T. To, "Hand Gesture Recognition Using Kinect," Boston University,
Department of Electrical and Computer Engineering, 8 Saint Mary's Street.
[9] J. Hall, "How to Do Gesture Recognition Using Kinect with Hidden Markov Models
(HMMs)," CreativeDistraction, 2011.


Appendix B

GLOSSARY
LSC:
Linguistic Sign Communicator, a software application for the deaf/dumb community all
over Pakistan which helps ease their social lives.

Kinect V1
Kinect is a hardware device made by Microsoft Corporation which possesses various
sensors that capture color, depth, and skeleton images, through which various
applications can be built using human NUI gestures.

DTW
Dynamic Time Warping is an algorithm for measuring similarity between two temporal
sequences which may vary in time or speed, warping them until an optimal match
(according to a suitable metric) between the two sequences is found.

XML
Extensible Markup Language is a simple, very flexible text format derived from SGML
(ISO 8879). Originally designed to meet the challenges of large-scale electronic
publishing, XML is also playing an increasingly important role in the exchange of a wide
variety of data on the Web and elsewhere.

SDK
Software Development Kit is typically a set of software development tools that allows
the creation of applications for a certain software package, software framework, hardware
platform, computer system, video game console, operating system, or similar
development platform.


PSL
Pakistani Sign Language. The "Pakistan Sign Language" resource used in this project
contains 5,000 unique words and phrases, and growing. Each word has a graphic
illustration and a voice-over in English and Urdu, and entries can be viewed by
category or searched individually.

WPF
Windows Presentation Foundation is a graphical subsystem for rendering user interfaces
in Windows-based applications by Microsoft. WPF, previously known as "Avalon", was
initially released as part of .NET Framework 3.0. Rather than relying on the older GDI
subsystem, WPF uses DirectX.

API
Application Program Interface is a set of routines, protocols, and tools for building
software applications. The API specifies how software components should interact
and APIs are used when programming graphical user interface (GUI) components.

2D
Two-dimensional computer graphics is the computer-based generation of digital images,
mostly from two-dimensional models (such as 2D geometric models, text, and digital
images) and by techniques specific to them. The term may stand for the branch of
computer science that comprises such techniques or for the models themselves.


Appendix C

ANALYSIS MODELS

C.1 ACTIVITY DIAGRAM
A UML Activity diagram showing the process is given below:

Figure C.1 LSC Activity Diagram


C.2 USE CASE DIAGRAM


A UML Use Case diagram showing the process is given below:

Figure C.2 LSC Use case Diagram


C.3 CLASS DIAGRAM

Figure C.3 LSC Class Diagram

RESUMES
Tabia Rashid
House #B-95, Architect & Engineering Housing Society, Gulistan-e-Jauhar, Block 08,
Karachi, Pakistan
Cell no: (+92)346-2570782
Phone: 021-346643156
Email: [email protected]
DOB: 09th October 1993
Sex: Female
Marital Status: Single

Objective
To excel in the fields of web development and software engineering in the business and
academic industries, with proven leadership skills in developing and managing projects
and the ability to work as part of a team. I am willing to dedicate myself strictly to
adhering to employment ethics and to giving my best to the respective company.

Programming Skills
 .NET framework
 PHP + AJAX + MySQL
 Languages: C, C++, Java, C#
 Html 5
 CSS3
 Android
 XML
 Database applications using Java
 SQL management System
 Visual Basic
 MS OFFICE

Recent Projects
Linguistic Sign Communicator (Year 2015, FYP, Jinnah University For Women)
Adda Fashion (Year 2015, made for a client)
Fun MP4 Tube (Year 2014, Jinnah University For Women)
Edible Arts by Nadia (Year 2014, Jinnah University For Women)
Clinical Management System (Year 2012, Jinnah University For Women)
Import Shipping Module (Year 2013, Jinnah University For Women)
Sunlight Inventory System (Year 2013, Jinnah University For Women)
Achievements
Research Paper (Evaluation of Smart phone Applications Accessibility for Blind Users)

Best Poster Award (Linguistic Sign Communicator)

Technologies
Software: Visual Studio 2013, .NET Framework, NetBeans, Eclipse, Notepad++, MS Office
(Word, Access, Excel, PowerPoint).

Qualification:

Education                              Institute                         Year

BS (Software Engineering)              Jinnah University For Women       2012-2015
Intermediate (H.S.C)                   S.R.E Majeed College              2010-2011
Matriculation (S.S.C, Karachi Board)   The American Foundation School    2008-2009

Portfolio on Request

“I allow university authorities to publish my resume online and to submit/send my
resume to any organization.”
Maha Shakeel
House #R-398, 15-A/4, Buffer Zone, Karachi, Pakistan.
Cell no: (+92)304 2345971
Phone: 021- 36920763
Email: [email protected]
DOB: 12th November 1992
Sex: Female
Marital Status: Single

Objective
To excel in the fields of web development and software engineering in the business and
academic industries, with proven leadership skills in developing and managing projects
and the ability to work as part of a team. I am willing to dedicate myself strictly to
adhering to employment ethics and to giving my best to the respective company.

Programming Skills
 .NET framework
 PHP + AJAX + MySQL
 Languages: C, C++, Java, C#
 Html 5
 CSS3
 Android
 XML
 Database applications using Java
 SQL management System
 Visual Basic
 MS OFFICE

Recent Projects
Linguistic Sign Communicator (Year 2015, FYP, Jinnah University For Women)
Fun MP4 Tube (Year 2014, Jinnah University For Women)
Clinical Management System (Year 2012, Jinnah University For Women)

Achievements
Best Poster Award (Linguistic Sign Communicator)
Technologies
Software: Visual Studio 2013, .NET Framework, NetBeans, Eclipse, Notepad++, MS Office
(Word, Access, Excel, PowerPoint).

Qualification:

Education                              Institute                             Year

BS (Software Engineering)              Jinnah University For Women           2012-2015
Intermediate (H.S.C)                   Science and Commerce Govt.            2010-2011
                                       Degree College
Matriculation (S.S.C, Karachi Board)   Metropolice Academy                   2008-2009

Portfolio on Request
“I allow university authorities to publish my resume online and to submit/send resume to
any organization”.
Anousha Khan
Flat #A1-316, Unique Classic, Block 15, Gulistan-e-Jauhar, Karachi, Pakistan
Cell no: (+92)332-8233858
Phone: 021-34012349
Email: [email protected]
DOB: 18th September 1993
Sex: Female
Marital Status: Married

Objective
To excel in the fields of web development and software engineering in the business and
academic industries, with proven leadership skills in developing and managing projects
and the ability to work as part of a team. I am willing to dedicate myself strictly to
adhering to employment ethics and to giving my best to the respective company.

Programming Skills
 .NET framework
 PHP + AJAX + MySQL
 Languages: C, C++, Java, C#
 Html 5
 CSS3
 Android
 XML
 Database applications using Java
 SQL management System
 Visual Basic
 MS OFFICE

Recent Projects
Linguistic Sign Communicator (Year 2015, FYP, Jinnah University For Women)
Adda Fashion (Year 2015, made for a client)
Fun MP4 Tube (Year 2014, Jinnah University For Women)
Edible Arts by Nadia (Year 2014, Jinnah University For Women)
Clinical Management System (Year 2012, Jinnah University For Women)
Import Shipping Module (Year 2013, Jinnah University For Women)
Sunlight Inventory System (Year 2013, Jinnah University For Women)
Achievements
Best Poster Award (Linguistic Sign Communicator)

Technologies
Software: Visual Studio 2013, .NET Framework, NetBeans, Eclipse, Notepad++, MS Office
(Word, Access, Excel, PowerPoint).

Qualification:

Education                              Institute                         Year

BS (Software Engineering)              Jinnah University For Women       2012-2015
Intermediate (H.S.C)                   S.R.E Majeed College              2010-2011
Matriculation (S.S.C, Karachi Board)   The American Foundation School    2008-2009

Portfolio on Request
“I allow university authorities to publish my resume online and to submit/send resume to
any organization”.
Maria Farooq
House #14, Block C, Police Headquarter Garden, Karachi, Pakistan
Cell no: (+92)321-5236705
Email: [email protected]
DOB: 25th March 1994
Sex: Female
Marital Status: Single

Objective
To be part of a progressive environment for career advancement and professional growth,
which will help me gain sufficient knowledge.

Programming Skills
 .NET framework
 PHP + MySQL
 Languages: C, C++, Java, C#
 Html 5
 CSS3
 Android
 XML
 Database applications using Java
 SQL management System
 Visual Basic
 MS OFFICE

Recent Projects
Linguistic Sign Communicator (Year 2015, FYP, Jinnah University For Women)
Fun MP4 Tube (Year 2014, Jinnah University For Women)
Edible Arts by Nadia (Year 2014, Jinnah University For Women)
Car Showroom (Year 2012, Jinnah University For Women)
Import Shipping Module (Year 2013, Jinnah University For Women)
Sunlight Inventory System (Year 2013, Jinnah University For Women)

Achievements
Research Paper (Evaluation of Smart phone Applications Accessibility for Blind Users)

Best Poster Award (Linguistic Sign Communicator)


Technologies
Software: Visual Studio 2013, .NET Framework, NetBeans, Eclipse, Notepad++, MS Office
(Word, Access, Excel, PowerPoint).

Qualification:

Education                              Institute                             Year

BS (Software Engineering)              Jinnah University For Women           2012-2015
Intermediate (H.S.C)                   Women College, Sharah-e-Liaquat       2010-2011
Matriculation (S.S.C, Karachi Board)   The Citizen Foundation School         2008-2009

Portfolio on Request
“I allow university authorities to publish my resume online and to submit/send resume to
any organization”.
