Project Report On Attendance System Using Face Recognition
CHAPTER 1. INTRODUCTION
1.1 Introduction
algorithms. The report encompasses a detailed description of the
system architecture, functionalities, methodologies, testing
procedures, and performance evaluation.
Problem Statement
management. The face recognition attendance system seeks
to address these shortcomings by harnessing the capabilities
of computer vision and machine learning technologies. By
automating the attendance tracking process through facial
recognition algorithms, the system aims to streamline
operations, eliminate manual intervention, and enhance the
integrity and security of attendance records.
1.4 Need of Project
The need for the Attendance System using Face Recognition arises
from the limitations and inefficiencies of traditional attendance
tracking methods. Here are the primary needs driving the
development of this project:
social distancing guidelines. The system enables attendance tracking
without the need for physical contact or shared surfaces, contributing
to a safer environment.
1.5 Scope:
The scope of the proposed attendance system using Haar Cascade and
LBPH face recognition algorithms encompasses several key
components. Firstly, the project will involve the development and
implementation of software capable of capturing, processing, and
recognizing faces in real-time using the Haar Cascade and LBPH
algorithms. This software will be designed to integrate seamlessly
with existing attendance management systems or operate as a
standalone solution. Additionally, the project will include the design
and deployment of a user-friendly interface for administrators to
manage attendance records, view reports, and configure system
settings. The system will be tested rigorously to evaluate its accuracy,
efficiency, and reliability under various environmental conditions and
with diverse user demographics. Furthermore, the scope extends to
documentation encompassing user manuals, technical specifications,
and guidelines for system maintenance and troubleshooting. Lastly,
the project will consider scalability and potential future
enhancements, such as incorporating advanced machine learning
techniques or integrating with biometric authentication systems for
enhanced security.
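As a minimal sketch of the real-time capture and detection step that this scope refers to, the snippet below combines OpenCV's bundled Haar Cascade file with the default webcam; it is illustrative only and is not the project's final implementation.

import cv2

# Load OpenCV's bundled frontal-face Haar Cascade
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cam = cv2.VideoCapture(0)                  # default webcam
while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces and draw a rectangle around each one
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("Real-time face detection", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to stop
        break
cam.release()
cv2.destroyAllWindows()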
1.6 Objectives:
Provide security
periods. This facilitates data analysis, trend identification, and
decision-making related to resource allocation, scheduling, and
performance evaluation.
CHAPTER 2. LITERATURE SURVEY
technology, highlighting the need for a more sophisticated solution like
the proposed attendance system utilizing Haar Cascade and LBPH.
Limitations
• Environmental Factors: Environmental factors such as
ambient lighting, background clutter, and crowd density can
impact the performance of face detection and recognition
algorithms, potentially leading to decreased accuracy or
reliability in crowded or poorly lit environments. Organizations
may need to optimize environmental conditions or implement
supplementary measures to mitigate these challenges.
CHAPTER 3. SYSTEM DEVELOPMENT
3.2 Benefits:
CHAPTER 4. PROJECT REQUIREMENTS
3. Memory (RAM): Minimum 4GB RAM for efficient processing,
though higher RAM capacity may be beneficial for handling larger
datasets and multiple concurrent users.
4.2.1 Tools & Technologies
1. Python:
3. SQLite Database:
SQLite databases are stored in a single file, making them easy
to manage and transfer.
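For illustration, the project's single-file database (Face.db) could be initialized as sketched below; the StudentDetails schema shown here is an assumption inferred from the insert statements in the code listed later in this report.

import sqlite3

conn = sqlite3.connect('Face.db')   # creates the file if it does not exist
c = conn.cursor()
# Assumed schema, matching the (Id, Name) inserts used during registration
c.execute("""CREATE TABLE IF NOT EXISTS StudentDetails (
                 Id   INTEGER PRIMARY KEY,
                 Name TEXT NOT NULL
             )""")
conn.commit()
conn.close()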
4. GUI Development:
Tkinter:
5. NumPy:
NumPy is utilized for handling image data as arrays, enabling
efficient manipulation and processing.
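For example, an image loaded through OpenCV is already a NumPy array, so pixel-level operations reduce to array operations; the file name in the sketch below is purely illustrative.

import cv2

img = cv2.imread("sample_face.jpg")              # illustrative file name
if img is not None:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # 2-D uint8 array
    # Shape, data type and mean pixel intensity of the array
    print(gray.shape, gray.dtype, gray.mean())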
6. Matplotlib:
CHAPTER 5. SYSTEM DESIGN
5.1 System Architecture
In this system, the user interacts with the application through a GUI (graphical user interface). The user enters details such as name and number and clicks Capture; the system then captures images of the user through the webcam, processes them with OpenCV, and stores them in the TrainingImage folder on the local system. The model is then trained on the images stored in the TrainingImage folder, and if training succeeds the user's details are stored in the SQLite database. When a user marks attendance, an image of the user is captured through the webcam and compared against the faces on which the model was trained. If the user is recognized, the system fetches the user's details from the database and records the attendance, with date and time, in Attendance.csv on the local system; the file is processed using pandas, and the user receives a success notification once attendance is marked. If an unknown user tries to mark attendance, the system captures the images and stores them in the ImagesUnknown folder.
Fig 5.1.1 System architecture diagram.
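The comparison step described above relies on the confidence score returned by the LBPH recognizer, where a lower value indicates a closer match. The sketch below shows that decision in isolation; the model path and the threshold of 50 mirror the attendance code listed later in this report, but the sketch is illustrative rather than the exact implementation.

import cv2

# Load the previously trained LBPH model (path mirrors the listing later on)
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("TrainingImageLabel\\Trainner.yml")

def identify(gray_face):
    """Return the matched student Id, or None if the face is unknown."""
    student_id, conf = recognizer.predict(gray_face)
    # Lower LBPH confidence means a closer match; 50 is the threshold used
    # when marking attendance
    return student_id if conf < 50 else None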
1. Start:
After starting the project, the GUI appears on the screen and shows options such as Take Attendance, Register New Student, View Attendance, etc.
2. Registration:
The user must first register their details with the application; this is a one-time registration. The user enters details such as enrollment number and student name so that their face is registered against that enrollment number and ID.
3. Face Registration:
Once the user has registered details such as student name and enrollment number, the system captures pictures of the student to account for variations and stores them.
4. Taking Attendance:
When taking attendance, the camera detects the face of the student and displays the ID and name with which the student was registered.
5. Mark Attendance:
After the face is detected with the correct name and ID, attendance can be marked by pressing the 'Q' key. The attendance is recorded with the exact time.
6. View Attendance:
Fig 5.2.1 Workflow of the system
def TakeImages():
    global alphaerror, numerror, trained, done, invalidentry
    Id = entry0.get()
    name = entry1.get()
    # Proceed only when the Id is numeric and the name contains letters only
    if is_number(Id) and name.replace(" ", "").isalpha():
        cam = cv2.VideoCapture(0, cv2.CAP_DSHOW)
        harcascadePath = "haarcascade_frontalface_default.xml"
        detector = cv2.CascadeClassifier(harcascadePath)
        sampleNum = 0
        while True:
            ret, img = cam.read()
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                sampleNum = sampleNum + 1
                # Save the cropped grayscale face as name.Id.sampleNum.jpg
                cv2.imwrite("TrainingImage\\" + name + "." + Id + '.'
                            + str(sampleNum) + ".jpg", gray[y:y + h, x:x + w])
            cv2.imshow('frame', img)
            # Stop on 'q' or once enough samples have been collected
            if cv2.waitKey(100) & 0xFF == ord('q'):
                break
            elif sampleNum > 100:
                break
        cam.release()
        cv2.destroyAllWindows()
        # Store the student's Id and name in the SQLite database
        conn = sqlite3.connect('Face.db')
        c = conn.cursor()
        c.execute("INSERT INTO StudentDetails (Id, Name) VALUES (?, ?)",
                  (Id, name))
        conn.commit()
        conn.close()
        # Show the "image saved" notification for 6 seconds, then train the model
        c5 = canvas.create_image(293.0, 485.5, image=imagesaved)
        canvas.after(6000, lambda: canvas.itemconfig(c5, state='hidden'))
        TrainImages()
    else:
        # Id is numeric but the name is not alphabetic
        if is_number(Id):
            c1 = canvas.create_image(293.0, 485.5, image=alphaerror)
            canvas.after(6000, lambda: canvas.itemconfig(c1, state='hidden'))
        # Name is alphabetic but the Id is not numeric
        if name.replace(" ", "").isalpha():
            c2 = canvas.create_image(293.0, 485.5, image=numerror)
            canvas.after(6000, lambda: canvas.itemconfig(c2, state='hidden'))
        # Both fields left empty
        if name.strip() == "" and Id.strip() == "":
            c3 = canvas.create_image(293.0, 485.5, image=invalidentry)
            canvas.after(6000, lambda: canvas.itemconfig(c3, state='hidden'))
        # Fields entered the wrong way around
        if is_number(name) and Id.replace(" ", "").isalpha():
            c4 = canvas.create_image(293.0, 485.5, image=invalidentry)
            canvas.after(6000, lambda: canvas.itemconfig(c4, state='hidden'))
def TrainImages():
    # Train the LBPH recognizer on every face image saved in TrainingImage
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(harcascadePath)
    # getImagesAndLabels() gathers the saved face crops and their Ids
    # (defined elsewhere in the project)
    faces, Id = getImagesAndLabels("TrainingImage")
    recognizer.train(faces, np.array(Id))
    recognizer.save("TrainingImageLabel\\Trainner.yml")
    # Record the trained model path and training date in the database
    conn = sqlite3.connect('Face.db')
    c = conn.cursor()
    c.execute("INSERT INTO TrainingDetails (ModelPath, Date) VALUES (?, ?)",
              ("TrainingImageLabel\\Trainner.yml", datetime.datetime.now()))
    conn.commit()
    conn.close()
    # Show the "trained" notification for 6 seconds
    c4 = canvas.create_image(293.0, 485.5, image=trained)
    canvas.after(6000, lambda: canvas.itemconfig(c4, state='hidden'))
def markattendance():
    # Window that hosts the "Mark Attendance" session
    extra_window2 = tk.Toplevel()
    WIN_WIDTH = 416
    WIN_HEIGHT = 500
    extra_window2.geometry(
        f"{WIN_WIDTH}x{WIN_HEIGHT}"
        f"+{(get_monitors()[0].width - WIN_WIDTH) // 2}"
        f"+{(get_monitors()[0].height - WIN_HEIGHT) // 2}")
    extra_window2.configure(bg="#FFFFFF")
    extra_window2.title("Mark Attendance")

    # Load the registered students from the database
    conn = sqlite3.connect('Face.db')
    c = conn.cursor()
    c.execute("SELECT * FROM StudentDetails")
    df = pd.DataFrame(c.fetchall(), columns=['Id', 'Name'])

    # Load the trained LBPH model and the Haar Cascade detector used below
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read("TrainingImageLabel\\Trainner.yml")
    faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

    cam = cv2.VideoCapture(0, cv2.CAP_DSHOW)
    font = cv2.FONT_HERSHEY_SIMPLEX
    col_names = ['Id', 'Name', 'Date', 'Time']
    attendance = pd.DataFrame(columns=col_names)
    ts = time.time()
    date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')

    # Optional custom name for the attendance file
    custom_name = entrycustomname.get().strip()
    if custom_name != "":
        fileName = f"Attendance\\Attendance_{date}_{custom_name}.csv"
    else:
        fileName = f"Attendance\\Attendance_{date}.csv"

    while True:
        ret, im = cam.read()
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, 1.2, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(im, (x, y), (x + w, y + h), (225, 0, 0), 2)
            Id, conf = recognizer.predict(gray[y:y + h, x:x + w])
            if conf < 50:
                # Known face: look up the name and add a row to the attendance frame
                timeStamp = datetime.datetime.fromtimestamp(ts).strftime(
                    '%I:%M:%S %p')  # include AM/PM in the timestamp
                aa = df.loc[df['Id'] == Id]['Name'].values
                tt = str(Id) + "-" + aa[0]  # aa is an array; the first element is the name
                attendance.loc[len(attendance)] = [Id, aa[0], date, timeStamp]
            else:
                Id = ' '
                tt = str(Id)
            if conf > 75:
                # Very low confidence: treat as unknown and save the face image
                noOfFile = len(os.listdir("ImagesUnknown")) + 1
                cv2.imwrite(f"ImagesUnknown\\Image{noOfFile}.jpg",
                            im[y:y + h, x:x + w])
            cv2.putText(im, str(tt), (x, y + h), font, 1, (255, 255, 255), 2)
        # Keep only the first record per student
        attendance = attendance.drop_duplicates(subset=['Id'], keep='first')
        cv2.imshow('Face Recognition Window (Press q to close)', im)
        key = cv2.waitKey(1)
        if key == ord('q') or key == ord('Q'):  # close the window on 'q' or 'Q'
            break

    # Write the attendance sheet to the CSV file built above and release the camera
    attendance.to_csv(fileName, index=False)
    cam.release()
    cv2.destroyAllWindows()
    c6 = canvas.create_image(207, 368, image=done)
    canvas.after(6000, lambda: canvas.itemconfig(c6, state='hidden'))
    conn.close()
To remove a face from the system:
def removeface():
    global removed, deleteinvalid

    def btn_clicked():
        student_id = entry0.get()
        if is_number(student_id):
            # Delete the student's record from the database
            conn = sqlite3.connect('Face.db')
            c = conn.cursor()
            c.execute("DELETE FROM StudentDetails WHERE Id=?", (student_id,))
            conn.commit()
            conn.close()
            # Delete every training image saved for this Id
            # (files are named name.Id.sampleNum.jpg)
            images = glob.glob(f"TrainingImage\\*.{student_id}.*.jpg")
            for image in images:
                os.remove(image)
            print(f"User with ID {student_id} deleted successfully.")
            c6 = canvas.create_image(209, 380, image=removed)
            canvas.after(6000, lambda: canvas.itemconfig(c6, state='hidden'))
            # Retrain the model so the removed face is no longer recognized
            TrainImages()
        else:
            c6 = canvas.create_image(211, 380, image=deleteinvalid)
            canvas.after(6000, lambda: canvas.itemconfig(c6, state='hidden'))
            print("Invalid input. Please enter a valid ID.")
CHAPTER 7. IMPLEMENTATION
1. Graphical Interface
The main window provides the following options:
• View attendance
• Delete users
• View info
2. User Registration
To register a user:
The new window captures images of the person to be registered.
After capturing 75 images, the window closes and the model is trained on the captured images.
To give the file a custom name, enter it in the custom name field; otherwise it is saved with the default date-based name.
Then click Start; a new window appears in which the face is detected and the name and number of the registered user are shown.
View Attendance
Click the View Registered button to view the registered users.
Delete users
After clicking the Remove Face button, a new window pops up. Enter the ID of the user you want to remove from the system and click the Remove button; this deletes every record for that user, including details, images, and attendance.
To remove all users at once, click Clear All; every user, along with all images and related records, is deleted.
CHAPTER 8. PERFORMANCE ANALYSIS
8.1 Testing
A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software. Testing is a set of activities that can be planned in advance and conducted systematically. The underlying motivation of program testing is to affirm software quality with methods that can be applied economically and effectively to both large- and small-scale systems.
A strategy for software testing may also be viewed in the context of the spiral. Unit testing begins at the vertex of the spiral and concentrates on each unit of the software as implemented in source code. Testing progresses by moving outward along the spiral to integration testing, where the focus is on the design and construction of the software architecture. Taking another turn outward on the spiral, we encounter validation testing, where requirements established as part of software requirements analysis are validated against the software that has been constructed. Finally, we arrive at system testing, where the software and other system elements are tested as a whole.
White box testing, on the other hand, delves into the internal structure
and code of the system to evaluate its logic, algorithms, and data flow.
Testers analyze the source code and design tests based on the system's
implementation details, aiming to achieve thorough coverage of code
paths and functionalities. In the case of the face recognition attendance
system, white box testing would involve examining the algorithms used
for face detection and recognition, verifying the accuracy of database
operations, and assessing error handling mechanisms within the code.
Testers would write test cases targeting specific functions, methods, and
branches of the code to ensure they execute correctly and handle edge
cases appropriately. White box testing helps uncover bugs, logical errors,
and performance bottlenecks that may not be apparent through black box
testing alone, thus improving the overall quality and reliability of the
system.
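For example, a white box test for this system might exercise the input-validation branch of the registration code and the database insert and delete operations in isolation. The sketch below uses Python's unittest module with an in-memory SQLite database; the simplified is_number helper and the StudentDetails schema are assumptions based on the listings earlier in this report.

import sqlite3
import unittest

def is_number(s):
    # Simplified stand-in for the project's helper: True if s parses as a number
    try:
        float(s)
        return True
    except ValueError:
        return False

class RegistrationTests(unittest.TestCase):
    def setUp(self):
        # In-memory database so the test never touches the real Face.db
        self.conn = sqlite3.connect(':memory:')
        self.conn.execute("CREATE TABLE StudentDetails (Id INTEGER, Name TEXT)")

    def test_id_validation_branches(self):
        self.assertTrue(is_number("42"))      # valid enrollment number
        self.assertFalse(is_number("forty"))  # letters are rejected

    def test_insert_and_delete_student(self):
        self.conn.execute("INSERT INTO StudentDetails (Id, Name) VALUES (?, ?)",
                          (1, "Alice"))
        self.conn.execute("DELETE FROM StudentDetails WHERE Id=?", (1,))
        rows = self.conn.execute("SELECT * FROM StudentDetails").fetchall()
        self.assertEqual(rows, [])            # record removed as expected

if __name__ == '__main__':
    unittest.main()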
8.2.3 PERFORMANCE TESTING
8.3 Test cases
Test case: view attendance for a specific date or time range; expected result: the system displays the records correctly.
CHAPTER 9. CONCLUSION
CHAPTER 10. FUTURE SCOPE
4. Mobile and Cloud Integration: Expanding the system's
accessibility and flexibility by developing mobile applications and
cloud-based solutions would allow users to mark attendance remotely
and access attendance records from any device or location.
Leveraging cloud infrastructure can also enhance scalability, data
redundancy, and disaster recovery capabilities, ensuring uninterrupted
service availability.
and adapt to meet the evolving needs and challenges of attendance
management in diverse settings.