Hci Lab Aim3706

The document outlines the implementation of user-interface design principles in Human-Computer Interaction (HCI) and Cognitive Technology Labs, focusing on psychological aspects, UI architecture, interaction styles, and software design patterns. It includes Python program examples demonstrating MVC architecture, direct manipulation, and multimodal interfaces, emphasizing usability and cognitive load management. Key features include user tracking, feedback mechanisms, and support for various input methods like keyboard, mouse, and voice.
Human-Computer Interface and Cognitive Technology Lab


1. Implementation of Psychological aspects pertaining to user-interface design.
Designing user interfaces that take into account psychological aspects can greatly improve user
experience. This is often explored in Human-Computer Interaction (HCI) and Cognitive
Technology Lab settings. The goal is to design interfaces that are intuitive, easy to use, and that
reduce cognitive load on users.

Here is a Python program that could serve as a basic framework to implement psychological
aspects of user-interface design, such as usability testing, understanding user attention, and
cognitive load management:

Program Outline:

 User Attention and Focus: Track user interaction with the interface to see where the user’s attention is most focused.

 Cognitive Load Monitoring: Use a timer to monitor how long users spend on different tasks to measure complexity and difficulty.

 Feedback and Learning: Provide feedback based on user actions to help users learn how to interact with the system better.

 Customization: Adjust user interface elements based on user behavior or preferences.

Requirements:

 Python

 Tkinter for GUI

 Simple logging of user behavior (interaction tracking)

Program Code:

import tkinter as tk
from tkinter import messagebox
import time
import random

class CognitiveInterface:
    def __init__(self, root):
        self.root = root
        self.root.title("HCI - Cognitive Technology Lab")
        self.root.geometry("600x400")
        self.task_label = tk.Label(root, text="Task: Click the buttons as quickly as possible",
                                   font=("Arial", 14))
        self.task_label.pack(pady=20)
        self.buttons_frame = tk.Frame(root)
        self.buttons_frame.pack(pady=20)

        # List of buttons to simulate tasks
        self.buttons = []
        for i in range(3):
            button = tk.Button(self.buttons_frame, text=f"Task {i+1}", font=("Arial", 12),
                               command=lambda i=i: self.track_user_behavior(i))
            button.pack(side=tk.LEFT, padx=10)
            self.buttons.append(button)
        self.feedback_label = tk.Label(root, text="Feedback: Perform the task", font=("Arial", 12))
        self.feedback_label.pack(pady=10)
        self.start_time = time.time()

    def track_user_behavior(self, task_id):
        """
        Track user interaction time and provide feedback based on the task completion time.
        """
        end_time = time.time()
        reaction_time = end_time - self.start_time

        # Cognitive Load Monitoring (simplified)
        if reaction_time < 2:
            feedback = "Great job! Quick reaction."
        elif reaction_time < 5:
            feedback = "Good, but you can be faster."
        else:
            feedback = "That took quite a while. Try again."

        self.feedback_label.config(text=f"Feedback: {feedback}")
        self.log_user_action(task_id, reaction_time)

        # Reset timer for the next task
        self.start_time = time.time()

    def log_user_action(self, task_id, reaction_time):
        """
        Simulate logging user interaction data (user attention & cognitive load).
        """
        print(f"User completed Task {task_id+1} in {reaction_time:.2f} seconds.")

        # Simulate task complexity increase
        if random.random() < 0.3:
            messagebox.showinfo("Cognitive Load Increase",
                                "The next task will be slightly more complex!")

if __name__ == "__main__":
    root = tk.Tk()
    app = CognitiveInterface(root)
    root.mainloop()
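The program above only prints interaction data to the console. In a real lab session you would persist each record for later analysis; a minimal sketch using Python's standard logging module (the file name hci_lab.log is an arbitrary choice, not part of the program above):

```python
import logging

# Configure a logger that appends timestamped interaction records to a file.
logger = logging.getLogger("hci_lab")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("hci_lab.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
logger.addHandler(handler)

def log_user_action(task_id, reaction_time):
    """Persist one interaction record instead of printing it."""
    logger.info("Task %d completed in %.2f seconds", task_id + 1, reaction_time)

log_user_action(0, 1.37)  # appends one timestamped line to hci_lab.log
```

Swapping this in for the print call keeps a permanent, timestamped trace of user attention and cognitive-load data across sessions.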

2. Implementation of User interface architecture and design.

Implementing user interface (UI) architecture and design requires planning and structuring
different layers of interaction between the user and the system. In a Human-Computer
Interface (HCI) and Cognitive Technology Lab context, the goal is to ensure the design is
intuitive, modular, scalable, and cognitively friendly. This involves setting up a clean
separation of UI elements, logic, and data.
For this example, we’ll create a program that uses an MVC (Model-View-Controller)
architecture. This separation of concerns allows for clean design, modularity, and ease of
testing different parts of the UI independently. The program will simulate a task interface
where users perform operations, similar to applications in cognitive technology labs.
MVC Architecture Overview:
 Model: Handles data and business logic.
 View: Handles the graphical layout and UI elements.
 Controller: Acts as an intermediary, controlling the logic and interaction between the Model and View.
Features of the Program:
 View Layer: The user interface with buttons and labels.
 Model Layer: Handles user data and task tracking.
 Controller Layer: Controls the flow of data and user interaction.

Program Code: Implementation


import tkinter as tk
from tkinter import messagebox
import time

# Model: Handles user data and tasks
class TaskModel:
    def __init__(self):
        self.tasks_completed = 0
        self.start_time = time.time()

    def get_reaction_time(self):
        """Return the reaction time for a task."""
        end_time = time.time()
        reaction_time = end_time - self.start_time
        self.start_time = time.time()  # Reset start time for next task
        return reaction_time

    def complete_task(self):
        """Track the number of tasks completed."""
        self.tasks_completed += 1

    def reset_tasks(self):
        """Reset task completion data."""
        self.tasks_completed = 0
        self.start_time = time.time()

# View: Handles the graphical user interface
class TaskView:
    def __init__(self, root):
        self.root = root
        self.root.title("HCI - Cognitive Technology Lab")
        self.root.geometry("600x400")

        self.task_label = tk.Label(root, text="Task: Click the buttons to perform tasks",
                                   font=("Arial", 14))
        self.task_label.pack(pady=20)

        self.buttons_frame = tk.Frame(root)
        self.buttons_frame.pack(pady=20)

        # Task buttons
        self.buttons = []
        for i in range(3):
            button = tk.Button(self.buttons_frame, text=f"Task {i+1}", font=("Arial", 12))
            button.pack(side=tk.LEFT, padx=10)
            self.buttons.append(button)

        self.feedback_label = tk.Label(root, text="Feedback: Ready to start", font=("Arial", 12))
        self.feedback_label.pack(pady=10)

        # Reset Button
        self.reset_button = tk.Button(root, text="Reset Tasks", font=("Arial", 12), bg="red",
                                      fg="white")
        self.reset_button.pack(pady=10)

# Controller: Controls interaction between Model and View
class TaskController:
    def __init__(self, model, view):
        self.model = model
        self.view = view
        self.bind_events()

    def bind_events(self):
        """Bind the buttons to the event handlers."""
        for idx, button in enumerate(self.view.buttons):
            button.config(command=lambda i=idx: self.handle_task(i))
        self.view.reset_button.config(command=self.reset_tasks)

    def handle_task(self, task_id):
        """Handle task completion and provide feedback."""
        reaction_time = self.model.get_reaction_time()
        self.model.complete_task()

        # Provide feedback based on reaction time
        if reaction_time < 2:
            feedback = f"Task {task_id+1}: Quick response! {reaction_time:.2f}s"
        elif reaction_time < 5:
            feedback = f"Task {task_id+1}: Good, but you can be faster! {reaction_time:.2f}s"
        else:
            feedback = f"Task {task_id+1}: That took too long. {reaction_time:.2f}s"

        self.view.feedback_label.config(text=f"Feedback: {feedback}")
        print(f"Task {task_id+1} completed in {reaction_time:.2f} seconds.")

    def reset_tasks(self):
        """Reset all tasks and user data."""
        self.model.reset_tasks()
        self.view.feedback_label.config(text="Feedback: All tasks reset")
        print("Tasks have been reset.")

# Main Application Setup
if __name__ == "__main__":
    root = tk.Tk()

    # Model-View-Controller Setup
    model = TaskModel()
    view = TaskView(root)
    controller = TaskController(model, view)

    root.mainloop()
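One benefit of the MVC separation noted earlier is that the Model can be tested in isolation, with no Tkinter window at all. A standalone check (the Model is re-declared here so the sketch is self-contained):

```python
import time

class TaskModel:
    """Copy of the Model layer above -- it has no Tkinter dependency."""
    def __init__(self):
        self.tasks_completed = 0
        self.start_time = time.time()

    def get_reaction_time(self):
        end_time = time.time()
        reaction_time = end_time - self.start_time
        self.start_time = time.time()
        return reaction_time

    def complete_task(self):
        self.tasks_completed += 1

# Standalone check, no GUI needed
model = TaskModel()
model.complete_task()
model.complete_task()
assert model.tasks_completed == 2

time.sleep(0.05)
assert model.get_reaction_time() >= 0.04  # at least the slept duration (with margin)
print("Model behaves as expected")
```

The View and Controller would need a display to exercise, but the business logic can be verified this cheaply on every change.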

3. Implementation of Interaction styles and design principles.


In the context of Human-Computer Interface (HCI) and Cognitive Technology Labs, implementing interaction styles and adhering to design principles is crucial for creating user-friendly and efficient interfaces. Interaction styles refer to the different ways users can interact with the system (e.g., direct manipulation, form fill-in, command line), while design principles guide the structure of the interface to improve usability.

Example Program
This program demonstrates different interaction styles like direct manipulation, menu selection, and
form fill-in, while incorporating core design principles such as feedback, affordance, and
consistency.

We'll build a simple UI where users can:

1. Interact with items using direct manipulation (drag-and-drop).

2. Use a menu to perform certain actions.

3. Fill out a form for submitting data.

The program will use Tkinter for building the user interface and show how different interaction styles
can be implemented while following good design practices.

Core Interaction Styles

 Direct Manipulation: Users will be able to drag and drop objects.

 Menu Selection: Users can select actions via a menu.

 Form Fill-in: Users can enter data in a form and submit it.

Design Principles

 Feedback: Immediate visual feedback after every interaction.

 Affordance: UI elements clearly indicate how they can be used (e.g., buttons look clickable).

 Consistency: Elements behave in a consistent manner across the interface.

Program Code

import tkinter as tk
from tkinter import messagebox

# Direct Manipulation Handler: Drag-and-drop items
class DraggableWidget(tk.Label):
    def __init__(self, master, **kwargs):
        super().__init__(master, **kwargs)
        self.bind("<Button-1>", self.on_click)
        self.bind("<B1-Motion>", self.on_drag)

    def on_click(self, event):
        """Remember initial click position."""
        self.start_x = event.x
        self.start_y = event.y

    def on_drag(self, event):
        """Move the widget based on mouse drag."""
        x = self.winfo_x() - self.start_x + event.x
        y = self.winfo_y() - self.start_y + event.y
        self.place(x=x, y=y)

# Main Application with different interaction styles
class HCIApp:
    def __init__(self, root):
        self.root = root
        self.root.title("Interaction Styles & Design Principles")
        self.root.geometry("600x400")

        # Create Menu Interaction (Menu Selection)
        self.create_menu()

        # Create a Direct Manipulation Widget
        self.create_drag_and_drop_area()

        # Create a Form (Form Fill-In Interaction)
        self.create_form()

    def create_menu(self):
        """Create the main menu with actions."""
        menubar = tk.Menu(self.root)

        # File Menu
        file_menu = tk.Menu(menubar, tearoff=0)
        file_menu.add_command(label="New", command=self.on_new)
        file_menu.add_command(label="Save", command=self.on_save)
        file_menu.add_separator()
        file_menu.add_command(label="Exit", command=self.root.quit)
        menubar.add_cascade(label="File", menu=file_menu)

        # Help Menu
        help_menu = tk.Menu(menubar, tearoff=0)
        help_menu.add_command(label="About", command=self.on_about)
        menubar.add_cascade(label="Help", menu=help_menu)

        self.root.config(menu=menubar)

    def create_drag_and_drop_area(self):
        """Create a draggable widget (Direct Manipulation)."""
        self.drag_label = DraggableWidget(self.root, text="Drag Me", bg="lightblue", width=10)
        self.drag_label.place(x=50, y=50)

    def create_form(self):
        """Create a form for user input (Form Fill-In Interaction)."""
        form_frame = tk.Frame(self.root)
        form_frame.pack(pady=20)

        tk.Label(form_frame, text="Name: ").grid(row=0, column=0, padx=5, pady=5)
        self.name_entry = tk.Entry(form_frame)
        self.name_entry.grid(row=0, column=1, padx=5, pady=5)

        tk.Label(form_frame, text="Age: ").grid(row=1, column=0, padx=5, pady=5)
        self.age_entry = tk.Entry(form_frame)
        self.age_entry.grid(row=1, column=1, padx=5, pady=5)

        submit_button = tk.Button(form_frame, text="Submit", command=self.on_form_submit)
        submit_button.grid(row=2, column=0, columnspan=2, pady=10)

    def on_new(self):
        """Callback for 'New' menu item."""
        messagebox.showinfo("New", "New file created!")

    def on_save(self):
        """Callback for 'Save' menu item."""
        messagebox.showinfo("Save", "File saved!")

    def on_about(self):
        """Callback for 'About' menu item."""
        messagebox.showinfo("About", "HCI - Interaction Styles Demo\nDesigned for Cognitive Technology Lab.")

    def on_form_submit(self):
        """Submit form data."""
        name = self.name_entry.get()
        age = self.age_entry.get()
        if name and age.isdigit():
            messagebox.showinfo("Form Submitted", f"Name: {name}, Age: {age}")
        else:
            messagebox.showwarning("Input Error", "Please enter a valid name and age.")

# Main Application Setup
if __name__ == "__main__":
    root = tk.Tk()
    app = HCIApp(root)
    root.mainloop()
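The command-line interaction style mentioned at the start of this section is not part of the GUI demo above. A minimal sketch of that style using Python's standard cmd module (the shell name and commands are illustrative assumptions, not part of the program above):

```python
import cmd

class TaskShell(cmd.Cmd):
    """Command-line interaction style: the user types explicit commands."""
    intro = "Type 'help' to list commands."
    prompt = "(hci) "

    def do_greet(self, arg):
        """greet NAME -- greet the user (immediate feedback principle)."""
        print(f"Hello, {arg or 'user'}!")

    def do_quit(self, arg):
        """quit -- exit the shell."""
        print("Goodbye.")
        return True

shell = TaskShell()
shell.onecmd("greet Ada")   # dispatches exactly as an interactive command would
# shell.cmdloop() would start the full interactive read-eval loop
```

Note how the style trades the discoverability of menus for speed and precision: the built-in help command supplies affordance, and consistent command naming supplies consistency.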

4. Implementation of Software design patterns and multimodal interfaces.

In the context of Human-Computer Interface (HCI) and Cognitive Technology Labs, software design patterns and multimodal interfaces play a significant role in designing flexible, scalable, and user-friendly systems. This program implements a common software design pattern, Model-View-Controller (MVC), and combines it with a multimodal interface, which allows the user to interact with the system using multiple modalities: voice, mouse, and keyboard input.

Key Concepts

1. Software Design Patterns: Reusable solutions to common design problems. The MVC pattern separates concerns by dividing the application logic into:

 Model: Handles data and business logic.

 View: Manages what is displayed to the user.

 Controller: Handles user input and interacts with the model.

2. Multimodal Interfaces: These interfaces support more than one method of input, such as
keyboard, mouse, and voice commands. They enhance usability by allowing users to interact with the
system in different ways.

Example Program

We will implement a basic MVC architecture and a multimodal interface using the
following:

 Keyboard input for typing commands or data.

 Mouse input for button clicks.

 Voice input using the speech_recognition library for simple commands.

The program will be a basic application where users can input text (via keyboard or voice),
and the system will display that text. The application will demonstrate the MVC design
pattern, along with multimodal input handling.
Requirements:

 Tkinter for the graphical interface (View).

 speech_recognition library to process voice commands (part of Controller).

Program Code
import tkinter as tk
from tkinter import messagebox
import speech_recognition as sr

# Model: Handles the data and business logic
class TextModel:
    def __init__(self):
        self.text = ""

    def set_text(self, new_text):
        self.text = new_text

    def get_text(self):
        return self.text

# View: Handles what is displayed to the user (GUI)
class TextView:
    def __init__(self, root):
        self.root = root
        self.root.title("MVC with Multimodal Interface")
        self.root.geometry("500x300")

        # Label to display text
        self.display_label = tk.Label(self.root, text="Your text will appear here.", font=("Arial", 14))
        self.display_label.pack(pady=20)

        # Entry for keyboard input
        self.input_entry = tk.Entry(self.root, font=("Arial", 12))
        self.input_entry.pack(pady=10)

        # Button for mouse input
        self.submit_button = tk.Button(self.root, text="Submit", command=self.submit_text,
                                       font=("Arial", 12))
        self.submit_button.pack(pady=10)

        # Button to trigger voice input
        self.voice_button = tk.Button(self.root, text="Speak", command=self.speak_text,
                                      font=("Arial", 12))
        self.voice_button.pack(pady=10)

    def update_display(self, text):
        """Update the label with new text."""
        self.display_label.config(text=text)

    def submit_text(self):
        """Submit text from keyboard input."""
        self.root.event_generate("<<TextSubmit>>")

    def speak_text(self):
        """Trigger the voice input event."""
        self.root.event_generate("<<VoiceInput>>")

# Controller: Handles user input and updates Model and View
class TextController:
    def __init__(self, model, view):
        self.model = model
        self.view = view

        # Bind keyboard submission and voice input events
        self.view.root.bind("<<TextSubmit>>", self.handle_text_submit)
        self.view.root.bind("<<VoiceInput>>", self.handle_voice_input)

    def handle_text_submit(self, event):
        """Handle the text submission from the entry field."""
        user_input = self.view.input_entry.get()
        if user_input:
            self.model.set_text(user_input)
            self.view.update_display(self.model.get_text())
        else:
            messagebox.showwarning("Input Error", "Please enter some text.")

    def handle_voice_input(self, event):
        """Handle voice input."""
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            print("Listening...")
            audio = recognizer.listen(source)

        try:
            recognized_text = recognizer.recognize_google(audio)
            self.model.set_text(recognized_text)
            self.view.update_display(self.model.get_text())
        except sr.UnknownValueError:
            messagebox.showerror("Voice Recognition Error", "Could not understand the voice input.")
        except sr.RequestError:
            messagebox.showerror("Service Error", "Could not request results from the speech recognition service.")

# Main Application Setup
if __name__ == "__main__":
    root = tk.Tk()

    # Initialize the MVC components
    model = TextModel()
    view = TextView(root)
    controller = TextController(model, view)

    root.mainloop()
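The Controller above binds each modality to its own handler. An alternative organization, shown here as a hypothetical sketch (not part of the program above), is a dispatch table keyed by modality, so that adding a fourth input method means adding one table entry rather than a new bind:

```python
# Hypothetical modality dispatcher illustrating the same multimodal idea.
class TextModel:
    """Same Model role as above, re-declared so this sketch is self-contained."""
    def __init__(self):
        self.text = ""

    def set_text(self, new_text):
        self.text = new_text

def from_keyboard(model, data):
    model.set_text(data)

def from_voice(model, data):
    # Recognized speech is often normalized before use; lower-casing is one example
    model.set_text(data.lower())

HANDLERS = {"keyboard": from_keyboard, "voice": from_voice}

def dispatch(model, modality, data):
    """Route input from any modality through one uniform entry point."""
    HANDLERS[modality](model, data)

model = TextModel()
dispatch(model, "voice", "OPEN File")
print(model.text)  # -> open file
```

This keeps per-modality quirks (normalization, error handling) inside each handler while the rest of the application sees a single interface.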

5. Implementation of design evaluation.

Program: Design Evaluation in Human-Computer Interaction (HCI)

Design evaluation is a critical part of Human-Computer Interaction (HCI), where a system's usability and performance are assessed to determine how well it meets user needs. This program simulates a basic design evaluation tool that gathers user feedback and evaluates a user interface design based on specific metrics such as usability, efficiency, and satisfaction.

Key Concepts in the Program:

1. Usability Evaluation Metrics:

 Effectiveness: How accurately and completely the user achieves their goals.
 Efficiency: How quickly the user can accomplish tasks.
 Satisfaction: How pleasant the experience is for the user.

2. Survey-Based Feedback:

 The program will use a simple Likert scale (1-5 rating) to gather user feedback on the above metrics.

3. GUI-Based Design Evaluation:

 Using Tkinter, the program will display a graphical user interface where users can rate their experience, and the program will calculate an overall score based on their input.

Example Program

In this program, users can rate a sample interface based on effectiveness, efficiency, and
satisfaction. The system will then compute an average score and display the evaluation
results.
Program Code
import tkinter as tk
from tkinter import messagebox

# Model: Handles the feedback data and calculations
class EvaluationModel:
    def __init__(self):
        self.scores = {'effectiveness': 0, 'efficiency': 0, 'satisfaction': 0}

    def set_score(self, category, score):
        """Set the score for a specific category."""
        self.scores[category] = score

    def calculate_average(self):
        """Calculate the average score across all categories."""
        return sum(self.scores.values()) / len(self.scores)

# View: Handles the GUI and user input
class EvaluationView:
    def __init__(self, root):
        self.root = root
        self.root.title("Design Evaluation")
        self.root.geometry("400x300")

        # Introduction label
        self.intro_label = tk.Label(self.root,
                                    text="Please rate the design on the following metrics (1-5):",
                                    font=("Arial", 12))
        self.intro_label.pack(pady=10)

        # Labels and dropdowns for each metric
        self.eff_label = tk.Label(self.root, text="Effectiveness:", font=("Arial", 10))
        self.eff_label.pack(pady=5)
        self.eff_var = tk.IntVar()
        self.eff_menu = tk.OptionMenu(self.root, self.eff_var, *[1, 2, 3, 4, 5])
        self.eff_menu.pack()

        self.effy_label = tk.Label(self.root, text="Efficiency:", font=("Arial", 10))
        self.effy_label.pack(pady=5)
        self.effy_var = tk.IntVar()
        self.effy_menu = tk.OptionMenu(self.root, self.effy_var, *[1, 2, 3, 4, 5])
        self.effy_menu.pack()

        self.sat_label = tk.Label(self.root, text="Satisfaction:", font=("Arial", 10))
        self.sat_label.pack(pady=5)
        self.sat_var = tk.IntVar()
        self.sat_menu = tk.OptionMenu(self.root, self.sat_var, *[1, 2, 3, 4, 5])
        self.sat_menu.pack()

        # Submit button
        self.submit_button = tk.Button(self.root, text="Submit Evaluation",
                                       command=self.submit_evaluation, font=("Arial", 12))
        self.submit_button.pack(pady=20)

        # Result label (empty initially)
        self.result_label = tk.Label(self.root, text="", font=("Arial", 12))
        self.result_label.pack(pady=10)

    def submit_evaluation(self):
        """Submit the evaluation and trigger the custom event."""
        self.root.event_generate("<<SubmitEvaluation>>")

    def show_result(self, average_score):
        """Display the average score in the result label."""
        self.result_label.config(text=f"Average Score: {average_score:.2f} / 5")

# Controller: Manages the interaction between model and view
class EvaluationController:
    def __init__(self, model, view):
        self.model = model
        self.view = view

        # Bind the submit event to the handler
        self.view.root.bind("<<SubmitEvaluation>>", self.handle_submit)

    def handle_submit(self, event):
        """Handle the evaluation submission."""
        try:
            # Get the scores from the view
            eff_score = self.view.eff_var.get()
            effy_score = self.view.effy_var.get()
            sat_score = self.view.sat_var.get()

            # Ensure all scores are valid
            if eff_score == 0 or effy_score == 0 or sat_score == 0:
                messagebox.showerror("Input Error", "Please provide a rating for all categories.")
                return

            # Update the model with the scores
            self.model.set_score('effectiveness', eff_score)
            self.model.set_score('efficiency', effy_score)
            self.model.set_score('satisfaction', sat_score)

            # Calculate the average score and show the result in the view
            average_score = self.model.calculate_average()
            self.view.show_result(average_score)

        except ValueError:
            messagebox.showerror("Input Error", "Please provide valid ratings for all categories.")

# Main Application Setup
if __name__ == "__main__":
    root = tk.Tk()

    # Initialize the MVC components
    model = EvaluationModel()
    view = EvaluationView(root)
    controller = EvaluationController(model, view)

    # Start the main application loop
    root.mainloop()
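Because the Likert scores live in the Model, the averaging logic can be checked without opening the GUI, the same MVC benefit used throughout this manual. A standalone check (the Model is re-declared here so the sketch is self-contained):

```python
class EvaluationModel:
    """Copy of the Model layer above; it has no Tkinter dependency."""
    def __init__(self):
        self.scores = {'effectiveness': 0, 'efficiency': 0, 'satisfaction': 0}

    def set_score(self, category, score):
        self.scores[category] = score

    def calculate_average(self):
        return sum(self.scores.values()) / len(self.scores)

m = EvaluationModel()
m.set_score('effectiveness', 4)
m.set_score('efficiency', 5)
m.set_score('satisfaction', 3)
print(m.calculate_average())  # -> 4.0, i.e. (4 + 5 + 3) / 3
```

The same check extends naturally if more metrics are added to the scores dictionary, since calculate_average divides by the number of categories.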

6. Implementation of Emerging paradigms for user interaction.

Program: Implementation of Emerging Paradigms for User Interaction

Emerging paradigms for user interaction refer to novel ways in which users interact with
computer systems and digital environments. These paradigms are often driven by
advancements in technology, including voice-based interfaces, gesture-based interaction,
multimodal interaction, and augmented reality (AR). In this program, we will focus on an
implementation of a gesture-based user interface using a webcam, allowing users to control
a GUI with simple hand gestures.

We will use the OpenCV library to capture video from a webcam and detect hand gestures.
For simplicity, we will detect a single hand gesture (like raising a hand) to control a graphical
element, such as opening or closing a window.

Key Concepts:

1. Gesture-Based Interaction: Detects specific hand gestures using a webcam and translates them into actions in the user interface.
2. OpenCV: A library for real-time computer vision, used to process webcam input and detect hand gestures.
3. Tkinter: A Python library for creating GUI elements.
4. Basic Machine Learning/Computer Vision Techniques: Simple color thresholding for hand detection (using HSV color space).
Program Code: Gesture-Based Control with OpenCV and Tkinter

This program captures video from the webcam, detects a raised hand gesture, and performs a
corresponding action in the GUI, such as opening or closing a window.

import cv2
import numpy as np
import tkinter as tk
from tkinter import messagebox
import threading

# Gesture detection function using OpenCV
def detect_gesture(frame):
    """Detects a simple hand gesture by checking for a region of a specific color (skin tone)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Define skin color range in HSV
    lower_skin = np.array([0, 20, 70], dtype=np.uint8)
    upper_skin = np.array([20, 255, 255], dtype=np.uint8)

    # Threshold the HSV image to get only skin color
    mask = cv2.inRange(hsv, lower_skin, upper_skin)

    # Find contours in the mask
    contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    # If any contours are detected, return True (gesture detected)
    return len(contours) > 0

# Thread to continuously capture video
def video_capture(controller):
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            if detect_gesture(frame):
                controller.gesture_detected()

            # Display the camera feed (for debugging purposes)
            cv2.imshow("Gesture Detection", frame)

        # Exit on 'q' key
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

# GUI Controller
class InteractionController:
    def __init__(self, root):
        self.root = root
        self.root.title("Gesture-Based User Interaction")
        self.root.geometry("400x200")

        # Instruction label
        self.label = tk.Label(self.root, text="Raise your hand to open a new window", font=("Arial", 12))
        self.label.pack(pady=20)

        # Button to start gesture recognition
        self.start_button = tk.Button(self.root, text="Start Gesture Recognition",
                                      command=self.start_gesture_recognition)
        self.start_button.pack(pady=20)

        # Button to stop the gesture recognition
        self.stop_button = tk.Button(self.root, text="Stop Gesture Recognition",
                                     command=self.stop_gesture_recognition)
        self.stop_button.pack(pady=20)

        # Keep track of whether the window is open or closed
        self.window_open = False

    def gesture_detected(self):
        """Callback when a gesture is detected (hand raised)."""
        if not self.window_open:
            self.open_new_window()
        else:
            messagebox.showinfo("Gesture Detected", "Window is already open.")

    def open_new_window(self):
        """Opens a new window when a gesture is detected."""
        self.new_window = tk.Toplevel(self.root)
        self.new_window.geometry("300x150")
        self.new_window.title("New Window")
        label = tk.Label(self.new_window, text="New window opened by gesture!", font=("Arial", 12))
        label.pack(pady=20)
        self.window_open = True

        # Close the window after 5 seconds
        self.new_window.after(5000, self.close_window)

    def close_window(self):
        """Closes the opened window automatically."""
        self.new_window.destroy()
        self.window_open = False

    def start_gesture_recognition(self):
        """Starts the video capture thread."""
        # daemon=True lets the application exit even if the capture loop is still running
        self.capture_thread = threading.Thread(target=video_capture, args=(self,), daemon=True)
        self.capture_thread.start()

    def stop_gesture_recognition(self):
        """Stops the gesture recognition display."""
        messagebox.showinfo("Gesture Recognition", "Stopping gesture recognition.")

        # Terminating the OpenCV window manually
        cv2.destroyAllWindows()

# Main Application
if __name__ == "__main__":
    root = tk.Tk()
    app = InteractionController(root)
    root.mainloop()
