HCI Lab Aim3706
Here is a Python program that can serve as a basic framework for exploring psychological
aspects of user-interface design, such as usability testing, user attention tracking, and
cognitive load management:
Program Outline:
User Attention and Focus: Track user interaction with the interface to see where the user’s
attention is most focused.
Cognitive Load Monitoring: Use a timer to monitor how long users spend on different tasks
to measure complexity and difficulty.
Feedback and Learning: Provide feedback based on user actions to help users learn how to
interact with the system better.
Requirements:
Python
Program Code:
import tkinter as tk
import time

class CognitiveInterface:
    def __init__(self, root):
        self.root = root
        self.root.title("HCI - Cognitive Technology Lab")
        self.root.geometry("600x400")
        self.task_label = tk.Label(root, text="Task: Click the buttons as quickly as possible", font=("Arial", 14))
        self.task_label.pack(pady=20)
        self.buttons_frame = tk.Frame(root)
        self.buttons_frame.pack(pady=20)
        # Task buttons; each click is timed to estimate cognitive load
        for i in range(3):
            button = tk.Button(self.buttons_frame, text=f"Task {i+1}", font=("Arial", 12),
                               command=lambda i=i: self.handle_task(i))
            button.pack(side=tk.LEFT, padx=10)
        self.feedback_label = tk.Label(root, text="Feedback:", font=("Arial", 12))
        self.feedback_label.pack(pady=20)
        self.start_time = time.time()  # Start timing the first task

    def handle_task(self, task_id):
        """Measure reaction time and give feedback on the user's action."""
        reaction_time = time.time() - self.start_time
        self.start_time = time.time()  # Reset the timer for the next task
        feedback = "Fast response!" if reaction_time < 2 else "Try to respond faster."
        self.feedback_label.config(text=f"Feedback: {feedback}")
        self.log_user_action(task_id, reaction_time)

    def log_user_action(self, task_id, reaction_time):
        """Log the interaction for later usability analysis."""
        print(f"Task {task_id+1} completed in {reaction_time:.2f} seconds.")

if __name__ == "__main__":
    root = tk.Tk()
    app = CognitiveInterface(root)
    root.mainloop()
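To relate the timing data back to cognitive load, the reaction times printed by log_user_action could also be collected and averaged. A minimal sketch, assuming the times have been gathered into a list (the 2.5-second threshold is purely illustrative, not an empirical value):

import statistics

# Hypothetical reaction times (in seconds) collected from log_user_action
reaction_times = [1.8, 2.4, 3.1, 2.9]

# A longer average completion time suggests higher task complexity
average_time = statistics.mean(reaction_times)
print(f"Average reaction time: {average_time:.2f} s")
if average_time > 2.5:  # Illustrative threshold only
    print("High cognitive load: consider simplifying the interface.")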
Implementing user interface (UI) architecture and design requires planning and structuring
the different layers of interaction between the user and the system. In a Human-Computer
Interaction (HCI) and Cognitive Technology Lab context, the goal is to ensure the design is
intuitive, modular, scalable, and cognitively friendly. This involves a clean separation of
UI elements, logic, and data.
For this example, we’ll create a program that uses an MVC (Model-View-Controller)
architecture. This separation of concerns allows for clean design, modularity, and ease of
testing different parts of the UI independently. The program will simulate a task interface
where users perform operations, similar to applications in cognitive technology labs.
MVC Architecture Overview:
Model: Handles data and business logic.
View: Handles the graphical layout and UI elements.
Controller: Acts as an intermediary, controlling the logic and interaction between the Model
and View.
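As a minimal sketch of how these three layers talk to each other (the class and method names here are illustrative, not the ones used in the program below):

class Model:
    """Model: holds the data; knows nothing about the UI."""
    def __init__(self):
        self.tasks_completed = 0

class View:
    """View: renders the data; holds no business logic."""
    def show(self, count):
        print(f"Tasks completed: {count}")

class Controller:
    """Controller: turns user actions into Model updates and View refreshes."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def on_task_done(self):
        self.model.tasks_completed += 1
        self.view.show(self.model.tasks_completed)

controller = Controller(Model(), View())
controller.on_task_done()  # prints: Tasks completed: 1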
Features of the Program:
View Layer: The user interface with buttons and labels.
Model Layer: Handles user data and task tracking.
Controller Layer: Controls the flow of data and user interaction.
Program Code:
import tkinter as tk
import time

class TaskModel:
    """Model: stores task data and timing logic."""
    def __init__(self):
        self.tasks_completed = 0
        self.start_time = time.time()

    def get_reaction_time(self):
        """Return the reaction time for a task."""
        end_time = time.time()
        reaction_time = end_time - self.start_time
        self.start_time = time.time()  # Reset start time for the next task
        return reaction_time

    def complete_task(self):
        """Track the number of tasks completed."""
        self.tasks_completed += 1

    def reset_tasks(self):
        """Reset task completion data."""
        self.tasks_completed = 0
        self.start_time = time.time()

class TaskView:
    """View: builds the graphical layout and UI elements."""
    def __init__(self, root):
        self.root = root
        self.root.title("MVC Task Interface")
        self.buttons_frame = tk.Frame(root)
        self.buttons_frame.pack(pady=20)
        # Task buttons
        self.buttons = []
        for i in range(3):
            button = tk.Button(self.buttons_frame, text=f"Task {i+1}", font=("Arial", 12))
            button.pack(side=tk.LEFT, padx=10)
            self.buttons.append(button)
        # Feedback label
        self.feedback_label = tk.Label(root, text="Feedback:", font=("Arial", 12))
        self.feedback_label.pack(pady=10)
        # Reset button
        self.reset_button = tk.Button(root, text="Reset Tasks", font=("Arial", 12), bg="red", fg="white")
        self.reset_button.pack(pady=10)

class TaskController:
    """Controller: mediates between the Model and the View."""
    def __init__(self, model, view):
        self.model = model
        self.view = view
        self.bind_events()

    def bind_events(self):
        """Bind the buttons to the event handlers."""
        for idx, button in enumerate(self.view.buttons):
            button.config(command=lambda i=idx: self.handle_task(i))
        self.view.reset_button.config(command=self.reset_tasks)

    def handle_task(self, task_id):
        """Record a completed task and update the feedback in the View."""
        reaction_time = self.model.get_reaction_time()
        self.model.complete_task()
        feedback = f"Task {task_id+1} done ({self.model.tasks_completed} completed so far)"
        self.view.feedback_label.config(text=f"Feedback: {feedback}")
        print(f"Task {task_id+1} completed in {reaction_time:.2f} seconds.")

    def reset_tasks(self):
        """Reset all tasks and user data."""
        self.model.reset_tasks()
        self.view.feedback_label.config(text="Feedback: All tasks reset")
        print("Tasks have been reset.")

# Model-View-Controller Setup
if __name__ == "__main__":
    root = tk.Tk()
    model = TaskModel()
    view = TaskView(root)
    controller = TaskController(model, view)
    root.mainloop()
Example Program
This program demonstrates different interaction styles like direct manipulation, menu selection, and
form fill-in, while incorporating core design principles such as feedback, affordance, and
consistency.
The program will use Tkinter for building the user interface and show how different interaction styles
can be implemented while following good design practices.
Interaction Styles:
Direct Manipulation: Users drag a widget around the window with the mouse.
Menu Selection: Users choose commands from File and Help menus.
Form Fill-in: Users can enter data in a form and submit it.
Design Principles:
Feedback: Every action produces a visible response (e.g., confirmation dialogs).
Affordance: UI elements clearly indicate how they can be used (e.g., buttons look clickable).
Consistency: Similar operations are presented with similar widgets and layouts.
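As an isolated sketch of affordance and feedback (the widget names here are hypothetical, separate from the program below), a raised button with a hand cursor invites clicking, and a status label confirms the action:

import tkinter as tk

root = tk.Tk()
status = tk.Label(root, text="Ready")
status.pack(pady=5)

def on_save():
    status.config(text="Saved!")  # Feedback: the UI confirms the action

# Affordance: raised relief and a hand cursor signal that this is clickable
save_button = tk.Button(root, text="Save", relief=tk.RAISED, cursor="hand2", command=on_save)
save_button.pack(pady=10)
root.mainloop()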
Program Code
import tkinter as tk
from tkinter import messagebox

class DraggableWidget(tk.Label):
    """Direct manipulation: a label the user can drag with the mouse."""
    def __init__(self, master, **kwargs):
        super().__init__(master, **kwargs)
        self.bind("<Button-1>", self.on_click)
        self.bind("<B1-Motion>", self.on_drag)

    def on_click(self, event):
        # Remember where inside the widget the drag started
        self.start_x = event.x
        self.start_y = event.y

    def on_drag(self, event):
        # Move the widget so it follows the mouse pointer
        x = self.winfo_x() + event.x - self.start_x
        y = self.winfo_y() + event.y - self.start_y
        self.place(x=x, y=y)

class HCIApp:
    def __init__(self, root):
        self.root = root
        self.root.title("Interaction Styles Demo")
        self.root.geometry("600x400")
        self.create_menu()
        self.create_drag_and_drop_area()
        self.create_form()

    def create_menu(self):
        """Menu selection interaction style."""
        menubar = tk.Menu(self.root)
        # File Menu
        file_menu = tk.Menu(menubar, tearoff=0)
        file_menu.add_command(label="New", command=self.on_new)
        file_menu.add_command(label="Save", command=self.on_save)
        file_menu.add_separator()
        file_menu.add_command(label="Exit", command=self.root.quit)
        menubar.add_cascade(label="File", menu=file_menu)
        # Help Menu
        help_menu = tk.Menu(menubar, tearoff=0)
        help_menu.add_command(label="About", command=self.on_about)
        menubar.add_cascade(label="Help", menu=help_menu)
        self.root.config(menu=menubar)

    def create_drag_and_drop_area(self):
        """Direct manipulation: a draggable label."""
        self.drag_label = DraggableWidget(self.root, text="Drag me", bg="lightblue", padx=10, pady=5)
        self.drag_label.place(x=50, y=50)

    def create_form(self):
        """Form fill-in interaction style."""
        form_frame = tk.Frame(self.root)
        form_frame.pack(pady=20)
        tk.Label(form_frame, text="Name:").grid(row=0, column=0, padx=5, pady=5)
        self.name_entry = tk.Entry(form_frame)
        self.name_entry.grid(row=0, column=1, padx=5, pady=5)
        tk.Label(form_frame, text="Age:").grid(row=1, column=0, padx=5, pady=5)
        self.age_entry = tk.Entry(form_frame)
        self.age_entry.grid(row=1, column=1, padx=5, pady=5)
        tk.Button(form_frame, text="Submit", command=self.on_form_submit).grid(row=2, column=0, columnspan=2, pady=10)

    def on_new(self):
        messagebox.showinfo("New", "New file created.")  # Feedback principle

    def on_save(self):
        messagebox.showinfo("Save", "File saved.")

    def on_about(self):
        messagebox.showinfo("About", "Interaction styles demo for the HCI lab.")

    def on_form_submit(self):
        """Validate the form and give the user immediate feedback."""
        name = self.name_entry.get()
        age = self.age_entry.get()
        if name and age.isdigit():
            messagebox.showinfo("Form Submitted", f"Name: {name}, Age: {age}")
        else:
            messagebox.showwarning("Input Error", "Please enter valid name and age.")

if __name__ == "__main__":
    root = tk.Tk()
    app = HCIApp(root)
    root.mainloop()
Key Concepts
1. Software Design Patterns: Commonly reusable solutions to design problems. The MVC
pattern, for example, helps separate concerns by dividing the application logic into a Model
(data and business rules), a View (presentation), and a Controller (input handling and
coordination).
2. Multimodal Interfaces: These interfaces support more than one method of input, such as
keyboard, mouse, and voice commands. They enhance usability by allowing users to interact with the
system in different ways.
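As a small illustration of the idea, independent of the program below, the same action can be reached through two modalities, here a mouse click and the Enter key:

import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="Press Enter or click the button")
label.pack(pady=10)

def greet(event=None):  # One handler serves both input modalities
    label.config(text="Hello!")

tk.Button(root, text="Greet", command=greet).pack(pady=10)
root.bind("<Return>", greet)  # Keyboard route to the same action
root.mainloop()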
Example Program
We will implement a basic MVC architecture and a multimodal interface using Tkinter for
the graphical interface and the SpeechRecognition library for voice input.
The program will be a basic application where users can input text (via keyboard or voice),
and the system will display that text. The application will demonstrate the MVC design
pattern, along with multimodal input handling.
Requirements:
Python with Tkinter, plus the SpeechRecognition package (pip install SpeechRecognition);
microphone input additionally requires PyAudio.
Program Code
import tkinter as tk
from tkinter import messagebox
import speech_recognition as sr

class TextModel:
    """Model: stores the text entered by keyboard or voice."""
    def __init__(self): self.text = ""
    def set_text(self, text): self.text = text
    def get_text(self): return self.text

class TextView:
    """View: entry field, buttons, and a display label."""
    def __init__(self, root):
        self.root = root
        self.entry = tk.Entry(root, width=40)
        self.entry.pack(pady=10)
        tk.Button(root, text="Submit", command=self.submit_text).pack()
        tk.Button(root, text="Speak", command=self.speak_text).pack()
        self.display = tk.Label(root, text="", font=("Arial", 12))
        self.display.pack(pady=10)
    def submit_text(self):
        """Submit text from keyboard input."""
        self.root.event_generate("<<TextSubmit>>")
    def speak_text(self):
        """Trigger the voice input event."""
        self.root.event_generate("<<VoiceInput>>")
    def update_display(self, text): self.display.config(text=text)

class TextController:
    """Controller: routes keyboard and voice input to the Model."""
    def __init__(self, root, model, view):
        self.model, self.view = model, view
        root.bind("<<TextSubmit>>", self.on_text_submit)
        root.bind("<<VoiceInput>>", self.on_voice_input)
    def on_text_submit(self, event):
        self.model.set_text(self.view.entry.get())
        self.view.update_display(self.model.get_text())
    def on_voice_input(self, event):
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:  # Microphone access requires PyAudio
            audio = recognizer.listen(source)
        try:
            recognized_text = recognizer.recognize_google(audio)
            self.model.set_text(recognized_text)
            self.view.update_display(self.model.get_text())
        except sr.UnknownValueError:
            messagebox.showerror("Voice Recognition Error", "Could not understand the voice input.")
        except sr.RequestError:
            messagebox.showerror("Service Error", "Could not request results from the speech recognition service.")

# Main Application Setup
if __name__ == "__main__":
    root = tk.Tk()
    model = TextModel()
    view = TextView(root)
    controller = TextController(root, model, view)
    root.mainloop()
Usability Evaluation
1. Usability Metrics:
Effectiveness: How accurately and completely the user achieves their goals.
Efficiency: How quickly the user can accomplish tasks.
Satisfaction: How pleasant the experience is for the user.
2. Survey-Based Feedback:
The program will use a simple Likert scale (1-5 rating) to gather user feedback on the above
metrics.
Using Tkinter, the program will display a graphical user interface where users can rate their
experience, and the program will calculate an overall score based on their input.
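For example, ratings of 4 (effectiveness), 5 (efficiency), and 3 (satisfaction) would give an
overall score of (4 + 5 + 3) / 3 = 4.0.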
Example Program
In this program, users can rate a sample interface based on effectiveness, efficiency, and
satisfaction. The system will then compute an average score and display the evaluation
results.
Program Code
import tkinter as tk
from tkinter import messagebox

class UsabilityEvaluation:
    def __init__(self, root):
        self.root = root
        self.scores = {}
        # Introduction label
        self.intro_label = tk.Label(self.root, text="Please rate the design on the following metrics (1-5):", font=("Arial", 12))
        self.intro_label.pack(pady=10)
        self.entries = {}
        for metric in ("Effectiveness", "Efficiency", "Satisfaction"):
            tk.Label(self.root, text=metric).pack()
            self.entries[metric] = tk.Entry(self.root, width=5)
            self.entries[metric].pack(pady=2)
        tk.Button(self.root, text="Submit", command=self.submit_evaluation).pack(pady=10)
    def calculate_average(self):
        """Calculate the average score across all categories."""
        return sum(self.scores.values()) / len(self.scores)
    def submit_evaluation(self):
        """Validate the ratings and display the overall score."""
        try:
            for metric, entry in self.entries.items():
                self.scores[metric] = int(entry.get())
                if not 1 <= self.scores[metric] <= 5: raise ValueError
            messagebox.showinfo("Evaluation Result", f"Average usability score: {self.calculate_average():.2f}")
        except ValueError:
            messagebox.showerror("Input Error", "Please provide valid ratings for all categories.")

if __name__ == "__main__":
    root = tk.Tk()
    app = UsabilityEvaluation(root)
    root.mainloop()
Emerging paradigms for user interaction refer to novel ways in which users interact with
computer systems and digital environments. These paradigms are often driven by
advancements in technology, including voice-based interfaces, gesture-based interaction,
multimodal interaction, and augmented reality (AR). In this program, we will focus on an
implementation of a gesture-based user interface using a webcam, allowing users to control
a GUI with simple hand gestures.
We will use the OpenCV library to capture video from a webcam and detect hand gestures.
For simplicity, we will detect a single hand gesture (like raising a hand) to control a graphical
element, such as opening or closing a window.
Key Concepts:
1. Gesture-Based Interaction: Detects specific hand gestures using a webcam and translates
them into actions in the user interface.
2. OpenCV: A library for real-time computer vision, used to process webcam input and detect
hand gestures.
3. Tkinter: A Python library for creating GUI elements.
4. Basic Machine Learning/Computer Vision Techniques: Simple color thresholding for hand
detection (using HSV color space).
Program Code: Gesture-Based Control with OpenCV and Tkinter
This program captures video from the webcam, detects a raised hand gesture, and performs a
corresponding action in the GUI, such as opening or closing a window.
import cv2
import numpy as np
import tkinter as tk
import threading

def detect_gesture(frame):
    """Detects a simple hand gesture by checking for a region of a specific color (skin tone)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Approximate skin-tone range in HSV; needs tuning for lighting and skin color
    lower_skin = np.array([0, 48, 80], dtype=np.uint8)
    upper_skin = np.array([20, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower_skin, upper_skin)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Treat any sufficiently large skin-colored region as a raised hand
    contours = [c for c in contours if cv2.contourArea(c) > 5000]
    if len(contours) > 0:
        return True
    return False

def video_capture(controller):
    cap = cv2.VideoCapture(0)
    while cap.isOpened() and controller.running:
        ret, frame = cap.read()
        if ret:
            gesture_detected = detect_gesture(frame)
            if gesture_detected:
                controller.gesture_detected()
                break  # Stop capturing once a gesture has been handled
    cap.release()
    cv2.destroyAllWindows()

# GUI Controller
class InteractionController:
    def __init__(self, root):
        self.root = root
        self.root.title("Gesture-Based Interaction")
        self.root.geometry("400x200")
        # Instruction label
        self.label = tk.Label(self.root, text="Raise your hand to open a new window", font=("Arial", 12))
        self.label.pack(pady=20)
        self.start_button = tk.Button(self.root, text="Start Gesture Recognition", command=self.start_gesture_recognition)
        self.start_button.pack(pady=20)
        self.stop_button = tk.Button(self.root, text="Stop Gesture Recognition", command=self.stop_gesture_recognition)
        self.stop_button.pack(pady=20)
        self.window_open = False
        self.running = False

    def gesture_detected(self):
        # Called from the capture thread; hand the UI work to the Tk main loop
        if not self.window_open:
            self.root.after(0, self.open_new_window)

    def open_new_window(self):
        """Opens a new window when a gesture is detected."""
        self.new_window = tk.Toplevel(self.root)
        self.new_window.geometry("300x150")
        self.new_window.title("New Window")
        label = tk.Label(self.new_window, text="Gesture detected!", font=("Arial", 12))
        label.pack(pady=20)
        self.window_open = True
        self.new_window.after(5000, self.close_window)  # Auto-close after 5 seconds

    def close_window(self):
        self.new_window.destroy()
        self.window_open = False

    def start_gesture_recognition(self):
        self.running = True
        self.capture_thread = threading.Thread(target=video_capture, args=(self,), daemon=True)
        self.capture_thread.start()

    def stop_gesture_recognition(self):
        self.running = False  # The capture loop checks this flag and exits

# Main Application
if __name__ == "__main__":
    root = tk.Tk()
    app = InteractionController(root)
    root.mainloop()
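Note that this sketch assumes the webcam is at device index 0 and that OpenCV and NumPy are
installed (pip install opencv-python numpy). The HSV skin-tone range used in detect_gesture is
an approximation and will usually need tuning for different lighting conditions and skin tones.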