Human-Computer Interaction in Game Development with Python: Design and Develop a Game Interface Using HCI Technologies and Techniques, 1st Edition
Joseph Thachil George and Meghna Joseph George
Apress
Trademarked names, logos, and images may appear in this book. Rather
than use a trademark symbol with every occurrence of a trademarked
name, logo, or image, we use the names, logos, and images only in an
editorial fashion and to the benefit of the trademark owner, with no
intention of infringement of the trademark. The use in this publication
of trade names, trademarks, service marks, and similar terms, even if
they are not identified as such, is not to be taken as an expression of
opinion as to whether or not they are subject to proprietary rights.
The publisher, the authors and the editors are safe to assume that the
advice and information in this book are believed to be true and accurate
at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, express or implied, with respect to the material
contained herein or for any errors or omissions that may have been
made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
Source Code
All source code used in this book can be downloaded from
github.com/apress/hci-gamedev-python.
Any source code or other supplementary material referenced by the
author in this book is available to readers on GitHub
(https://github.com/Apress). For more detailed information, please
visit http://www.apress.com/source-code.
Table of Contents
Chapter 1: Human-Computer Interaction Tools and Methodologies
Fundamentals of Human-Computer Interaction
Digging Deeper
Designing the Interface
Adaptation and Interfaces
Multi-Device Interfaces
Evolutionary Trends
Evaluation of Usability
Bringing Usability and Accessibility Together
Analysis of Task Situations
Techniques and Tools for Human-Computer Interaction Development
Techniques for Defining Specifications
The Cycle of Tool Life and Methodologies Taxonomy
Selecting Instruments, Techniques, and Resources
The Eye Tracking Technique and Usability
Eye Tracking Studies
User Control
Usability Testing
Why Eye Tracking?
Creating an Effective Interface
Graphical User Interfaces
Characteristics of User Interfaces
Summary
Chapter 2: Human-Computer Interaction Tools and Game Development
Tools and Techniques for General Game Development
The Video Game Interface
Video Game Development and Interaction
Video Game Users’ Requirements and Needs
Interactive UI Design for a Game
Panel Design
Window Architecture
Icon Design
Color Development
Eye-Tracking Techniques
The Impact of Eye Tracking in Games
Eye Tracking in Games
Face and Eye Recognition
Modeling and Development
Conclusions and Problems
Creating the Data Structure
Modeling and Development
Conclusions and Problems
Applying Photographic Filters
Modeling and Development
Conclusions
Recognizing the Iris
Modeling and Development
Conclusions and Problems
Edge Detection
Modeling and Development
Conclusions and Problems
Parameter Analysis on Blur, CLAHE, and CANNY Filters
Modeling and Development
Analysis
Iris Recognition (2)
Modeling and Development
Conclusions and Problems
“Average Color” Recognition
Modeling and Development
Conclusions
Project Analysis
Data Analysis
Project Conclusions
Summary
Chapter 3: Developing a Video Game
Roles in the Video Game Industry
Producers
Publishers
Game Developers
Roles and Processes of Game Development
Game Design
Game Art Design
Game Programming
Game Testing
Software Development
Game Development Phases
Pre-Production Phase
Outsourcing
Production Phase
Milestones: The Cornerstones of Development
Post-Production Phase
Localization
Fan Translation
Summary
Chapter 4: Turning Points in Game Development
Game Engines
Rendering Engine
Indie Video Games
Crowdfunding
The Case of Dreams: Developing a Game Within a Video Game
Current Problems in the Development of Video Games
Crunch Time
Piracy
Programming Stages
Paradigms and Programming Languages
Visual Programming
Summary
Chapter 5: Developing a Game in Python
Python and Pygame
Designing the Video Game
Development Team
Game Design Document and Production
Game Menu
Short Introduction to Pygame
Game Interface
The Player
Powering Up
The Enemies
The Bosses
Collision Management
The Levels
Summary
Chapter 6: Game Development – Industry Standards
Game Terminology
Overall Design of the Game
Frontend and Backend in Game Development
Verify the Token
General Description of the Game’s Services
Network Interfaces and Sequence Diagram for the Game
Development Cycle
Game Network Interfaces
Sequence Diagrams
Security of Online Games Through a Web Portal
Secure Code for Games
Secure by Design
Security Control
Summary
Chapter 7: Gamification in Human-Computer Interaction
Gamification Strategy
Gamification Examples
Common Risks and Mistakes
Gamification in Education
Aspects of the Game’s Foundation
The Different Game Categories
Psychology and Motivation in Gamification
The Two Different Types of Motivation
Playing and Learning
Gamification in the Classroom
Factors that Make Gamification in the Classroom Easier
How Can Gamification Help with Learning?
Games-Based Learning vs Gamification
Solutions for an Educational Game
Designing a Gamified Application
Math Games for Kids
Gamified Applications Dedicated to Training
Methodology for Creating Gamified Applications
Web Application
Native Application
Native App vs Web App
The PhoneGap Framework
Why PhoneGap?
PhoneGap’s Architecture
Anaconda Python and the PyQT5 GUI Framework
Anaconda Installation
PyQT5 Installation
PyQT Events
Drawbacks to Gamification
Avoiding the Drawbacks
Summary
Chapter 8: Human-Computer Interaction Research and Development
Human-Computer Interaction with a Head-Mounted Display
Human-Machine Interfaces:Future Development
The Touchscreen Revolution
Direct Communication with the Mind
Gesture Engagement Taken to a New Level
Applications of Spatial Cognition Human Contact Research
Interaction with the Voice
Interactions Between the Brain and the Computer
Summary
Chapter 9: Recommendations and Concluding Comments
Recommendations
Broad HCI Assessment Criteria
Information and Communication Technology (ICT) Development
New Trends
Promising HCI Technologies
Important Considerations for Building a User-Friendly Interface
Final Thoughts on Game Design and HCI
Summary
Index
About the Authors
Joseph Thachil George
is an IT security engineer based in
Germany. He also worked as a technical
consultant for International Game
Technology (IGT) in Italy. Joseph is
currently pursuing his PhD in computer
science and engineering at the University
of Lisbon, Portugal. He has an MS in
cybersecurity from the University of
Florence, Italy. He is also part of the
DISIA research group at the University of
Florence, Italy, and the research group
(INESC-ID Lisbon) at the University of
Lisbon, Portugal. His research interests
cover automatic exploit generation,
exploitation of vulnerabilities, chaining of vulnerabilities, security of
web applications, and JavaScript code exploits. At IGT, he has been a
part of various projects related to game configuration and integration
in various platforms, specializing in Java and Spring Boot–based
projects. He has also worked for companies in India, Angola, Portugal, and the UK, and has seven years of experience in the IT industry.
Digging Deeper
A fairly intuitive example is a game played with the numbers 1 to 9, all of which are initially available to both players. The players take turns; on each turn, a player picks one of the remaining numbers, making it unavailable. A player who holds three numbers whose sum is 15 wins.
First you need to understand the problem. Both players share a
common goal, which is to win the game. There is also another objective:
“If at a certain point I can’t win, then I want to prevent the other player
from winning”. One possible strategy is to choose a number from the
remaining numbers that might prevent the other player from winning.
So the “background” activity is remembering the numbers that you
already chose, remembering the remaining numbers (and those taken
by your opponent), and remembering whose turn it is. This game
becomes non-trivial. Suppose you need to design a user interface that
makes it easier to play this game. One solution is represented by the
interface shown in Figure 1-2.
Figure 1-2 Interface for the game of choosing numbers that add up to 15
As you can see, the interface clearly highlights whose turn it is. It also shows
which numbers have been selected (in red) and which are available (in
green), as well as who has selected them. However, players still have to
understand which number to choose to prevent their opponent from
winning. There is a considerable cognitive distance between choosing
suitable actions and the user’s initial objective. An interface that limits
this cognitive load, and so is more usable, is shown in Figure 1-3.
The idea is that the players use a substantially different interface: a 3×3 matrix in which one player places Xs and the other Os. Assume the cells of the matrix correspond to a numbering like the one indicated by the small matrix on the left, a magic square in which every row, column, and diagonal sums to 15. The game then becomes Tic Tac Toe (known in Italy as Tris, literally "three of a kind"), in which each player aims to place three marks in a row, a column, or a diagonal. Understanding whether your opponent is about to win now becomes very intuitive, detectable at a glance, and doesn't require particularly complicated processing.
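To make this equivalence concrete, here is a minimal Python sketch (not from the book's code repository; the layout and helper names are illustrative). It shows that checking for a win in the numeric game means scanning every three-number subset for a sum of 15, which is exactly the bookkeeping the matrix interface makes visible at a glance:

from itertools import combinations

# 3x3 magic square: every row, column, and diagonal sums to 15
MAGIC_SQUARE = [[2, 7, 6],
                [9, 5, 1],
                [4, 3, 8]]

def has_won(numbers):
    """Return True if any three of a player's numbers sum to 15."""
    return any(sum(triple) == 15 for triple in combinations(numbers, 3))

def winning_picks(my_numbers, remaining):
    """List the remaining numbers that would complete a sum of 15."""
    return [n for n in remaining if has_won(my_numbers + [n])]

print(winning_picks([8, 6], [1, 3, 5]))  # -> [1], since 8 + 6 + 1 = 15

On the magic square, holding 8 and 6 and then taking 1 completes the right-hand column, which is why a threat that requires deliberate arithmetic in the numeric interface is visible at a glance in the matrix interface.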
Figure 1-6 Handhelds are used as a support for visiting museums. The visitor’s
position is detected with infrared devices
This solution was adopted by the Carrara Marble Museum[2] and it
depends on the user’s location, which is automatically detected. This is
achieved through infrared devices on the ceiling at the entrance to each
room. They emit a signal that contains a room identifier (see Figure 1-
6). In fact, each device is composed of multiple infrared signal emitters
to increase the ease of detection. When the handheld detects the signal, it identifies the room and automatically emits sound feedback. It shows on the screen where the user is, after which the first selection shows the map of the new room with icons for each work of art.
There is an icon for each type of work and, by selecting one, the user receives additional spoken information and can access videos on related topics (if any). This solution is made possible by the availability of handhelds with 1GB of storage, enough to hold rich multimedia information.
This design limits the handheld's interaction with the outside world to detecting the signals that identify the room the user is in.
Another possible solution would have been to identify the nearest work
of art and automatically activate a corresponding voice comment. The
limitation of this solution is that it can, in some cases, become too
intrusive and provide unwanted comments. Sometimes you need to use
multiple devices for such interfaces.
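The following hypothetical Python sketch outlines the detection behavior just described. The device API, room identifiers, and file names are assumptions made for illustration; they are not the museum system's actual code:

# Room identifier -> map of the room, with one icon per work of art
ROOM_MAPS = {
    1: "entrance_map.png",
    2: "sculpture_hall_map.png",
}

class Handheld:
    """Stand-in for the real device's audio and display services."""
    def play_sound(self, filename):
        print(f"playing {filename}")
    def show_image(self, filename):
        print(f"showing {filename}")

def on_infrared_signal(device, room_id):
    """React to a room identifier decoded from the ceiling emitters."""
    if room_id not in ROOM_MAPS:
        return                                 # unknown signal: ignore it
    device.play_sound("room_change.wav")       # automatic sound feedback
    device.show_image(ROOM_MAPS[room_id])      # map of the newly entered room

on_infrared_signal(Handheld(), 2)              # the visitor enters room 2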
As an example of an interface adapting to its context, consider how an app's UI varies depending on whether the device is in landscape or portrait mode (see the sketch after this paragraph), or how iOS lets users remap touch interactions through AssistiveTouch. To configure the latter, open the Settings app and go to Accessibility ➤ AssistiveTouch. Make sure the toggle at the top of the screen is in the On position, then tap one of the four options (Single Tap, Double Tap, Long Press, or 3D Touch) and set it to Open Menu (https://support.apple.com/en-ca/HT202658).
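The orientation-dependent part of such adaptation can be reduced to a very small decision. A minimal sketch, assuming a toolkit that reports the current window width and height; the threshold and layout names are illustrative:

def choose_layout(width, height):
    """Pick a layout from the current window proportions."""
    if width >= height:
        return "landscape"   # e.g., controls beside the content
    return "portrait"        # e.g., controls below the content

print(choose_layout(1024, 768))  # -> landscape
print(choose_layout(768, 1024))  # -> portrait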
Multi-Device Interfaces
One of the main issues currently impacting user interfaces is the
continuous introduction of new types of interactive devices: interactive digital walls, PDAs, UMTS telephones, and tablet PCs, to name a few. Interacting with interactive services
becomes a multi-device experience. It is important to understand the
new issues that are introduced into this context. The first thing to
understand is that it isn’t possible to do everything through all devices.
Each device has features that make it suitable for supporting some tasks but inadequate for others. For example, most users
would never use (let alone pay for) a service that allows them to use a
telephone to watch a movie or a whole game of football, as the
experience would be somewhat cramped and would not allow them to
appreciate detail. Conversely, if you were stuck in traffic and wanted to
find an alternative route, a mobile device is key. In other cases,
supported activities can be accessed across different devices, but the
modes change.
For example, a hotel booking made via a mobile phone with web or WAP access lets users communicate their arrival and departure dates. Using a desktop system, you can comfortably provide additional information, for example to express your preferences in terms of rooms, meals, and so on. The desktop system can also present extensive booking forms whose fields can be filled in any order, while the mobile site may impose a sequence on the request parameters due to the smaller screen.
Activities performed through one type of device can also enable or disable activities through another. For example, you can make an airline
reservation via a desktop system, and then you can access real-time
information via your mobile phone related to the flight you booked.
There are also activities that remain the same regardless of the device.
For example, logging in remains more or less the same across different
types of devices. In adapting to the type of device, it is also necessary to
consider the supported modes, because this influences the possibilities
of interaction.
Tasks can be influenced by the interaction mode: a set of inputs can
require separate interactions through a graphic device, while such
information can be provided through a single interaction using a voice
interface. There are inherent differences between the various modes.
For example, voice channels are best suited for short messages, to
report events and immediate actions, to avoid visual overload, and
when users are on the move. The visual channel is better suited for complex or long messages, for identifying spatial relationships, when actions must be performed in noisy environments, or when users are stationary.
When a system supports multiple modes (for example, graphical and vocal interaction), the range of possible implementation techniques is wide. You must consider the different ways of combining the modalities: complementarity (both modalities are used synergistically to complete the interaction), assignment (a specific modality is assigned to a certain purpose), redundancy (multiple modalities are used to achieve the same effect), and equivalence (users choose among multiple modalities to achieve the same effect).
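The following Python sketch illustrates these four combination styles; the function and channel names are assumptions for illustration, not an established API:

from enum import Enum

class Combination(Enum):
    COMPLEMENTARITY = "modalities cooperate on one interaction"
    ASSIGNMENT = "one modality is reserved for a purpose"
    REDUNDANCY = "several modalities convey the same effect"
    EQUIVALENCE = "the user picks any one modality"

def notify(message, mode, speak, display):
    """Deliver a message according to the chosen combination style."""
    if mode is Combination.REDUNDANCY:
        speak(message)              # same content on both channels
        display(message)
    elif mode is Combination.ASSIGNMENT:
        speak(message)              # voice is reserved for notifications
    elif mode is Combination.COMPLEMENTARITY:
        speak("New message")        # voice announces the event...
        display(message)            # ...the screen carries the detail
    else:                           # EQUIVALENCE: either channel would do
        display(message)

notify("Boarding at gate 12", Combination.REDUNDANCY, print, print)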
This has been an introduction to the fascinating world of human-computer interaction, explaining its objectives and fundamental concepts and showing application examples. The field has witnessed a real explosion of interest and has evolved substantially.
Evolutionary Trends
This evolution continues, driven by the evolution of interaction
technologies and the constantly changing user requirements. The
continued introduction of new interactive computer devices in our
homes, offices, cars, and places of commerce and tourism implies the
need to plan a pervasive usability that can guarantee satisfaction in the
different contexts of use. This opens up the possibility of creating
migration services in the future—interactive services that follow users
in their movements and adapt to the new devices available in these new
environments. The goal is to allow users to continue the interaction
where they left off with the device in the previous environment.
Consider a person who is registering for a service through a desktop system. They suddenly realize that they are late, so they take their PDA and continue the registration as they exit the office. When they get into the car, they complete the registration using a voice interaction system, all without having to redo any of the steps carried out on the previously used devices. The interfaces adapt to each new device. This level of multimodality will increase significantly for many reasons.
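A hypothetical sketch of the piece that makes such migration possible: the partially completed session is stored centrally, so whatever device the user picks up next can resume where the previous one stopped. The storage scheme and field names are assumptions:

import json

def save_session(store, user_id, state):
    """Persist the partially completed registration for later devices."""
    store[user_id] = json.dumps(state)

def resume_session(store, user_id):
    """Rehydrate the registration state on the newly adopted device."""
    return json.loads(store.get(user_id, "{}"))

store = {}  # stand-in for a server-side session store
save_session(store, "anna", {"step": 2, "name": "Anna", "email": ""})
# Later, from the PDA or the in-car voice system:
print(resume_session(store, "anna")["step"])  # -> 2, not back to step 1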
Some technologies are substantially improving, like those related to
voice interaction. They show a growing ability to interpret human input
and so have begun to be supported, in a stable way, for interaction via
the web. Technologies that detect user presence are diversifying and
improving. Improvements in techniques for recognizing shapes and elements in images are expanding the possibility of interaction through gestures, whereby different functions are activated depending on the recognized gesture. These and other possibilities have the aim of
making the interaction with computers similar to that between human
beings. This can lead to the affirmation of the paradigm of natural
interaction, which guarantees usability that’s extremely immediate and
spontaneous.
Evaluation of Usability
The usability assessment can be carried out for different purposes.
There may be precise goals, such as wanting users to be able to perform
a task with a certain number of interactions or in a certain period of
time.
There are various methods that are considered when evaluating
usability:
Inspection-based evaluation: In these cases, an expert evaluates
the prototype or final implementation of the user interface according
to predefined criteria, which can be a series of properties that must
be met (such as providing continuous feedback of the status of the
interaction) or indications of aspects to be considered by simulating
user interaction (such as what occurs with the cognitive
walkthrough).
Evaluation based on user tests in the laboratory: In this case,
laboratories equipped with cameras record user sessions, in an
environment that tries to be as unobtrusive as possible.
Evaluation based on user feedback: In this case, feedback is
collected informally through questionnaires, workshops, focus
groups, and interviews.
Remote assessment: The user and the assessor are separated in time and/or space; for example, log files of the users' interactions are automatically created and then analyzed using specific tools (see the sketch following this discussion).
Model-based assessment (simulation): A model is created to
predict and analyze how tasks are performed in a certain interface.
Evaluation based on user observation in the field: Users are
observed for long periods when interacting with the system in their
daily environment.
Choosing the method for evaluating usability may depend on
various factors, such as the number and type of users available. It may
be useful to combine multiple methods, for example, starting with an analysis of users and tasks, then creating prototypes that may be subjected to heuristic evaluation, and then running empirical tests until satisfactory results are achieved.
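As an illustration of the remote-assessment approach mentioned above, the following minimal sketch derives per-user task completion times from a timestamped interaction log; the log format is an assumption for illustration:

def task_durations(events):
    """events: list of (user, event_name, timestamp_in_seconds)."""
    start, durations = {}, {}
    for user, name, ts in events:
        if name == "task_start":
            start[user] = ts
        elif name == "task_done" and user in start:
            durations[user] = ts - start[user]
    return durations

log = [("u1", "task_start", 0.0), ("u1", "task_done", 42.5),
       ("u2", "task_start", 3.0), ("u2", "task_done", 81.0)]
print(task_durations(log))  # -> {'u1': 42.5, 'u2': 78.0}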
Delivery and upkeep are the fourth and fifth steps, as shown in
Figure 1-9.
Figure 1-9 Product development steps
These categories are somewhat arbitrary and do not capture iteration or overlap, but they serve as a starting point for discussing tools and processes. Specific tools and/or procedures have been created for use in each phase. Examples of significant tools and procedures for each area are included in the discussion.
The majority of these tools are software applications that run on a range of hardware, including IBM PC compatibles, Unix workstations, Apple Macs, and NeXT computers. Overall, HCI software-based solutions have been designed for commercial use and, as a result, have been evaluated in real-world scenarios.
When compared to tools, HCI techniques often entail pencil and
paper procedures (although some methods may use computer
software) and are frequently more exploratory and scholarly in
character. HCI methods are frequently designed by professionals for use
in labs, making them more difficult to apply unless you have experience
with HCI and usability engineering.
Depending on the system being developed and the stage of the
software development cycle, different tools and approaches will be
used.
User Control
As you may have heard, we are often said to be operating in an attention economy. According to marketing models like AIDA, attention is the crucial first step in the process of purchasing a product, while action sits at the bottom of the funnel: it is the point at which we make a commitment by buying a product or subscribing to a magazine.
Interest and desire are the phases between attention and action in
the AIDA paradigm (see Figure 1-10). This is when we look into a product and decide whether it's what we really want. To put it another way, this is the point at which users make their decision.
Figure 1-10 The AIDA analysis is a good way of describing the phases that go from
attention to action
Letting people make their own decisions means giving them control.
Judging by recent UX trends, allowing this sense of control seems to be something we'll see much more of in the future. People have become more aware of how their digital experiences affect them, and their (often harmful) digital habits are being called into question. Tools that limit screen time or include snooze functionality show that customers no longer want to be controlled by their smartphones and digital services.
The problem with commerce is that getting people to take action
(this usually means purchasing a product) is clearly relevant to the
organization’s growth. Relevant stakeholders, unsurprisingly, want
their users to take action as soon as feasible. The issue is that if we
aggressively shrink the space between attention and action, we start to
take away user control. For example, one-click checkout buttons are
convenient and beneficial to your organization, but they also raise the
chance of your customers buying stuff they aren’t convinced about, or
worse, purchasing things by accident.
Usability Testing
Testing via eye tracking is extremely valuable, providing you with an
additional layer of information about your users’ behavior. Even so, it’s
critical to understand when you should employ eye tracking and when you can do without it. Regardless of how advanced the
technology is, it should be considered a tool in your usability testing
toolbox rather than a UX panacea. When it comes to user testing, a
general rule of thumb is to try to get your answers as quickly as
possible. Don’t go overboard.
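For instance, a common way that eye-tracking data feeds into usability testing is by counting fixations inside an area of interest (AOI), such as a button the user is expected to notice. A minimal sketch, assuming fixations arrive as simple (x, y) points rather than a real eye-tracker API:

def fixations_in_aoi(fixations, aoi):
    """Count fixation points that fall inside an AOI rectangle."""
    left, top, width, height = aoi
    return sum(1 for x, y in fixations
               if left <= x < left + width and top <= y < top + height)

gaze = [(120, 80), (130, 90), (400, 300)]
print(fixations_in_aoi(gaze, (100, 60, 60, 60)))  # -> 2 of 3 fixations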