VOICE ACTIVATED ELECTRONIC DISPLAY BOARD USING AI

FINAL PROJECT

BY:
OLATUNJI FEMI EMMANUEL
MATRIC NUMBER: NDFT23/COM/010319

OCTOBER 2024
CERTIFICATION
This is to certify that this research project was written and compiled by Olatunji
Femi Emmanuel, with Matric Number NDFT23/COM/010319, of the Department
of Hardware Technology at Eko College of Management and Technology, in
partial fulfillment of the requirements for the award of the National Diploma in
Hardware Technology.
------------------------ -----------------------
Project Supervisor
---------------------------- --------------------
Project Coordinator
------------------------------ -------------------
Head Of Department
DECLARATION
This project was researched and written by me under the supervision of MR.
OLATOSI OLUWADAMI of the Department of Computer Hardware/Technology, Eko
College of Management and Technology, Ikotun. The ideas are mine, except
where stated otherwise; all materials cited in this work have been duly
acknowledged.
--------------------------
OCTOBER, 2024
DEDICATION
I dedicate this project to The Almighty Creator, who has been there from the
beginning of my life to this point. He has been the source of my strength
throughout this program, and on His voice and wings alone I have soared and
sailed through my rough storms. Special dedication to my loving parents and
my boss, MR. IBRAHIM, for their tireless support and compassion throughout this
project. My gratitude also goes to one person who is no ordinary person:
a motivator, an inspiration, and a senior brother to me throughout my academics,
Engineer MR. OLATOSI OLUWADAMI.
ACKNOWLEDGEMENT
I appreciate my supervisor, MR. OLATOSI OLUWADAMI, for his objective
guidance. I acknowledge my beloved parents, MR. & MRS. OLATUNJI, who are not
only my parents but also my mentors. Thanks also to the lecturers of Eko College
of Management and Technology and to my course mates; each moment spent
together is a golden memory in my life. Above all, special thanks to
The Almighty Creator for His favour, divine protection, mercies, and
strength each new day in this academic journey, even through to postgraduate study.
Thank you all, and God bless.
ABSTRACT
The objective of this work is to take a step further in this direction by incorporating voice control and
artificial intelligence (AI) into Internet of Things (IoT)-based smart home systems to create more efficient
automated smart homes. Accordingly, a home automation system is proposed in which the related
functions can be controlled by voice commands issued through an Android or web application via a chat
interface. The user issues a voice command, which is interpreted by natural language processing (NLP).
To accommodate the user's request, the NLP engine classifies it into operational commands that can then
be carried out, so that home appliances can be controlled. In addition, utility consumption can be
calculated, saved, and paid on time. This is alongside the introduction of a machine learning (ML)-based
recommendation system for automated home appliance control. In this approach, the mobile or web
application acts as the central controller, deciding the appropriate actions to fulfil the user's desires.
The presented work has been put into practice and tested; it proved to be applicable and has the
potential to make home life more comfortable, economical, and safe. The main objective of the
voice-operated electronic notice board is to display messages and to control them by voice. It is time
to replace the old-style notice board with a smart digital notice board; the voice-controlled notice
board proposed here offers greater convenience and a better user interface. A Bluetooth receiver
receives the messages transmitted from an Android device and passes them to the microcontroller for
decoding and further processing; the microcontroller then displays the message on the LCD screen.
This notice board system can be used in various places, including railway stations, schools,
colleges, and offices, to display emergency announcements on screen instantly instead of typing the
message each time.
TABLE OF CONTENTS
TITLE PAGE
CERTIFICATION
DECLARATION
DEDICATION
ACKNOWLEDGEMENT
ABSTRACT
CHAPTER ONE: INTRODUCTION
1.1 Background of the Study
1.2 Statement of the Problem
1.3 Aim/Objectives of the Study
1.4 Significance of the Study
1.5 Scope of the Study
CHAPTER TWO: LITERATURE REVIEW
2.1 Introduction
2.2 Origin of the Project
2.3 Review of Related Literature
CHAPTER THREE: METHODOLOGY
3.1 Research Approach
3.2 System Architecture
3.3 The Process of Design and Consideration
3.4 Circuit Board and IC Chips
3.5 Loading the ROM
3.6 Fabrication
3.7 The Power Supply
3.9 The Software
3.10 System Block Diagram/Flowchart
CHAPTER FOUR: SYSTEM IMPLEMENTATION AND RESULTS ANALYSIS
4.1 Construction Procedure
4.2 System Installation Requirements
4.3 Assembling of Sections
4.4 Testing of System Operation
4.5 Results Analysis and Discussion
4.6 System Documentation
4.7 Cost Analysis
CHAPTER FIVE: SUMMARY AND CONCLUSION
5.1 Research Findings
5.3 Research Conclusion
APPENDICES
References
Chapter 1: Introduction
Background and Motivation
Electronic display boards are widely used in public areas such as schools,
shopping malls, train stations, and airports to display real-time information.
Traditionally, these boards are controlled manually, either through physical
interaction with a keypad or through a dedicated software interface. This
method can be time-consuming and inconvenient, especially when frequent
updates are required. Advancements in AI and voice recognition technology
have led to the possibility of controlling such systems using natural language
commands. This project aims to leverage these advancements by creating a
voice-activated display board that updates based on spoken commands. By
integrating AI for voice recognition, the system offers a more user-friendly and
efficient solution for updating and managing display information.
1.1 Problem Statement
The current manual approach to updating electronic display boards is inefficient
and often cumbersome, particularly when frequent changes are necessary. In
addition, it does not cater to individuals with physical disabilities who might find
manual interfaces challenging. There is a need for an automated system that
allows users to update display content seamlessly through voice commands,
making the process quicker, more efficient, and accessible.
1.2 Aim and Objectives
The primary aim of this project is to design and develop a voice-activated
electronic display board using AI. The specific objectives of this project include:
- Developing a voice recognition system that accurately converts spoken words
into text.
- Integrating the voice recognition module with a microcontroller to control an
electronic display board.
- Ensuring real-time display updates with minimal delay after receiving voice
commands.
- Testing the system's performance in different environments, including noisy
settings, to evaluate its accuracy and reliability.
1.3 Scope of the Project
The project focuses on designing a prototype for a voice-activated display board
system. The scope includes selecting suitable hardware (e.g., microcontroller,
display screen, microphone), choosing an AI model for voice recognition, and
implementing a software interface to integrate the components. Although the
current project will be developed in a specific language (e.g., English), it lays the
groundwork for future extensions in multilingual support and additional
functionalities.
Chapter 2: Literature Review
2.1 Overview of Electronic Display Boards
Electronic display boards have been an essential communication tool for
decades, serving industries, educational institutions, and transportation hubs.
These boards typically display static or dynamic information such as
announcements, advertisements, or schedules. Various types of display boards
are used, including Light Emitting Diode (LED) displays, Liquid Crystal Display
(LCD) panels, and e-ink displays. In recent years, there has been a shift towards
making these boards more interactive, allowing users to change the displayed
information remotely through internet-based control systems. However,
integrating voice activation technology into these boards remains a relatively
unexplored area of development.
2.2 Evolution of Voice Recognition Technology
Voice recognition technology has its roots in the 1950s, with early systems
capable of recognizing a limited set of words. Over time, advancements in
machine learning and natural language processing (NLP) have vastly improved
the accuracy and speed of speech recognition systems. Modern systems, such
as those used by Google, Amazon Alexa, and Apple's Siri, employ deep learning
techniques to convert spoken words into text accurately. Recurrent neural
networks (RNNs) and Long Short-Term Memory (LSTM) networks are commonly
used to process sequential data like speech. These models enable systems to
handle continuous, natural speech, allowing for more fluid interactions between
humans and machines.
2.3 Role of AI in Speech Recognition
Artificial intelligence, particularly machine learning algorithms, has been pivotal
in improving speech recognition accuracy. AI models are trained on large
datasets of spoken language, enabling them to learn the intricacies of different
accents, speech patterns, and languages. In the context of this project, AI-
powered voice recognition enables the system to convert voice commands into
text, which is then processed to control the display board. The system must be
trained to recognize specific commands relevant to display board functionality,
such as "update message" or "clear screen."
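As a concrete illustration of this step, the mapping from recognised text to display-board commands can be sketched as a simple rule-based classifier. This is a minimal Python sketch, not the project's actual code: a trained NLP model would replace the keyword matching, and the function name and command labels are assumptions.

```python
# Minimal rule-based sketch of command classification.  A trained NLP
# model would replace the keyword matching below; the function name and
# command labels are illustrative assumptions.

def classify_command(transcript: str) -> tuple[str, str]:
    """Map a recognised utterance to a (command, argument) pair."""
    text = transcript.lower().strip()
    if text.startswith("update message"):
        # Everything after the trigger phrase becomes the new message
        # (note: the argument is lowercased along with the rest).
        return ("UPDATE", text[len("update message"):].strip())
    if "clear screen" in text:
        return ("CLEAR", "")
    return ("UNKNOWN", text)
```

For example, `classify_command("Update message Welcome")` yields an `UPDATE` command whose argument is the remainder of the utterance, while any unmatched phrase falls through as `UNKNOWN` so the system can ask the user to repeat it.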
2.4 Previous Work on Voice Controlled Devices
Numerous studies and projects have explored voice-controlled devices in
various applications, including home automation, industrial control systems, and
mobile applications. Voice-controlled lighting systems, smart thermostats, and
voice-activated assistants (e.g., Alexa and Google Assistant) are some examples
where voice recognition technology has been successfully implemented.
However, there is limited research specifically focusing on voice-activated
display boards.
2.5 Summary of Gaps and Opportunities in Existing Systems
While voice recognition technology has been widely used in consumer
electronics, its application in public display systems is still underdeveloped. Most
current systems rely on manual input, lacking the convenience and accessibility
offered by voice interfaces. This project seeks to address these gaps by
developing an AI-powered voice-activated display board.
Chapter 3: Methodology
3.1 Research Approach
The research approach adopted for the Voice Activated Electronic Display Board
using AI combines experimental and design-based methodologies. The system was
developed by integrating speech recognition technologies with display systems,
building on the principles of voice-to-text conversion and its application in
controlling electronic displays, so that users can interact with the display
system seamlessly through voice commands. The integration of voice activation
technology with electronic display boards represents a significant advancement
in user interaction and accessibility. This project aims to develop a
voice-activated electronic display board utilizing artificial intelligence (AI)
to enhance communication, streamline information dissemination, and cater to
diverse user needs. The research approach below outlines the methodology, data
collection techniques, technology stack, and evaluation metrics for developing
this system.

A comprehensive literature review will be conducted to examine existing
technologies in electronic display boards and voice recognition systems,
covering:
- Current applications of electronic display boards in various industries;
- State-of-the-art voice recognition technologies, including AI models such as
Google's Dialogflow, Amazon Alexa, and open-source alternatives;
- User interface design principles for accessibility and user experience.

The project will follow a systematic design approach, which includes the
following phases:
1. User interviews: conduct interviews with potential users to identify their
needs and preferences.
2. Use case development: create use cases outlining the various scenarios in
which the display board would be utilized.
3. Hardware components: determine the necessary hardware, including a
microcontroller (e.g., Raspberry Pi, Arduino), a voice recognition module
(e.g., Google Voice Recognition API), a display module (e.g., LED or LCD
screen), and connectivity options (Wi-Fi or Bluetooth).
4. Software components: select appropriate software tools and languages,
including programming languages (Python, JavaScript), libraries for AI and
machine learning (TensorFlow, PyTorch, NLTK), and frameworks for web
development (Flask, React).
5. Prototyping: develop a prototype of the voice-activated display board,
involving hardware assembly (integrating all hardware components) and software
development (writing code for voice recognition, command processing, and
display functionality).
6. Voice recognition: implement an AI-driven voice recognition system capable
of understanding and processing user commands, including training an NLP model
on relevant datasets to improve command accuracy and incorporating
speech-to-text conversion so that users can interact with the display board in
natural language.
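The voice-capture phase above can be sketched as follows. The actual speech engine (for example, a Google speech-to-text request made through a library such as SpeechRecognition) is injected as a callable, so the hardware- and network-dependent part can be swapped or mocked; all names here are illustrative assumptions, not the project's exact code.

```python
# Sketch of the voice-capture stage.  The recogniser (e.g. a cloud
# speech-to-text call) is passed in as a callable, so this hardware-
# and network-dependent part can be swapped or mocked.  All names are
# illustrative assumptions.

def capture_command(recognise, audio_source):
    """Run one listen/recognise cycle and normalise the result."""
    try:
        text = recognise(audio_source)   # e.g. a cloud STT request
    except Exception:
        return None                      # API, network, or audio failure
    return text.strip().lower() if text else None

# A stub recogniser stands in for the real speech engine during testing:
def fake_recogniser(audio):
    return "  Update Message HELLO  "
```

In deployment, `recognise` would wrap the chosen speech API; during development the stub lets the rest of the pipeline be exercised without a microphone.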
3.5 Loading the ROM
The read-only memory (ROM) is loaded with the necessary firmware and AI
algorithms that handle voice-to-text conversion and system management. The
ROM stores the instructions that allow the microcontroller to interpret voice
input, process it, and deliver output to the display unit. Read-only memory
plays a crucial role in the functioning of many electronic devices by storing
firmware, which is essential for booting up hardware and enabling its
functionality. Loading the ROM is a fundamental process in computer
architecture, microcontroller programming, and embedded systems. This section
provides an overview of the ROM loading process, discussing its significance,
the types of ROM, the steps involved, and the considerations that apply during
loading. Loading a ROM involves several systematic steps, which can vary
depending on the specific architecture of the device and the type of ROM used.
The process is detailed below.
Before beginning the ROM loading process, ensure that you have the following
tools:
- ROM programming software: depending on the type of ROM, you will need
software that can interface with the ROM chip. Common tools include integrated
development environments (IDEs) for microcontrollers and dedicated software
for EEPROM and flash memory programming.
- Hardware interface: this may involve a programmer that connects the ROM to
your computer or microcontroller over a bus such as SPI (Serial Peripheral
Interface) or I2C (Inter-Integrated Circuit).
Connect the ROM chip to the programming device using the appropriate pins.
Refer to the ROM's datasheet to identify the pin configuration and ensure
correct connections.
For types of ROM that support erasure (such as EPROM and EEPROM), you may
need to erase the existing data before loading new firmware. This can be done
as follows:
a. EPROM: place the chip under a UV light for the period specified in the
manufacturer's instructions.
b. EEPROM/Flash: use the programming software's erase function to wipe
the existing contents.
Using the programming software, initiate the loading process. This typically
involves selecting the firmware file and executing a command to write the
data to the ROM.
Read back: use the programming software to read back the contents of the
ROM and compare them against the original firmware file.
Once the ROM is loaded and securely connected to the system, power on the
device to test its functionality.
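The read-back check just described can be sketched in a few lines of Python; `read_rom` here is a hypothetical placeholder for the real programmer interface (SPI or I2C), not an actual library call.

```python
# Read-back verification sketch: after programming, the ROM contents
# are read back and compared byte-for-byte against the firmware image.
# read_rom is a hypothetical placeholder for the real programmer
# interface (SPI or I2C).

def verify_rom(firmware: bytes, read_rom) -> bool:
    """Return True only if every programmed byte matches the image."""
    readback = read_rom(len(firmware))
    return readback == firmware

firmware_image = bytes([0xDE, 0xAD, 0xBE, 0xEF])
assert verify_rom(firmware_image, lambda n: firmware_image)      # clean read-back
assert not verify_rom(firmware_image, lambda n: b"\x00" * n)     # corrupted chip
```

A mismatch at this stage usually means a wiring fault, a failed erase, or a write error, so the chip should be erased and reprogrammed before it is fitted to the board.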
3.6 Fabrication
Step 1: Design
a. Concept Development
b. Detailed Design
Once the concept is approved, detailed designs are created using computer-
aided design (CAD) software. This involves creating 2D or 3D models that
illustrate dimensions, tolerances, and material specifications. Key considerations
during this stage include:
c. Prototyping
Step 2: Material Procurement
a. Sourcing Materials
Once the design is finalized, the next step is to source the necessary materials.
This involves identifying suppliers and obtaining quotes for the required
quantities. Factors to consider include:
b. Material Inspection
Upon receipt, materials should undergo thorough inspection to verify quality and
specifications. This may involve checking for:
Step 3: Cutting and Shaping
a. Cutting Techniques
b. Shaping Techniques
After cutting, the next step involves shaping the materials to the desired form.
Common shaping techniques include:
Bending: Altering the angle of the material using press brakes or rollers.
Machining: Removing material from a workpiece using tools like lathes
and mills to achieve precise dimensions and finishes.
Forming: Techniques like stamping and forging to create complex shapes
from metal or plastic.
Step 4: Joining
a. Welding
Welding is a process that fuses materials together using heat. Common welding
methods include:
MIG Welding: Metal Inert Gas welding, which uses a continuous wire feed
and an inert gas to protect the weld pool.
TIG Welding: Tungsten Inert Gas welding, which provides greater control
for thinner materials and more complex welds.
Spot Welding: Used primarily for sheet metal, where two metal sheets
are joined by applying heat and pressure at specific points.
b. Adhesive Bonding
c. Mechanical Fastening
Mechanical fasteners such as bolts, screws, and rivets can also be used to join
components. This method allows for disassembly and reassembly, which can be
advantageous for maintenance.
Step 5: Finishing
a. Surface Treatment
b. Quality Control
Step 6: Assembly and Packaging
Once fabrication and assembly are complete, the final products must be
packaged for delivery. This involves:
3.9 The Software
The software used in the system comprises both embedded software for the
microcontroller and AI-based speech recognition algorithms. This software is
responsible for:
- Capturing voice input;
- Processing voice data into text using AI;
- Sending display instructions to the microcontroller;
- Displaying the processed output on the electronic board.
The software also includes error handling to manage voice recognition failures
and miscommunication between components.
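The responsibilities listed above can be sketched as one capture, recognise, and display cycle. Each stage below is a stand-in callable: real code would wire these to the microphone, the AI model, and the microcontroller's link to the board, and every name is an assumption rather than the project's actual code.

```python
# One capture -> recognise -> display cycle, with the error handling
# the text describes.  Each stage is a stand-in callable; names are
# illustrative assumptions.

def run_pipeline(get_audio, speech_to_text, send_to_display):
    """Run one cycle and report what happened."""
    audio = get_audio()
    if audio is None:
        return "NO_AUDIO"
    text = speech_to_text(audio)
    if not text:
        return "RECOGNITION_FAILED"   # the error-handling branch
    send_to_display(text)
    return "DISPLAYED"

# Stub run: collect what would be shown on the board.
shown = []
status = run_pipeline(lambda: b"raw-audio", lambda a: "hello", shown.append)
```

Structuring the loop this way keeps the error paths explicit: a silent microphone and a failed recognition return distinct statuses instead of sending garbage to the display.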
Software is a fundamental component of modern computing, enabling hardware
to perform specific tasks and providing users with tools to accomplish a wide
variety of objectives. From operating systems that manage computer hardware
to applications that help users complete tasks, software plays an integral role in
the functioning of technology in our daily lives. This note will explore the
definition, types, components, development processes, and future trends of
software, providing a thorough understanding of its significance and impact.
Software development process
The software development process involves several stages, often referred to as
the software development life cycle (SDLC). Common methodologies include
Agile, Waterfall, and DevOps. The typical stages are:
1. Planning
In this initial phase, stakeholders identify the need for software and define
project goals. Feasibility studies and market research may be conducted to
outline the project scope.
2. Requirements Gathering
Developers and stakeholders collaborate to gather detailed requirements. This
phase involves understanding user needs, functional specifications, and
constraints.
3. Design
In the design phase, developers create architectural plans for the software. This
includes UI/UX design, database schema, and system architecture. Prototypes
may be developed for early feedback.
4. Development
The actual coding takes place in this phase. Developers write code according to
the design specifications, integrating components as needed. Version control
systems are typically used to manage code changes.
5. Testing
Testing is critical for ensuring software quality. Various testing methods
include:
Unit Testing: Testing individual components for functionality.
Integration Testing: Ensuring that components work together as intended.
User Acceptance Testing (UAT): End-users test the software to ensure it
meets their needs and expectations.
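A unit test for this project might look like the sketch below, using Python's built-in unittest module. The function under test, `format_for_lcd`, is a hypothetical helper that fits a message onto one 16-character LCD line; it is an assumption for illustration, not code from the project.

```python
# Unit-test sketch using Python's built-in unittest module.  The
# function under test, format_for_lcd, is a hypothetical helper that
# fits a message onto one 16-character LCD line.
import unittest

def format_for_lcd(message: str, width: int = 16) -> str:
    """Truncate or pad a message to exactly one LCD line."""
    return message[:width].ljust(width)

class TestFormatForLcd(unittest.TestCase):
    def test_short_message_is_padded(self):
        self.assertEqual(format_for_lcd("HELLO"), "HELLO".ljust(16))

    def test_long_message_is_truncated(self):
        self.assertEqual(len(format_for_lcd("A" * 40)), 16)
```

Such tests are run with `python -m unittest` against the file containing them, and each one exercises a single component in isolation, as described above.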
6. Deployment
Once testing is complete, the software is deployed to a live environment. This
can involve installing the software on user machines or making it available
online.
7. Maintenance
After deployment, software requires ongoing maintenance to fix bugs, update
features, and ensure compatibility with new hardware or operating systems.
Regular updates and user support are critical in this phase.
Chapter 4: System Implementation and Results Analysis
4.1 Construction Procedure
The construction procedure for the Voice Activated Electronic Display Board
using AI involves several key steps that ensure the successful development
and deployment of the system. This chapter outlines the methodical
approach taken to build and implement the system, covering hardware and
software requirements, assembly, testing, and analysis of results.
4.2 System Installation Requirements
The hardware components needed for the Voice Activated Electronic Display
Board include:
4.3 Assembling of Sections
With the hardware and software in place, the next step involves the physical
assembly of the system:
1. Mounting Components: All hardware components are securely mounted on
a chassis or housing to ensure stability. This includes the display,
microcontroller, and any additional modules.
2. Wiring: Proper connections are made between components, following the
schematic design to avoid short circuits and ensure functional integrity.
3. Software Installation: The necessary software is installed on the
microcontroller, including libraries for voice recognition and display control.
4. Configuration: System settings are configured according to project
requirements, such as network settings for internet connectivity and display
parameters.
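The configuration step above might be captured in a small settings structure such as the following. Every key and value here is an assumption chosen to match the settings named in the text (network access and display parameters), not the project's real configuration.

```python
# Illustrative configuration block.  All keys and values are
# assumptions matching the settings named in the text (network access
# and display parameters), not the project's real configuration.

CONFIG = {
    "wifi": {"ssid": "display-board", "password": "change-me"},
    "display": {"width_chars": 16, "lines": 2, "scroll_delay_ms": 300},
    "recognition": {"language": "en-US", "timeout_s": 5},
}

def get_setting(path: str, config=CONFIG):
    """Fetch a nested setting by dotted path, e.g. 'display.lines'."""
    node = config
    for key in path.split("."):
        node = node[key]
    return node
```

Keeping the settings in one structure like this makes the configuration phase a matter of editing a single file rather than hunting through the code.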
4.5 Results Analysis and Discussion
The results of the testing phase are analyzed to evaluate the system's
performance and identify any issues.
The discussion of results will include insights gained from user feedback and
testing metrics, providing a comprehensive understanding of the system's
strengths and weaknesses.
4.6 System Documentation
User Manual: A guide for end-users detailing how to operate the system,
including common commands and troubleshooting steps.
Technical Documentation: A detailed report on system architecture,
hardware specifications, software installation procedures, and programming
code. This will assist developers in future modifications or enhancements.
Maintenance Guide: Instructions for regular maintenance tasks to ensure
the longevity and reliability of the system.
4.7 Cost Analysis
1. Hardware Costs
Microcontroller: 35,000
Display Module: 25,000
Voice Recognition Module: 50,000
Power Supply: 10,000
Connectivity Modules: 15,000
Sensors: 20,000
Miscellaneous (wires, connectors, housing): 20,000
2. Software Costs
3. Labor Costs