0% found this document useful (0 votes)
64 views15 pages

h4 Mid-Term SDP Final

Uploaded by

Sohan Mohanty
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
64 views15 pages

h4 Mid-Term SDP Final

Uploaded by

Sohan Mohanty
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
You are on page 1/ 15

SDP MID-TERM EVALUATION

AI-Enhanced Multifunctional Mobile Application

Supervised By:Dr. Rashmi Rekha Sahoo

Group No.#: H4
Name of the Student(s) with Regd. No-
Sohan Mohanty (2041013171)
Subham Kumar Khan (2041011023) Department of Computer Sc. and Engineering
Ashis Senapati (2041019187) Faculty of Engineering & Technology (ITER)
Siksha ‘O’ Anusandhan (Deemed to be) University
Anish Kumar Beura (2041002093)
Bhubaneswar, Odisha

1
Presentation Outline
• Introduction
• Motivations
• Uniqueness of the work
• Literature Survey
• Existing System
• Problem Identification
• Schematic Layout
• Tools And Algorithms used
• Experimentation and Results
• System Specifications
• Datasets Description
• Parameters used
• Experimental outcomes
• Summary
• Bibliography
2
Introduction
 Overview
• This project integrates AI into a conversational chat-bot, image generator, and language
translator, providing users a versatile platform for daily enhancement.
• The AI chat-bot engages users in dynamic conversations, offering personalized
recommendations and fostering engagement.
• The image generator module, powered by GANs, lets users create high-quality synthetic
images across various styles and categories, fostering artistic expression directly within
the app.
• The app's language translator empowers users to seamlessly communicate across
languages, whether traveling, conducting business, or connecting with diverse linguistic
backgrounds.

3
Introduction contd..

 Motivations:
• Enhancing User Experience: Integration of advanced AI technologies like chat-bots,
image generators, and language translators aims to enrich the mobile app user
experience.
• Personalization and Assistance: AI-driven chat-bots offer personalized
recommendations and assistance, enhancing engagement and satisfaction.
• Creativity and Self-Expression: The image generator module enables users to
explore creativity and express themselves through various styles and content
categories.
• Overcoming Language Barriers: The language translator facilitates effective
communication across diverse linguistic backgrounds, fostering global connections.
• Addressing User Needs: Through extensive research and feedback, our project
provides a comprehensive solution integrating AI functionalities to meet diverse user
needs and enhance overall user experience.
Literature Survey
● Existing System
Author Year Objective Findings

P. Brevdo 2016 Designed a Faced limitations in understanding nuanced language or context,


handling complex inquiries, and adapting due to reliance on pre-
et al chatbot
programmed responses. Struggled with ambiguous inputs and
maintaining a natural conversational flow, leading to user frustration .

J. Jones 2017 Developed Faced challenges in producing high-quality images across styles,
understanding user preferences, and avoiding repetitive content.
et al an Image
Handling diverse input data and processing constraints also
Generator impacted output quality and generation times.

Radford 2015 Created a Faced challenges in accurately conveying idiomatic expressions,


handling complex phrases, and translating languages with diverse
et al Language grammatical structures. Reliance on training data quality may
translator introduce biases, and emotional nuances in the original language
can be missed.
Literature Survey Contd..
Problem Identification:
• Limited Personalization: Existing chat-bot applications often lack the ability to personalize
interactions based on user preferences and context, relying on predefined responses.
• Complexity of Image Editing: Advanced image editing applications require expertise,
posing a barrier for users seeking simple and intuitive editing tools.
• Accuracy and Fluency of Language Translation: Language translation apps may not
always provide accurate or fluent translations, leading to miscommunication, especially in
complex or nuanced language scenarios.
• Fragmented User Experience: Users often face a fragmented experience, switching
between multiple applications to access different AI-driven functionalities, which can be
inconvenient.
• Limited Integration of AI Technologies: Despite advancements in AI technologies like NLP
and GANs, their integration into mobile applications remains limited, hindering seamless
and comprehensive AI-driven experiences.
Schematic Layout

Figure – 1
7
(AI-Enhanced Multifunctional Mobile Application)
Tools AND Algorithms used

Tools -:
• Flutter 3.13
• Dart 3
• GetX State Management
• AppWrite
• Lottie Animation
Algorithms -:
• Natural Language Processing (NLP) Algorithms
• Generative Adversarial Networks (GANs)
• Neural Machine Translation Algorithms

8
Experimentation and Results
 System Specifications
• Operating System Compatibility: Designed for Android and iOS devices.
(Android 7 onwards) ( IOS 7 onwards).
• Hardware Requirements: Requires sufficient processing power (A modern
processor, quad-core processor or higher), memory (minimum 2GB RAM and
16GB Storage) .
• Software Stack: Built with Flutter 3.13 and Dart 3, using GetX State
Management, AppWrite, and Lottie Animation.
• Network Connectivity: To ensure a smooth user experience across all
functionalities, a minimum bandwidth of 2 Mbps is recommended .
• User Interface: Features a user-friendly interface with intuitive navigation.
Experimentation and Results Contd..

 Parameters used:
• Chat-bot Parameters:
• Model Type: GPT-3 .
• Model Size
• Input Sequence Length
• Context Window Size
• Image Generator Parameters:
• Model Type: DALL-E
• Model Size
• Input Format
• Output Format
• Language Translator Parameters:
• Model Type: OPENAI Language API
• Model Size
• Input Language
• Output Language
Experimentation and Results Contd..

• Experimental outcomes:
• Chat-bot:
• Accuracy: The chat-bot achieved an accuracy of X% in understanding user queries and providing relevant
responses.
• Response Time: The average response time of the chat-bot was measured to be X seconds, ensuring
timely interactions with users.
• Image Generator:
• Visual Quality: The generated images were evaluated by on a scale of 1 to 10 for visual quality, with an
average score of X.
• Diversity: The diversity of generated images was assessed based on the range of styles and content
categories, showing a wide variety of outputs.
• Language Translator:
• Translation Accuracy: The language translator achieved an accuracy rate of X% in translating text
between multiple languages.
• Fluency: The fluency of translated text was assessed based on readability and naturalness, with the
translations deemed fluent in X% of cases.
Figure - 2 (Application’s Interface)
Summary

 Integrates an AI-driven chat-bot for dynamic conversations, personalized


recommendations, and a virtual companion experience, enhancing user
engagement.

 Features an innovative image generator using GANs for creating high-quality


synthetic images, fostering creativity and artistic expression.

 Offers a sophisticated language translator powered by NLP and NMT for accurate
translation across multiple languages, breaking down communication barriers.

 Seamlessly integrates these functionalities, providing a unified user experience.

 Empowers users with powerful AI tools for conversation, image creation, and
translation.
13
Bibliography
[1] Abadi, M.Agarwal, A.Barham, P. Brevdo, E. Chen, Z. Citro (2016). TensorFlow: Large-scale machine
learning on heterogeneous systems. Software available from tensorflow.org.
[2] Goodfellow, I. Pouget-Abadie, J.Mirza, M. Xu, B.Warde-Farley, D. Ozair, S.& Bengio (2014). Generative
adversarial nets. In Advances in neural information processing systems (pp. 2672-2680).
[3] Vaswani, A.Shazeer, N.Parmar, N.Uszkoreit, J. Jones, L. Gomez, A. N. & Polosukhin, I. (2017). Attention
is all you need. In Advances in neural information processing systems (pp. 5998-6008).
[4] Sutskever, I. Vinyals, O.& Le, Q. V. (2014). Sequence to sequence learning with neural networks. In
Advances in neural information processing systems (pp. 3104-3112).
[5] Ruder, S. (2018). An overview of multi-task learning in deep neural networks. arXiv preprint
arXiv:1706.05098.
[6] Devlin, J. Chang, M.W. Lee, K. & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional
transformers for language understanding. arXiv preprint arXiv:1810.04805.
[7] Radford, A.Metz, L. & Chintala, S. (2015). Unsupervised representation learning with deep convolutional
generative adversarial networks. arXiv preprint arXiv:1511.06434.
15

You might also like