Final Project Report
A
PROJECT REPORT
ON
Full-Time Industrial Internship
BY
Rahul M Sonawane
PRN NO- 1132230085
IN PARTIAL FULFILLMENT OF
MASTER OF COMPUTER APPLICATIONS (SCIENCE)
DR. VISHWANATH KARAD MIT WORLD PEACE
UNIVERSITY, PUNE-411038
DEPARTMENT OF COMPUTER SCIENCE
AND APPLICATIONS
Certificate
This is to certify that Rahul M Sonawane, a student of MCA (Science) Semester IV, has
successfully completed Industrial Training at iauro Systems Pvt. Ltd. in partial
fulfilment of the MCA (Science) programme under Dr. Vishwanath Karad MIT World Peace
University, for the academic year 2023-2024.
Date: ___/___/______
External Examiners:
1. _________________________ 2. __________________________
Internship Offer Letter
ACKNOWLEDGEMENT
First and foremost, I wish to extend my sincere thanks to Prof. Dr. C. H. Patil,
School of Computer Science, MIT-WPU, Pune, for their invaluable
guidance, insightful feedback, and constant encouragement. Their expertise
and mentorship were instrumental in completing this project successfully.
I also wish to thank Dr. Vishwanath Karad MIT World Peace University for
providing me with the necessary resources and facilities to carry out this
project seamlessly. The conducive environment and access to tools were
integral to achieving the project goals.
Rahul M Sonawane.
MCA (Science)
Place: Pune
Date:
DECLARATION
The results embodied in this project report have not been submitted to any
other University or Institute for the award of any degree or diploma. The work
presented here reflects my own contribution to the ongoing project at the company.
Sign
Rahul M Sonawane
PRN- 1132230085
INDEX
Sr No       Content                                    Page No
Chapter 1   Abstract                                   9
Chapter 2   Introduction
Chapter 3   Node.js Training
            3.1 Overview of the training program       13
Chapter 4   Expense Tracker API
            4.1 Introduction                           17
            4.5 Screenshots                            21
Chapter 5   Blog-Post API
            5.1 Introduction                           24
            5.2 Key features                           25
            5.4 Folder structure & key code snippets   27
            5.5 Screenshots                            30
Chapter 6   Documentation of IO Flow
            6.1 Introduction                           32
Chapter 7   HRMS Project
            7.1 Introduction                           34
            7.2 Objectives                             34
            7.3 Technology stack                       35
Chapter 8   Twitter (X) Sentiment Analysis
            8.1 Introduction                           38
Chapter 9   Current Work
            9.1 Introduction                           43
Chapter 10  EncureIt Internship Overview
            10.1 Introduction                          45
            10.2 Topics covered                        45
Chapter 11  Conclusion                                 48
1. Abstract
This report summarizes my ongoing six-month internship at iauro Systems Pvt. Ltd., a
company in the IT and AI domain that delivers smart, scalable digital solutions across
industries such as Edutech, Travel & Hospitality, Financial Services, and more. The
internship began on February 17, 2025, and will conclude on August 17, 2025. It has
offered me extensive hands-on experience in full-stack development, backend
architecture, documentation practices, and real-world software engineering processes.
In the initial phase, I completed a structured Node.js training program, during which I
developed two RESTful APIs—an Expense Tracker API and a Blog Post API—to
solidify my understanding of backend development concepts such as authentication,
CRUD operations, and RESTful principles.
Following the training, I contributed to active company projects. One key contribution
has been to the HRMS project, where I developed the Role-Based Access Control
(RBAC) module using Java, MySQL, and Spring Boot, implementing scalable and secure
role and permission management.
I also built a Twitter sentiment analysis module that fetches tweets tagging specific
handles, analyzes their sentiment using Python-based NLP libraries, and stores the results
in the database. This helped me gain experience with third-party APIs, text processing,
and real-time data handling.
Another major responsibility has been preparing detailed documentation for the IO Flow
service, using tools like Swagger and Postman. Additionally, I explored and analyzed
internal services like the Widget Service, Page Service, and User Management Service,
further enhancing my ability to work with production-level codebases.
As part of my internal upskilling, I also completed a Prompt Engineering course under the
Consultant 1.0 track, gaining foundational skills in crafting effective AI prompts.
The technologies and tools I have worked with include: Node.js, NestJS, Java, Redis,
gRPC, MongoDB, TypeScript, Swagger, Postman, Python, JavaScript, and Git.
Key learning outcomes:
• Mastering API development and documentation tools like Swagger and Postman
2. Introduction
This report outlines my ongoing six-month internship at iauro Systems Pvt. Ltd., Pune, a
forward-thinking company operating in the IT and Artificial Intelligence (AI) domain.
The internship commenced on February 17, 2025, and will conclude on August 17, 2025.
This internship opportunity has allowed me to work closely with production-grade
systems, real-world backend architectures, internal service documentation, and end-to-
end module development across various business domains. The experience has helped me
build essential technical skills, understand industry workflows, and apply theoretical
knowledge in practical, enterprise environments.
At its core, iauro fosters a culture of continuous evolution and agile development. The
organization emphasizes a product mindset, cross-functional collaboration, and client
empathy. Their teams integrate AI, Machine Learning, Cloud-Native technologies, and
Open Source tools to build cutting-edge, user-centric solutions.
iauro also leverages internal accelerators such as IO Craft, IO Flow, IO Data, and IO
Market—platforms designed to streamline design-to-code workflows, data management,
and reusable asset deployment. The company’s values are built on pillars of integrity,
accountability, empathy, and an evolving mindset, ensuring a transparent, ethical, and
inclusive work environment.
2.2 Duration and Goals of the Internship
The internship spans a duration of six months, from February 17, 2025 to August 17,
2025. The key goals of this internship include gaining hands-on experience with
production-grade systems, strengthening backend development and documentation skills,
and applying theoretical knowledge in a practical, enterprise environment.
Throughout the internship, I have worked with a diverse set of technologies that support
scalable, enterprise-grade application development, including Node.js, NestJS, Java,
Redis, gRPC, MongoDB, TypeScript, Swagger, Postman, Python, JavaScript, and Git.
This exposure to modern tools and platforms has equipped me with practical skills that
are crucial for professional software development and collaborative team projects.
3. Node.js Training
This training served as a critical stepping stone in my transition towards full-scale project
work at iauro, ensuring that I became familiar with the complete backend development
stack and the essential tools required for professional development.
The Node.js training followed a progressive structure, beginning with core computer
networking concepts and gradually advancing to modern JavaScript backend frameworks
and tools. The comprehensive list of topics covered includes:
• Client-server architecture
• IP address, DNS
3.2.2 JavaScript & TypeScript Foundations
• Server-side JavaScript/TypeScript
• What is Node.js?
• NPM usage
• Path module
• Environment variables
• Buffers, streams, and events
• MongoDB basics
• Introduction to Redis
3.2.9 Tools and Practices Followed
During the training period, several industry-standard tools and techniques were used to
simulate a real-world development environment, such as version control with Git, API
testing with Postman, and API documentation with Swagger.
4. Expense Tracker API
4.1 Introduction
The primary objective of the Expense Tracker API was to design a simple yet functional
backend service capable of handling real-world use cases such as expense management.
The system enables users to add, retrieve, update, and delete expense entries, while also
providing the ability to categorize and filter them. The inclusion of JWT-based
authentication ensures that the API is secure and follows current industry best practices.
The purpose of this project was to implement a full-featured REST API with proper
routing, data validation, secure authentication, and error handling. The scope was limited
to individual user use, focusing on simplicity, clean structure, and usability for personal
financial tracking. In addition to RESTful endpoints, the API architecture was extended to
support gRPC for inter-service communication, reflecting industry trends toward high-
performance, scalable microservices. The gRPC implementation was achieved using
protocol buffers (.proto files), with dedicated grpcClient.js and grpcServer.js modules to
enable future expansion of the system into a distributed environment.
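The .proto contract shared by grpcClient.js and grpcServer.js is not reproduced in this report; the hypothetical definition below (all names invented for illustration) shows the general shape such protocol buffer files take:

```protobuf
// Hypothetical service and message names, shown only to illustrate the shape
// of the .proto definitions a gRPC client and server share.
syntax = "proto3";

service ExpenseService {
  rpc GetExpenses (ExpenseQuery) returns (ExpenseList);
}

message ExpenseQuery { string category = 1; }

message Expense {
  string id = 1;
  double amount = 2;
  string category = 3;
}

message ExpenseList { repeated Expense expenses = 1; }
```

Because both sides generate their stubs from this one file, the contract stays consistent as the system grows into a distributed environment.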
4.2 Features of the Expense Tracker API
• Add Expense: allows a user to create a new expense entry by providing details like
amount, description, category, and date.
• Filter by Category: returns expenses that match a specific category name provided as
a query parameter.
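These two features can be sketched as plain functions (hypothetical names, not the project's actual code); the filter mirrors a GET /expenses?category=<name> query:

```javascript
// Illustrative sketch of the create and filter-by-category features.
function createExpense(expenses, { amount, description, category, date }) {
  if (typeof amount !== "number" || amount <= 0) {
    throw new Error("amount must be a positive number");
  }
  const entry = { id: expenses.length + 1, amount, description, category, date };
  expenses.push(entry);
  return entry;
}

// Mirrors GET /expenses?category=<name>: case-insensitive category match.
function filterByCategory(expenses, category) {
  const wanted = category.toLowerCase();
  return expenses.filter((e) => e.category.toLowerCase() === wanted);
}

const db = [];
createExpense(db, { amount: 120, description: "Lunch", category: "Food", date: "2025-03-01" });
createExpense(db, { amount: 50, description: "Bus", category: "Travel", date: "2025-03-02" });
```

In the real API the same logic sits behind Express-style routes with the validation errors mapped to HTTP 400 responses.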
4.3 Technology Stack
• controller/controller.js: Contains core logic for handling CRUD operations on
expenses.
Sample Route Snippet
The API was tested using Postman and documented with Swagger UI. Screenshots
include the Swagger endpoint overview and a Postman test of the Signup API with
request and response details.
Figure 4: Swagger UI for Expenses
Figure 5: Postman tool testing
5. Blog-Post API
5.1 Introduction
This project served as both a learning milestone and a proof of concept for building
scalable, maintainable APIs using the Node.js ecosystem. It allowed me to apply practical
knowledge in a structured way, gaining confidence in designing backend services from
scratch.
The primary goal of the Blog Post API was to build a functional REST API for managing
blog posts, focusing on clean design, modularity, and secure access control. Through this
self-guided build, I aimed to strengthen my understanding of:
The purpose of the Blog Post API is to provide a structured and secure backend service
for managing blog content in a scalable and modular manner. This API enables users to
perform essential content management operations such as creating, reading, updating, and
deleting blog posts. It serves as a foundational learning project, allowing practical
implementation of RESTful principles, middleware integration, authentication, and
database interaction.
The scope of the API is intentionally focused on core CRUD functionalities within a
single-user or small-scale blogging platform context. Security is enforced through JWT-
based authentication to ensure protected access to write operations. While the current
implementation is streamlined for simplicity and educational clarity, it lays the
groundwork for potential future enhancements such as user roles, commenting systems,
content categorization, and public-facing read-only endpoints.
The Blog Post API was designed with a focus on both user authentication and content
management functionality. Below are the core features that were implemented as part of
the API development:
• POST /logout – Logs the user out by invalidating the session using an identifier.
• PATCH /send-forgot-password-code – Sends a password reset code to the user's
registered email.
5.4 Folder Structure and Key Code Snippets
The Blog Post API project is organized following a modular and scalable folder structure,
which promotes separation of concerns, code maintainability, and clarity. Below is an
overview of the key components and their responsibilities:
• controllers/authControllers.js: Handles user authentication, including signup,
login, verification, and password reset functionalities.
Sample Controller Snippet: Create Post
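The original snippet appears in the report as a screenshot; the sketch below reconstructs an Express-style controller of the same general shape. Post.create is a hypothetical stand-in for the real data layer (the actual controller would await an asynchronous MongoDB call), and req.userId is assumed to be set by the JWT auth middleware:

```javascript
// Illustrative create-post controller in the Express style; `Post.create`
// stands in for the real (asynchronous) data layer, and all names here
// are hypothetical.
const Post = {
  create({ title, content, author }) {
    return { _id: "p1", title, content, author };
  },
};

function createPost(req, res) {
  const { title, content } = req.body;
  if (!title || !content) {
    return res.status(400).json({ error: "title and content are required" });
  }
  // req.userId is assumed to have been set by the JWT auth middleware.
  const post = Post.create({ title, content, author: req.userId });
  return res.status(201).json({ success: true, post });
}
```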
5.5 Postman test case screenshots
The Blog Post API was tested using Postman. The following screenshots showcase the
API endpoints and demonstrate request and response details, including tests of the
signup, login, get-all-posts, and delete-post functionality.
Figure 10: Get all posts feature testing using Postman
6. Documentation of IO Flow
6.1 Introduction
The documentation was prepared for an internal enterprise service named IO Flow. IO
Flow is a proprietary platform developed to automate and manage workflow processes
within the organization. It plays a critical role in orchestrating tasks, streamlining
communication between services, and ensuring the consistency and reliability of business
operations.
To gain a comprehensive understanding of the IO Flow service, the following tool was
utilized:
• Swagger UI: Employed to explore available API endpoints, examine request and
response structures, and analyze the overall functionality of the service.
Standard authoring tools were then used to create and organize the technical
documentation.
The primary objective of the documentation was to produce a clear, structured, and
comprehensive reference guide for the IO Flow service. The documentation was intended
to:
• Enhance team communication and alignment by providing a shared understanding
of the system.
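For illustration, a single documented route in OpenAPI form might look like the fragment below; the endpoint name and fields are hypothetical, not IO Flow's actual API:

```yaml
# Hypothetical endpoint, shown only to illustrate the level of detail the
# Swagger/OpenAPI documentation captures for each route.
paths:
  /workflows/{id}/trigger:
    post:
      summary: Trigger a workflow run
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string }
      responses:
        "202":
          description: Run accepted and queued
        "404":
          description: Workflow not found
```

Capturing parameters, schemas, and response codes in this structured form is what allows Swagger UI to render an explorable, always-current reference.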
The documentation benefited the project and team by serving as a structured reference
for the IO Flow service and a reliable onboarding resource for new developers.
7. HRMS Project (RBAC Authentication Service)
7.1 Introduction
For the HRMS (Human Resource Management System), the development of an RBAC
authentication service was essential. It ensures that users access the system securely and
in alignment with their responsibilities. This authentication service helps maintain a
structured, secure, and scalable system by ensuring that users can only interact with
relevant features based on their roles.
7.2 Objectives
The RBAC authentication service was developed with the following objectives:
• Secure Access Control: The primary purpose of the RBAC authentication service
was to implement a secure way of managing user access. By creating different
roles (Admin, HR, Employee, etc.), each role is associated with specific
permissions, ensuring that only authorized users can access sensitive parts of the
HRMS system.
• Ensure Compliance and Security: The RBAC service helps the organization
comply with security standards and regulations by preventing unauthorized access
to sensitive data and ensuring that users only have access to the parts of the
system that are necessary for their job.
• Enhance Scalability: As the HRMS system grows and more users are added, the
RBAC authentication service is designed to scale efficiently. New roles and
permissions can be easily integrated into the system as the organization's needs
evolve.
7.3 Technology Stack
The RBAC authentication service was developed using the following technology stack:
• Java Spring Boot: Spring Boot was chosen for its robust, scalable, and secure
features, making it the ideal choice for building the authentication service. Spring
Security was integrated to handle user authentication and authorization, ensuring
users are properly authenticated and their access rights are assigned based on their
roles.
• MySQL: The database layer is built using MySQL, which provides reliable and
efficient relational data storage for user, role, permission, and module information.
MySQL’s relational structure supports the entities and relationships necessary for
a dynamic and scalable RBAC system.
The development of the RBAC authentication service has had a substantial impact on the
HRMS project by laying the foundation for secure user management and role-based
access control:
• Enhanced Security: By implementing RBAC, the service ensures that each user
has access only to the parts of the HRMS system relevant to their role. This
reduces the risk of unauthorized data access and strengthens the security of the
system.
• Scalability and Flexibility: As the HRMS system continues to develop and grow,
the RBAC authentication service allows for easy addition of new roles and
permissions without disrupting the system’s functionality. This flexibility ensures
the HRMS can evolve as the organization’s needs change.
• Compliance and Auditing: The service helps the HRMS meet security and
compliance standards by providing a structured method for controlling access and
ensuring that sensitive data is protected. It also facilitates auditing by tracking
which users have access to specific features within the system.
Through the development of the RBAC authentication service, the project has gained a
secure, scalable, and flexible user management system, which is essential for the further
development of the HRMS platform.
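The service itself is built in Java Spring Boot; the core role-to-permission check it performs can nonetheless be sketched in a few lines of JavaScript (role and permission names invented for illustration, not the HRMS's actual model):

```javascript
// Language-agnostic sketch of the RBAC check, in JavaScript for brevity.
// The real HRMS service implements this in Java Spring Boot with the
// role/permission data stored in MySQL.
const rolePermissions = {
  ADMIN: ["user:read", "user:write", "role:manage", "payroll:read"],
  HR: ["user:read", "user:write", "payroll:read"],
  EMPLOYEE: ["user:read"],
};

// A request is allowed if any of the user's roles grants the permission.
function hasPermission(userRoles, permission) {
  return userRoles.some((role) =>
    (rolePermissions[role] ?? []).includes(permission)
  );
}
```

Because new roles are just new entries in the role-to-permission mapping, the scheme scales without touching the check itself, which is the flexibility described above.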
Login Controller
This snippet demonstrates the controller responsible for handling user login requests. It
includes endpoint mappings for authenticating users and issuing tokens based on their
credentials and roles. This forms the entry point to the secured HRMS system, ensuring
only authorized users can access protected resources.
This snippet represents the Data Transfer Object (DTO) used for handling role creation or
modification requests. It encapsulates role data passed from the client and ensures a clean
separation between the API layer and internal data models, improving code
maintainability and clarity.
8. Twitter (X) Sentiment Analysis
8.1 Introduction
This chapter describes the development of a Twitter Sentiment Analysis service created
as part of a larger project. The objective was to automate the process of fetching tweets
mentioning a specific handle, analyze the sentiment of each post using Natural Language
Processing (NLP) techniques, and store the processed data in a structured database. The
system was designed to operate on a scheduled basis to ensure continuous sentiment
monitoring.
The primary purpose of the Twitter Sentiment Analysis service was to build an automated
system capable of:
• Fetching tweets that mention a specific handle on a schedule.
• Analyzing the sentiment of each tweet using NLP techniques.
• Storing the original tweet content along with its sentiment result in a local
database.
This service was developed as a supporting component for a larger system, with an
emphasis on automation, accuracy, and efficiency.
• Database Storage: All tweet content and sentiment outcomes were stored in a
SQLite database for easy access and traceability.
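The real sentiment step relies on Python NLP libraries; as a toy illustration of the idea, mapping text to a positive/negative/neutral label, a lexicon-based scorer can be sketched like this (JavaScript, illustrative only):

```javascript
// Toy lexicon-based sentiment scorer (the real service uses Python NLP
// libraries; this only illustrates the text-to-label idea).
const POSITIVE = new Set(["great", "love", "excellent", "good", "happy"]);
const NEGATIVE = new Set(["bad", "terrible", "hate", "awful", "poor"]);

function labelSentiment(text) {
  let score = 0;
  for (const word of text.toLowerCase().match(/[a-z']+/g) ?? []) {
    if (POSITIVE.has(word)) score += 1;
    if (NEGATIVE.has(word)) score -= 1;
  }
  return score > 0 ? "positive" : score < 0 ? "negative" : "neutral";
}
```

Real NLP libraries handle negation, sarcasm, and context far better than a word list, which is why the service delegates this step to them.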
8.3 Project Structure and Key Code Snippets
• main.py: Entry point of the application. Initializes and coordinates the execution
of tweet fetching, sentiment analysis, and database storage.
• scheduler/job.py: Contains the scheduled task logic that periodically triggers the
entire data fetching and analysis pipeline.
• db/models.py: Defines the database schema for storing tweet content and
corresponding sentiment labels using SQLite.
• tweets.db: The SQLite database file where tweet data and sentiment analysis
results are stored.
• Database: SQLite3
• Job Scheduling: Built-in Python scheduler using time and loop logic or
APScheduler
Job Snippet: runs the pipeline at fixed intervals.
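The actual job is Python (a timed loop or APScheduler); the pattern, a job function that runs the whole fetch/analyze/store pipeline on every tick, can be sketched as follows, with stub stages standing in for the real ones. In production, setInterval(tick, intervalMs) or APScheduler would drive the ticks:

```javascript
// Sketch of the scheduled-job pattern (the real scheduler is Python).
// makePipelineJob wires the three stages into one tick function.
function makePipelineJob(fetchTweets, analyze, store) {
  let runs = 0;
  return function tick() {
    const tweets = fetchTweets();
    for (const t of tweets) store({ ...t, sentiment: analyze(t.text) });
    runs += 1;
    return runs; // number of completed pipeline runs
  };
}

// Stub stages so the flow can be exercised directly; in production these
// would be the Twitter fetcher, the NLP analyzer, and the SQLite writer.
const stored = [];
const tick = makePipelineJob(
  () => [{ id: 1, text: "love it" }],
  (text) => (text.includes("love") ? "positive" : "neutral"),
  (row) => stored.push(row)
);
```

Keeping the pipeline behind a single tick function means the scheduling mechanism can be swapped (timed loop, APScheduler, cron) without touching the pipeline itself.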
9. Current Work
9.1 Introduction
As part of the ongoing responsibilities during the internship engagement, current efforts
are focused on comprehensively understanding three important internal microservices
within the enterprise system architecture — the User Management Service, the Page
Service, and the Widget Service. These services are essential components of the broader
product ecosystem, playing distinct roles in user handling, dynamic content generation,
and visual composition within the platform.
The primary objective of this phase is to develop a clear and practical understanding of
how these services function individually and interact with each other. This includes
reviewing the codebase, exploring the API structure, and participating in real tasks under
the guidance of an industry mentor. This preparation is essential for contributing
effectively to the ongoing development and maintenance of the system.
9.3.1 User Management Service
This service is responsible for handling user-related data and functionality. It
manages information such as:
• Structured user profile data used across various modules in the application
9.3.2 Page Service
This service is responsible for managing the creation and configuration of dynamic
"pages" within the platform. These pages often act as containers that host various
functional or visual components.
9.3.3 Widget Service
Closely linked with the Page Service, the Widget Service handles the creation,
configuration, and rendering logic for individual widgets. Widgets are the modular
elements that populate pages, providing both content and interactivity.
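As a hypothetical sketch of the page/widget relationship described above (these are invented shapes, not the services' actual data models), a page can be treated as a container of widget references that the Widget Service resolves:

```javascript
// Hypothetical data shapes illustrating the page/widget relationship;
// not the actual models used by the Page or Widget Service.
const widgets = new Map([
  ["w-chart", { type: "chart", config: { metric: "signups" } }],
  ["w-table", { type: "table", config: { source: "users" } }],
]);

function renderPage(page) {
  // The Page Service would resolve each widget id via the Widget Service.
  return page.widgetIds.map((id) => {
    const w = widgets.get(id);
    return w ? { id, type: w.type } : { id, type: "missing" };
  });
}

const dashboard = { name: "Dashboard", widgetIds: ["w-chart", "w-table"] };
```

Separating the container (page) from the components (widgets) is what lets the same widget be reused across many pages, the modularity noted above.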
To ensure a systematic and effective learning process, methods such as guided codebase
walkthroughs, API exploration, and hands-on tasks under mentor supervision are being
utilized.
Understanding these services in depth enables better alignment with the organization’s
product goals. Since they represent critical infrastructure components, mastering them
enhances the ability to take on future implementation and debugging tasks more
confidently. It also ensures that any feature development or customization work is built
on a solid understanding of the system’s architecture and data flows.
10. EncureIt Internship Overview
10.1 Introduction
Over the course of the training, I acquired skills in the following key areas:
• HTML Topics: Introduction to HTML, tags, forms, media embedding, tables, and
webpage structure.
• CSS Topics: Types of CSS (inline, internal, external), layout management using
Flexbox and Grid, and application of various style properties.
• Built real-world web page replications including the EncureIT and Netflix
websites using responsive techniques.
• Explored JavaScript syntax, variables, data types, functions, objects, loops, and
events.
10.2.4 SQL and MySQL
• Learned database fundamentals and how to interact with MySQL using DDL,
DML, DCL, TCL, and DQL commands.
• Applied clauses such as WHERE, GROUP BY, HAVING, ORDER BY, and
JOINs (INNER, LEFT, RIGHT, etc.).
10.2.5 PHP
• Implemented PHP features like variables, data types, loops, functions, arrays,
and file handling operations.
This training program played a crucial role in strengthening my understanding of the core
building blocks of web development. I gained:
• An introductory experience in PHP for backend scripting
11. Conclusion
I learned how to navigate live codebases, understand API documentation, and work with
development tools in a production-like setting. Exposure to modular software design,
version control practices, and regular code reviews enhanced my ability to write cleaner,
maintainable code and understand software systems at a broader level.
This internship has not only expanded my technical toolkit but also shaped my mindset
toward continuous learning and collaboration. It has played a pivotal role in aligning my
career direction with full-stack software development and has given me the confidence to
contribute meaningfully in a professional software engineering environment.