Mini project Main Documentation
CHAPTER 1
INTRODUCTION
1.1 GENERAL
Because of the promising market prospects, an increasing number of companies (e.g., Microsoft, Amazon, Alibaba) offer data owners cloud storage services with different prices, security guarantees, access speeds, etc. To enjoy a more suitable cloud storage service, data owners might change cloud storage service providers. Hence, they might migrate their outsourced data from one cloud to another and then delete the transferred data from the original cloud. According to Cisco [7], cloud traffic is expected to account for 95% of total traffic by the end of 2021, and almost 14% of the total cloud traffic will be traffic between different cloud data centers. Foreseeably, outsourced data transfer will become a fundamental requirement from the data owners' point of view.
Encrypting the data before outsourcing can prevent data disclosure in the transfer phase, but there are still some security problems in cloud data migration and deletion. Firstly, to save network bandwidth, the cloud server might migrate only part of the data, or even deliver some unrelated data to cheat the data owner. Secondly, because of network instability, some data blocks may be lost during the transfer process; meanwhile, an adversary may destroy the transferred data blocks. Hence, the transferred data may be polluted during the migration process. Last but not least, the original cloud server might maliciously retain the transferred data to dig for implicit benefits. Such data retention is unexpected from the data owners' point of view. In short, the cloud storage service is economically attractive, but it inevitably suffers from some serious security challenges, specifically secure data transfer, integrity verification, and verifiable deletion. These challenges, if not solved suitably, might prevent the public from accepting and employing cloud storage service.
In this work, we study the problems of secure data transfer and deletion in
cloud storage, and focus on realizing the public verifiability. Then we propose a
counting Bloom filter-based scheme, which not only can realize provable data
transfer between two different clouds but also can achieve publicly verifiable data
deletion. If the original cloud server does not migrate or remove the data honestly,
the verifier (the data owner and the target cloud server) can detect these malicious
operations by verifying the returned transfer and deletion evidence. Moreover, our proposed scheme does not need any trusted third party (TTP), which is different
from the existing solutions. Furthermore, we prove that our new proposal can
satisfy the desired design goals through security analysis. Finally, the simulation
experiments show that our new proposal is efficient and practical.
Verifiable data deletion has been well studied for a long time, resulting in many solutions [12−18]. Xue et al. studied the goal of secure data deletion and put forward a key-policy attribute-based encryption scheme, which can achieve fine-grained data access control and assured deletion. They achieve data deletion by removing the attribute and use a Merkle hash tree (MHT) to achieve verifiability, but their scheme requires a trusted authority. Du et al. designed an associated deletion scheme for multi-copy (ADM), which uses a pre-deleting sequence and an MHT to achieve data integrity verification and provable deletion. However, their scheme also requires a TTP to manage the data keys. In 2018, Yang et al. presented a blockchain-based cloud data deletion scheme, in which the cloud executes the deletion operation and publishes the corresponding deletion evidence on the blockchain. Then any verifier can check the deletion result by verifying the deletion proof. Besides, they remove the bottleneck of requiring a TTP. Although these schemes can all achieve verifiable data deletion, they cannot realize secure data transfer.
The proposed scheme not only can achieve secure data transfer but also can
realize permanent data deletion. Additionally, the proposed scheme can satisfy the
public verifiability without requiring any trusted third party. Finally, we also
develop a simulation implementation that demonstrates the practicality and
efficiency of our proposal. Moreover, the cloud A should adopt a CBF to generate deletion evidence after deletion, which the data owner will use to verify the deletion result. Hence, the cloud A cannot behave maliciously and cheat the data owner successfully. Finally, the security analysis and simulation results validate the security and practicability of our proposal, respectively.
1.3 OBJECTIVE
In our scheme, we aim to achieve verifiable data transfer between two different clouds and reliable data deletion in cloud storage. Hence, three entities are included in our new construction, and we construct a new counting Bloom filter-based scheme in this paper. The proposed scheme not only can achieve secure data transfer but also can realize permanent data deletion. Additionally, the proposed scheme can satisfy public verifiability without requiring any trusted third party. Finally, we also develop a simulation implementation that demonstrates the practicality and efficiency of our proposal.
In existing approaches, securely migrating the data from one cloud to another and permanently deleting the transferred data from the original cloud has become a primary concern of data owners. In short, the cloud storage service is economically attractive, but it inevitably suffers from some serious security challenges, specifically concerning secure data transfer, integrity verification, and verifiable deletion. These challenges, if not solved suitably, might prevent the public from regarding cloud storage service as practical. Moreover, the existing approaches do not maintain public verifiability.
Title: A Secure and Efficient Data Deletion Mechanism in Cloud Storage Using
Counting Bloom Filters.
Year: 2023.
Description:
With the fast growth of cloud storage, a growing number of data owners are opting to outsource their data to a cloud server, which may significantly reduce the local storage overhead. Because various cloud service providers provide varying levels of data storage service, such as security, dependability, access speed, and pricing, cloud data transfer has become a must-have for data owners looking to switch cloud service providers. As a result, the data owners' major issue is how to safely migrate data from one cloud to another while also permanently deleting the transferred data from the original cloud. In this work, we propose a novel counting Bloom filter-based technique to tackle this problem. Not only can the suggested method provide safe data transport, but it can also ensure permanent data erasure. Furthermore, the suggested system may meet public verifiability requirements without the need for a trusted third party. Finally, we provide a simulation implementation to illustrate our proposal's feasibility and efficiency. Cloud computing is the fusion and evolution of parallel computing, distributed computing, and grid computing as a new computing paradigm. Cloud storage is one of the most appealing cloud computing services because it allows customers to have convenient data storage and business access by connecting a large number of dispersed storage devices in a network. Users can outsource their data to a cloud server.
Year: 2022.
Description:
Bloom filter (BF) has been widely used to support membership query, i.e., to judge whether a given element x is a member of a given set S or not. Recent years have seen an explosion of BF designs due to its space efficiency and its constant-time membership query. The existing reviews or surveys mainly focus on the applications of BF, but fall short in covering the current trends, thereby lacking an intrinsic understanding of their design philosophy. To this end, this survey provides an overview of BF and its variants, with an emphasis on the optimization techniques. Basically, the survey examines the existing variants along two dimensions, i.e., performance and generalization. To improve performance, dozens of variants devote themselves to reducing the false positives.
Year: 2018
Description:
The exponential growth of digital data in cloud storage systems is a critical issue
presently as a large amount of duplicate data in the storage systems exerts an extra
load on it. Deduplication is an efficient technique that has gained attention in large-
scale storage systems. Deduplication eliminates redundant data, improves storage
utilization and reduces storage cost. This paper presents a broad methodical
literature review of existing data deduplication techniques along with various
existing taxonomies of deduplication techniques that have been based on cloud
data storage. Furthermore, the paper investigates deduplication techniques based
on text and multimedia data along with their corresponding taxonomies as these
techniques have different challenges for duplicate data detection. This research
work is useful to identify deduplication techniques based on text, image and video
data. It also discusses existing challenges and significant research directions in deduplication for future researchers, and the article concludes with a summary of valuable suggestions for future enhancements in deduplication.
The proposed scheme not only can achieve secure data transfer but also can realize permanent data deletion.
We prove that our new proposal can satisfy the desired design goals through security analysis.
Our new proposal is more efficient and practical.
CHAPTER 2
PROJECT DESCRIPTION
2.1 GENERAL
The data owner encrypts the data and outsources the ciphertext to the cloud A.
Then he checks the storage result and deletes the local backup. Later, the data
owner may change the cloud storage service provider and migrate some data from
cloud A to cloud B.
2.2 METHODOLOGIES
Data Owner
Cloud A
Cloud B
To connect with the server, the user must give their username and password; only then can they connect to the server. If the user already exists, they can directly log in to the server; otherwise, the user must register their details such as username, password, Email id, City and Country with the server. The database will create an account for every user to maintain upload and download rates. The name will be set as the user id. Logging in is usually used to enter a specific page, which will process the query and display the result.
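As an illustration of this login flow, the following is a minimal, hypothetical servlet sketch; the class, the SecurityDAO.checkLogin method, and the redirect page names are assumptions made for this documentation, not the project's exact code.

package servlet;

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import dao.SecurityDAO;   // assumed to expose a checkLogin(name, password) lookup

@WebServlet("/LoginServlet")
public class LoginServlet extends HttpServlet {
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String name = request.getParameter("name");
        String password = request.getParameter("password");

        // Look the account up in the database: a registered user is taken to the
        // home page, anyone else is sent back to the login page with an error message.
        boolean valid = new SecurityDAO().checkLogin(name, password);
        if (valid) {
            request.getSession().setAttribute("user", name);
            response.sendRedirect("home.jsp");
        } else {
            response.sendRedirect("login.jsp?status=Invalid username or password");
        }
    }
}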
(Module flow diagram: User Page → Error Message / Database)
Data owner
This is the second module in our project, covering the Data Owner's operations. The Data Owner has to register and log in with a valid username and password. After a successful login he can perform operations such as viewing user details. If the Data Owner wants to upload data, he clicks on Upload Data, and the storage status shows the memory used in cloud A. If the Data Owner wants to view files in Cloud A, he clicks on View Files. If the Data Owner wants to see the transfer or delete result of his files, he clicks on Transfer Result / Delete Result; the transferred files reside in Cloud B.
(Data Owner module flow diagram: Data Owner → Login Page → Database)
Cloud A
This is the third module in our project, where Cloud A plays the main server role. Enter the Cloud A name and password, then log in to the application; the credentials are first verified in the database and then the home page is displayed. When Cloud A clicks on View Users, it shows the registered Data Owners; when Cloud A clicks on Accept User Files, it shows the user files awaiting acceptance. If Cloud A wants to view user files, it clicks on View User Files. When it clicks on Transfer Request from DO, it shows the transfer request files from the Data Owner and sends them to Cloud B for transfer. When it clicks on Feed Back, it shows the feedback messages from clients.
(Cloud A module flow diagram: Cloud A → Login Page → View Users & Accept User Files, View User Files, Transfer Request from DO, Delete Request from DO → Database)
Cloud B
This is the fourth module in our project, where Cloud B plays the target cloud server role. Enter the Cloud B name and password, then log in to the application; the credentials are first verified in the database and then the home page is displayed. If Cloud B wants to view user files, it clicks on View User Files in Cloud B. If Cloud B wants to accept a transfer request, it clicks on Transfer Request from Cloud A and accepts it. If Cloud B wants to view a delete request, it clicks on Delete Request from Cloud A and accepts it.
(Cloud B module flow diagram: Cloud B → Login Page → Database)
Output: If the username and password are valid, the home page opens directly; otherwise an error message is shown and the user is redirected to the registration page.
Data Owner
Output: If the Data Owner name and password are valid, the Data Owner home page opens directly; otherwise an error message is shown. If the Data Owner wants to upload data, he clicks on Upload Data, and the storage status shows the memory used in cloud A.
Cloud A
Output: Cloud A verifies all Data Owner requests and accepts the DO's data; the data is then sent back to the DO. The admin also verifies all data statuses and the DO's feedback.
Cloud B
Output: If the Cloud B name and password are valid, the Cloud B home page opens directly, with all the available options. If Cloud B wants to accept a transfer request, it clicks on Transfer Request from Cloud A and accepts it.
We propose a CBF-based secure data transfer scheme, which can also realize
verifiable data deletion. In our scheme, the cloud B can check the transferred data
integrity, which can guarantee the data is entirely migrated. Moreover, the cloud A should adopt a CBF to generate deletion evidence after deletion, which the data owner will use to verify the deletion result.
A counting Bloom filter (CBF) replaces each bit of a standard Bloom filter with a small counter: insertion increments the counters selected by the hash functions, and deletion involves decrementing them; if all counters for an element reach zero, it is effectively removed from the set. This structure enables efficient membership queries, producing false positives but no false negatives, meaning that if the CBF indicates an element is absent, it definitely is. The advantages of CBFs include
their ability to handle dynamic updates and their space efficiency, making them
suitable for memory-constrained environments. However, they can face issues like
counter overflow and increased memory usage compared to traditional Bloom
Filters. Common applications include network traffic monitoring, database
management, data deduplication, and access control in cloud systems. Effective
implementation requires careful selection of hash functions and counter sizes to
prevent overflow, while performance optimizations may be necessary based on
usage patterns. Overall, the Counting Bloom Filter is a versatile tool for managing
large sets of data, especially in cloud computing and large-scale data management
scenarios.
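To make the counter behaviour concrete, the following is a minimal Java sketch of a counting Bloom filter. The class name, the filter size, and the SHA-256 double-hashing construction are illustrative assumptions, not the exact parameters of the proposed scheme.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal counting Bloom filter sketch (illustrative parameters, not the scheme's exact construction).
public class CountingBloomFilter {
    private final int[] counters;   // one small counter per position
    private final int numHashes;    // number of hash functions k

    public CountingBloomFilter(int size, int numHashes) {
        this.counters = new int[size];
        this.numHashes = numHashes;
    }

    // Derive k positions from a SHA-256 digest using the double-hashing trick.
    private int[] positions(String element) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                    .digest(element.getBytes(StandardCharsets.UTF_8));
            int h1 = ((d[0] & 0xff) << 24) | ((d[1] & 0xff) << 16) | ((d[2] & 0xff) << 8) | (d[3] & 0xff);
            int h2 = ((d[4] & 0xff) << 24) | ((d[5] & 0xff) << 16) | ((d[6] & 0xff) << 8) | (d[7] & 0xff);
            int[] pos = new int[numHashes];
            for (int i = 0; i < numHashes; i++) {
                pos[i] = Math.floorMod(h1 + i * h2, counters.length);
            }
            return pos;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Insertion increments the k counters chosen by the hash functions.
    public void insert(String element) {
        for (int p : positions(element)) counters[p]++;
    }

    // Deletion decrements the same counters; once they all reach zero the
    // element no longer appears to be in the set.
    public void delete(String element) {
        for (int p : positions(element)) {
            if (counters[p] > 0) counters[p]--;
        }
    }

    // Membership query: possibly present only if every counter is non-zero
    // (false positives are possible, false negatives are not).
    public boolean mightContain(String element) {
        for (int p : positions(element)) {
            if (counters[p] == 0) return false;
        }
        return true;
    }
}

Under this sketch, the cloud A could return such a filter, built over the tags of the blocks it still stores, as deletion evidence; the data owner would then check that the deleted block's tag no longer reports membership while the retained tags still do. This only illustrates the idea and is not the full evidence-generation algorithm of the proposed scheme.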
CHAPTER 3
REQUIREMENTS ENGINEERING
3.1 GENERAL
Data confidentiality means that an adversary cannot obtain any plaintext information without the corresponding data decryption key. In our scheme, the data owner uses the IND-CPA secure AES algorithm to encrypt the file.
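As a rough illustration of this owner-side encryption step, the sketch below uses AES-GCM with a fresh random nonce, one standard IND-CPA secure way to instantiate AES in Java; the class and method names are assumptions, not the project's exact code.

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Illustrative owner-side encryption sketch (names and parameters are assumptions).
public class OwnerSideEncryption {

    // The data owner generates and keeps this key locally.
    public static SecretKey freshKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    // Encrypt one file block before outsourcing it to the cloud A.
    public static byte[] encryptBlock(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];                 // 96-bit nonce, fresh for every block
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        // Store the nonce alongside the ciphertext so the block can be decrypted later.
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}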
The hardware requirements may serve as the basis for a contract for the implementation of the system and should therefore be a complete and consistent specification of the whole system. They are used by software engineers as the starting point for the system design. They should state what the system should do, not how it should be implemented.
HARDWARE
PROCESSOR : PENTIUM IV 2.6 GHz, Intel Core 2 Duo
RAM : 512 MB DDR RAM
MONITOR : 15” COLOR
HARD DISK : 40 GB
3.3 SOFTWARE REQUIREMENTS
The software requirements specification serves as the basis for estimating cost, planning team activities, performing tasks, and tracking the team's progress throughout the development activity.
HTML (Hyper Text Markup Language) is the standard language used to create web pages. It is a combination of Hypertext and Markup language, where Hypertext defines the link between web pages, and Markup defines the text document within tags to structure the web pages. Here are some of the key features of HTML:
HTML is easy to learn and use. It uses tags to structure the content, making it human-readable and easy for browsers to interpret. This simplicity allows developers to create web pages efficiently.
Semantic Structure
HTML5 introduced semantic tags that provide meaning to the web content. Tags like <article>, <aside>, <header>, <footer>, and <nav> help in defining the structure of the web page, making it more accessible and SEO-friendly.
Media Support
HTML supports the inclusion of images, videos, and audio in web pages. HTML5 introduced <video> and <audio> tags, making it easier to embed multimedia content. This enhances the user experience by providing rich media content.
HTML stands for Hyper Text Markup Language. It is used to structure the
content on the web by using various elements (commonly known as tags).
These HTML elements define the different sections of a web page, such as
headings, paragraphs, links to other webpages, listings, images, tables, etc.
These elements tell the browser about the content and formatting to display.
Hyper Text refers to the way in which Web pages (HTML documents) are linked together. Thus, the link available on a webpage is called "Hyper Text".
Markup Language, which means you use HTML to simply "mark up" a
text document with tags that tell a Web browser how to structure it to
display.
CSS (Cascading Style Sheets) is a powerful language used to style and layout web
pages. It allows developers to control the appearance of HTML elements, making
websites more visually appealing and user-friendly. Here are some key features of
CSS:
Selectors are a core concept in CSS that allow you to target specific HTML elements for styling. They can be based on IDs, classes, element names, and other attributes. Understanding selector specificity is crucial for determining which styles take precedence when multiple rules apply to the same element.
Cascading
The term "cascading" refers to how styles are applied in CSS. Styles can be
inherited from parent elements, and more specific rules can override these
inherited styles. This cascading effect allows for a flexible and consistent
styling system across a website
Box Model
The box model is a fundamental concept in CSS that describes how elements
are structured on a webpage. Each HTML element is considered a "box"
with properties such as content, padding, border, and margin. Understanding
the box model is essential for creating layouts and positioning elements on a
webpage.
Transitions and Animations
CSS provides tools for adding interactivity and animation to web pages. Properties like transition and animation allow developers to create smooth visual effects.
Print Friendly Styles: CSS can define how a webpage should be printed, optimizing content for printing.
Optimizing Page Load Times: CSS can reduce the amount of data needed to be downloaded, improving page load times.
Creating Print and PDF Documents: CSS can be used to create printable versions of web pages.
CSS is among the core languages of the open web and is standardized
across Web browsers according to W3C specifications. Previously, the
development of various parts of CSS specification was done synchronously,
which allowed the versioning of the latest recommendations. You might
have heard about CSS1, CSS2.1, or even CSS3. There will never be a CSS3
or a CSS4; rather, everything is now just "CSS" with individual CSS
modules having version numbers.
After CSS 2.1, the scope of the specification increased significantly and the
progress on different CSS modules started to differ so much, that it became
more effective to develop and release recommendations separately per
module. Instead of versioning the CSS specification, W3C now periodically
takes a snapshot of the latest stable state of the CSS specification and
individual modules progress. CSS modules now have version numbers, or
levels, such as CSS Colour Module Level 5.
Dynamic Typing
Functional Style
function greet(name) {
    return "Hello, " + name + "!";   // functions are first-class values in JavaScript
}
Platform Independent
Prototype-Based Language
Interpreted Language
Form Validation: JavaScript can validate user input on the client side before
sending data to the server, reducing server load and providing immediate
feedback to users.
Data types
Let's start off by looking at the building blocks of any language: the types.
JavaScript programs manipulate values, and those values all belong to a
type. JavaScript offers seven primitive types:
Number: used for all number values (integer and floating point) except
for very big integers.
Function
Array
Date
RegExp
Error
Data Owner
1. Upload Data
2. Storage Status
3. View Files
5. Logout
Cloud A
1. View Users
6. Feed Back
7. Logout
Cloud B
1. View User Files in Cloud B
4. Logout
Usability
Reliability
The system is more reliable because of the qualities inherited from the chosen platform, Java. Code built using Java is more reliable.
Performance
This system is developed in high-level languages, and by using advanced front-end and back-end technologies it responds to the end user on the client system in very little time.
Supportability
Implementation
CHAPTER 4
DESIGN ENGINEERING
4.1 GENERAL
In our scheme, we aim to achieve verifiable data transfer between two different clouds and reliable data deletion in cloud storage. Hence, three entities are included in our new construction, as shown in Fig.3. In our scenario, the resource-constrained data owner might outsource his large-scale data to the cloud server A to greatly reduce the local storage overhead. Besides, the data owner might require the cloud A to move some data to the cloud B, or delete some data from the storage medium. The cloud A and cloud B provide the data owner with cloud storage service. We assume that the cloud A is the original cloud, which will be required to migrate some data to the target cloud B and remove the transferred data. However, the cloud A might not execute these operations sincerely for economic reasons.
Moreover, we assume that the cloud A and cloud B will not collude together
to mislead the data owner because they belong to two different companies. Hence,
the two clouds will independently follow the protocol. Furthermore, we assume
that the target cloud B will not maliciously slander the original cloud A.
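To give a feel for what the target cloud's integrity check involves, the following is a toy Java sketch in which cloud B re-hashes each received block and compares it against the tag the data owner expects; the class, the method, and the use of plain SHA-256 tags are assumptions for illustration, not the scheme's actual verification algorithm.

import java.security.MessageDigest;
import java.util.Map;

// Toy illustration: cloud B re-hashes each received block and compares it with
// the tag expected by the data owner (a stand-in for the scheme's real check).
public class TransferIntegrityCheck {

    public static boolean allBlocksIntact(Map<String, byte[]> receivedBlocks,
                                          Map<String, String> expectedTags) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        for (Map.Entry<String, byte[]> e : receivedBlocks.entrySet()) {
            String tag = toHex(sha256.digest(e.getValue()));
            // A missing or mismatching tag means the block was lost, polluted, or replaced.
            if (!tag.equals(expectedTags.get(e.getKey()))) {
                return false;
            }
        }
        // Every expected block must actually have been received.
        return receivedBlocks.keySet().containsAll(expectedTags.keySet());
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}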
4.3 UML
UML stands for Unified Modelling Language, a standardized general-purpose visual modelling language in the field of software engineering. It is used for specifying, visualizing, constructing, and documenting the primary artifacts of a software system. It helps in designing and characterizing, in particular, those software systems that incorporate the concept of object orientation. It describes the working of both software and hardware systems.
The UML was developed in 1994-95 by Grady Booch, Ivar Jacobson, and James Rumbaugh at Rational Software. In 1997, it was adopted as a standard by the Object Management Group (OMG).
EXPLANATION:
The main purpose of a use case diagram is to show what system functions are performed for which actor, and the roles of the actors in the system can be depicted. The above diagram consists of the user as an actor; each actor plays a certain role to achieve the concept. This use case diagram illustrates the interactions between the user and the system, outlining the primary functionalities needed for secure cloud data transfer and deletion.
EXPLANATION:
(Objects in the diagram: Database : Database, Cloud B : Cloud B)
EXPLANATION:
The above diagram shows the flow of objects between the classes. It is a diagram that presents a complete or partial view of the structure of a modelled system. This object diagram represents how the classes, with their attributes and methods, are linked together to perform verification with security. This object
diagram encapsulates the relationships and states of the objects involved in the
cloud data management process, illustrating how users interact with files and cloud
services during transfers.
(Diagram: Register / Login interactions among the Data Owner, Cloud A, Cloud B, and the Database)
EXPLANATION:
1 : login ()
2 : verification ()
3 : if fail ()
4 : if success ()
5 : upload file ()
6 : success ()
7 : file storage ()
8 : view files ()
9 : Transfer file ()
10 : success ()
11 : Delete request ()
12 : success ()
14 : logout request ()
15 : loggedout ()
EXPLANATION:
A sequence diagram shows object interactions arranged in time sequence. It depicts the objects and classes involved in the scenario and the sequence of messages exchanged between the objects needed to carry out the functionality of the scenario.
This sequence diagram captures the step-by-step interactions, showcasing the flow
of messages and actions taken by each participant throughout the file transfer
lifecycle.
(Collaboration diagram: the Data Owner interacts with the Login, Home, and Files objects through the messages login, verification, if fail, if success, upload file, success, file storage, view files, transfer file, delete request, transfer or delete result, logout request, and logged out.)
EXPLANATION:
(Activity diagram: Data Owner → Login; if No, return to Login; if Yes, proceed to Home.)
EXPLANATION:
(Diagram: Data Owner → Files, Delete.)
EXPLANATION:
(Diagram: Data Owner, Cloud B, and Database.)
EXPLANATION:
They are used to illustrate the structure of arbitrarily complex systems. The user gives the main query, which is converted into sub-queries and sent on through data dissemination; the results are shown to the user by data aggregators. Arrows between the boxes indicate dependencies.
(ER / data flow diagram: the DO and Cloud A, B entities with attributes UID, Email, PWD, and Name; operations Register, Login, Upload Files, View Request, Verify, Transfer Result / Delete Result, and Request & Response, all backed by the Database.)
EXPLANATION:
The user logs into their account, verifying their identity. The user chooses a file from their local system to upload. The application processes the upload request, saving the file details in the database and updating the file status to Uploaded.
Level 0: (DFD: User Page → Database)
Level 1: (DFD: Cloud A → Login Page → View Users & Accept User Files, View User Files, Transfer Request from DO, Delete Request from DO → Database)
EXPLANATION:
A DFD shows what kinds of data will be input to and output from the
system, where the data will come from and go to, and where the data will be stored.
It does not show information about the timing of processes, or information about
whether processes will operate in sequence or in parallel.
Pointers
Pointers are symbols that appear on the screen and move according to the
user's input from a pointing device like a mouse or touchpad. They are used
to select and interact with various elements on the screen.
Icons
Windows
Menus
Controls (Widgets)
Controls, also known as widgets, are interactive elements that allow users to input or manipulate data. Common controls include buttons, checkboxes, radio buttons, sliders, and text fields. They provide a consistent way for users to interact with the software.
Tabs
Tabs are rectangular boxes that contain text labels or icons associated with different views or sections. They allow users to switch between different views or functionalities within the same window. Tabs are commonly used in web browsers and settings panes.
Cursors
Cursors indicate the position on the screen that will respond to input from a text input or pointing device. They help users know where their actions will take effect.
Selection
Selection allows users to choose one or more items from a list or area. It is often used in conjunction with other controls to perform actions on the selected items.
1. Windows: These are the movable boxes on your screen that can contain
different content and applications.
5. Toolbars: These usually contain buttons, icons, and menus for quick
access to functions.
9. Pointers and Cursor: Visual indicators that show your position on the
screen, often changing shape to indicate different functions.
CHAPTER 5
IMPLEMENTATION
5.1 GENERAL
Login.jsp
<!DOCTYPE html>
<html lang="en">
<head>
<title>Login</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width,
initial-scale=1, shrink-tofit=no">
<div id="colorlib-page">
<a href="#" class="js-colorlib-nav-toggle
colorlib-nav-
toggle"><i></i></a>
<jsp:include page="Menu.jsp"></jsp:include>
<!-- END COLORLIB-ASIDE -->
<div id="colorlib-main" style="margin-top: 5%;margin-
left: 10%;">
<section class="ftco-section ftco-no-
pt ftco-no-pb">
<div class="container px-md-0">
<div class="row d-flex no-
gutters">
<div class="col-lg-8 col-md-7 order-md-
last d-flex align-items-stretch">
<div class="contact-wrap w-100
p-md-5 p-4">
<h3 class="mb-4
heading">Login</h3>
<form method="POST"
action="./LoginServlet" id="contactForm"
name="contactForm" class="contactForm">
<div class="row">
<div
class="col-md-12">
<div class="form-group">
</div>
<div
class="col-md-12">
<div class="form-group">
<div
class="col-md-12">
<div class="form-group">
request.getParameter("statu
<h2><%out.print(status); %></h2>
<%}
%>
<%String s = request.getParameter("s");
if(s!
=null)
{%>
<h3><font color="green"><%out.print(s);
%></font></h3>
<%}
%>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
</div>
</section>
</div><!-- END COLORLIB-MAIN -->
</div><!-- END COLORLIB-PAGE -->
<script src="js/jquery.min.js"></script>
<script src="js/jquery-migrate-
3.0.1.min.js"></script>
<script src="js/popper.min.js"></script>
<script src="js/bootstrap.min.js"></script>
<script src="js/jquery.easing.1.3.js"></script>
<script
src="js/jquery.waypoints.min.js"></script>
<script
src="js/jquery.stellar.min.js"></script>
<script src="js/owl.carousel.min.js"></script>
<script src="js/jquery.magnific-
popup.min.js"></script>
<script
src="js/jquery.animateNumber.min.js"></script>
<script src="js/scrollax.min.js"></script>
<script
src="https://maps.googleapis.com/maps/api/js?
key=AIzaSyBVWaKrjvy3MaE7SQ74_uJiULgl1JY0
H2s&sensor=false"></script>
<script src="js/google-map.js"></script>
<script src="js/main.js"></script>
</body>
</html>
RegisterServlet.java
package servlet;

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import dao.SecurityDAO;
import util.Bean;

@WebServlet("/RegisterServlet")
public class RegisterServlet extends HttpServlet {
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String target = "";
        try {
            // Collect the registration details submitted by the user
            Bean b = new Bean();
            b.setName(request.getParameter("name"));
            b.setPassword(request.getParameter("password"));
            b.setEmail(request.getParameter("email"));
            b.setMobile(request.getParameter("mobile"));
            b.setDob(request.getParameter("dob"));
            b.setAddress(request.getParameter("address"));

            // Persist the new account and choose the redirect target from the result
            int i = new SecurityDAO().reg(b);
            if (i != 0)
                target = "register.jsp?status=Registration Successfull";
            else
                target = "register.jsp?status=Not Successfull";
        } catch (Exception e) {
            e.printStackTrace();
            target = "register.jsp?";   // redirect target truncated in the original listing
        }
        response.sendRedirect(target);
    }
}
TransferFileServlet_DataOwner.java
package servlet;

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import dao.SecurityDAO;

@WebServlet("/TransferFileServlet_DataOwner")
public class TransferFileServlet_DataOwner extends HttpServlet {
    // The servlet body is elided in the original listing; the selected file id
    // is validated (if (fid != 0)) before the transfer request is processed.
}
CHAPTER 6
SNAPSHOTS
6.1 GENERAL
This project is implemented as a web application using Core Java; the server process is maintained using Socket & ServerSocket, and the design part is handled by Cascading Style Sheets.
CHAPTER 7
SOFTWARE TESTING
7.1 GENERAL
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test; each test type addresses a specific testing requirement.
The test strategy document is a high-level document that outlines the testing approach used in the software development life cycle and confirms the test types or levels that will be performed on the product. The test strategy is not changed once it has been written and accepted by the project manager and the development team.
Testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. Unit testing is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. It is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
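As a small illustration, a unit test for the counting Bloom filter sketch given in Chapter 2 might look as follows; JUnit 4 is assumed, and the class under test is the illustrative sketch rather than verified project code.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Illustrative unit test for the CountingBloomFilter sketch (assumed class, JUnit 4).
public class CountingBloomFilterTest {

    @Test
    public void deletedBlockNoLongerReportsMembership() {
        CountingBloomFilter cbf = new CountingBloomFilter(1024, 4);
        cbf.insert("block-1");
        cbf.insert("block-2");

        // Both blocks are reported as (possibly) present after insertion.
        assertTrue(cbf.mightContain("block-1"));
        assertTrue(cbf.mightContain("block-2"));

        // After deleting block-1 it should no longer be reported as present
        // (barring the tiny false-positive probability), while block-2 is unaffected.
        cbf.delete("block-1");
        assertFalse(cbf.mightContain("block-1"));
        assertTrue(cbf.mightContain("block-2"));
    }
}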
7.3.2 Types of Testing
Active Testing
A type of testing that consists of introducing test data and analysing the execution results. It is usually conducted by the testing team.
Any project can be divided into units that can be examined in detail. A testing strategy is then carried out for each of these units. Unit testing helps to identify possible bugs in the individual components, so that the components containing bugs can be identified and rectified.
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.
8.1 CONCLUSION
In cloud storage, the data owner cannot be sure that the cloud server will execute the data transfer and deletion operations honestly. To solve this problem,
we propose a CBF-based secure data transfer scheme, which can also realize
verifiable data deletion. In our scheme, the cloud B can check the transferred data
integrity, which can guarantee the data is entirely migrated. Moreover, the cloud A should adopt a CBF to generate deletion evidence after deletion, which the data owner will use to verify the deletion result. Hence, the cloud A cannot
behave maliciously and cheat the data owner successfully. Finally, the security
analysis and simulation results validate the security and practicability of our
proposal, respectively.
Future work: Similar to all the existing solutions, our scheme considers the data transfer between two different cloud servers. However, with the development of cloud storage, the data owner might want to simultaneously migrate the outsourced data from one cloud to two or more target clouds. Moreover, the multiple target clouds might collude to cheat the data owner maliciously. Hence, provable data migration among three or more clouds requires our further exploration.
REFERENCES
[1] C. Yang and J. Ye, "Secure and efficient fine-grained data access control scheme in cloud computing", Journal of High Speed Networks, Vol.21, No.4, pp.259–271, 2015.
[2] X. Chen, J. Li, J. Ma, et al., “New algorithms for secure outsourcing of
modular exponentiations”, IEEE Transactions on Parallel and Distributed Systems,
Vol.25, No.9, pp.2386–2396, 2014.
[4] B. Varghese and R. Buyya, “Next generation cloud computing: New trends
and research directions”, Future Generation Computer Systems, Vol.79, pp.849–
861, 2018.
[5] W. Shen, J. Qin, J. Yu, et al., “Enabling identity-based integrity auditing and
data sharing with sensitive information hiding for secure cloud storage”, IEEE
Transactions on Information Forensics and Security, Vol.14, No.2, pp.331–346,
2019.
[7] Cisco, “Cisco global cloud index: Forecast and methodology, 2014–2019”,
available at: https://www.cisco.com/c/en/us- /solutions/collateral/service-
provider/global-cloud-index-gci/white-paperc11-738085.pdf, 2019-5-5.
[9] Y. Liu, S. Xiao, H. Wang, et al., “New provable data transfer from provable
data possession and deletion for secure cloud storage”, International Journal of
Distributed Sensor Networks, Vol.15, No.4, pp.1–12, 2019.
[10] Y. Wang, X. Tao, J. Ni, et al., “Data integrity checking with reliable data
transfer for secure cloud storage”, International Journal of Web and Grid Services,
Vol.14, No.1, pp.106–121, 2018.
[11] Y. Luo, M. Xu, S. Fu, et al., "Enabling assured deletion in the cloud storage by overwriting", Proc. of the 4th ACM International Workshop on Security in Cloud Computing, Xi'an, China, pp.17–23, 2016.
[12] C. Yang and X. Tao, "New publicly verifiable cloud data deletion scheme with efficient tracking".
[13] Y. Tang, P.P Lee, J.C. Lui, et al., “Secure overlay cloud storage with access
control and assured deletion”, IEEE Transactions on Dependable and Secure
Computing, Vol.9, No.6, pp.903–916, 2012.
[14] Y. Tang, P.P.C. Lee, J.C.S. Lui, et al., "FADE: Secure overlay cloud storage with file assured deletion", Proc. of the 6th International Conference on Security and Privacy in Communication Systems, Springer, pp.380–397, 2010.
[17] A. Rahumed, H.C.H. Chen, Y. Tang, et al., “A secure cloud backup system
with assured deletion and version control”, Proc. of the 40th International
Conference on Parallel Processing Workshops, Taipei City, Taiwan, pp.160–
167, 2011.
[18] B. Hall and M. Govindarasu, “An assured deletion technique for cloud-based
IoT”, Proc. of the 27th International Conference on Computer Communication
and Networks, Hangzhou, China, pp.1–8, 2018.
[19] L. Xue, Y. Yu, Y. Li, et al., “Efficient attribute based encryption with attribute
revocation for assured data deletion”, Information Sciences, Vol.479, pp.640–
650, 2019.
[20] L. Du, Z. Zhang, S. Tan, et al., “An Associated Deletion Scheme for Multi-
copy in Cloud Storage”,
Proc. of the 18th International Conference on Algorithms and Architectures for
Parallel Processing, Guangzhou, China, pp.511–526, 2018.
[22] Y. Yu, J. Ni, W. Wu, et al., "Provable data possession supporting secure data transfer for cloud storage", Proc. of the 10th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA 2015), Krakow, Poland, pp.38–42, 2015.
[24] L. Xue, J. Ni, Y. Li, et al., “Provable data transfer from provable data
possession and deletion in cloud storage”, Computer Standards & Interfaces,
Vol.54, pp.46–54, 2017.
[25] Y. Liu, X. Wang, Y. Cao, et al., “Improved provable data transfer from
provable data possession and deletion in cloud storage”, Proc. of Conference
on Intelligent Networking and Collaborative Systems, Bratislava, Slovakia,
pp.445–452, 2018.
[26] C. Yang, J. Wang, X. Tao, et al., “Publicly verifiable data transfer and deletion
scheme for cloud storage”, Proc. of the 20th International Conference on
Information and Communications Security (ICICS 2018 ), Lille, France,
pp.445–458, 2018.
[27] B.H. Bloom, “Space/time trade-offs in hash coding with allowable errors”,
Communications of the ACM, Vol.13, No.7, pp.422–426, 1970.
[32] F. Hao, D. Clarke and A.F. Zorzo, "Deleting secret data with public verifiability", IEEE Transactions on Dependable and Secure Computing, Vol.13, No.6, pp.617–629, 2015.