
SYNOPSIS

 The large-scale sharing needs of many enterprises drive the development of cloud storage. Because cloud computing stores shared files outside the trust domain of the owner, demands and concerns about file security are growing.

 In this paper, a Group Key Management Protocol for file sharing on cloud storage (GKMP) is proposed. To counter network attacks launched over the public channel, a group key generation scheme based on mixed encryption technology is proposed.

 A verification scheme is used to protect shared files against collusion attacks by the cloud provider and group members.

 Security and performance analyses indicate that the proposed protocol is both
secure and efficient for data sharing in cloud computing.
CONTENTS

1. INTRODUCTION
   1.1 ABOUT THE PROJECT
2. SYSTEM STUDY AND ANALYSIS
   2.1 EXISTING SYSTEM
   2.2 DRAWBACKS OF EXISTING SYSTEM
   2.3 PROPOSED SYSTEM
   2.4 ADVANTAGES OF PROPOSED SYSTEM
   2.5 FEASIBILITY STUDY
3. SYSTEM REQUIREMENTS
   3.1 HARDWARE REQUIREMENTS
   3.2 SOFTWARE REQUIREMENTS
   3.3 SOFTWARE DESCRIPTION
   3.4 SPYDER
   3.5 MODULE DESCRIPTION
   3.6 ALGORITHM
4. SYSTEM DESIGN AND DEVELOPMENT
   4.1 INPUT DESIGN
   4.2 OUTPUT DESIGN
   4.3 CASE
   4.4 CASE TOOL
   4.5 TYPES OF CASE TOOLS
   4.6 TOOLKITS
5. SYSTEM TESTING
   5.1 TESTING
   5.2 UNIT TESTING
   5.3 INTEGRATION TESTING
   5.4 ACCEPTANCE TESTING
6. CONCLUSION
7. SCOPE FOR FUTURE STUDY
8. REFERENCES
9. BIBLIOGRAPHY
10. APPENDICES
    A. DIAGRAM
    B. SAMPLE CODE
    C. SAMPLE SCREENSHOTS
1 INTRODUCTION
1.1 ABOUT THE PROJECT

Faced with today's rapid expansion of cloud technologies, rebuilding services on the cloud has become increasingly popular. In a shared-tenancy cloud computing environment, data from different clients, which may be hosted on separate virtual machines, can reside on a single physical machine. Under this paradigm, data storage and management are under the full control of the cloud provider, so data owners are left vulnerable and have to rely solely on the cloud provider to protect their data. Recent news shows that Google provided the FBI with all the documents of one of its users after receiving a search warrant, but the user was not aware of the search until he was arrested.

Because the cloud provider has full access to the data, the privacy of the data could be violated if it is intercepted or modified by the cloud provider. A common way to guarantee privacy is to encrypt and authenticate the shared files. There is a series of cryptographic schemes under which a third-party auditor is able to check the availability of files while nothing about the files leaks. Likewise, cloud users will probably not hold the strong belief that the cloud server is doing a good job in terms of confidentiality. Cloud users are therefore motivated to encrypt their files with their own keys before uploading them to the cloud server. The remaining challenge is how to share and manage the cryptographic keys among valid users without the participation of the cloud provider.
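
As a minimal sketch of this idea (client-side encryption with a user-held key before upload), the snippet below uses the Fernet recipe from the cryptography package, the same primitive used in the sample code of Appendix B; the file name and the upload call are hypothetical placeholders, not part of the actual system.

from cryptography.fernet import Fernet

# The key is generated and kept on the client; the provider never sees it.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt locally, then hand only the ciphertext to the cloud.
with open('shared_report.pdf', 'rb') as f:          # hypothetical file
    ciphertext = cipher.encrypt(f.read())

with open('shared_report.pdf.enc', 'wb') as f:
    f.write(ciphertext)
# upload('shared_report.pdf.enc')                   # placeholder for the storage API

# The open problem addressed by GKMP: distributing `key` to the other
# group members without involving the cloud provider.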

Theoretically, access control and group key management can be used for key management on file sharing. However, some unique features of cloud storage introduce new problems that have not been fully considered. Firstly, shared files are transmitted via the network and may be intercepted by various network monitors; just using access control on the cloud storage cannot fully address this problem. Secondly, group key management depends on the cloud provider to manage the encryption key. That can prevent the shared files from being intercepted on the network, but they can still be intercepted by the cloud provider.

2 SYSTEM STUDY AND ANALYSIS

2.1 EXISTING SYSTEM

Key share protocol is used to distribute group key to members of the sharing
group without the participation of the cloud provider. Verification Protocol is used
to judge whether there is any cheating exists in key share protocol and provide the
security of key sharing. By executing these protocols stepwise, the group key is
distributed to the group memberships secretly though public channels.
The shared data stored on the cloud is encrypted using AES algorithm. As
the security performance of AES is excellent and unknown attack methods can
attack non-linear components, we conclude that shared data could not be decrypted
by cloud provider.

2.2 DRAWBACKS OF EXISTING SYSTEM

The first is the cloud provider or passive adversary, who only gathers information but does not affect the behavior of the group members in the communication.

The second is the active adversary, who could alter the output information as a file sharer.

The last is the adaptive adversary, who could compromise one or more group sharers and has the ability to gather and alter the compromised members' output information.

2.3 PROPOSED SYSTEM

Firstly, shared files are transmitted via the network and may be intercepted by various network monitors; just using access control on the cloud storage cannot fully address this problem. Secondly, group key management depends on the cloud provider to manage the encryption key; this can prevent the shared files from being intercepted on the network, but they can still be intercepted by the cloud provider.

In this paper, we propose a secure group key management protocol on cloud storage over unreliable channels, aiming at protecting the shared files on the cloud storage. Mixed encryption technology is used to generate and distribute group keys, which resists attacks from network monitors. In addition, we propose a verification protocol that guards against attacks from the file sharers or the cloud provider.
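
The "mixed encryption technology" is a hybrid of symmetric and public-key encryption. The sketch below illustrates only that general pattern, not the exact GKMP construction: a random symmetric group key is wrapped under each member's RSA public key, so it can travel over a public channel without being readable by a network monitor or the cloud provider. The member names and key sizes are illustrative assumptions.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Each group member holds an RSA key pair; the provider only ever sees
# public keys and wrapped (encrypted) group keys.
members = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
           for name in ['alice', 'bob', 'carol']}

group_key = os.urandom(32)   # 256-bit symmetric key that protects the shared file

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Wrap the group key once per member with that member's public key.
wrapped = {name: key.public_key().encrypt(group_key, oaep)
           for name, key in members.items()}

# Each member recovers the same group key with their own private key.
assert all(key.decrypt(wrapped[name], oaep) == group_key
           for name, key in members.items())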

2.4 ADVANTAGES OF PROPOSED SYSTEM

The public key is managed by the cloud provider, while the private key is known only to the sharers. Whenever a sharer wants to share a file within the group, he generates a group key and encrypts the file with it before transmitting the file to the cloud. He then uses a key distribution scheme to distribute the group key to the other group sharers without the participation of the cloud provider. Recovering the group key requires the collaboration of all the group members.
The key share protocol is an efficient protocol for distributing the group key to the group members. Here we further extend it to enable the group members to verify their own intermediate information, as illustrated in the sketch below.
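
As a toy illustration of these two properties, and not the protocol's actual construction, the following sketch splits a group key into XOR shares so that every member's share is required for recovery, and publishes a SHA-256 hash of each share so that members can verify their own intermediate values.

import os
import hashlib
from functools import reduce

def split_key(group_key: bytes, n_members: int):
    """Split group_key into n XOR shares; all shares are needed to recover it."""
    shares = [os.urandom(len(group_key)) for _ in range(n_members - 1)]
    last = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                  shares, group_key)
    return shares + [last]

def recover_key(shares):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

group_key = os.urandom(32)
shares = split_key(group_key, 5)

# Published commitments let every member check their own intermediate value.
commitments = [hashlib.sha256(s).hexdigest() for s in shares]
assert all(hashlib.sha256(s).hexdigest() == c for s, c in zip(shares, commitments))

# Any four of the five shares reveal nothing; all five together recover the key.
assert recover_key(shares) == group_key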

2.5 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

1. ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system
will have on the organization. The amount of fund that the company can pour into
the research and development of the system is limited. The expenditures must be
justified. Thus the developed system as well within the budget and this was
achieved because most of the technologies used are freely available. Only the
customized products had to be purchased.

2. TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

3. SOCIAL FEASIBILITY

This aspect of the study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to offer some constructive criticism, which is welcomed, as he is the final user of the system.

3 SYSTEM REQUIREMENTS

3.1 HARDWARE REQUIREMENTS

 System : Pentium IV 2.4 GHz.

 Hard Disk : 40 GB.

 Monitor : 15 inch VGA Color.

 Mouse : Logitech Mouse.

 RAM : 512 MB

 Keyboard : Standard Keyboard

3.2 SOFTWARE REQUIREMENTS

 Operating System : Windows XP.

 Platform : PYTHON TECHNOLOGY

 Tool : Python 3.6

 Front End : Python anaconda script

 Back End : Spyder

3.3 SOFTWARE DESCRIPTION

Anaconda is an open-source distribution of the Python and R programming languages for data science that aims to simplify package management and deployment. Package versions in Anaconda are managed by the package management system, conda, which analyzes the current environment before executing an installation to avoid disrupting other frameworks and packages.

The Anaconda distribution comes with over 250 packages automatically installed. Over 7500 additional open-source packages can be installed from PyPI as well as the conda package and virtual environment manager. It also includes a GUI (graphical user interface), Anaconda Navigator, as a graphical alternative to the command line interface. Anaconda Navigator is included in the Anaconda distribution, and allows users to launch applications and manage conda packages, environments and channels without using command-line commands. Navigator can search for packages, install them in an environment, run the packages and update them.

The big difference between conda and the pip package manager is in how
package dependencies are managed, which is a significant challenge for Python
data science. When pip installs a package, it automatically installs any dependent
Python packages without checking if these conflict with previously installed
packages. It will install a package and any of its dependencies regardless of the
state of the existing installation.

Because of this, a user with a working installation of, for example, TensorFlow can find that it stops working after using pip to install a different package that requires a different version of the dependent NumPy library than the one used by TensorFlow. In some cases, the package may appear to work but produce different results in execution. In contrast, conda analyzes the current environment, including everything currently installed, and, together with any version limitations specified (e.g., the user may wish to have TensorFlow version 2.0 or higher), works out how to install a compatible set of dependencies, and shows a warning if this cannot be done.

Open-source packages can be individually installed from the Anaconda repository, Anaconda Cloud (anaconda.org), or the user's own private repository or mirror, using the conda install command. Anaconda Inc. compiles and builds the packages available in the Anaconda repository itself, and provides binaries for Windows 32/64-bit, Linux 64-bit and macOS 64-bit. Anything available on PyPI may be installed into a conda environment using pip, and conda will keep track of what it has installed itself and what pip has installed.

DIFFERENCES BETWEEN ANACONDA AND DATA SCIENCE PLATFORMS

While Anaconda supports some functionality you find in a data science platform,
like Domino, it provides a subset of that functionality. Domino and other platforms
not only support package management, but they also support capabilities like
collaboration, reproducibility, scalable compute, and model monitoring. Conda can
be used within the Domino environment.

Anaconda is an amazing collection of scientific Python packages, tools,
resources, and IDEs. This package includes many important tools that a Data
Scientist can use to harness the incredible force of Python. Anaconda individual
edition is free and open source. This makes working with Anaconda accessible and
easy. Just go to the website and download the distribution.

With over 20 million users, covering 235 regions, and with over 2.4 billion package downloads, Anaconda has grown an exceptionally large community. Anaconda makes it easy to connect to several different scientific, Machine Learning, and Data Science packages.

3.3.1 KEY FEATURES

 Neural Networks

 Machine Learning

 Predictive Analytics

 Data Visualization

 Bias Mitigation

If you are interested in Data Science, then you should know about this
Python Distribution. Anaconda is great for deep models and neural networks. You
can build models, deploy them, and integrate with leading technologies in the
subject. Anaconda is optimized to run efficiently for machine learning tasks and
will save you time when developing great algorithms. Over 250 packages are
included in the distribution. You can install other third-party packages through the
Anaconda terminal with conda install. With over 7500 data science and machine
learning packages available in their cloud-based repository, almost any package
you need will be easily accessible. Anaconda offers individual, team, and
enterprise editions. Included also is support for the R programming language.

The Anaconda distribution comes with packages that can be used on Windows, Linux, and macOS. The individual edition includes popular package names like numpy, pandas, scipy, sklearn, tensorflow, pytorch, matplotlib, and more. The Anaconda Prompt and PowerShell make working within the file system easy and manageable. Also, the GUI of Anaconda Navigator makes working with everything exceptionally smooth. Anaconda is an excellent choice if you are looking for a thriving community of Data Scientists and ever-growing support in the industry. Conducting Data Science projects is an increasingly simple task with the help of great tools like this.

Jupyter Notebook is open-source software that allows Data Scientists to conduct workflows and effectively realize scientific and computational solutions. With an emphasis on presentation and readability, Jupyter Notebooks are a smart choice for collaborative projects as well as insightful publications. Jupyter Notebooks are open source and developed publicly on GitHub by the Jupyter community.

PyCharm is a top-notch Python IDE that is packed full of features and pre-installed packages. With comfortable environment management and an easy-to-set-up workstation, PyCharm is in a league of its own when it comes to Python. With community, professional, and enterprise editions, there is a version for everyone.

3.4 SPYDER

Spyder is a highly advanced Data Science Python platform. Created with Python for Python, this IDE boasts some immensely robust toolsets. With an editor, an IPython console, a variable explorer, advanced plotting functionality, a built-in debugger, and object documentation helper tools, the Spyder IDE is a promising choice for a large number of Data Science tasks.

Link your datasets and data to a single graph or figure with Glueviz. This Python library allows you to view data visualizations by combining datasets and using the logical links within them.

If Data Mining is your goal, then Orange 3 has you covered. Orange 3 is a toolset built for Data Mining. It offers a great GUI, extendable functionality with add-ons, data management, and interactive data visualizations, and it is loved by the teaching and student communities for its immersive visualizations, figures, and graphs.

If you are new to Data Science and want the complete experience with Python, or if you are a seasoned Data Scientist looking for more functionality and efficiency, this distribution is well worth a look. It makes package management and deployment quick and easy. Packed with tools, IDEs, packages, and libraries, Anaconda is a sound choice for Data Science.

Because the popularity of Anaconda is expanding into many industries and areas that are new to having such advanced capabilities available, there has never been a better time to start with this ever-growing package of tools and resources.

3.5 MODULE DESCRIPTION

CLOUD STORAGE

The large-scale sharing needs of many enterprises drive the development of cloud storage. Because cloud computing stores shared files outside the trust domain of the owner, demands and concerns about file security are growing. In this paper, a Group Key Management Protocol for file sharing on cloud storage (GKMP) is proposed. To counter network attacks launched over the public channel, a group key generation scheme based on mixed encryption technology is proposed.

FILE SHARING

Theoretically, access control and group key management can be used for key management on file sharing. However, some unique features of cloud storage introduce new problems that have not been fully considered. Firstly, shared files are transmitted via the network and may be intercepted by various network monitors; just using access control on the cloud storage cannot fully address this problem. Secondly, group key management depends on the cloud provider to manage the encryption key; this can prevent the shared files from being intercepted on the network, but they can still be read by the cloud provider.

KEY DISTRIBUTION

The public key is managed by the cloud provider, while the private key is known only to the sharers. Whenever a sharer wants to share a file within the group, he generates a group key and encrypts the file with it before transmitting the file to the cloud. He then uses a key distribution scheme to distribute the group key to the other group sharers without the participation of the cloud provider. Recovering the group key requires the collaboration of all the group members.

SAPDS, a related scheme used for comparison, combines attribute-based encryption with proxy re-encryption and a secret-key updating capability without relying on any trusted third party, but its storage and communication overhead is determined by the attribute-based encryption scheme.

3.6 ALGORITHM

In order to prove that our protocol resists the cloud provider, we must make sure that the shared data cannot be decrypted by the cloud provider. As proved in Proof 2, the cloud provider cannot obtain the decryption key by gathering information or by corrupting group members.

The shared data stored on the cloud is encrypted using the AES algorithm. As the security of AES is well established and no known attack breaks its non-linear components, we conclude that the shared data cannot be decrypted by the cloud provider.

The first step in both SAPDS and GKMP is to generate a secret key K to encrypt the shared files; an encryption algorithm is then used to process the secret key K.
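
A minimal sketch of this two-step pattern is shown below, assuming the cryptography package; the data and the wrapping key are purely illustrative, and the wrapping step merely stands in for whatever processing the key-distribution scheme applies to K.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Step 1: generate a secret key K and encrypt the shared file with AES-GCM.
K = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
plaintext = b'contents of the shared file'          # placeholder data
ciphertext = AESGCM(K).encrypt(nonce, plaintext, None)

# Step 2: K itself is then processed; here a second AES key stands in for
# the key-distribution scheme that GKMP actually applies to K.
wrapping_key = AESGCM.generate_key(bit_length=256)
wrap_nonce = os.urandom(12)
wrapped_K = AESGCM(wrapping_key).encrypt(wrap_nonce, K, None)

# A member holding the wrapping key can recover K and then the file.
assert AESGCM(wrapping_key).decrypt(wrap_nonce, wrapped_K, None) == K
assert AESGCM(K).decrypt(nonce, ciphertext, None) == plaintext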

4 SYSTEM DESIGN AND DEVELOPMENT
4.1 INPUT DESIGN

The input design is the link between the information system and the user. It comprises the specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system.

The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:

 What data should be given as input?
 How should the data be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations and steps to follow when errors occur.

OBJECTIVES

1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system.

2. It is achieved by creating user-friendly screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all data manipulations can be performed. It also provides record viewing facilities.

3. When data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed so that the user is not left confused. Thus the objective of input design is to create an input layout that is easy to follow.

4.2 OUTPUT DESIGN

A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need and also as hard copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system's relationship with the user and helps decision-making.

OBJECTIVES:

1. Designing computer output should proceed in an organized, well thought out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy to use and effective. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create document, report, or other formats that contain information produced by
the system.

4. The output form of an information system should accomplish one or more of the following objectives:

 Convey information about past activities, current status or projections of the future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.

4.3 CASE

Computer-Aided Software Engineering (CASE) is the use of software tools to assist in the development and maintenance of software. Tools used to assist in this way are known as CASE tools.

4.4 CASE TOOL

1. A CASE tool is a computer-based product aimed at supporting one or more software engineering activities within a software development process.

2. Computer-Aided Software Engineering tools are software tools used in any and all phases of developing an information system, including analysis, design and programming. For example, data dictionaries and diagramming tools aid in the analysis and design phases, while application generators speed up the programming phase.

3. CASE tools provide automated methods for designing and documenting
traditional structured programming techniques. The ultimate goal of CASE is to
provide a language for describing the overall system that is sufficient to generate
all the necessary programs needed.

4.4.1 CLASSIFICATION OF CASE TOOLS

Existing CASE tools can be classified along 4 different dimensions:

1. Life-cycle support

2. Integration dimension

3. Construction dimension

4. Knowledge-based CASE dimension

Let us take the meaning of these dimensions along with their examples one by one:

4.4.2 LIFE-CYCLE BASED CASE TOOLS

This dimension classifies CASE Tools on the basis of the activities they support in
the information systems life cycle. They can be classified as Upper or Lower
CASE tools.

UpperCASE Tool: An UpperCASE tool is a Computer-Aided Software Engineering (CASE) software tool that supports the software development activities upstream from implementation. UpperCASE tools focus on the analysis phase (but sometimes also the design phase) of the software development lifecycle (diagramming tools, report and form generators, and analysis tools).

LowerCASE Tool: A LowerCASE tool is a Computer-Aided Software Engineering (CASE) software tool that directly supports the implementation (programming) and integration tasks. LowerCASE tools support database schema generation, program generation, implementation, testing, and configuration management.

4.4.3 INTEGRATION DIMENSION

Three main CASE Integration dimensions have been proposed:

1. CASE Framework

2. ICASE Tools: tools that integrate both upper and lower CASE, for example making it possible to design a form and build the database to support it at the same time. An ICASE tool is an automated system development environment that provides numerous tools to create diagrams, forms and reports. It also offers analysis, reporting, and code generation facilities and seamlessly shares and integrates data across and between tools.

3. Integrated Project Support Environment (IPSE)

4.5 TYPES OF CASE TOOLS

The general types of CASE tools are listed below:

1. Diagramming tools: enable system process, data and control structures to be represented graphically.

2. Computer display and report generators: help prototype how systems look and feel, and make it easier for the systems analyst to identify data requirements and relationships.

3. Analysis tools: automatically check for incomplete, inconsistent, or incorrect specifications in diagrams, forms, and reports.

4. Central repository: enables the integrated storage of specifications, diagrams,
reports and project management information.

5. Documentation generators: produce technical and user documentation in standard formats.

6. Code generators: enable the automatic generation of program and data base
definition code directly from the design documents, diagrams, forms, and reports.

FUNCTIONS OF A CASE TOOL

1. Analysis: CASE analysis tools automatically check for incomplete, inconsistent, or incorrect specifications in diagrams, forms and reports.

2. Design: this is where the technical blueprint of the system is created by designing the technical architecture – choosing amongst the architectural designs of telecommunications, hardware and software that will best suit the organization's system and future needs – and by designing the systems model, graphically creating a model from the graphical user interface, screen design, and databases to the placement of objects on screen.

3. Code generation: CASE tools have code generators which enable the automatic generation of program and database definition code directly from the documents, diagrams, forms, and reports.

4. Documentation: CASE tools have documentation generators to produce technical and user documentation in standard forms. Each phase of the SDLC produces documentation. The types of documentation that flow from one phase to the next vary depending upon the organization, the methodologies employed and the type of system being built.

CASE ENVIRONMENTS

An environment is a collection of CASE tools and workbenches that supports the software process. CASE environments are classified based on the focus/basis of integration:

1. Toolkits

2. Language-centered

3. Integrated

4. Fourth generation

5. Process-centered

4.6 TOOLKITS

Toolkits are loosely integrated collections of products easily extended by


aggregating different tools and workbenches. Typically, the support provided by a
toolkit is limited to programming, configuration management and project
management. And the toolkit itself is environments extended from basic sets of
operating system tools, for example, the Unix Programmer's Work Bench and the
VMS VAX Set. In addition, toolkits' loose integration requires user to activate
tools by explicit invocation or simple control mechanisms. The resulting files are
unstructured and could be in different format, therefore the access of file from
different tools may require explicit file format conversion. However, since the only

22
constraint for adding a new component is the formats of the files, toolkits can be
easily and incrementally extended.

LANGUAGE-CENTERED

The environment itself is written in the programming language for which it was developed, thus enabling users to reuse, customize and extend the environment. Integration of code in different languages is a major issue for language-centered environments. Lack of process and data integration is also a problem. The strengths of these environments include a good level of presentation and control integration. Interlisp, Smalltalk, Rational, and KEE are examples of language-centered environments.

INTEGRATED

These environments achieve presentation integration by providing uniform, consistent, and coherent tool and workbench interfaces. Data integration is achieved through the repository concept: they have a specialized database managing all information produced and accessed in the environment. Examples of integrated environments are the ICL CADES system, IBM AD/Cycle and DEC Cohesion.

FOURTH-GENERATION

Fourth-generation environments were the first integrated environments. They are sets of tools and workbenches supporting the development of a specific class of program: electronic data processing and business-oriented applications. In general, they include programming tools, simple configuration management tools, document handling facilities and, sometimes, a code generator to produce code in lower level languages. Informix 4GL and Focus fall into this category.

PROCESS-CENTERED

Environments in this category focus on process integration, with other integration dimensions as starting points. A process-centered environment operates by interpreting a process model created by specialized tools. They usually consist of tools handling two functions:

1. Process-model execution

2. Process-model production

Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia

ADVANTAGES AND DISADVANTAGES OF CASE TOOLS

Advantages:

 Help standardize notations and diagrams
 Help communication between development team members
 Automatically check the quality of the models
 Reduce time and effort
 Enhance reuse of models or model components

Disadvantages:

 Limitations in the flexibility of documentation
 May lead to restriction to the tool's capabilities
 Major danger: completeness and syntactic correctness do NOT mean compliance with requirements
 Costs associated with the use of the tool: purchase + training
 Staff resistance to CASE tools
5 SYSTEM TESTING
5.1 TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.

TEST OBJECTIVES
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.

FEATURES TO BE TESTED

 Verify that the entries are of the correct format.
 No duplicate entries should be allowed.
 All links should take the user to the correct page.

5.2 UNIT TESTING

Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.
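
As an illustration only, a minimal unit test for this project might exercise a single model in isolation. The sketch below assumes the Register_Detail model from Appendix B and a hypothetical app label of 'app'.

from django.test import TestCase

from app.models import Register_Detail   # model shown in Appendix B; app label assumed


class RegisterDetailUnitTest(TestCase):
    def test_str_returns_username(self):
        # A unit test checks one small piece of behaviour in isolation.
        user = Register_Detail.objects.create(
            username='alice', fname='Alice', lname='A', address='x',
            mobile='1234567890', password='secret', email='alice@example.com',
            country='India', city='Chennai', zip='600001', user_type='owner')
        self.assertEqual(str(user), 'alice')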

5.3 INTEGRATION TESTING
Software integration testing is the incremental integration testing of two or
more integrated software components on a single platform to produce failures
caused by interface defects.

The task of the integration test is to check that components or software applications, e.g. components in a software system or – one step up – software applications at the company level, interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
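
For illustration, an integration-style test could drive the login view through Django's test client so that URL routing, the view and the model are exercised together. The sketch below assumes the user_login view and URL configuration from Appendix B, with the app's URLs included at the project root and an app label of 'app'.

from django.test import TestCase, Client

from app.models import Register_Detail   # app label assumed


class LoginIntegrationTest(TestCase):
    def setUp(self):
        Register_Detail.objects.create(
            username='alice', fname='Alice', lname='A', address='x',
            mobile='1234567890', password='secret', email='alice@example.com',
            country='India', city='Chennai', zip='600001', user_type='owner')

    def test_successful_login_redirects_to_dashboard(self):
        # Routing, view logic and the database are exercised in one request.
        response = Client().post('/', {'username': 'alice',
                                       'password': 'secret',
                                       'user_type': 'owner'})
        self.assertEqual(response.status_code, 302)
        self.assertEqual(response.url, '/user_dashboard/')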

5.4 ACCEPTANCE TESTING

User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

6 CONCLUSION

In this paper, we propose a novel group key management protocol for file sharing on cloud storage. Public keys are used by GKMP to guarantee that the group key is distributed fairly and to resist attacks from compromised members or the cloud provider. We give a detailed analysis of possible security attacks and the corresponding defenses, which demonstrates that GKMP is secure under weaker assumptions. Moreover, we demonstrate that the protocol exhibits lower storage and computing complexity.

7 SCOPE FOR FUTURE STUDY

Encrypting the key with five different group members takes at most 14.3 s with SAPDS, whereas GKMP takes at most 191 ms, and the size of the decryption keys has little influence on GKMP's computational overhead.

SAPDS and GKMP exhibit different decryption times for different sizes of the secret key K encrypted under the same size of encryption key. As shown in Fig. 9, SAPDS tends to consume slightly more time than GKMP. Furthermore, GKMP shows linear decryption overhead with the increase in the number of group members.

8 REFERENCES

[1] P.-W. Chi and C.-L. Lei, "Audit-free cloud storage via deniable attribute-based encryption," IEEE Trans. Cloud Comput., vol. 6, no. 2, pp. 414–427, Apr. 2018.

[2] J. Zhou, H. Duan, K. Liang, Q. Yan, F. Chen, F. R. Yu, J. Wu, and J. Chen, "Securing outsourced data in the multi-authority cloud with fine-grained access control and efficient attribute revocation," Comput. J., vol. 60, no. 8, pp. 1210–1222, Aug. 2017.

[3] J. Wu, Y. Li, T. Wang, and Y. Ding, "CPDA: A confidentiality-preserving deduplication cloud storage with public cloud auditing," IEEE Access, vol. 7, pp. 160482–160497, 2019.

[4] H. Xiong and J. Sun, "Comments on verifiable and exculpable outsourced attribute-based encryption for access control in cloud computing," IEEE Trans. Depend. Sec. Comput., vol. 14, no. 4, pp. 461–462, Jul. 2017.

[5] J. Shao, R. Lu, and X. Lin, "Fine-grained data sharing in cloud computing for mobile devices," in Proc. IEEE Conf. Comput. Commun. (INFOCOM), Apr. 2015, pp. 2677–2685.

[6] R. Ahuja, S. K. Mohanty, and K. Sakurai, "A scalable attribute-set-based access control with both sharing and full-fledged delegation of access privileges in cloud computing," Comput. Elect. Eng., vol. 57, pp. 241–256, Jan. 2017.

[7] S. Roy, A. K. Das, S. Chatterjee, N. Kumar, S. Chattopadhyay, and J. J. P. C. Rodrigues, "Provably secure fine-grained data access control over multiple cloud servers in mobile cloud computing based healthcare applications," IEEE Trans. Ind. Informat., vol. 15, no. 1, pp. 457–468, Jan. 2019.

[8] Z. Fu, X. Sun, S. Ji, and G. Xie, "Towards efficient content-aware search over encrypted outsourced data in cloud," in Proc. IEEE 35th Annu. IEEE Int. Conf. Comput. Commun. (INFOCOM), Apr. 2016, pp. 1–9.

[9] M. Blaze, "A cryptographic file system for UNIX," in Proc. 1st ACM Conf. Comput. Commun. Secur. (CCS), 1993, pp. 9–15.

[10] H. Gobioff, "Security for a high performance commodity storage subsystem," Ph.D. dissertation, School Comput. Sci., Carnegie Mellon Univ., Pittsburgh, PA, USA, 1999.

9 BIBLIOGRAPHY

Good teachers are worth more than a thousand books; we have them in our department.

References Made From:

1. User Interfaces in C#: Windows Forms and Custom Controls by Matthew


MacDonald.
2. Applied Microsoft® .NET Framework Programming (Pro-Developer) by
Jeffrey Richter.
3. Practical .Net2 and C#2: Harness the Platform, the Language, and the
Framework by Patrick Smacchia.
4. Data Communications and Networking, by Behrouz A Forouzan.
5. Computer Networking: A Top-Down Approach, by James F. Kurose.
6. Operating System Concepts, by Abraham Silberschatz.
7. M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski,
G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “Above the
clouds: A berkeley view of cloud computing,” University of California,
Berkeley, Tech. Rep. UCB/EECS-2009-28, Feb. 2009.
8. “The apache cassandra project,” http://cassandra.apache.org/.
9. L. Lamport, “The part-time parliament,” ACM Transactions
on Computer Systems, vol. 16, pp. 133–169, 1998.
10. N. Bonvin, T. G. Papaioannou, and K. Aberer, “Cost-efficient
and differentiated data availability guarantees in data clouds,”
in Proc. of the ICDE, Long Beach, CA, USA, 2010.

SITES REFERRED:

http://www.sourcefordgde.com
http://www.networkcomputing.com/
http://www.ieee.org
http://www.emule-project.net/

10 APPENDICES
A. DIAGRAM:
USE CASE DIAGRAM

[Use case diagram: the actors are the User, the Encryption Service and the Cloud Server; the use cases are Initiate Data Share, Generate Encryption Key, Encrypt Data, Store Encrypted Data and Process Cloud Data.]
SEQUENCE DIAGRAM

[Sequence diagram: the User initiates a data share, the Encryption Service generates an encryption key and encrypts the data, and the Cloud Server stores the encrypted data.]

COLLABORATIVE DIAGRAM

[Collaboration diagram: (1) the User initiates the data share, (2) an encryption key is generated and the data is encrypted, (3)–(5) the encrypted data is stored on the Cloud Server, (6) the result is returned to the User.]
B. SAMPLE CODE:

#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys


def main():
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'FineGrained.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == '__main__':
    main()

# admin.py -- register the application's models with the Django admin site.
from django.contrib import admin

from .models import *

admin.site.register(Register_Detail)
admin.site.register(CA_Login)
admin.site.register(Upload_File)
admin.site.register(File_Request)
admin.site.register(File_Key)


# apps.py -- application configuration.
from django.apps import AppConfig


class AppConfig(AppConfig):
    name = 'app'

# models.py -- database models for users, uploaded files, file requests and keywords.
from django.db import models
from django.utils import timezone


class Register_Detail(models.Model):
    username = models.CharField(max_length=50, unique=True)
    fname = models.CharField(max_length=50)
    lname = models.CharField(max_length=50)
    address = models.CharField(max_length=50)
    mobile = models.CharField(max_length=20)
    password = models.CharField(max_length=50)
    email = models.EmailField(max_length=50)
    country = models.CharField(max_length=50)
    city = models.CharField(max_length=50)
    zip = models.CharField(max_length=50)
    user_type = models.CharField(max_length=50)

    def __str__(self):
        return self.username


class CA_Login(models.Model):
    username = models.CharField(max_length=50, unique=True)
    fname = models.CharField(max_length=50)
    lname = models.CharField(max_length=50)
    address = models.CharField(max_length=50)
    mobile = models.CharField(max_length=20)
    password = models.CharField(max_length=50)
    email = models.EmailField(max_length=50)
    country = models.CharField(max_length=50)
    city = models.CharField(max_length=50)
    zip = models.CharField(max_length=50)

    def __str__(self):
        return self.username


class Upload_File(models.Model):
    user_id = models.ForeignKey(Register_Detail, on_delete=models.CASCADE, null=True)
    name = models.CharField('File Name', max_length=100)
    file = models.FileField('File', upload_to='', null=True)
    notes = models.TextField('Notes', max_length=2000)
    # Pass the callable (not timezone.now()) so the date is evaluated per record.
    date = models.DateField('Uploaded Date', default=timezone.now)

    def __str__(self):
        return self.name

    def publish(self):
        self.date = timezone.now()
        self.save()


class File_Request(models.Model):
    user_id = models.ForeignKey(Register_Detail, on_delete=models.CASCADE)
    file_id = models.ForeignKey(Upload_File, on_delete=models.CASCADE)
    public_key = models.CharField('Public Key', max_length=1000, null=True, blank=True)
    private_key = models.CharField('Private Key', max_length=1000, null=True, blank=True)
    status = models.CharField('Status', max_length=1000)

    def __str__(self):
        return self.file_id.name


class File_Key(models.Model):
    file_id = models.ForeignKey(Upload_File, on_delete=models.CASCADE)
    file_key = models.CharField('keywords', max_length=1000, null=True, blank=True)

    def __str__(self):
        return self.file_id.name

# tests.py -- placeholder for unit tests.
from django.test import TestCase

# Create your tests here.


# urls.py -- URL routes for the application's views.
from django.urls import path

from . import views

urlpatterns = [
    path('home/', views.home, name="home"),
    path('register/', views.register, name="register"),
    path('', views.user_login, name='user_login'),
    path('user_dashboard/', views.user_dashboard, name='user_dashboard'),
    path('logout/', views.logout, name='logout'),
    path('aa_login/', views.aa_login, name='aa_login'),
    path('aa_dashboard/', views.aa_dashboard, name='aa_dashboard'),
    path('aa_logout/', views.aa_logout, name='aa_logout'),
    path('upload_file/', views.upload_file, name='upload_file'),
    path('files/', views.files, name='files'),
    path('delete/<int:pk>/', views.delete, name='delete'),
    path('search_file/', views.search_file, name='search_file'),
    path('all_users/', views.all_users, name='all_users'),
    path('send_request/<int:pk>/', views.send_request, name='send_request'),
    path('requested_file/', views.requested_file, name='requested_file'),
    path('send_key/', views.send_key, name='send_key'),
    path('generate_aakey/<int:pk>/', views.generate_aakey, name='generate_aakey'),
    path('download_file/<int:pk>/', views.download_file, name='download_file'),
    path('view_keys/<int:pk>/', views.view_keys, name='view_keys'),
    path('add_keyword/', views.add_keyword, name='add_keyword'),
]

from django.shortcuts import render, redirect, get_object_or_404

from .models import *
from django.contrib import messages
import datetime
from django.db.models import Q
from django.db import connection
from django.db.models import Sum, Count
from django.conf import settings
from django.utils import timezone
import os
from cryptography.fernet import Fernet
import random
import string
from django.core.mail import send_mail
from django.core.mail import EmailMessage


def public_key(length):
    # define the specific string
    sample_string = 'd0LW25jG8feETs4WWpeCUA4AU1oPj7lAcCtKB1Cmuso='
    # define the condition for random string
    result = ''.join((random.choice(sample_string)) for x in range(length))
    return result


def private_key(length):
    # define the specific string
    sample_string = 'd0LW25jG8feETs4WWpeCUA4AU1oPj7lAcCtKB1Cmuso='
    # define the condition for random string
    result = ''.join((random.choice(sample_string)) for x in range(length))
    return result


import datetime


def home(request):
    return render(request, 'index.html', {})


def register(request):
    if request.method == 'POST':
        username = request.POST.get('username')
        address = request.POST.get('address')
        mobile = request.POST.get('mobile')
        email = request.POST.get('email')
        password = request.POST.get('password')
        fname = request.POST.get('fname')
        lname = request.POST.get('lname')
        country = request.POST.get('country')
        city = request.POST.get('city')
        zip = request.POST.get('zip')
        user_type = request.POST.get('user_type')
        crt = Register_Detail.objects.create(
            username=username, address=address, mobile=mobile,
            password=password, email=email, fname=fname, lname=lname,
            city=city, country=country, zip=zip, user_type=user_type)
        if crt:
            messages.success(request, 'Registered Successfully')
    return render(request, 'register.html', {})


def user_login(request):
    if request.session.has_key('username'):
        return redirect("user_dashboard")
    else:
        if request.method == 'POST':
            username = request.POST.get('username')
            password = request.POST.get('password')
            user_type = request.POST.get('user_type')
            post = Register_Detail.objects.filter(
                username=username, user_type=user_type, password=password)
            if post:
                username = request.POST.get('username')
                request.session['username'] = username
                request.session['user_type'] = user_type
                a = request.session['username']
                sess = Register_Detail.objects.only('id').get(username=a).id
                request.session['user_id'] = sess
                return redirect("user_dashboard")
            else:
                messages.success(request, 'Invalid Username or Password')
    return render(request, 'login.html', {})


def user_dashboard(request):
    if request.session.has_key('username'):
        return render(request, 'user_dashboard.html', {})
    else:
        return render(request, 'login.html', {})


def logout(request):
    try:
        del request.session['username']
        del request.session['user_id']
    except:
        pass
    return render(request, 'login.html', {})


def aa_login(request):
    if request.session.has_key('aa'):
        return redirect("aa_dashboard")
    else:
        if request.method == 'POST':
            username = request.POST.get('username')
            password = request.POST.get('password')
            post = CA_Login.objects.filter(username=username, password=password)
            if post:
                username = request.POST.get('username')
                request.session['aa'] = username
                a = request.session['aa']
                sess = CA_Login.objects.only('id').get(username=a).id
                request.session['aa_id'] = sess
                return redirect("aa_dashboard")
            else:
                messages.success(request, 'Invalid Username or Password')
    return render(request, 'aa_login.html', {})


def aa_dashboard(request):
    if request.session.has_key('aa'):
        return render(request, 'aa_dashboard.html', {})
    else:
        return render(request, 'aa_login.html', {})


def aa_logout(request):
    try:
        del request.session['aa']
        del request.session['aa_id']
    except:
        pass
    return render(request, 'aa_login.html', {})
def upload_file(request):
    if request.session.has_key('username'):
        uid = request.session['user_id']
        user_id = Register_Detail.objects.get(id=int(uid))
        if request.method == 'POST':
            name = request.POST.get('name')
            notes = request.POST.get('notes')
            a = request.FILES['file']
            crt = Upload_File.objects.create(user_id=user_id, name=name,
                                             notes=notes, file=a)
            if crt:
                # Look up the file name and id of the record just saved.
                cursor = connection.cursor()
                sql = '''select f.file,f.id from app_upload_file as f
                         order by f.id DESC'''
                post = cursor.execute(sql)
                row = cursor.fetchone()
                a = str(row[0])
                b = str(row[1])
                directory = os.getcwd()
                file_name = directory + "/media/"
                img = file_name + a

                class Encryptor():

                    def key_create(self):
                        key = Fernet.generate_key()
                        return key

                    def key_write(self, key, key_name):
                        with open(key_name, 'wb') as mykey:
                            mykey.write(key)

                    def key_load(self, key_name):
                        with open(key_name, 'rb') as mykey:
                            key = mykey.read()
                        return key

                    def file_encrypt(self, key, original_file, encrypted_file):
                        f = Fernet(key)
                        with open(original_file, 'rb') as file:
                            original = file.read()
                        encrypted = f.encrypt(original)
                        with open(encrypted_file, 'wb') as file:
                            file.write(encrypted)

                    def file_decrypt(self, key, encrypted_file, decrypted_file):
                        f = Fernet(key)
                        with open(encrypted_file, 'rb') as file:
                            encrypted = file.read()
                        decrypted = f.decrypt(encrypted)
                        with open(decrypted_file, 'wb') as file:
                            file.write(decrypted)

                # Create a Fernet key, persist it beside the file, then write
                # encrypted (and, for demonstration, decrypted) copies of the
                # uploaded file under the media directory.
                encryptor = Encryptor()
                mykey = encryptor.key_create()
                encryptor.key_write(mykey, file_name + a + '.key')
                loaded_key = encryptor.key_load(file_name + a + '.key')
                encryptor.file_encrypt(loaded_key, img, file_name + 'enc_' + a)
                encryptor.file_decrypt(loaded_key, file_name + 'enc_' + a,
                                       file_name + 'dec_' + a)
                return redirect('add_keyword')
        return render(request, 'upload_file.html', {})
    else:
        return render(request, 'index.html', {})
def add_keyword(request):
    if request.session.has_key('username'):
        uid = request.session['user_id']
        cursor = connection.cursor()
        sql = '''SELECT f.id from app_upload_file as f order by f.id DESC'''
        post = cursor.execute(sql)
        row = cursor.fetchone()
        last_id = row[0]
        file_id = Upload_File.objects.get(id=int(last_id))
        if request.method == 'POST':
            a = request.POST.get('name')
            key_word = a.split(',')
            length = len(key_word)
            for i in range(0, length):
                File_Key.objects.create(file_id=file_id, file_key=key_word[i])
            return redirect('files')
        return render(request, 'add_keyword.html', {'b': last_id})
    else:
        return render(request, 'index.html', {})


def files(request):
    if request.session.has_key('username'):
        uid = request.session['user_id']
        a = Upload_File.objects.filter(user_id=int(uid)).order_by('-id')
        return render(request, 'files.html', {'b': a})
    else:
        return render(request, 'index.html', {})


def delete(request, pk):
    if request.session.has_key('username'):
        uid = request.session['user_id']
        a = Upload_File.objects.filter(id=pk).delete()
        return redirect('files')
    else:
        return render(request, 'index.html', {})


def search_file(request):
    if request.session.has_key('username'):
        # a = Upload_File.objects.all().order_by('-id')
        if request.method == 'GET':
            key_word = request.GET.get('key')
            cursor = connection.cursor()
            sql = '''SELECT f.name,f.notes,f.date,f.file,f.id,k.file_id_id,k.file_key
                     from app_upload_file as f
                     INNER JOIN app_file_key as k
                     ON f.id=k.file_id_id where k.file_key='%s' ''' % (key_word)
            post = cursor.execute(sql)
            row = cursor.fetchall()
            return render(request, 'search_file.html', {'b': row})
        return render(request, 'search_file.html', {})
    else:
        return render(request, 'index.html', {})


def all_users(request):
    if request.session.has_key('aa'):
        uid = request.session['aa_id']
        a = Register_Detail.objects.all()
        return render(request, 'all_users.html', {'b': a})
    else:
        return render(request, 'index.html', {})


def send_request(request, pk):
    if request.session.has_key('username'):
        uid = request.session['user_id']
        file_id = Upload_File.objects.get(id=pk)
        user_id = Register_Detail.objects.get(id=int(uid))
        a = File_Request.objects.create(file_id=file_id, user_id=user_id,
                                        status='Pending')
        return redirect('requested_file')
    else:
        return render(request, 'index.html', {})


def requested_file(request):
    if request.session.has_key('username'):
        uid = request.session['user_id']
        a = File_Request.objects.filter(user_id=int(uid))
        return render(request, 'requested_file.html', {'b': a})
    else:
        return render(request, 'index.html', {})


def send_key(request):
    if request.session.has_key('aa'):
        uid = request.session['aa_id']
        a = File_Request.objects.all()
        return render(request, 'send_key.html', {'b': a})
    else:
        return render(request, 'index.html', {})


def generate_aakey(request, pk):
    if request.session.has_key('aa'):
        uid = request.session['aa_id']
        pkey = public_key(20)
        b = private_key(25)
        a = File_Request.objects.filter(status='Pending', id=pk).update(
            status='Send', public_key=pkey, private_key=b)
        recipient_list = [request.GET.get('email')]
        email_from = settings.EMAIL_HOST_USER
        b = EmailMessage('Your Private Key to Download Requested File',
                         'Private Key: ' + b, email_from, recipient_list).send()
        return redirect('send_key')
    else:
        return render(request, 'index.html', {})


def download_file(request, pk):
    if request.session.has_key('username'):
        uid = request.session['user_id']
        if request.method == 'POST':
            pkey = request.POST.get('pkey')
            prkey = request.POST.get('prkey')
            detail = File_Request.objects.filter(public_key=pkey,
                                                 private_key=prkey, id=pk)
            if detail:
                return render(request, 'download_file.html', {'b': detail, 'pk': pk})
            else:
                messages.success(request, 'You have entered wrong keys, please check the keys.')
        return render(request, 'download_file.html', {'pk': pk})
    else:
        return render(request, 'index.html', {})


def view_keys(request, pk):
    a = File_Request.objects.filter(id=pk)
    return render(request, 'view_keys.html', {'b': a})

C. SAMPLE SCREENSHOTS
USER LOGIN

USER REGISTRATION

CLOUD ADMIN LOGIN

DATA OWNER PAGE – UPLOAD FILES

UPLOADED FILE

SEARCH FILE

GENERATED KEY
