A Novel Data Authentication and Monitoring Approach over Cloud
B.S.L. Satyavathi Devi 1, M. Vamsi Krishna 2, B. Srinivas 3
1,2 Chaitanya Institute of Science and Technology, Kakinada, A.P., India
3 Pragati Engineering College, Surampalem, A.P., India
Abstract: Security is one of the most important and interesting factors in the field of cloud computing during the usage of cloud resources. Although various traditional approaches exist for cloud storage, they do not provide optimal services, because many of the traditional mechanisms do not adequately address data correctness, integrity, and dynamic data support. In this paper we introduce an efficient mechanism for data correctness and error detection, and for implementation purposes we simulated the system with the new architecture.
I. INTRODUCTION
Cloud computing promises greater flexibility in business planning along with significant cost savings by leveraging economies of scale in the IT infrastructure. It also offers a simplified capital and expenditure model for compute services, as well as increased agility for cloud customers, who can easily expand and contract their IT services as business needs change. Even so, many enterprise customers are hesitant to buy into cloud offerings due to governance and security concerns, and many potential users of cloud services lack confidence that cloud providers will adequately protect their data and deliver safe and predictable computing results. As the most recent evolution in computing architecture, cloud computing is simply a further extension of the distributed computing model. Its key characteristics, such as multi-tenancy and massive scalability, are also those that may create new governance challenges for both cloud providers and their customers. Today's cloud computing solutions may also provide a computing infrastructure and related services in which the consumer has limited or no control over the cloud infrastructure, creating a greater need for customers to assess and control risk. Customers must trust the security and governance of the cloud environment in order to have confidence that their data will be protected and its integrity maintained.

Many potential cloud customers are also looking for some level of assurance that appropriate security measures are indeed being properly implemented in the daily operations of the cloud infrastructure. These potential customers want to make informed decisions about whether their data will be sufficiently protected and whether they will be able to comply with specific regulations when using a cloud service. In short, they want the security of the cloud offering to be transparent. Transparent security would entail cloud providers disclosing adequate information about their security policies, design, and practices, including the relevant security measures taken in daily operations. One of the best ways to help customers understand the cloud security environment is for cloud service providers to develop a common way to disclose relevant practices, principles, and capabilities using a common framework; cloud providers and customers can create such a governance framework by leveraging the existing ISO 27001 and ISO 27002 standards to provide an approach that can naturally be applied in a cloud environment.

In computing, a denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a machine or network resource unavailable to its intended users. The motives for and targets of a DoS attack may vary, but it generally consists of efforts to temporarily or indefinitely interrupt or suspend the services of a host connected to the Internet. Perpetrators of DoS attacks typically target sites or services hosted on high-profile web servers such as banks, credit card payment gateways, and even root name servers. The technique has also seen extensive use in online games, employed by server owners or disgruntled competitors. Increasingly, these attacks have also been used as a form of resistance and as a tool for registering dissent.
Richard Stallman has stated that denial-of-service attacks are a form of "Internet street protest". The term is generally used in relation to computer networks, but it is not limited to that field; for example, it is also used in reference to CPU resource management. One common method of attack involves saturating the target machine with so many external communication requests that it cannot respond to legitimate traffic, or responds so slowly as to be rendered essentially unavailable. Such attacks typically place a heavy burden on the server. In general terms, DoS attacks are implemented either by forcing the targeted computer(s) to reset or consume their resources so that they can no longer provide their intended service, or by obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.
II. RELATED WORK
Cloud storage is a model of networked enterprise storage where data is stored not only on the user's computer but also in virtualized pools of storage, which are generally hosted by third parties. Hosting companies operate large data centres, and people who require their data to be hosted buy or lease storage capacity from them. The data centre operators virtualize the resources in the background according to the requirements of the customer and expose them as storage pools which the customers can use to store their files. Physically, the resource may span multiple servers, and the safety of the files depends upon the hosting provider.
In the cloud storage model, users store their data in the cloud and no longer possess it locally, so the correctness and availability of the data files stored on the distributed cloud servers must be guaranteed. One of the key issues is to effectively detect any unauthorized data modification and malfunction, mostly due to server compromise and/or random Byzantine failures. Besides, in the distributed case, when such inconsistencies are successfully identified, finding the server on which the data error lies is also of great significance, because it can be the first step towards fast recovery from storage errors and/or identifying potential external attacks.

The simplest proof-of-retrievability (POR) scheme can be built using a keyed hash function hk(F). In this approach the verifier, before archiving the data file F in cloud storage, pre-computes the cryptographic hash of F using hk(F) and stores this hash along with the secret key K. To check whether the integrity of the file F has been lost, the verifier releases the secret key K to the cloud archive and asks it to compute and return the value of hk(F). By storing multiple hash values for different keys, the verifier can check the integrity of the file F multiple times, each check being an independent proof.
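The following is a minimal sketch of this keyed-hash POR check, assuming HMAC-SHA256 as the keyed hash hk(.) (the paper does not fix a particular hash function); the function names are illustrative only. The verifier pre-computes several (key, digest) pairs before archiving F and spends one pair per integrity check.

```python
import hashlib
import hmac
import os

def precompute_challenges(file_bytes: bytes, rounds: int = 5):
    """Verifier side: before uploading F, store several (K, hk(F)) pairs locally."""
    challenges = []
    for _ in range(rounds):
        key = os.urandom(32)                                         # secret key K
        digest = hmac.new(key, file_bytes, hashlib.sha256).digest()  # hk(F)
        challenges.append((key, digest))
    return challenges

def archive_response(stored_bytes: bytes, released_key: bytes) -> bytes:
    """Cloud archive side: recompute hk(F) over the stored copy of F."""
    return hmac.new(released_key, stored_bytes, hashlib.sha256).digest()

def verify(challenge, response: bytes) -> bool:
    """Verifier side: the stored file is intact iff the digests match."""
    _key, expected = challenge
    return hmac.compare_digest(expected, response)
```

Each check consumes one stored (key, digest) pair, since a released key cannot be reused; this is the main limitation of the simple keyed-hash scheme.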
The traditional architecture contains three basic roles, the data owner, the auditor, and the user, as shown in Fig 1.

Fig 1: Architecture of the protocol

In this paper we propose an efficient mechanism, namely a novel signature scheme for authentication and error recovery. For data integrity we implement an efficient file segmentation method for error correctness, and to provide language interoperability we implement our application as a service-oriented application.
III. PROPOSED WORK

Our proposed work addresses data integrity, data correctness, and language interoperability. Our framework generates an authentication code for each and every block, which is used for error detection, and the framework follows a service-oriented architecture.

A) Novel secure architecture

In our architecture there are different entities with different access privileges. Their details are explained below:

Cloud Server (CS): an entity that is monitored and maintained by the cloud service provider; it has sufficient storage and efficient computational resources.

Third-Party Auditor (TPA): a mediator that has capabilities the users may not have; it is trusted to assess and expose the risk of cloud storage services on behalf of the users upon request.

User: an entity that has data to be stored in the cloud and relies on the cloud for computation and storage of data; it can be either an enterprise or an individual customer.
Fig 2: Architecture for data correctness over the cloud (diagram labels: auditing delegation, public auditing, data auditing to enforce the service-level agreement, data flow, cloud servers)
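A minimal sketch of the three roles and the auditing delegation shown in Fig 2 is given below; the class and method names are illustrative assumptions rather than part of the protocol, and SHA-256 stands in for the block authentication code.

```python
import hashlib

class CloudServer:
    """CS: stores the owner's uploaded blocks and serves read requests."""
    def __init__(self):
        self._blocks = {}                      # block index -> stored bytes

    def store(self, index: int, block: bytes):
        self._blocks[index] = block

    def read(self, index: int) -> bytes:
        return self._blocks[index]

class ThirdPartyAuditor:
    """TPA: audits the cloud storage on behalf of the user, using only the
    meta-data (expected block signatures) received from the data owner."""
    def __init__(self, metadata: dict):
        self.metadata = metadata               # block index -> expected signature

    def audit(self, cs: CloudServer) -> list:
        corrupted = []
        for index, expected in self.metadata.items():
            actual = hashlib.sha256(cs.read(index)).hexdigest()
            if actual != expected:
                corrupted.append(index)        # reported back to the data owner
        return corrupted

class User:
    """User: relies on the cloud for storage and reads blocks through the
    limited access granted by the cloud service."""
    def __init__(self, cs: CloudServer):
        self.cs = cs

    def fetch(self, index: int) -> bytes:
        return self.cs.read(index)
```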
The implementation process is as follows. First, the data owner divides the data into blocks and applies a hash algorithm to the individual blocks. The resulting blocks are then encrypted using the Rijndael algorithm and uploaded to the server, and the meta-data of the uploaded file is forwarded to the third-party auditor. The auditor monitors the uploaded files: using the meta-data he regenerates the signatures for the uploaded files and checks whether the uploaded and regenerated signatures match for each individual block. If any block signature is mismatched, the auditor notifies the data owner that his file is corrupted. The user can access the information through the limited access provided by the cloud service. A minimal sketch of this owner-side preparation is given below.

B) Error detection and data correctness

To accomplish this task we have devised an algorithm that uses a block-signature method to identify the exact block in error. A new block-signature strategy is proposed in this paper to locate the exact position of an error; we call this the error-free transfer technique. The algorithm generates signatures for every block of the file separately and appends those generated signatures at the end of the file. The algorithm also reserves 16 bytes; these bytes are used to send the original size of the file. The block size n in this algorithm depends upon the preference of the user.

The receiving site identifies corruption using a similar technique. The algorithm at the receiving site first determines the actual size of the file received and then separates the signatures from the received file; after this step the file contains only the original data, the appended zeros, and the 16 reserved bytes. The algorithm then regenerates the signatures from the received data and compares them with the received signatures. If the signatures match exactly, the file was received without errors; if a match is not found, the file is corrupted. One very strong point of the proposed algorithm is that it first divides the whole file into blocks of equal size and generates and stores a signature for each block separately, so the number of blocks in the file is exactly equal to the number of signatures generated; that is, each signature represents one block. The signatures regenerated at the receiving site, after removing the sending-site signatures from the file, are matched against the signatures generated at the sending site. If a match is found for a block, that block was received accurately; otherwise the block is corrupted. After identifying the corrupted blocks, the receiving side asks the sending side to retransmit only those blocks that were received corrupted.
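A minimal sketch of the owner-side preparation described at the start of this section (split into blocks, hash each block, then encrypt with Rijndael before uploading) follows. It assumes the third-party pycryptodome package for AES, the standardized form of Rijndael; the block size and key handling are illustrative choices, not fixed by the paper.

```python
import hashlib
from Crypto.Cipher import AES            # pycryptodome; AES is standardized Rijndael
from Crypto.Random import get_random_bytes

BLOCK_SIZE = 512                         # owner-selected block size in bytes

def prepare_upload(data: bytes, key: bytes):
    """Split the file into blocks, hash each block (the meta-data sent to the
    auditor), and encrypt each block before uploading it to the cloud server."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    metadata = {}                        # block index -> hash, forwarded to the TPA
    ciphertexts = {}                     # block index -> (nonce, tag, ciphertext), sent to the CS
    for index, block in enumerate(blocks):
        metadata[index] = hashlib.sha256(block).hexdigest()
        cipher = AES.new(key, AES.MODE_GCM)
        ct, tag = cipher.encrypt_and_digest(block)
        ciphertexts[index] = (cipher.nonce, tag, ct)
    return metadata, ciphertexts

# Usage: the owner generates a key, prepares the file, uploads the ciphertext
# blocks to the cloud server, and forwards the meta-data to the auditor.
key = get_random_bytes(32)               # 256-bit AES key kept by the data owner
metadata, ciphertexts = prepare_upload(b"example file contents" * 100, key)
```

Note that, following the paper's order (hash first, then encrypt), the regenerated signatures must be computed over the decrypted blocks; a variant that signs the ciphertext instead would let the auditor verify without the decryption key.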
Novel authentication-based signature:

Algorithm: Generate file with integrated signatures
Input: user file in ASCII (F0)
Output: file with the signatures appended at the end (Fn)
Method: a signature is computed for each n-byte block of the file. The file is first padded so that (m mod n) = 0:

m ← Length(F0)
n ← block length (any one of 128 / 256 / 512 / 1024 / 2048 / 4096 / 8192 bytes)
res ← 16 reserved bytes (carrying the original file size)
P ← m mod n
Q ← n - (P + res)
if (Q > 0): F1 ← append Q zeros at the end of F0
else if (Q < 0): R ← n + Q; F1 ← append R zeros at the end of F0
F1 ← append the res reserved bytes at the end of F1

To generate the signatures of F1, perform the following steps:

l ← Length(F1)
count ← l / n
for j = 1 to count:
    S ← 0
    S ← reverse( SUM over A = 1..n of ((A XOR B) OR (A AND B)) ), where B ← to_Integer(to_Char(A)), the integer value of the A-th character of the current block
    Sig ← Sig + to_Binary(S)
Fn ← F1 + Sig
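A minimal Python sketch of this padding-and-signature procedure follows. Because the exact per-block signature formula is only partially recoverable from the text above, SHA-256 is substituted for S; the 16 reserved bytes carry the original length m, and the constants mirror the algorithm's parameters.

```python
import hashlib

RES = 16                # reserved bytes carrying the original file size m
BLOCK_SIZE = 512        # n: any one of 128/256/512/1024/2048/4096/8192 bytes
SIG_LEN = 32            # SHA-256 digest length, standing in for to_Binary(S)

def sign_file(f0: bytes, n: int = BLOCK_SIZE) -> bytes:
    """Pad F0 with zeros and the reserved length field so that len(F1) is a
    multiple of n, then append one signature per n-byte block (Fn = F1 + Sig)."""
    m = len(f0)
    p = m % n
    q = n - (p + RES)
    if q < 0:                            # the reserved field does not fit in the
        q = n + q                        # last block, so pad into one more block
    f1 = f0 + b"\x00" * q + m.to_bytes(RES, "big")
    assert len(f1) % n == 0
    sig = b"".join(hashlib.sha256(f1[i:i + n]).digest()
                   for i in range(0, len(f1), n))
    return f1 + sig

def find_corrupted_blocks(fn: bytes, n: int = BLOCK_SIZE) -> list:
    """Receiving site: separate the appended signatures, regenerate them from
    the received data, and return the indices of the blocks that do not match."""
    count = len(fn) // (n + SIG_LEN)     # data blocks followed by their signatures
    f1, sigs = fn[:count * n], fn[count * n:]
    corrupted = []
    for j in range(count):
        block = f1[j * n:(j + 1) * n]
        expected = sigs[j * SIG_LEN:(j + 1) * SIG_LEN]
        if hashlib.sha256(block).digest() != expected:
            corrupted.append(j)
    return corrupted

def recover_original(f1: bytes) -> bytes:
    """Strip the padding using the reserved length field at the end of F1."""
    m = int.from_bytes(f1[-RES:], "big")
    return f1[:m]
```

Only the blocks whose indices are returned by find_corrupted_blocks need to be retransmitted, which matches the selective retransmission behaviour described above.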
[Figure: message flow between the Data Owner, Auditor, Users, and CSP — the file f(f1, f2, ..., fn), its meta-data and signatures S(s1, s2, ..., sn), "get signatures" requests, and the corresponding responses.]
IV. CONCLUSION

Our approach is efficient during segmentation and integration, it does not reveal the data to the third-party auditor, and the error-detection mechanism informs the data owner whenever the correctness check fails, using an efficient signature-based authentication mechanism. The process was demonstrated efficiently through simulation.
REFERENCES
[1] C. Wang, Q. Wang, K. Ren, and W. Lou, "Ensuring Data Storage Security in Cloud Computing," Proc. 17th Int'l Workshop on Quality of Service (IWQoS '09), pp. 1-9, July 2009.
[2] Amazon.com, "Amazon Web Services (AWS)," http://aws.amazon.com, 2009.
[3] Sun Microsystems, Inc., "Building Customer Trust in Cloud Computing with Transparent Security," https://www.sun.com/offers/details/sun_transparency.xml, Nov. 2009.
[4] K. Ren, C. Wang, and Q. Wang, "Security Challenges for the Public Cloud," IEEE Internet Computing, vol. 16, no. 1, pp. 69-73, 2012.
[5] M. Arrington, "Gmail Disaster: Reports of Mass Email Deletions," http://www.techcrunch.com/2006/12/28/gmail-disasterreportsof-mass-email-deletions, Dec. 2006.
[6] J. Kincaid, "MediaMax/TheLinkup Closes Its Doors," http://www.techcrunch.com/2008/07/10/mediamaxthelinkup-closesits-doors, July 2008.
[7] Amazon.com, "Amazon S3 Availability Event: July 20, 2008," http://status.aws.amazon.com/s3-20080720.html, July 2008.
[8] S. Wilson, "Appengine Outage," http://www.cio-weblog.com/50226711/appengine_outage.php, June 2008.
[9] B. Krebs, "Payment Processor Breach May Be Largest Ever," http://voices.washingtonpost.com/securityfix/2009/01/, Jan. 2009.
[10] A. Juels and B.S. Kaliski Jr., "PORs: Proofs of Retrievability for Large Files," Proc. 14th ACM Conf. on Computer and Comm. Security (CCS '07), pp. 584-597, Oct. 2007.
BIOGRAPHIES
B.S.L. Satyavathi Devi is a student of Chaitanya Institute of Science and Technology, Madhavapatnam, Kakinada, pursuing her M.Tech (Computer Science and Engineering) from JNTU Kakinada. Her areas of interest include Computer Networks, Information Security, Compiler Design, and Artificial Intelligence.

M. Vamsikrishna, a well-known author and excellent teacher, received his M.Tech (AI&R) and M.Tech (CS) from Andhra University. He is working as Professor and HOD, CSE Dept., Chaitanya Institute of Science and Technology. He has 13 years of teaching and research experience and 20 publications in national and international conferences and journals. His areas of interest include AI, Computer Networks, Information Security, and flavors of Unix.

B. Srinivas is working as Associate Professor in the CSE Dept. of Pragati Engineering College, Surampalem. He received his M.Tech (CSE) from Acharya Nagarjuna University. He has 6 years of teaching experience and is pursuing a Ph.D. from JNTU Kakinada. His areas of interest include Computer Networks, Information Security, Mobile Computing, and Cloud Computing.