Secure Distributed Deduplication Systems with Improved Reliability
C. Kavitha Sree, CH. Shashikala, Dr. S. Prem Kumar
Affiliations: Pursuing M.Tech, CSE Branch, Dept. of CSE; Assistant Professor, Department of Computer Science and Engineering; Professor & HOD, Department of Computer Science and Engineering, G. Pullaiah College of Engineering and Technology, Kurnool, Andhra Pradesh, India.
Data deduplication is a technique for eliminating duplicate copies of data, and it has been widely used in cloud storage to reduce storage space and upload bandwidth. However, only one copy of each file is stored in the cloud even when that file is owned by a large number of users. Deduplication therefore improves storage utilization while reducing reliability. In addition, the challenge of preserving the privacy of sensitive data arises when users outsource it to the cloud. Aiming to address these security challenges, this paper makes the first attempt to formalize the notion of a distributed reliable deduplication system. It proposes new distributed deduplication systems with higher reliability, in which the data chunks are distributed across multiple cloud servers. The security requirements of data confidentiality and tag consistency are also achieved by introducing a deterministic secret sharing scheme in distributed storage systems, instead of using convergent encryption as in previous deduplication systems.
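The core idea described above, deterministically sharing each data chunk so that identical chunks always yield identical shares (which is what makes cross-user deduplication possible), can be illustrated with a minimal Shamir-style (k, n) sketch. This is an illustrative approximation, not the paper's exact construction; the field prime, hash-based coefficient derivation, and all function names are assumptions made for the example.

```python
# Minimal sketch: deterministic (k, n) secret sharing over a prime field.
# Polynomial coefficients are derived from a hash of the chunk, so the same
# chunk always produces the same shares -- enabling deduplication of shares
# across users. Illustrative only; not the paper's actual scheme.
import hashlib

PRIME = 2**127 - 1  # Mersenne prime; chunks must encode to an int < PRIME

def derive_coeffs(chunk: bytes, k: int) -> list:
    """Derive k-1 deterministic polynomial coefficients from the chunk."""
    coeffs = []
    for i in range(1, k):
        h = hashlib.sha256(chunk + i.to_bytes(4, "big")).digest()
        coeffs.append(int.from_bytes(h, "big") % PRIME)
    return coeffs

def share(chunk: bytes, k: int, n: int) -> list:
    """Split a chunk into n shares; any k of them suffice to reconstruct."""
    secret = int.from_bytes(chunk, "big")
    coeffs = [secret] + derive_coeffs(chunk, k)
    shares = []
    for x in range(1, n + 1):
        # Evaluate the degree-(k-1) polynomial at x, mod PRIME
        y = sum(c * pow(x, e, PRIME) for e, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares: list) -> int:
    """Recover the secret via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Because the coefficients come from the chunk's hash rather than fresh randomness, two users uploading the same chunk send byte-identical shares to each storage server, so servers can deduplicate shares just as they would deduplicate ciphertexts under convergent encryption, while any k of the n servers can still rebuild the chunk if others fail.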
C. Kavitha Sree, CH. Shashikala, Dr. S. Prem Kumar, "Secure Distributed Deduplication Systems with Improved Reliability", International Journal of Computer Engineering In Research Trends (IJCERT), ISSN: 2349-7084, Vol. 2, Issue 12, pp. 908-912, December 2015. URL: https://ijcert.org/ems/ijcert_papers/V2I1281.pdf