Deduplication enables us to store only one copy of identical data and becomes unprecedentedly significant with the dramatic increase in data stored in the cloud. To ensure data confidentiality, data are usually encrypted before being outsourced. Traditional encryption will inevitably result in multiple different ciphertexts produced from the same plaintext by different users' secret keys, which hinders data deduplication. Convergent encryption makes deduplication possible since it naturally encrypts the same plaintexts into the same ciphertexts. One attendant problem is how to effectively manage a huge number of convergent keys. Several deduplication schemes have been proposed to deal with this problem; however, they either need to introduce key management servers or require interaction between data owners. In this paper, we design a novel client-side deduplication protocol named KeyD without such an independent key management server by utilizing the identity-based broadcast encryption (IBBE) technique. Users only interact with the cloud service provider (CSP) during the process of data upload and download. Security analysis demonstrates that KeyD ensures data confidentiality and convergent key security, and simultaneously protects ownership privacy well. A thorough performance comparison shows that our scheme achieves a better tradeoff among storage cost, communication and computation overhead.
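To make the convergent-encryption idea concrete: the key is derived from the content itself, so identical plaintexts always produce identical ciphertexts. The Python sketch below is a minimal toy illustration of that property (the SHA-256 key derivation, the AES-GCM cipher and the deterministic nonce choice are our own assumptions made for brevity), not the KeyD protocol itself.

import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # Derive the key deterministically from the content itself.
    key = hashlib.sha256(plaintext).digest()                      # 32-byte AES key
    nonce = hashlib.sha256(b"nonce" + plaintext).digest()[:12]    # deterministic nonce (toy choice)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, nonce + ciphertext

k1, c1 = convergent_encrypt(b"same file content")
k2, c2 = convergent_encrypt(b"same file content")
assert c1 == c2   # identical plaintexts yield identical ciphertexts, enabling deduplication

Because the two ciphertexts coincide, the CSP can detect the duplicate and keep a single copy; managing the many resulting per-file keys is precisely the problem the abstract addresses.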
This paper addresses the problem of sharing person-specific genomic sequences without violating the privacy of their data subjects to support large-scale biomedical research projects. The proposed method builds on the framework proposed by Kantarcioglu et al. [1] but extends the results in a number of ways. One improvement is that our scheme is deterministic, with zero probability of a wrong answer (as opposed to a low probability). We also provide a new operating point in the space-time tradeoff, by offering a scheme that is twice as fast as theirs but uses twice the storage space. This point is motivated by the fact that storage is cheaper than computation in current cloud computing pricing plans. Moreover, our encoding of the data makes it possible for us to handle a richer set of queries than exact matching between the query and each sequence of the database, including: (i) counting the number of matches between the query symbols and a sequence; (ii) logical OR matches where a query symbol is allowed to match a subset of the alphabet, thereby making it possible to handle (as a special case) a "not equal to" requirement for a query symbol (e.g., "not a G"); (iii) support for the extended alphabet of nucleotide base codes that encompasses ambiguities in DNA sequences (this happens on the DNA sequence side instead of the query side); (iv) queries that specify the number of occurrences of each kind of symbol in the specified sequence positions (e.g., two 'A' and four 'C' and one 'G' and three 'T', occurring in any order in the query-specified sequence positions); (v) a threshold query whose answer is 'yes' if the number of matches exceeds a query-specified threshold (e.g., "7 or more matches out of the 15 query-specified positions"); (vi) for all query types, we can hide the answers from the decrypting server, so that only the client learns the answer; and (vii) in all cases, the client deterministically learns only the query's answer, except for query type (v) where we quantify the (very small) statistical leakage to the client of the actual count.
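The query types (i)-(v) above are easiest to see in the clear. The short sketch below implements only their plaintext semantics; in the actual scheme these predicates are evaluated over encrypted sequences, and the function names and sample strings here are purely illustrative.

from collections import Counter

def count_matches(seq, query):
    # (i) number of positions where the query symbol equals the sequence symbol
    return sum(1 for s, q in zip(seq, query) if s == q)

def or_match_count(seq, query_sets):
    # (ii) each query position is a set of allowed symbols, e.g. {"A","C","T"} means "not a G"
    return sum(1 for s, allowed in zip(seq, query_sets) if s in allowed)

def composition_query(seq, positions, wanted):
    # (iv) do the selected positions contain exactly the wanted multiset of symbols?
    return Counter(seq[p] for p in positions) == Counter(wanted)

def threshold_query(seq, query, t):
    # (v) 'yes' iff the number of exact matches reaches the threshold t
    return count_matches(seq, query) >= t

seq = "ACGTACGTACGTACG"
print(count_matches(seq, "ACGAACGTACGTACG"))        # 14
print(or_match_count(seq, [set("ACT")] * len(seq)))  # positions that are "not a G"
print(threshold_query(seq, "ACGAACGTACGTACG", 7))    # True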
Temporary keyword search on confidential data in a cloud environment is the main focus of this research. Cloud providers are not fully trusted, so it is necessary to outsource data in encrypted form. In attribute-based keyword search (ABKS) schemes, authorized users can generate search tokens and send them to the cloud for running the search operation. These search tokens can be used to extract all the ciphertexts that were produced at any time and contain the corresponding keyword. Since this may lead to some information leakage, it is more secure to propose a scheme in which the search tokens can only extract the ciphertexts generated in a specified time interval. To this end, in this paper, we introduce a new cryptographic primitive called key-policy attribute-based temporary keyword search (KP-ABTKS) which provides this property. To evaluate the security of our scheme, we formally prove that our proposed scheme achieves the keyword secrecy property and is secure against selectively chosen keyword attack (SCKA), both in the random oracle model and under the hardness of the decisional bilinear Diffie-Hellman (DBDH) assumption. Furthermore, we show that the complexity of the encryption algorithm is linear with respect to the number of involved attributes. Performance evaluation shows our scheme's practicality.
Recent advancements in technology have led to a deluge of big data streams that require real-time analysis with strict latency constraints. A major challenge, however, is determining the amount of resources required by applications processing these streams given their high volume, velocity and variety. The majority of research efforts on resource scaling in the cloud are investigated from the cloud provider's perspective, with little consideration for multiple resource bottlenecks. We aim at analyzing the resource scaling problem from an application provider's point of view such that efficient scaling decisions can be made. This paper provides two contributions to the study of resource scaling for big data streaming applications in the cloud. First, we present a layered multi-dimensional hidden Markov model (LMD-HMM) for managing time-bounded streaming applications. Second, to cater to unbounded streaming applications, we propose a framework based on a layered multi-dimensional hidden semi-Markov model (LMD-HSMM). The parameters in our models are evaluated using modified forward and backward algorithms. Our detailed experimental evaluation results show that LMD-HMM is very effective with respect to cloud resource prediction for bounded streaming applications running for shorter periods, while the LMD-HSMM accurately predicts the resource usage for streaming applications running for longer periods.
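As background for how hidden Markov models apply to resource prediction, the sketch below fits a plain Gaussian HMM to a synthetic two-dimensional stream of CPU and memory utilization and decodes the hidden load states. It assumes the third-party hmmlearn package and a made-up workload; it is a baseline illustration only, not the layered multi-dimensional LMD-HMM/LMD-HSMM proposed in the paper.

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Synthetic observations: [cpu_util, mem_util] under a "low" and a "high" load regime.
low = rng.normal([0.2, 0.3], 0.05, size=(200, 2))
high = rng.normal([0.7, 0.8], 0.05, size=(200, 2))
X = np.vstack([low, high, low])

model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
model.fit(X)                    # Baum-Welch (forward-backward) parameter estimation
states = model.predict(X)       # most likely hidden load state per time step
next_obs, _ = model.sample(10)  # sample a short forecast of future observations
print(states[:5], states[200:205])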
With the pervasiveness of mobile devices and the development of biometric technology, biometric identification, which achieves individual authentication based on personal biological or behavioral characteristics, has attracted considerable interest. However, privacy issues of biometric data raise increasing concerns due to the high sensitivity of biometric data. Aiming at this challenge, in this paper we present a novel privacy-preserving online fingerprint authentication scheme, named E-Finga, over encrypted outsourced data. In the proposed E-Finga scheme, the user's fingerprint registered with a trusted authority can be outsourced to different servers with the user's authorization, and secure, accurate and efficient authentication service can be provided without leakage of fingerprint information. Specifically, we propose an improved homomorphic encryption technology for secure Euclidean distance calculation to achieve an efficient online fingerprint matching algorithm over encrypted FingerCode data in outsourcing scenarios. Through detailed security analysis, we show that E-Finga can resist various security threats. In addition, we implement E-Finga on a workstation with a real fingerprint database, and extensive simulation results demonstrate that the proposed E-Finga scheme can provide efficient and accurate online fingerprint authentication.
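To make "secure Euclidean distance over encrypted FingerCode vectors" concrete, here is a minimal sketch with the additively homomorphic Paillier cryptosystem from the third-party phe package. The roles (the server stores the encrypted enrolled vector, the fresh sample arrives in plaintext) and the feature values are our own assumptions; this is a generic illustration, not the improved algorithm of E-Finga.

from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=2048)

enrolled = [12, 7, 30, 5]   # enrolled FingerCode features (toy integers)
sample = [10, 9, 28, 5]     # freshly captured features

# Stored on the server: element-wise encryptions and the encrypted sum of squares.
enc_x = [pub.encrypt(v) for v in enrolled]
enc_x2 = pub.encrypt(sum(v * v for v in enrolled))

# Server-side computation of E(||x - y||^2) using only additions and scalar multiplications:
# ||x - y||^2 = sum(x_i^2) - 2 * sum(x_i * y_i) + sum(y_i^2)
enc_dist2 = enc_x2 + sum(e * (-2 * y) for e, y in zip(enc_x, sample)) + sum(y * y for y in sample)

assert priv.decrypt(enc_dist2) == sum((a - b) ** 2 for a, b in zip(enrolled, sample))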
Cloud storage is in widespread use nowadays, which alleviates users' burden of local data storage. Meanwhile, how to ensure the security and integrity of the outsourced data stored in a cloud storage server has also attracted enormous attention from researchers. Proofs of storage (POS) is the main technique introduced to address this problem. Publicly verifiable POS, which allows a third party to verify data integrity on behalf of the data owner, significantly improves the scalability of cloud services. However, most existing publicly verifiable POS schemes are extremely slow to compute authentication tags for all data blocks due to many expensive group exponentiation operations, even much slower than typical network uploading speed, and thus this becomes the bottleneck of the setup phase of the POS scheme. In this article, we propose a new variant formulation called "delegatable proofs of storage (DPOS)". Then, we construct a lightweight privacy-preserving DPOS scheme, which on the one hand is as efficient as private POS schemes, and on the other hand can support a third-party auditor and can switch auditors at any time, close to the functionalities of publicly verifiable POS schemes. Compared to traditional publicly verifiable POS schemes, we speed up the tag generation process by at least several hundred times, without sacrificing efficiency in any other aspect. In addition, we extend our scheme to support fully dynamic operations with high efficiency, reducing the computation of any data update to O(log N) and simultaneously requiring only constant communication costs. We prove that our scheme is sound and privacy preserving against the auditor in the standard model. Experimental results verify the efficient performance of our scheme.
With the popularity of wearable devices, along with the development of cloud and cloudlet technology, there has been increasing need to provide better medical care. The processing chain of medical data mainly includes data collection, data storage and data sharing, etc.
Cloud computing is a scalable and efficient technology for providing different services. For better reconfigurability and other purposes, users build virtual networks in cloud environments. Since some applications put heavy pressure on cloud datacenter networks, it is necessary to recognize and optimize virtual networks with different applications. In some cloud environments, cloud providers are not allowed to monitor user private information in cloud instances. Therefore, in this paper, we present a virtual network recognition and optimization method to improve the quality of service (QoS) of cloud services. We first introduce a community detection method to recognize virtual networks from the cloud datacenter network. Then, we design a scheduling strategy that combines SDN-based network management and instance placement to improve service-level agreement (SLA) fulfillment. Our experimental results show that we can achieve a recognition accuracy as high as 80 percent in finding the virtual networks, and that the scheduling strategy increases the number of SLA-fulfilled virtual networks.
Ciphertext-policy attribute-based encryption (CP-ABE) is a very promising encryption technique for secure data sharing in the context of cloud computing. The data owner is allowed to fully control the access policy associated with the data to be shared. However, CP-ABE is limited by a potential security risk known as the key escrow problem, whereby the secret keys of users have to be issued by a trusted key authority. Besides, most of the existing CP-ABE schemes cannot support attributes with arbitrary states. In this paper, we revisit attribute-based data sharing schemes in order to solve the key escrow issue and also improve the expressiveness of attributes, so that the resulting scheme is more friendly to cloud computing applications. We propose an improved two-party key issuing protocol that guarantees that neither the key authority nor the cloud service provider can compromise the whole secret key of a user individually. Moreover, we introduce the concept of attributes with weights, provided to enhance the expression of attributes, which can not only extend the expression from binary to an arbitrary state, but also lighten the complexity of the access policy. Therefore, both the storage cost and the encryption complexity for a ciphertext are relieved. The performance analysis and the security proof show that the proposed scheme is able to achieve efficient and secure data sharing in cloud computing.
In order to realize the sharing of data by multiple users on the blockchain, this paper proposes an attribute-based searchable encryption with verifiable ciphertext scheme via blockchain. The scheme uses a public key algorithm to encrypt the keyword, an attribute-based encryption algorithm to encrypt the symmetric key, and the symmetric key to encrypt the file. The keyword index is stored on the blockchain, and the ciphertexts of the symmetric key and the file are stored on the cloud server. The scheme uses searchable encryption technology to achieve secure search on the blockchain, uses the immutability of the blockchain to ensure the security of the keyword ciphertext, and uses a verification algorithm to guarantee the integrity of the data on the cloud. When the user's attributes need to be changed or the ciphertext access structure is changed, the scheme uses proxy re-encryption technology to implement the user's attribute revocation, and the authority center is responsible for the whole attribute revocation process. The security proof shows that the scheme can achieve ciphertext security, keyword security and collusion resistance. In addition, the numerical results show that the proposed scheme is effective.
Elasticity has now become the elemental feature of cloud computing, as it enables the ability to dynamically add or remove virtual machine instances when workload changes. However, effective virtualized resource management is still one of the most challenging tasks. When the workload of a service increases rapidly, existing approaches cannot respond to the growing performance requirement efficiently because of either inaccuracy of adaptation decisions or the slow process of adjustments, both of which may result in insufficient resource provisioning. As a consequence, the quality of service (QoS) of the hosted applications may degrade and the service level objective (SLO) will thus be violated. In this paper, we introduce SPRNT, a novel resource management framework, to ensure high-level QoS in cloud computing systems. SPRNT utilizes an aggressive resource provisioning strategy which encourages SPRNT to substantially increase the resource allocation in each adaptation cycle when workload increases. This strategy first provisions resources which are possibly more than the actual demand, and then reduces the over-provisioned resources if needed. By applying the aggressive strategy, SPRNT can satisfy the increasing performance requirement in the first place so that the QoS can be kept at a high level. The experimental results show that SPRNT achieves up to 7.7× speedup in adaptation time compared with existing efforts. By enabling quick adaptation, SPRNT limits the SLO violation rate to at most 1.3 percent even when dealing with rapidly increasing workload.
The emergence of cloud computing services has led to increased interest in the technology among the general public and enterprises marketing these services. Although there is a need for studies with managerial relevance for this emerging market, the lack of market analysis hampers such investigations. Therefore, this study focuses on the end-user market for cloud computing in Korea. We conduct a quantitative analysis to show consumer adoption behavior for these services, particularly infrastructure as a service (IaaS). A Bayesian mixed logit model and the multivariate probit model are used to analyze the data collected by a conjoint survey. From this analysis, we find that the service fee and stability are the most critical adoption factors. We also present an analysis of the relationship between terminal devices and IaaS, classified by core attributes such as price, stability, and storage capacity. From these relationships, we find that larger storage capacity is more important for mobile devices such as laptops than for desktops. Based on the results of the analysis, this study also recommends useful strategies to enable enterprise managers to focus on more appropriate service attributes, and to target suitable terminal device markets matching the features of the service.
Cloud computing enables enterprises and individuals to outsource and share their data. This way, cloud computing eliminates the heavy workload of local information infrastructure. Attribute-based encryption has become a promising solution for encrypted data access control in clouds due to its ability to achieve one-to-many encrypted data sharing. Revocation is a critical requirement for encrypted data access control systems. After outsourcing the encrypted attribute-based ciphertext to the cloud, the data owner may want to revoke some recipients that were authorized previously, which means that the outsourced attribute-based ciphertext needs to be updated to a new one that is under the revoked policy. The integrity issue arises when the revocation is executed. When a new ciphertext with the revoked access policy is generated by the cloud server, the data recipient cannot be sure that the newly generated ciphertext guarantees to be decrypted to the same plaintext as the originally encrypted data, since the cloud server is provided by a third party, which is not fully trusted. In this paper, we consider a new security requirement for revocable attribute-based encryption schemes: integrity. We introduce a formal definition and security model for revocable attribute-based encryption with data integrity protection (RABE-DI). Then, we propose a concrete RABE-DI scheme and prove its confidentiality and integrity under the defined security model. Finally, we present an implementation result and provide a performance evaluation which shows that our scheme is efficient and practical.
Nowadays, a large amount of data is stored in the cloud, which needs to be protected from unauthorized users. Various algorithms are used to maintain the privacy and security of data. The objective of every system is to achieve confidentiality, integrity, and availability (CIA). However, existing centralized cloud storage fails to provide these CIA properties. So, to enhance the security of data and storage techniques, decentralized cloud storage is used along with blockchain technology. It effectively helps to protect data from tampering or deletion of a part of the data. The data stored in the blockchain are linked to each other by a chain of blocks. Each block has its hash value, which is stored in the next block; this reduces the chances of data being altered. For this purpose, the SHA-512 hashing algorithm is used. Hashing algorithms are used in many contexts where the security of data is required, such as message digests, password verification, digital certificates, and blockchains. By combining these methods and algorithms, data become more secure and reliable, and with the help of various algorithms the security of the data can be further enhanced. Also, the Advanced Encryption Standard (AES) is used to encrypt and decrypt the data due to the significant features of this algorithm.
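A minimal sketch of the hash-chaining mechanism described above: each block stores the SHA-512 hash of its predecessor, so altering any earlier block invalidates every later hash. The block layout and field names are invented for illustration; this is not the storage system of the paper.

import hashlib, json, time

def make_block(data, prev_hash):
    # Each block records the hash of its predecessor; SHA-512 links the chain.
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha512(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("genesis", "0" * 128)
b1 = make_block("record A", genesis["hash"])
b2 = make_block("record B", b1["hash"])

def chain_is_valid(chain):
    # Recompute every hash and check that each block points at its predecessor.
    for prev, cur in zip(chain, chain[1:]):
        body = {k: cur[k] for k in ("timestamp", "data", "prev_hash")}
        if cur["prev_hash"] != prev["hash"]:
            return False
        if cur["hash"] != hashlib.sha512(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
    return True

print(chain_is_valid([genesis, b1, b2]))   # True
b1["data"] = "tampered"
print(chain_is_valid([genesis, b1, b2]))   # False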
With the ever-increasing amount of data residing in the cloud, how to provide users with secure and practical query services has become key to improving the quality of cloud services. Fuzzy searchable encryption (FSE) is identified as one of the most promising approaches for enabling secure query services, since it allows searching encrypted data using keywords with spelling errors. However, existing FSE schemes are far from practical use for the following reasons: (1) inflexibility: it is hard for them to simultaneously support AND and OR semantics in a multi-keyword query; (2) inefficiency: they require sequentially scanning a whole dataset to find matched files, and thus are difficult to apply to a large-scale dataset; (3) limited robustness: it is difficult for them to resist the linear analysis attack in the known-background model. To fix the above problems, this article proposes matrix-based multi-keyword fuzzy search (M2FS) schemes, which support approximate keyword matching by exploiting the indecomposable property of primes. Specifically, we first present a basic scheme, called M2FS-B, where multiple keywords in a query or a file are constructed as prime-related matrices such that the result of matrix multiplication can be employed to determine the level of matching for different query semantics. Then, we construct an advanced scheme, named M2FS-E, which builds a searchable index as a keyword balanced binary (KBB) tree for dynamic and parallel searches, while adding random noise into a query matrix for enhanced robustness. Extensive analyses and experiments demonstrate the validity of our M2FS schemes.
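The "indecomposable property of primes" can be illustrated with a toy far simpler than the matrix construction above: assign each keyword a distinct prime, represent a file by the product of its keyword primes, and use divisibility for AND semantics and a shared factor for OR semantics. This is only a conceptual illustration of why primes suit matching; it is not the M2FS matrix scheme and provides no security on its own.

from math import gcd

# Hypothetical keyword-to-prime assignment (any injective mapping works).
PRIMES = {"cloud": 2, "secure": 3, "fuzzy": 5, "search": 7, "index": 11}

def encode(keywords):
    # A document or query is represented by the product of its keyword primes.
    code = 1
    for k in keywords:
        code *= PRIMES[k]
    return code

def and_match(doc_code, query_keywords):
    # AND semantics: every query prime must divide the document code.
    return doc_code % encode(query_keywords) == 0

def or_match(doc_code, query_keywords):
    # OR semantics: at least one query prime is shared with the document code.
    return gcd(doc_code, encode(query_keywords)) > 1

doc = encode(["cloud", "secure", "search"])
print(and_match(doc, ["cloud", "search"]))   # True
print(and_match(doc, ["cloud", "fuzzy"]))    # False
print(or_match(doc, ["fuzzy", "secure"]))    # True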
The contemporary literature on cloud resource allocation is mostly focused on studying the interactions between customers and cloud managers. Nevertheless, the recent growth in customers' demands and the emergence of private cloud providers (CPs) entice cloud managers to rent extra resources from the CPs so as to handle their backlogged tasks and attract more customers. This also renders the interactions between the cloud managers and the CPs an important problem to study. In this paper, we investigate both interactions through a two-stage auction mechanism. For the interactions between customers and cloud managers, we adopt options-based sequential auctions (OBSAs) to design the cloud resource allocation paradigm. As compared to existing works, our framework can handle customers with heterogeneous demands, provide truthfulness as the dominant strategy, enjoy a simple winner determination procedure, and preclude the delayed entrance issue. We also provide a performance analysis of the OBSAs, which is among the first in the literature. Regarding the interactions between cloud managers and CPs, we propose two parallel markets for resource gathering, and capture the selfishness of the CPs by their offered prices. We conduct a comprehensive analysis of the two markets and identify the bidding strategies of the cloud managers.
People endorse the great power of cloud computing, but cannot fully trust cloud providers to host privacy-sensitive data, due to the absence of user-to-cloud controllability. To ensure confidentiality, data owners outsource encrypted data instead of plaintexts. To share the encrypted files with other users, ciphertext-policy attribute-based encryption (CP-ABE) can be utilized to conduct fine-grained and owner-centric access control. However, this alone is not sufficient against other attacks. Many previous schemes did not grant the cloud provider the capability to verify whether a downloader can decrypt. Therefore, these files are available to everyone with access to the cloud storage. A malicious attacker can download thousands of files to launch economic denial of sustainability (EDoS) attacks, which will largely consume the cloud resources. The payer of the cloud service bears the expense. Besides, the cloud provider serves both as the accountant and the payee of the resource consumption fee, lacking transparency towards data owners. These concerns should be resolved in real-world public cloud storage. In this paper, we propose a solution to secure encrypted cloud storage against EDoS attacks and provide resource consumption accountability. It uses CP-ABE schemes in a black-box manner and complies with arbitrary access policies of the CP-ABE. We present two protocols for different settings, followed by performance and security analysis.
In current healthcare systems, electronic medical records (EMRs) are always located in different hospitals and controlled by a centralized cloud provider. However, this leads to a single point of failure, as patients, being the real owners, lose track of their private and sensitive EMRs. Hence, this article aims to build an access control framework based on smart contracts, built on top of a distributed ledger (blockchain), to secure the sharing of EMRs among the different entities involved in the smart healthcare system. For this, we propose four forms of smart contracts for user verification, access authorization, misbehavior detection, and access revocation, respectively. In this framework, considering the block size of the ledger and the huge amount of patient data, the EMRs are stored in the cloud after being encrypted through the cryptographic functions of elliptic curve cryptography (ECC) and the Edwards-curve digital signature algorithm (EdDSA), while their corresponding hashes are packed into the blockchain. A performance evaluation based on a private Ethereum system is used to verify the efficiency of the proposed access control framework in a real-time smart healthcare system.
Cloud computing provisions scalable resources for high-performance industrial applications. Cloud providers usually offer two types of usage plans: reserved and on-demand. Reserved plans offer cheaper resources for long-term contracts, while on-demand plans are available for short or long periods but are more expensive. To satisfy incoming user demands at a reasonable cost, cloud resources should be allocated efficiently. Most existing works focus either on cheaper solutions with reserved resources, which may lead to under-provisioning or over-provisioning, or on costly solutions with on-demand resources. Since inefficient allocation of cloud resources can cause huge provisioning costs and fluctuation in cloud demand, resource allocation becomes a highly challenging problem. In this paper, we propose a hybrid method to allocate cloud resources according to dynamic user demands. This method is developed as a two-phase algorithm that consists of reservation and dynamic provisioning phases. In this way, we minimize the total deployment cost by formulating each phase as an optimization problem while satisfying quality of service. Due to the uncertain nature of cloud demands, we develop a stochastic optimization approach by modeling user demands as random variables. Our algorithm is evaluated using different experiments and the results show its efficiency in dynamically allocating cloud resources.
We distribute the key to both sender and receiver to avoid the hacking of keys, so this architecture provides a high level of security. This work presents key distribution to safeguard high-level security in large networks, along with new directions in classical cryptography and symmetric cryptography. Two three-party key distribution protocols, one with implicit user authentication and the other with explicit trusted centers' authentication, are proposed to demonstrate the merits of the new combination. The project titled "Efficient Provable of Secure Key Distribution Management" is designed using Microsoft Visual Studio .NET 2005 as the front end and Microsoft SQL Server 2000 as the back end, and works in .NET Framework version 2.0. The coding language used is C#.NET. We authenticate three parties in this project.
A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party's cloud system causes serious concern over data confidentiality. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system because only a few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formulated. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow more flexible adjustment between the number of storage servers and robustness.
With its low-maintenance character, cloud computing provides an economical and efficient solution for sharing group resources among cloud users. Unfortunately, sharing data in a multi-owner manner while preserving data and identity privacy from an untrusted cloud is still a challenging issue, due to the frequent change of membership. In this paper, we propose a secure multi-owner data sharing scheme, named Mona, for dynamic groups in the cloud. By leveraging group signature and dynamic broadcast encryption techniques, any cloud user can anonymously share data with others. Meanwhile, the storage overhead and encryption computation cost of our scheme are independent of the number of revoked users. In addition, we analyze the security of our scheme with rigorous proofs, and demonstrate the efficiency of our scheme in experiments.
Personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been wide privacy concerns, as personal health information could be exposed to those third-party servers and to unauthorized parties. To assure the patients' control over access to their own PHRs, it is a promising method to encrypt the PHRs before outsourcing. Yet, issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semitrusted servers. To achieve fine-grained and scalable data access control for PHRs, we leverage attribute-based encryption (ABE) techniques to encrypt each patient's PHR file. Different from previous works in secure data outsourcing, we focus on the multiple data owner scenario, and divide the users in the PHR system into multiple security domains, which greatly reduces the key management complexity for owners and users. A high degree of patient privacy is guaranteed simultaneously by exploiting multi-authority ABE. Our scheme also enables dynamic modification of access policies or file attributes, and supports efficient on-demand user/attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability, and efficiency of our proposed scheme.
The design of secure authentication protocols is quite challenging, considering that various kinds of root kits reside in personal computers (PCs) to observe users' behavior and to make PCs untrusted devices. Involving humans in authentication protocols, while promising, is not easy because of their limited capability of computation and memorization. Therefore, relying on users to enhance security necessarily degrades usability. On the other hand, relaxing assumptions and rigorous security design to improve the user experience can lead to security breaches that can harm users' trust. In this paper, we demonstrate how careful visualization design can enhance not only the security but also the usability of authentication. To that end, we propose two visual authentication protocols: one is a one-time-password protocol, and the other is a password-based authentication protocol. Through rigorous analysis, we verify that our protocols are immune to many of the challenging authentication attacks in the literature. Furthermore, using an extensive case study on a prototype of our protocols, we highlight the potential of our approach for real-world deployment: we were able to achieve a high level of usability while satisfying stringent security requirements.
Recently, many enterprises have moved their data into the cloud by using file syncing and sharing (FSS) services, which have been deployed for mobile users. However, bring-your-own-device (BYOD) solutions for increasingly deployed mobile devices have in fact raised a new challenge: how to prevent users from abusing the FSS service. In this paper, we address this issue by using a new system model involving anomaly detection, tracing, and revocation approaches. The presented solution applies a new threshold public-key-based cryptosystem, called partially-ordered hierarchical encryption (PHE), which implements a partial-order key hierarchy similar to the role hierarchy widely used in RBAC. PHE provides two main security mechanisms, i.e., traitor tracing and key revocation, which can greatly improve efficiency compared to previous approaches. The security and performance analysis shows that PHE is a provably secure threshold encryption scheme and provides the following salient management and performance benefits: it can efficiently trace all possible traitor coalitions and supports public revocation not only for individual users but also for specified groups.
With the fast development of cloud computing and its wide application, data security plays an important role in cloud computing. This paper puts forward a novel data security strategy based on an artificial immune algorithm on the HDFS architecture for cloud computing. First, we explain the main factors that influence data security in the cloud environment. Then we introduce the HDFS architecture and data security model, and put forward an improved security model for cloud computing. In the third section, the artificial immune algorithm, including the negative selection and dynamic selection algorithms adopted in our system, and how they are applied to cloud computing, are depicted in detail. Finally, simulations are carried out in two steps. The former simulations are carried out to prove the performance of the artificial immune algorithm proposed in this paper; the latter simulations are run on the CloudSim platform to verify that the data security strategy based on the artificial immune algorithm for cloud computing is efficient.
Cloud computing may be defined as the delivery of a product rather than a service. Cloud computing is Internet-based computing which enables the sharing of services. Many users place their data in the cloud. However, the fact that users no longer have physical possession of the possibly large amount of outsourced data makes data integrity protection in cloud computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. So the correctness of data and security is a prime concern. This article studies the problem of ensuring the integrity and security of data storage in cloud computing. Security in the cloud is achieved by signing the data block before sending it to the cloud. Signing is performed using the Boneh-Lynn-Shacham (BLS) algorithm, which is more secure compared to other algorithms. To ensure the correctness of data, we consider an external auditor, called a third-party auditor (TPA), acting on behalf of the cloud user, to verify the integrity of the data stored in the cloud. By utilizing a public-key-based homomorphic authenticator with random masking, privacy-preserving public auditing can be achieved. The technique of bilinear aggregate signatures is used to achieve batch auditing. Batch auditing reduces the computation overhead. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.
Remote data integrity checking is a fundamental technology in distributed computing. In recent times, numerous works have focused on providing data dynamics as well as public verifiability for this kind of protocol. Existing protocols can support both features with the help of a third-party auditor. In an earlier work, a data integrity checking protocol that supports data dynamics was proposed. In this work, we adapt it to support public verifiability. The proposed protocol supports public verifiability without the help of a third-party auditor. Moreover, the proposed protocol does not reveal any private data to third-party verifiers. Through a formal analysis, we show the correctness and security of the protocol. Thereafter, through theoretical analysis and experimental results, we show that the proposed protocol has good performance.
In cloud storage services, deduplication technology is commonly used to reduce the space and bandwidth requirements of services by eliminating redundant data and storing only a single copy of it. Deduplication is most effective when multiple users outsource the same data to the cloud storage, but it raises issues relating to security and ownership. Proof-of-ownership schemes allow any owner of the same data to prove to the cloud storage server that he owns the data in a robust way. However, many users are likely to encrypt their data before outsourcing it to the cloud storage to preserve privacy, which hampers deduplication because of the randomization property of encryption. Recently, several deduplication schemes have been proposed to solve this problem by allowing each owner to share the same encryption key for the same data. However, most of the schemes suffer from security flaws, since they do not consider the dynamic changes in the ownership of outsourced data that occur frequently in a practical cloud storage service. In this paper, we propose a novel server-side deduplication scheme for encrypted data. It allows the cloud server to control access to outsourced data even when the ownership changes dynamically, by exploiting randomized convergent encryption and secure ownership group key distribution. This prevents data leakage not only to revoked users, even though they previously owned that data, but also to an honest-but-curious cloud storage server. In addition, the proposed scheme guarantees data integrity against any tag inconsistency attack. Thus, security is enhanced in the proposed scheme. The efficiency analysis results demonstrate that the proposed scheme is almost as efficient as the previous schemes, while the additional computational overhead is negligible.
The infrastructure cloud (IaaS) service model offers improved resource flexibility and availability, where tenants, insulated from the minutiae of hardware maintenance, rent computing resources to deploy and operate complex systems. Large-scale services running on IaaS platforms demonstrate the viability of this model; nevertheless, many organizations operating on sensitive data avoid migrating operations to IaaS platforms due to security concerns. In this paper, we describe a framework for data and operation security in IaaS, consisting of protocols for a trusted launch of virtual machines and domain-based storage protection. We continue with an extensive theoretical analysis with proofs about protocol resistance against attacks in the defined threat model. The protocols allow trust to be established by remotely attesting the host platform configuration prior to launching guest virtual machines, and ensure confidentiality of data in remote storage, with encryption keys maintained outside of the IaaS domain. The presented experimental results demonstrate the validity and efficiency of the proposed protocols. The framework prototype was implemented on a test bed operating a public electronic health record system, showing that the proposed protocols can be integrated into existing cloud environments.
We propose a new design for large-scale multimedia content protection systems. Our design leverages cloud infrastructures to provide cost efficiency, rapid deployment, scalability, and elasticity to accommodate varying workloads. The proposed system can be used to protect different multimedia content types, including videos, images, audio clips, songs, and music clips. The system can be deployed on private and/or public clouds. Our system has two novel components: (i) a method to create signatures of videos, and (ii) a distributed matching engine for multimedia objects. The signature method creates robust and representative signatures of videos that capture the depth signals in these videos; it is computationally efficient to compute and compare, and requires small storage. The distributed matching engine achieves high scalability and is designed to support different multimedia objects. We implemented the proposed system and deployed it on two clouds: the Amazon cloud and our private cloud. Our experiments with more than 11,000 videos and 1 million images show the high accuracy and scalability of the proposed system. In addition, we compared our system to the protection system used by YouTube, and our results show that the YouTube protection system fails to detect most copies of videos, while our system detects more than 98% of them.
This paper proposes a service operator-aware trust scheme (SOTS) for resource matchmaking across multiple clouds. By analyzing the built-in relationships between the users, the broker, and the service resources, this paper proposes a middleware framework of trust management that can effectively reduce user burden and improve system dependability. Based on multi-dimensional resource service operators, we model the problem of trust evaluation as a process of multi-attribute decision-making, and develop an adaptive trust evaluation approach based on information entropy theory. This adaptive approach can overcome the limitations of traditional trust schemes, whereby the trusted operators are weighted manually or subjectively. As a result, using SOTS, the broker can efficiently and accurately prepare the most trusted resources in advance, and thus provide more dependable resources to users. Our experiments yield interesting and meaningful observations that can facilitate the effective utilization of SOTS in a large-scale multi-cloud environment.
Data deduplication is one of the important data compression techniques for eliminating duplicate copies of repeating data, and has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing. To better protect data security, this paper makes the first attempt to formally address the problem of authorized data deduplication. Different from traditional deduplication systems, the differential privileges of users are further considered in the duplicate check, besides the data itself. We also present several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that our scheme is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement a prototype of our proposed authorized duplicate check scheme and conduct testbed experiments using our prototype. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.
Cloud computing is becoming popular as the next infrastructure of computing platforms. Despite the promising model and the hype surrounding it, security has become the major concern that makes people hesitate to transfer their applications to clouds. Concretely, cloud platforms are under numerous attacks. As a result, it is naturally expected that a firewall be established to protect the cloud from these attacks. However, setting up a centralized firewall for a whole cloud data center is infeasible from both performance and financial aspects. In this paper, we propose a decentralized cloud firewall framework for individual cloud customers. We investigate how to dynamically allocate resources to optimize the resource provisioning cost, while simultaneously satisfying the QoS requirements specified by individual customers. Moreover, we establish novel queuing-theory-based models M/Geo/1 and M/Geo/m for quantitative system analysis, where the service times follow a geometric distribution. By employing the Z-transform and embedded Markov chain techniques, we obtain a closed-form expression for the mean packet response time. Through extensive simulations and experiments, we conclude that an M/Geo/1 model reflects the real cloud firewall system much better than a traditional M/M/1 model. Our numerical results also indicate that we are able to set up a cloud firewall at an affordable cost to cloud customers.
Interconnected systems, such as web servers, database servers, cloud computing servers and so on, are now under threat from network attackers. As one of the most common and aggressive means, denial-of-service (DoS) attacks cause serious impact on these computing systems. In this paper, we present a DoS attack detection system that uses multivariate correlation analysis (MCA) for accurate network traffic characterization by extracting the geometrical correlations between network traffic features. Our MCA-based DoS attack detection system employs the principle of anomaly-based detection in attack recognition. This makes our solution capable of detecting known and unknown DoS attacks effectively by learning the patterns of legitimate network traffic only. Furthermore, a triangle-area-based technique is proposed to enhance and speed up the process of MCA. The effectiveness of our proposed detection system is evaluated using the KDD Cup 99 data set, and the influences of both non-normalized data and normalized data on the performance of the proposed detection system are examined. The results show that our system outperforms two other previously developed state-of-the-art approaches in terms of detection accuracy.
Cloud computing is an emerging data-interactive paradigm in which users' data are remotely stored in an online cloud server. Cloud services provide great convenience for users to enjoy on-demand cloud applications without considering local infrastructure limitations. During data access, different users may be in a collaborative relationship, and thus data sharing becomes significant for achieving productive benefits. Existing security solutions mainly focus on authentication to ensure that a user's private data cannot be illegally accessed, but neglect a subtle privacy issue that arises when a user challenges the cloud server to request other users' data for sharing. The challenged access request itself may reveal the user's privacy, no matter whether or not it can obtain the data access permissions. In this paper, we propose a shared authority based privacy-preserving authentication protocol (SAPA) to address the above privacy issue for cloud storage. In the SAPA, 1) shared access authority is achieved by an anonymous access request matching mechanism with security and privacy considerations (e.g., authentication, data anonymity, user privacy, and forward security); 2) attribute-based access control is adopted to ensure that a user can only access its own data fields; 3) proxy re-encryption is applied to provide data sharing among multiple users. Meanwhile, the universal composability (UC) model is established to prove that the SAPA theoretically has design correctness. This indicates that the proposed protocol is attractive for multi-user collaborative cloud applications.
Cloud storage services have become commercially popular due to their overwhelming advantages. To provide ubiquitous always-on access, a cloud service provider (CSP) maintains multiple replicas for each piece of data on geographically distributed servers. A key problem of using the replication technique in clouds is that it is very expensive to achieve strong consistency on a worldwide scale. In this paper, we first present a novel consistency as a service (CaaS) model, which consists of a large data cloud and multiple small audit clouds. In the CaaS model, a data cloud is maintained by a CSP, and a group of users that constitute an audit cloud can verify whether the data cloud provides the promised level of consistency or not. We propose a two-level auditing architecture, which only requires a loosely synchronized clock in the audit cloud. Then, we design algorithms to quantify the severity of violations with two metrics: the commonality of violations, and the staleness of the value of a read. Finally, we devise a heuristic auditing strategy (HAS) to reveal as many violations as possible. Extensive experiments were performed using a combination of simulations and real cloud deployments to validate HAS.
With the increasing popularity of cloud computing as a solution for building high-quality applications on distributed components, efficiently evaluating the user-side quality of cloud components becomes an urgent and crucial research problem. However, invoking all the available cloud components from the user side for evaluation purposes is expensive and impractical. To address this critical challenge, we propose a neighborhood-based approach, called CloudPred, for collaborative and personalized quality prediction of cloud components. CloudPred is enhanced by feature modeling on both users and components. Our approach CloudPred requires no additional invocation of cloud components on behalf of the cloud application designers. The extensive experimental results show that CloudPred achieves higher QoS prediction accuracy than other competing methods. We also publicly release our large-scale QoS dataset for future related research in cloud computing.
Data sharing is an important functionality in cloud storage. In this paper, we show how to securely, efficiently, and flexibly share data with others in cloud storage. We describe new public-key cryptosystems that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, yet encompassing the power of all the keys being aggregated. In other words, the secret key holder can release a constant-size aggregate key for flexible choices of ciphertext set in cloud storage, while the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or stored in a smart card with very limited secure storage. We provide formal security analysis of our schemes in the standard model. We also describe other applications of our schemes. In particular, our schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was yet to be known.
We present Anchor, a general resource management architecture that uses the stable matching framework to decouple policies from mechanisms when mapping virtual machines to physical servers. In Anchor, clients and operators are able to express a variety of distinct resource management policies as they deem fit, and these policies are captured as preferences in the stable matching framework. The highlight of Anchor is a new many-to-one stable matching theory that efficiently matches VMs with heterogeneous resource needs to servers, using both offline and online algorithms. Our theoretical analyses show the convergence and optimality of the algorithm. Our experiments with a prototype implementation on a 20-node server cluster, as well as large-scale simulations based on real-world workload traces, demonstrate that the architecture is able to realize a diverse set of policy objectives with good performance and practicality.
Cloud computing promises to increase the velocity with which applications are deployed, increase innovation, and lower costs, all while increasing business agility, and it is hence envisioned as the next-generation architecture of the IT enterprise. The nature of cloud computing builds on an established trend of driving cost out of the delivery of services while increasing the speed and agility with which services are deployed. Cloud computing incorporates virtualization, on-demand deployment, Internet delivery of services, and open source software. From another perspective, everything is new because cloud computing changes how we invent, develop, deploy, scale, update, maintain, and pay for applications and the infrastructure on which they run. Because of these benefits of cloud computing, an effective and flexible dynamic security scheme is required to ensure the correctness of users' data in the cloud. Quality of service is an important aspect and, hence, extensive cloud data security and performance are required.
Cloud computing has emerged as a promising pattern for data outsourcing and high-quality data services. However, concerns about sensitive information on the cloud potentially cause privacy problems. Data encryption protects data security to some extent, but at the cost of compromised efficiency. Searchable symmetric encryption (SSE) allows retrieval of encrypted data over the cloud. In this paper, we focus on addressing data privacy issues using SSE. For the first time, we formulate the privacy issue from the aspect of similarity relevance and scheme robustness. We observe that server-side ranking based on order-preserving encryption (OPE) inevitably leaks data privacy. To eliminate the leakage, we propose a two-round searchable encryption (TRSE) scheme that supports top-k multi-keyword retrieval. In TRSE, we employ a vector space model and homomorphic encryption. The vector space model helps to provide sufficient search accuracy, and the homomorphic encryption enables users to be involved in the ranking while the majority of the computing work is done on the server side by operations only on ciphertext. As a result, information leakage can be eliminated and data security is ensured. Thorough security and performance analysis shows that the proposed scheme guarantees high security and practical efficiency.
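To illustrate the vector space model behind the ranking step, the sketch below scores documents against a multi-keyword query with TF-IDF weights and returns the top-k results in the clear. In TRSE these vectors would be encrypted and the scoring performed homomorphically on the server; the plaintext version here only shows what quantity is being ranked, and the sample corpus is invented.

import math
from collections import Counter

docs = {
    "d1": "secure cloud storage with encrypted search",
    "d2": "cloud resource scheduling and allocation",
    "d3": "ranked keyword search over encrypted cloud data",
}

def tfidf_vectors(corpus):
    # Build a TF-IDF weight vector for every document.
    tokenized = {d: text.split() for d, text in corpus.items()}
    df = Counter(w for words in tokenized.values() for w in set(words))
    n = len(corpus)
    return {
        d: {w: (cnt / len(words)) * math.log(n / df[w])
            for w, cnt in Counter(words).items()}
        for d, words in tokenized.items()
    }

def top_k(query, corpus, k=2):
    vectors = tfidf_vectors(corpus)
    terms = query.split()
    # Score = sum of the TF-IDF weights of the query terms present in the document.
    scores = {d: sum(vec.get(w, 0.0) for w in terms) for d, vec in vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(top_k("encrypted cloud search", docs))   # the two most relevant document ids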
Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns about outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes of ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on the security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt et al. and analyze its performance and computational complexity. We implement our scheme and show with comprehensive experiments that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing.
Encryption is the technique of hiding private or sensitive information within something that appears to be nothing out of the usual. If a person views that ciphertext, he or she will have no idea that there is any secret information. What encryption essentially does is exploit human perception; human senses are not trained to look for files that have information hidden inside them. What this system does is let the user send text as a secret message and provide a key or a password to lock the text. This key encrypts the text, so that even if it is intercepted by a hacker, the hacker will not be able to read it. The receiver needs the key to decrypt the hidden text. The user then sends the key to the receiver, who enters the key or password and presses the decrypt button to obtain the secret text from the sender. Diffie-Hellman key exchange offers the best of both worlds, as it uses public key techniques to allow the exchange of a private encryption key. By using this method, you can doubly ensure that your secret message is sent secretly without outside interference from hackers or crackers. If the sender sends this ciphertext in public, others will not know what it is, and it will be received by the receiver. The system uses an online database to store all related information. As the project files and a database file are stored in the Azure cloud, the project is accessed in the web browser through the Azure link.
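For reference, the Diffie-Hellman key agreement mentioned above lets both parties derive the same secret from exchanged public values without ever transmitting a private key. The sketch below is a textbook toy (the small Mersenne prime modulus is chosen only to keep it short); a real deployment would use a standardized group through a vetted cryptographic library.

import secrets

# Toy Diffie-Hellman key agreement. The modulus 2**127 - 1 is a known prime,
# chosen only to keep the example short; it is NOT a recommended group.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 2   # Alice's private key
b = secrets.randbelow(p - 2) + 2   # Bob's private key

A = pow(g, a, p)                   # Alice's public value, sent to Bob
B = pow(g, b, p)                   # Bob's public value, sent to Alice

shared_alice = pow(B, a, p)        # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)          # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob  # both sides now hold the same secret key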
With the advent of the Internet, various online attacks have increased, and among them the most popular attack is phishing. Phishing is an attempt by an individual or a group to obtain personal confidential information, such as passwords and credit card information, from unsuspecting victims for identity theft, financial gain and other fraudulent activities. Fake websites which appear very similar to the original ones are hosted to achieve this. In this paper we propose a new approach named "A Novel Anti-phishing Framework Based on Visual Cryptography" to solve the problem of phishing. Here, image-based authentication using visual cryptography is implemented. The use of visual cryptography is explored to preserve the privacy of an image captcha by decomposing the original image captcha into two shares (known as sheets) that are stored in separate database servers (one with the user and one with the server) such that the original image captcha can be revealed only when both are simultaneously available; the individual sheet images do not reveal the identity of the original image captcha. Once the original image captcha is revealed to the user, it can be used as the password. Using this, the website cross-verifies its identity and proves that it is a genuine website before the end users.
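Classical visual cryptography splits an image into printed shares with pixel expansion; the sketch below uses the simpler XOR-based (2, 2) variant on a toy binary captcha just to show the property the framework relies on: one share alone is uniformly random, and only the two shares together reveal the image.

import secrets

def split_into_shares(image_bits):
    # XOR-based (2, 2) secret sharing of a binary image:
    # share1 is uniformly random, share2 = pixel XOR share1.
    share1 = [secrets.randbelow(2) for _ in image_bits]
    share2 = [p ^ s for p, s in zip(image_bits, share1)]
    return share1, share2

def reconstruct(share1, share2):
    # Only the combination of both shares reveals the original image.
    return [a ^ b for a, b in zip(share1, share2)]

captcha = [1, 0, 1, 1, 0, 0, 1, 0]       # toy binary "image captcha"
s1, s2 = split_into_shares(captcha)
assert reconstruct(s1, s2) == captcha    # both sheets together recover the captcha
# Each share on its own is uniformly random and reveals nothing about the captcha.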
Nowadays, large amounts of data are stored with cloud service providers. Third-party auditors (TPAs), with the help of cryptography, are often used to verify this data. However, most auditing schemes don't protect cloud user data from TPAs. A review of the state of the art and research in cloud data auditing techniques highlights integrity and privacy challenges, current solutions, and future research directions.
With the recent advancement in cloud computing technology, cloud computing allows users to upgrade and downgrade their resource usage based on their needs. Most of these benefits are achieved through resource multiplexing via virtualization technology in the cloud model. Using virtualization technology, data center resources can be dynamically allocated based on application demands. The concepts of "green computing" and skewness are introduced to optimize the number of servers in use and to measure the unevenness in the multi-dimensional resource utilization of a server, respectively. By minimizing skewness, different types of workloads can be combined effectively and the overall utilization of server resources can be improved.
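One common way to quantify that unevenness is to compare each resource's utilization with the server's mean utilization. The abstract does not spell out the formula, so the sketch below assumes the formulation skewness(p) = sqrt(sum_i (r_i / r_mean - 1)^2) purely as an illustration; minimizing it favors servers whose resources are used evenly.

import math

def skewness(utilizations):
    # utilizations: per-resource usage fractions of one server, e.g. [CPU, memory, network].
    # Assumed formulation: sqrt(sum_i (r_i / r_mean - 1)^2); lower means more even usage.
    mean = sum(utilizations) / len(utilizations)
    return math.sqrt(sum((r / mean - 1) ** 2 for r in utilizations))

balanced = [0.60, 0.58, 0.62]     # CPU, memory, network evenly used
unbalanced = [0.90, 0.20, 0.30]   # CPU-heavy workload on this server

print(round(skewness(balanced), 3))    # close to 0
print(round(skewness(unbalanced), 3))  # noticeably larger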