Information shared over networks, such as audio and video files, faces major security challenges. Over the past decade, large-scale systems such as cloud infrastructures have been used to improve security. Content such as pictures, audio, and video clips is now routinely shared in online training sessions, so video files are protected using digital signatures and digital watermarking. Sharing multimedia content requires an environment that supports knowledge exchange across private and public clouds. Digital signature methods applied to multimedia components, such as 2D and 3D video clips shared among users on cloud infrastructure, are assessed against various cloud-based security techniques.
Traditional searchable encryption schemes based on the term frequency-inverse document frequency (TF-IDF) model adopt the presence of keywords to measure the relevance of documents to queries, which ignores the latent semantic meanings concealed in the context. The Latent Dirichlet Allocation (LDA) topic model can be utilized to model the semantics among texts and achieve semantic-aware multi-keyword search. However, the LDA topic model treats queries and documents from the perspective of topics, and keyword information is ignored. In this paper, we propose a privacy-preserving searchable encryption scheme based on the LDA topic model and the query likelihood model. We extract the feature keywords from each document using LDA-based information gain (IG) and the topic frequency-inverse topic frequency (TF-ITF) model. With feature keyword extraction and the query likelihood model, our scheme achieves a more accurate semantic-aware keyword search. A special index tree is used to enhance search efficiency, and the secure inner product operation is utilized to implement the privacy-preserving ranked search. Experiments on real-world datasets demonstrate the effectiveness of our scheme.
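As a point of reference for the baseline this scheme improves on, the sketch below shows plain TF-IDF relevance ranking, where documents are scored only by the keywords they contain; the corpus, query, and scoring details are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' scheme): classic TF-IDF relevance scoring,
# the keyword-presence baseline that ignores latent semantics.
import math
from collections import Counter

docs = {
    "d1": "cloud storage encryption keyword search".split(),
    "d2": "topic model semantic search over encrypted cloud data".split(),
    "d3": "image processing on gpu clusters".split(),
}

def tf_idf_score(query_terms, doc_terms, all_docs):
    """Score a document by summing tf * idf over the query terms."""
    tf = Counter(doc_terms)
    n_docs = len(all_docs)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in all_docs.values() if term in d)  # document frequency
        if df == 0:
            continue
        idf = math.log(n_docs / df)
        score += (tf[term] / len(doc_terms)) * idf
    return score

query = ["encrypted", "cloud", "search"]
ranked = sorted(docs, key=lambda d: tf_idf_score(query, docs[d], docs), reverse=True)
print(ranked)  # documents ordered by keyword relevance only, no semantics
```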
Personal Health Record (PHR) service is an emerging model for health information exchange. It allows patients to create, update, and manage personal and medical information, and to control and share that information with other users as well as health care providers. PHR data is hosted by third-party cloud service providers in order to enhance its interoperability. However, outsourcing these data to a cloud server raises serious security and privacy issues, so PHRs are encrypted before outsourcing. Many issues, such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation, remain the most important challenges toward achieving fine-grained, cryptographically enforced data access control. To achieve fine-grained and scalable data access control over clients' data, a novel patient-centric framework is used. This framework focuses mainly on the multiple-data-owner scenario. A high degree of patient privacy is guaranteed simultaneously by exploiting multi-authority ABE. The scheme also enables dynamic modification of access policies or file attributes and supports efficient on-demand user/attribute revocation. However, some practical limitations remain in building a PHR system. In workflow-based access control scenarios, the data access right may need to be granted based on users' identities rather than their attributes, which ABE does not handle efficiently. To solve these problems, this thesis proposes a PHR system based on Attribute-Based Broadcast Encryption (ABBE).
Efficient fine-grained access mechanisms have been a main concern in the cloud storage area for several years. Attribute-based signcryption (ABSC), a logical combination of attribute-based encryption (ABE) and attribute-based signature (ABS), can provide confidentiality and authenticity for sensitive data as well as anonymous authentication. At the same time it is more efficient than the previous "encrypt-then-sign" and "sign-then-encrypt" patterns. However, most of the existing ABSC schemes fail to serve the real scenario of multiple authorities and have heavy communication and computing overhead. Hence, we construct a novel ABSC scheme realizing multi-authority access control with a constant-size ciphertext that does not depend on the number of attributes or authorities. Furthermore, our scheme provides public verifiability of the ciphertext and privacy protection for the signcryptor. In particular, it is proven secure in the standard model, including ciphertext indistinguishability under adaptive chosen ciphertext attacks and existential unforgeability under adaptive chosen message attacks.
As is known, attribute-based encryption (ABE) is usually adopted for cloud storage, both for its fine-grained access control over data and for its guarantee of data confidentiality. Nevertheless, single-authority attribute-based encryption (SA-ABE) has an obvious drawback: only one attribute authority can assign the users' attributes, so the data can be shared only within the management domain of that attribute authority, and multiple attribute authorities are unable to share the data. On the other hand, multi-authority attribute-based encryption (MA-ABE) has its advantages over SA-ABE. It can not only satisfy the need for fine-grained access control and data confidentiality, but also allow the data to be shared among different attribute authorities. However, existing MA-ABE schemes are unsuitable for resource-constrained devices, because these schemes are all based on expensive bilinear pairing. Moreover, a major challenge for MA-ABE schemes is attribute revocation, and many of the solutions proposed so far are not efficient enough. In this paper, on the basis of elliptic curve cryptography, we propose an efficient revocable multi-authority attribute-based encryption (RMA-ABE) scheme for cloud storage. The security analysis indicates that the proposed scheme is indistinguishable under adaptive chosen plaintext attack assuming hardness of the decisional Diffie-Hellman problem. Compared with other schemes, the proposed scheme is more economical in computation and storage.
In this paper, we present DistSim, a scalable distributed in-memory semantic similarity estimation framework for knowledge graphs. DistSim provides a multitude of state-of-the-art similarity estimators. We have developed the similarity estimation pipeline by combining generic software modules. For large-scale RDF data, DistSim uses MinHash with locality-sensitive hashing to achieve better scalability over all-pair similarity estimations. The modules of DistSim can be set up using a multitude of (hyper-)parameters, allowing the trade-off between the information taken into account and the processing time to be adjusted. Furthermore, the output of the similarity estimation pipeline is native RDF. DistSim is integrated into the SANSA stack, documented in Scaladocs, and covered by unit tests. Additionally, the variables and provided methods follow the Apache Spark MLlib namespace conventions. The performance of DistSim was tested on a distributed cluster across the dimensions of data set size and processing power versus processing time, which shows the scalability of DistSim with respect to increasing data set sizes and processing power. DistSim is already in use for solving several RDF data analytics use cases, and it is available in the open-source GitHub project SANSA.
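The following sketch illustrates the MinHash idea that DistSim applies at scale, here in plain Python rather than Spark and with an assumed 64-hash signature: signatures of entity feature sets are compared slot by slot to approximate their Jaccard similarity, so not every pair of full sets has to be intersected.

```python
# Illustrative sketch (not DistSim's code): MinHash signatures approximate
# Jaccard similarity, which is what makes LSH bucketing of candidates possible.
import hashlib

def minhash_signature(items, num_hashes=64):
    """One min-hash value per seeded hash function, over a set of strings."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{x}".encode()).digest()[:8], "big")
            for x in items
        ))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = {"dbo:Person", "dbo:birthPlace", "dbo:Berlin"}
b = {"dbo:Person", "dbo:birthPlace", "dbo:Paris"}
print(estimated_jaccard(minhash_signature(a), minhash_signature(b)))  # near 0.5 (true Jaccard = 2/4)
```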
In order to realize the sharing of data by multiple users on the blockchain, this paper proposes an attribute-based searchable encryption scheme with verifiable ciphertext via blockchain. The scheme uses a public key algorithm to encrypt the keyword, an attribute-based encryption algorithm to encrypt the symmetric key, and the symmetric key to encrypt the file. The keyword index is stored on the blockchain, while the ciphertexts of the symmetric key and the file are stored on the cloud server. The scheme uses searchable encryption technology to achieve secure search on the blockchain, relies on the immutability of the blockchain to ensure the security of the keyword ciphertext, and uses a verification algorithm to guarantee the integrity of the data on the cloud. When a user's attributes need to be changed or the ciphertext access structure is changed, the scheme uses proxy re-encryption technology to implement the user's attribute revocation, with the authority center responsible for the whole attribute revocation process. The security proof shows that the scheme achieves ciphertext security, keyword security, and collusion resistance. In addition, the numerical results show that the proposed scheme is effective.
With the help of cloud computing, the ubiquitous and diversified Internet of Things (IoT) has greatly improved human society. Revocable multi-authority attribute-based encryption (MA-ABE) is considered a promising technique to solve the security challenges of data access control in the dynamic IoT, since it can achieve dynamic access control over encrypted data. However, on the one hand, the existing revocable large-universe MA-ABE suffers from the collusion attack launched by revoked users and non-revoked users. On the other hand, the user-collusion-avoidance revocable MA-ABE schemes do not support a large attribute (or user) universe, i.e., a flexible number of attributes (or users). In this article, the author proposes an efficient revocable large-universe MA-ABE based on prime-order bilinear groups. The proposed scheme supports user-attribute revocation, i.e., a revoked user only loses one or more attributes and can still access the data so long as her/his remaining attributes satisfy the access policy. It is statically secure in the random oracle model under the q-DPBDHE2 assumption. Moreover, it is secure against the collusion attack launched by revoked users and non-revoked users. Meanwhile, it meets the requirements of forward and backward security. Resource-limited users can choose outsourced decryption to save resources. The performance analysis results indicate that it is suitable for large-scale cross-domain collaboration in the dynamic cloud-aided IoT.
The main goal of a cloud provider is to make profits by providing services to users. Existing profit optimization strategies employ homogeneous user models in which user personality is ignored, resulting in lower profits and, notably, lower user satisfaction, which in turn leads to fewer users and further reduced profits. In this paper, we propose efficient personality-aware request scheduling schemes to maximize the profit of the cloud provider under the constraint of user satisfaction. Specifically, we first model the service requests at the granularity of individual personality and propose a personalized user satisfaction prediction model based on questionnaires. Subsequently, we design a personality-guided integer linear programming (ILP)-based request scheduling algorithm to maximize the profit under the constraint of user satisfaction, followed by an approximate but lightweight value assessment and cross entropy (VACE)-based profit improvement scheme. The VACE-based scheme is especially tailored for applications with high scheduling resolution. Extensive simulation results show that our satisfaction prediction model can achieve an accuracy of up to 83%, and our profit optimization schemes can improve the profit by at least 3.96% compared to the benchmark methods while still obtaining a speedup of at least 1.68x.
We propose a new design for large-scale multimedia content protection systems. Our design leverages cloud infrastructures to provide cost efficiency, rapid deployment, scalability, and elasticity to accommodate varying workloads. The proposed system can be used to protect different multimedia content types, including videos, images, audio clips, songs, and music clips. The system can be deployed on private and/or public clouds. Our system has two novel components: (i) a method to create signatures of videos, and (ii) a distributed matching engine for multimedia objects. The signature method creates robust and representative signatures of videos that capture the depth signals in these videos; it is computationally efficient to compute and compare, and it requires little storage. The distributed matching engine achieves high scalability and is designed to support different multimedia objects. We implemented the proposed system and deployed it on two clouds: the Amazon cloud and our private cloud. Our experiments with more than 11,000 videos and 1 million images show the high accuracy and scalability of the proposed system. In addition, we compared our system to the protection system used by YouTube; the results show that the YouTube protection system fails to detect most copies of videos, while our system detects more than 98% of them.
Set-valued data and social networks provide opportunities to mine useful, yet potentially security-sensitive, information. While there are mechanisms to anonymize data and protect privacy separately in set-valued data and in social networks, the existing approaches to data privacy do not address the privacy issues that emerge when publishing set-valued data and its correlated social network simultaneously. In this paper, we propose a privacy attack model based on linking the set-valued data with the social network topology information, and a novel technique to defend against such attacks and protect individual privacy. To improve data utility and the practicality of our scheme, we use local generalization and partial suppression to make the set-valued data satisfy the grouped ρ-uncertainty model and to reduce the impact on the community structure of the social network when anonymizing it. Experiments on real-life data sets show that our method outperforms the existing mechanisms in data privacy and, more specifically, that it provides greater data utility while having less impact on the community structure of social networks.
In cloud computing, an important concern is to allocate the available resources of service nodes to the requested tasks on demand and to optimize the objective function, i.e., maximizing resource utilization, payoffs, and available bandwidth. This article proposes a hierarchical multi-agent optimization (HMAO) algorithm in order to maximize resource utilization and minimize the bandwidth cost for cloud computing. The proposed HMAO algorithm is a combination of the genetic algorithm (GA) and the multi-agent optimization (MAO) algorithm. To maximize resource utilization, an improved GA is implemented to find a set of service nodes that are used to deploy the requested tasks. A decentralized MAO algorithm is presented to minimize the bandwidth cost. We study the effect of the key parameters of the HMAO algorithm by the Taguchi method and evaluate the performance results. The results demonstrate that the HMAO algorithm is more effective than two baseline algorithms, the genetic algorithm (GA) and the fast elitist non-dominated sorting genetic algorithm (NSGA-II), in solving the large-scale optimization problem of resource allocation. Furthermore, we compare the performance of the HMAO algorithm with two heuristic algorithms, greedy and Viterbi, for online resource allocation.
With the emergence of the Internet, people all around the world use its services and are heavily dependent on it. People are also storing huge amounts of data in the cloud. It is a challenge for researchers to secure the private and critical data of users so that unauthorized persons cannot access or manipulate it. Cryptography is the process of converting useful user information into a form that is meaningless to an unauthorized person, so that only authorized persons can access and understand it. For ensuring privacy there are multiple cryptographic algorithms, selected according to the requirements of the user or the security specification of the organization. This paper compares various cryptographic encryption algorithms with respect to their key features and then discusses their performance cost based on some selected key criteria. The algorithms chosen for this purpose are DES, 3DES, IDEA, CAST-128, AES, Blowfish, RSA, ABE, and ECC.
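As a hedged illustration of the kind of measurement such comparisons rest on, the snippet below times a symmetric encrypt/decrypt round trip using the third-party `cryptography` package's Fernet recipe (AES-based); the payload size and the choice of library are assumptions made for the example, not the paper's benchmark setup.

```python
# Hedged illustration only: timing a symmetric-key round trip in Python with
# the "cryptography" package (pip install cryptography). Fernet uses AES-128
# in CBC mode with an HMAC, so this stands in for the AES row of a comparison.
import time
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)
payload = b"x" * 1_000_000  # 1 MB of user data (assumed test size)

start = time.perf_counter()
token = cipher.encrypt(payload)
plain = cipher.decrypt(token)
elapsed = time.perf_counter() - start

assert plain == payload
print(f"encrypt+decrypt of 1 MB took {elapsed * 1000:.1f} ms")
```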
Using cloud storage services, users can store their data in the cloud to avoid the expenditure of local data storage and maintenance. To ensure the integrity of the data stored in the cloud, many data integrity auditing schemes have been proposed. In most, if not all, of the existing schemes, a user needs to employ his private key to generate the data authenticators for realizing the data integrity auditing. Thus, the user has to possess a hardware token (e.g., USB token, smart card) to store his private key and memorize a password to activate this private key. If this hardware token is lost or this password is forgotten, most of the current data integrity auditing schemes would be unable to work. In order to overcome this problem, we propose a new paradigm called data integrity auditing without private key storage and design such a scheme. In this scheme, we use biometric data (e.g., iris scan, fingerprint) as the user's fuzzy private key to avoid using the hardware token. Meanwhile, the scheme can still effectively complete the data integrity auditing. We utilize a linear sketch with coding and error correction processes to confirm the identity of the user. In addition, we design a new signature scheme which not only supports blockless verifiability, but also is compatible with the linear sketch. The security proof and the performance analysis show that our proposed scheme achieves desirable security and efficiency.
Searchable encryption is of increasing interest for protecting data privacy in secure searchable cloud storage. In this paper, we investigate the security of a well-known cryptographic primitive, namely public key encryption with keyword search (PEKS), which is very useful in many applications of cloud storage. Unfortunately, it has been shown that the traditional PEKS framework suffers from an inherent insecurity called the inside keyword guessing attack (KGA), launched by a malicious server. To address this security vulnerability, we propose a new PEKS framework named dual-server PEKS (DS-PEKS). As another main contribution, we define a new variant of smooth projective hash functions (SPHFs), referred to as linear and homomorphic SPHF (LH-SPHF). We then show a generic construction of secure DS-PEKS from LH-SPHF. To illustrate the feasibility of our new framework, we provide an efficient instantiation of the general framework from a decision Diffie-Hellman-based LH-SPHF and show that it can achieve strong security against the inside KGA.
As cloud computing technology has developed over the last decade, outsourcing data to cloud services for storage has become an attractive trend, which spares the effort of heavy data maintenance and management. Nevertheless, since outsourced cloud storage is not fully trustworthy, it raises security concerns about how to realize data deduplication in the cloud while achieving integrity auditing. In this work, we study the problem of integrity auditing and secure deduplication of cloud data. Specifically, aiming at achieving both data integrity and deduplication in the cloud, we propose two secure systems, namely SecCloud and SecCloud+. SecCloud introduces an auditing entity with maintenance of a MapReduce cloud, which helps clients generate data tags before uploading as well as audit the integrity of data stored in the cloud. Compared with previous work, the computation by the user in SecCloud is greatly reduced during the file uploading and auditing phases. SecCloud+ is motivated by the fact that customers always want to encrypt their data before uploading, and it enables integrity auditing and secure deduplication of encrypted data.
The notion of database outsourcing enables the data owner to delegate the database management to a cloud service provider (CSP) that provides various database services to different users. Recently, plenty of research work has been done on the primitive of outsourced database. However, it seems that no existing solutions can perfectly support the properties of both correctness and completeness for the query results, especially in the case when the dishonest CSP intentionally returns an empty set for the query request of the user. In this paper, we propose a new verifiable auditing scheme for outsourced database, which can simultaneously achieve the correctness and completeness of search results even if the dishonest CSP purposely returns an empty set. Furthermore, we can prove that our construction can achieve the desired security properties even in the encrypted outsourced database. Besides, the proposed scheme can be extended to support the dynamic database setting by incorporating the notion of verifiable database with updates.
Cloud storage services have become increasingly popular. Because of the importance of privacy, many cloud storage encryption schemes have been proposed to protect data from those who do not have access. All such schemes assumed that cloud storage providers are safe and cannot be hacked; however, in practice, some authorities (i.e., coercers) may force cloud storage providers to reveal user secrets or confidential data on the cloud, thus altogether circumventing storage encryption schemes. In this paper, we present our design for a new cloud storage encryption scheme that enables cloud storage providers to create convincing fake user secrets to protect user privacy. Since coercers cannot tell if obtained secrets are true or not, the cloud storage providers ensure that user privacy is still securely protected.
Cloud computing moves the application software and databases to centralized large data centers, where the management of the data and services may not be fully trustworthy. In this work, we study the problem of ensuring the integrity of data storage in cloud computing. To reduce the computational cost at the user side during the integrity verification of their data, the notion of public verifiability has been proposed. However, the challenge is that the computational burden is too large for users with resource-constrained devices to compute the public authentication tags of file blocks. To tackle the challenge, we propose OPoR, a new cloud storage scheme involving a cloud storage server and a cloud audit server, where the latter is assumed to be semi-honest. In particular, we consider the task of allowing the cloud audit server, on behalf of the cloud users, to pre-process the data before uploading to the cloud storage server and later verify the data integrity. OPoR outsources and offloads the heavy computation of tag generation to the cloud audit server and eliminates the involvement of the user in the auditing and pre-processing phases. Furthermore, we strengthen the proof of retrievability (PoR) model to support dynamic data operations, as well as ensure security against reset attacks launched by the cloud storage server in the upload phase.
Increasingly, more and more organizations are opting to outsource data to remote cloud service providers (CSPs). Customers can rent the CSP's storage infrastructure to store and retrieve an almost unlimited amount of data by paying fees metered in gigabytes per month. For an increased level of scalability, availability, and durability, some customers may want their data to be replicated on multiple servers across multiple data centers. The more copies the CSP is asked to store, the more fees the customers are charged. Therefore, customers need a strong guarantee that the CSP is storing all data copies agreed upon in the service contract, and that all these copies are consistent with the most recent modifications issued by the customers. In this paper, we propose a map-based provable multicopy dynamic data possession (MB-PMDDP) scheme that has the following features: 1) it provides evidence to the customers that the CSP is not cheating by storing fewer copies; 2) it supports outsourcing of dynamic data, i.e., it supports block-level operations such as block modification, insertion, deletion, and append; and 3) it allows authorized users to seamlessly access the file copies stored by the CSP. We give a comparative analysis of the proposed MB-PMDDP scheme with a reference model obtained by extending existing provable possession schemes for dynamic single-copy data. The theoretical analysis is validated through experimental results on a commercial cloud platform. In addition, we show security against colluding servers, and discuss how to identify corrupted copies by slightly modifying the proposed scheme.
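The toy sketch below conveys only the basic challenge-response intuition behind proving possession of multiple copies; it is not the MB-PMDDP construction (a real scheme uses homomorphic tags so the owner need not keep the whole file), and the masking and digest choices are assumptions made for the example.

```python
# Intuition sketch only: the customer derives a distinct replica per copy index
# so one stored copy cannot answer for all, then challenges the CSP with a fresh
# nonce that forces it to touch each stored replica.
import hmac, hashlib, os

SECRET = os.urandom(32)
original = b"contract.pdf contents ..."

def make_copy(data, copy_index):
    """Mask the data with a per-copy keystream so replicas are distinguishable."""
    mask = hashlib.sha256(SECRET + copy_index.to_bytes(4, "big")).digest()
    return bytes(b ^ mask[i % 32] for i, b in enumerate(data))

copies = [make_copy(original, i) for i in range(3)]  # sent to the CSP

def csp_response(stored_copy, nonce):
    return hmac.new(nonce, stored_copy, hashlib.sha256).hexdigest()

# Audit phase: a fresh nonce per challenge prevents replaying old answers.
nonce = os.urandom(16)
for i, stored in enumerate(copies):
    expected = hmac.new(nonce, make_copy(original, i), hashlib.sha256).hexdigest()
    assert csp_response(stored, nonce) == expected
print("all replicas answered the challenge")
```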
For ranked search over encrypted cloud data, order-preserving encryption (OPE) is an efficient tool to encrypt the relevance scores of the inverted index. When deterministic OPE is used, the ciphertexts reveal the distribution of relevance scores. Therefore, Wang et al. proposed a probabilistic OPE, called one-to-many OPE, for searchable encryption applications, which can flatten the distribution of the plaintexts. In this paper, we propose a differential attack on one-to-many OPE that exploits the differences of the ordered ciphertexts. The experimental results show that the cloud server can obtain a good estimate of the distribution of relevance scores through a differential attack. Furthermore, with some background information on the outsourced documents, the cloud server can accurately infer the encrypted keywords using the estimated distributions.
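The toy example below illustrates the leakage that motivates this line of work: under a deterministic order-preserving mapping, the ciphertext histogram has exactly the shape of the plaintext relevance-score histogram. The score distribution and the mapping are invented for the example; the paper's differential attack on one-to-many OPE goes further by exploiting the gaps between ordered ciphertexts.

```python
# Toy illustration (not the attack in the paper): deterministic OPE maps equal
# plaintexts to equal ciphertexts, so frequencies survive encryption unchanged.
import random
from collections import Counter

# Skewed relevance scores in 1..10 (higher scores rarer), as a stand-in corpus.
scores = [random.choices(range(1, 11), weights=range(10, 0, -1))[0] for _ in range(5000)]

# Deterministic OPE stand-in: a secret, strictly increasing random mapping.
gaps = [random.randint(1, 1000) for _ in range(10)]
ope_table = {p: sum(gaps[:p]) for p in range(1, 11)}
ciphertexts = [ope_table[s] for s in scores]

print(Counter(scores).most_common(3))       # plaintext distribution
print(Counter(ciphertexts).most_common(3))  # identical shape, values merely relabeled
```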
A key approach to secure cloud computing is for the data owner to store encrypted data in the cloud, and issue decryption keys to authorized users. Then, when a user is revoked, the data owner will issue re-encryption commands to the cloud to re-encrypt the data, to prevent the revoked user from decrypting the data, and to generate new decryption keys to valid users, so that they can continue to access the data. However, since a cloud computing environment is comprised of many cloud servers, such commands may not be received and executed by all of the cloud servers due to unreliable network communications. In this paper, we solve this problem by proposing a time-based re-encryption scheme, which enables the cloud servers to automatically re-encrypt data based on their internal clocks. Our solution is built on top of a new encryption scheme, attribute-based encryption, to allow fine-grain access control, and does not require perfect clock synchronization for correctness.
Personal health records (PHRs) should remain the lifelong property of patients, who should be able to show them conveniently and securely to selected caregivers and institutions. In this paper, we present MyPHRMachines, a cloud-based PHR system taking a radically new architectural solution to health record portability. In MyPHRMachines, health-related data and the application software to view and/or analyze it are separately deployed in the PHR system. After uploading their medical data to MyPHRMachines, patients can access them again from remote virtual machines that contain the right software to visualize and analyze them without any need for conversion. Patients can share their remote virtual machine session with selected caregivers, who will need only a web browser to access the pre-loaded fragments of their lifelong PHR. We discuss a prototype of MyPHRMachines applied to two use cases, i.e., radiology image sharing and personalized medicine.
Cloud computing, as an emerging technology trend, is expected to reshape advances in information technology. The Efficient Information Retrieval for Ranked Queries (EIRQ) scheme recovers ranked files on user demand. EIRQ works on top of an aggregation and distribution layer (ADL), which acts as a mediator between the cloud and end users. The EIRQ scheme reduces the communication cost and overhead. A mask matrix is used to filter out data so that only what the user really wants is returned to the aggregation and distribution layer (ADL). A user can retrieve files on demand by choosing queries of different ranks. This feature is useful when there are a large number of matched files but the user only needs a small subset of them. Under different parameter settings, extensive evaluations have been conducted on both analytical models and a real cloud environment in order to examine the effectiveness of our schemes. To avoid small-scale interruptions in cloud computing, two essential issues must be addressed: privacy and efficiency. A private keyword-based file retrieval scheme was proposed by Ostrovsky.
We propose a mediated certificateless encryption scheme without pairing operations for securely sharing sensitive information in public clouds. Mediated certificateless public key encryption (mCL-PKE) solves the key escrow problem in identity-based encryption and the certificate revocation problem in public key cryptography. However, existing mCL-PKE schemes are either inefficient because of the use of expensive pairing operations or vulnerable to partial decryption attacks. In order to address the performance and security issues, in this paper, we first propose an mCL-PKE scheme without using pairing operations. We apply our mCL-PKE scheme to construct a practical solution to the problem of sharing sensitive information in public clouds. The cloud is employed as a secure storage as well as a key generation center. In our system, the data owner encrypts the sensitive data using the cloud-generated users' public keys based on its access control policies and uploads the encrypted data to the cloud. Upon successful authorization, the cloud partially decrypts the encrypted data for the users. The users subsequently fully decrypt the partially decrypted data using their private keys. The confidentiality of the content and the keys is preserved with respect to the cloud, because the cloud cannot fully decrypt the information. We also propose an extension to the above approach to improve the efficiency of encryption at the data owner. We implement our mCL-PKE scheme and the overall cloud-based system, and evaluate its security and performance. Our results show that our schemes are efficient and practical.
Trust management is one of the most challenging issues for the adoption and growth of cloud computing. The highly dynamic, distributed, and non-transparent nature of cloud services introduces several challenging issues such as privacy, security, and availability. Preserving consumers' privacy is not an easy task due to the sensitive information involved in the interactions between consumers and the trust management service. Protecting cloud services against their malicious users (e.g., such users might give misleading feedback to disadvantage a particular cloud service) is a difficult problem. Guaranteeing the availability of the trust management service is another significant challenge because of the dynamic nature of cloud environments. In this article, we describe the design and implementation of CloudArmor, a reputation-based trust management framework that provides a set of functionalities to deliver Trust as a Service (TaaS), which includes i) a novel protocol to prove the credibility of trust feedbacks and preserve users' privacy, ii) an adaptive and robust credibility model for measuring the credibility of trust feedbacks to protect cloud services from malicious users and to compare the trustworthiness of cloud services, and iii) an availability model to manage the availability of the decentralized implementation of the trust management service. The feasibility and benefits of our approach have been validated by a prototype and experimental studies using a collection of real-world trust feedbacks on cloud services.
Outsourcing data to third-party administrative control, as is done in cloud computing, gives rise to security concerns. The data may be compromised due to attacks by other users and nodes within the cloud. Therefore, strong security measures are required to protect data within the cloud. However, the employed security strategy must also take into account the optimization of the data retrieval time. In this paper, we propose Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which collectively approaches the security and performance issues. In the DROPS methodology, we divide a file into fragments and replicate the fragmented data over the cloud nodes. Each of the nodes stores only a single fragment of a particular data file, which ensures that even in case of a successful attack, no meaningful information is revealed to the attacker. Moreover, the nodes storing the fragments are separated by a certain distance by means of graph T-coloring to prevent an attacker from guessing the locations of the fragments. Furthermore, the DROPS methodology does not rely on traditional cryptographic techniques for data security, thereby relieving the system of computationally expensive methodologies. We show that the probability of locating and compromising all of the nodes storing the fragments of a single file is extremely low. We also compare the performance of the DROPS methodology with 10 other schemes. A higher level of security with only a slight performance overhead was observed.
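The following simplified sketch conveys the placement constraint in the spirit of graph T-coloring, on an assumed toy topology: no two fragments of the same file land on adjacent nodes. It is an illustration of the idea, not the DROPS placement algorithm.

```python
# Simplified placement sketch (not the DROPS algorithm): each fragment goes to a
# node that neither holds nor neighbors a node holding another fragment.
adjacency = {              # assumed toy topology of cloud nodes
    "n1": {"n2"}, "n2": {"n1", "n3"}, "n3": {"n2", "n4"},
    "n4": {"n3", "n5"}, "n5": {"n4"},
}

def place_fragments(fragments, adjacency):
    placement, used = {}, set()
    for frag in fragments:
        for node in adjacency:
            if node in used:
                continue
            if adjacency[node] & used:   # reject neighbors of occupied nodes
                continue
            placement[frag] = node
            used.add(node)
            break
        else:
            raise RuntimeError(f"no non-adjacent node left for {frag}")
    return placement

print(place_fragments(["f1", "f2", "f3"], adjacency))  # e.g. f1->n1, f2->n3, f3->n5
```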
In this paper, we propose a trustworthy service evaluation (TSE) system to enable users to share service reviews in service-oriented mobile social networks (S-MSNs). Each service provider independently maintains a TSE for itself, which collects and stores users' reviews about its services without requiring any trusted third authority. The service reviews can then be made available to interested users in making wise service selection decisions. We identify three unique service review attacks, i.e., linkability, rejection, and modification attacks, and develop sophisticated security mechanisms for the TSE to deal with these attacks. Specifically, the basic TSE (bTSE) enables users to distributedly and cooperatively submit their reviews in an integrated chain form by using hierarchical and aggregate signature techniques. It prevents service providers from rejecting, modifying, or deleting reviews, and thus the integrity and authenticity of reviews are improved. Further, we extend the bTSE to a Sybil-resisted TSE (SrTSE) to enable the detection of two typical Sybil attacks. In the SrTSE, if a user generates multiple reviews toward a vendor in a predefined time slot with different pseudonyms, the real identity of that user will be revealed. Through security analysis and numerical results, we show that the bTSE and the SrTSE effectively resist the service review attacks, and that the SrTSE additionally detects the Sybil attacks in an efficient manner. Through performance evaluation, we show that the bTSE achieves better performance in terms of submission rate and delay than a service review system that does not adopt user cooperation.
With the advent and popularity of social networks, more and more users like to share their experiences, such as ratings, reviews, and blogs. New factors of social networks, such as interpersonal influence and interest based on circles of friends, bring opportunities and challenges for recommender systems (RS) to solve the cold start and sparsity problems of datasets. Some of these social factors have been used in RS, but they have not been fully considered. In this paper, three social factors, personal interest, interpersonal interest similarity, and interpersonal influence, are fused into a unified personalized recommendation model based on probabilistic matrix factorization. The factor of personal interest can make the RS recommend items that meet users' individualities, especially for experienced users. Moreover, for cold start users, the interpersonal interest similarity and interpersonal influence can enhance the intrinsic link among features in the latent space. We conduct a series of experiments on three rating datasets: Yelp, MovieLens, and Douban Movie. Experimental results show the proposed approach outperforms the existing RS approaches.
Cloud storage is an application of clouds that liberates organizations from establishing in-house data storage systems. However, cloud storage gives rise to security concerns. In case of group-shared data, the data face both cloud-specific and conventional insider threats. Secure data sharing among a group that counters insider threats of legitimate yet malicious users is an important research issue. In this paper, we propose the Secure Data Sharing in Clouds (SeDaSC) methodology that provides: 1) data confidentiality and integrity; 2) access control; 3) data sharing (forwarding) without using compute-intensive reencryption; 4) insider threat security; and 5) forward and backward access control. The SeDaSC methodology encrypts a file with a single encryption key. Two different key shares for each of the users are generated, with the user only getting one share. The possession of a single share of a key allows the SeDaSC methodology to counter the insider threats. The other key share is stored by a trusted third party, which is called the cryptographic server. The SeDaSC methodology is applicable to conventional and mobile cloud computing environments. We implement a working prototype of the SeDaSC methodology and evaluate its performance based on the time consumed during various operations. We formally verify the working of SeDaSC by using high-level Petri nets, the Satisfiability Modulo Theories Library, and a Z3 solver. The results proved to be encouraging and show that SeDaSC has the potential to be effectively used for secure data sharing in the cloud.
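A minimal sketch of the two-share idea follows, assuming a simple XOR splitting for illustration: the user and the cryptographic server each hold one share of the file-encryption key, and neither share alone reveals the key.

```python
# Minimal sketch (assumed XOR splitting, not SeDaSC itself): split the file key
# into two shares so that only the user and the cryptographic server together
# can reconstruct it.
import os

def split_key(key):
    share_user = os.urandom(len(key))                            # random share
    share_server = bytes(a ^ b for a, b in zip(key, share_user)) # complementary share
    return share_user, share_server

def recombine(share_user, share_server):
    return bytes(a ^ b for a, b in zip(share_user, share_server))

key = os.urandom(32)                        # symmetric file-encryption key
s_user, s_server = split_key(key)
assert recombine(s_user, s_server) == key   # both shares together recover the key
assert s_user != key and s_server != key    # neither share alone equals the key
```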
Cloud computing is popularizing the computing paradigm in which data is outsourced to a third-party service provider (server) for data mining. Outsourcing, however, raises a serious security issue: how can the client of weak computational power verify that the server returned the correct mining result? In this paper, we focus on the specific task of frequent itemset mining. We consider the server that is potentially untrusted and tries to escape from verification by using its prior knowledge of the outsourced data. We propose efficient probabilistic and deterministic verification approaches to check whether the server has returned correct and complete frequent itemsets. Our probabilistic approach can catch incorrect results with high probability, while our deterministic approach measures the result correctness with 100 percent certainty. We also design efficient verification methods for both cases that the data and the mining setup are updated. We demonstrate the effectiveness and efficiency of our methods using an extensive set of empirical results on real datasets.
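A loose sketch of the client-side spot-checking intuition appears below; the paper's probabilistic and deterministic constructions are more involved (they also catch incomplete answers), and the transaction data and support threshold here are invented for the example.

```python
# Hedged sketch (not the paper's construction): the weak client recounts the
# support of a few itemsets claimed frequent by the server, catching blatant
# incorrect answers with some probability.
import random
from itertools import combinations

transactions = [set(random.sample(["a", "b", "c", "d", "e"], k=3)) for _ in range(1000)]
min_support = 0.25

def frequent_pairs(txns, threshold):
    counts = {}
    for t in txns:
        for pair in combinations(sorted(t), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {p for p, c in counts.items() if c / len(txns) >= threshold}

server_answer = frequent_pairs(transactions, min_support)  # pretend this came from the server

# Client-side probabilistic check: recount a few claimed itemsets exactly.
for pair in random.sample(sorted(server_answer), k=min(3, len(server_answer))):
    support = sum(1 for t in transactions if set(pair) <= t) / len(transactions)
    assert support >= min_support, f"server lied about {pair}"
print("spot-check passed")
```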
Cloud computing has great potential for providing robust computational power to society at reduced cost. It enables customers with limited computational resources to outsource their large computation workloads to the cloud, and economically enjoy massive computational power, bandwidth, storage, and even appropriate software that can be shared in a pay-per-use manner. Storing data in a third party's cloud system, however, causes serious concern over data confidentiality. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system, because only a few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formulated. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server.
Cloud computing is a style of computing where different capabilities are provided as a service to customers using Internet technologies. The most commonly offered services are infrastructure (IaaS), software (SaaS), and platform (PaaS). This work integrates service management into the cloud computing concept and shows how management can be provided as a service in the cloud. Nowadays, services need to adapt their functionalities across heterogeneous environments with different technological and administrative domains. The implied complexity of this situation can be simplified by a service management architecture in the cloud. This paper focuses on this architecture, taking into account specific service management functionalities, like incident management or KPI/SLA management, and provides a complete solution. The proposed architecture is based on a distributed set of agents using semantic-based techniques: a shared knowledge plane, instantiated in the cloud, has been introduced to ensure communication between agents.
This paper presents a novel economic model to regulate capacity sharing in a federation of hybrid cloud providers (CPs). The proposed work models the interactions among the CPs as a repeated game among selfish players that aim at maximizing their profit by selling their unused capacity in the spot market but are uncertain of future workload fluctuations. The proposed work first establishes that the uncertainty in future revenue can act as a participation incentive for sharing in the repeated game. We then demonstrate how an efficient sharing strategy can be obtained by solving a simple dynamic programming problem. The obtained strategy is a simple update rule that depends only on the current workloads and a single variable summarizing past interactions. In contrast to existing approaches, the model incorporates historical and expected future revenue as part of the virtual machine (VM) sharing decision. Moreover, these decisions are enforced neither by a centralized broker nor by predefined agreements. Rather, the proposed model employs a simple grim trigger strategy where a CP is threatened by the elimination of future VM hosting by other CPs. Simulation results demonstrate the performance of the proposed model in terms of the increased profit and the reduction in the variance of spot market VM availability and prices.
Social network platforms have rapidly changed the way that people communicate and interact. They have enabled the establishment of, and participation in, digital communities as well as the representation, documentation and exploration of social relationships. We believe that as 'apps' become more sophisticated, it will become easier for users to share their own services, resources and data via social networks. To substantiate this, we present a social compute cloud where the provisioning of cloud infrastructure occurs through "friend" relationships. In a social compute cloud, resource owners offer virtualized containers on their personal computer(s) or smart device(s) to their social network. However, as users may have complex preference structures concerning with whom they do or do not wish to share their resources, we investigate, via simulation, how resources can be effectively allocated within a social community offering resources on a best effort basis. In the assessment of social resource allocation, we consider welfare, allocation fairness, and algorithmic runtime. The key findings of this work illustrate how social networks can be leveraged in the construction of cloud computing infrastructures and how resources can be allocated in the presence of user sharing preferences.
Cloud data center management is a key problem due to the numerous and heterogeneous strategies that can be applied, ranging from the VM placement to the federation with other clouds. Performance evaluation of cloud computing infrastructures is required to predict and quantify the cost-benefit of a strategy portfolio and the corresponding quality of service (QoS) experienced by users. Such analyses are not feasible by simulation or on-the-field experimentation, due to the great number of parameters that have to be investigated. In this paper, we present an analytical model, based on stochastic reward nets (SRNs), that is both scalable to model systems composed of thousands of resources and flexible to represent different policies and cloud-specific strategies. Several performance metrics are defined and evaluated to analyze the behavior of a cloud data center: utilization, availability, waiting time, and responsiveness. A resiliency analysis is also provided to take into account load bursts. Finally, a general approach is presented that, starting from the concept of system capacity, can help system managers to opportunely set the data center parameters under different working conditions.
Data deduplication is an important data compression technique for eliminating duplicate copies of repeated data, and it has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing. To better protect data security, this paper makes the first attempt to formally address the problem of authorized data deduplication. Different from traditional deduplication systems, the differential privileges of users are further considered in the duplicate check besides the data itself. We also present several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that our scheme is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement a prototype of our proposed authorized duplicate check scheme and conduct testbed experiments using our prototype. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.
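The sketch below illustrates convergent encryption in its simplest form, with an assumed hash-based keystream that is for illustration only and not production cryptography: because the key is derived from the data itself, identical files yield identical ciphertexts, which is what makes deduplication over encrypted data possible.

```python
# Illustrative convergent-encryption sketch (simplified, not the paper's scheme):
# key = H(data), so identical plaintexts encrypt to identical ciphertexts and the
# cloud can deduplicate by comparing tags without seeing the plaintext.
import hashlib

def convergent_encrypt(data: bytes):
    key = hashlib.sha256(data).digest()           # convergent key K = H(data)
    # toy deterministic keystream derived from K (illustration only)
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    ciphertext = bytes(a ^ b for a, b in zip(data, stream))
    tag = hashlib.sha256(ciphertext).hexdigest()  # dedup tag the cloud can compare
    return key, ciphertext, tag

k1, c1, t1 = convergent_encrypt(b"quarterly-report.pdf bytes")
k2, c2, t2 = convergent_encrypt(b"quarterly-report.pdf bytes")
assert c1 == c2 and t1 == t2   # identical data -> identical ciphertext: dedup works
```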
With cloud data services, it is commonplace for data to be not only stored in the cloud, but also shared across multiple users. Unfortunately, the integrity of cloud data is subject to skepticism due to the existence of hardware/software failures and human errors. Several mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data from the cloud server. However, public auditing on the integrity of shared data with these existing mechanisms will inevitably reveal confidential information (identity privacy) to public verifiers. In this paper, we propose a novel privacy-preserving mechanism that supports public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification metadata needed to audit the correctness of shared data. With our mechanism, the identity of the signer on each block in shared data is kept private from public verifiers, who are able to efficiently verify shared data integrity without retrieving the entire file. In addition, our mechanism is able to perform multiple auditing tasks simultaneously instead of verifying them one by one. Our experimental results demonstrate the effectiveness and efficiency of our mechanism when auditing shared data integrity.
While demands on video traffic over mobile networks have been soaring, the wireless link capacity cannot keep up with the traffic demand. The gap between the traffic demand and the link capacity, along with time-varying link conditions, results in poor service quality of video streaming over mobile networks, such as long buffering times and intermittent disruptions. Leveraging the cloud computing technology, we propose a new mobile video streaming framework, dubbed AMES-Cloud, which has two main parts: adaptive mobile video streaming (AMoV) and efficient social video sharing (ESoV). AMoV and ESoV construct a private agent to provide video streaming services efficiently for each mobile user. For a given user, AMoV lets her private agent adaptively adjust her streaming flow with a scalable video coding technique based on the feedback of link quality. Likewise, ESoV monitors the social network interactions among mobile users, and their private agents try to prefetch video content in advance. We implement a prototype of the AMES-Cloud framework to demonstrate its performance. It is shown that the private agents in the clouds can effectively provide adaptive streaming, and perform video sharing (i.e., prefetching) based on the social network analysis.
With its character of low maintenance, cloud computing provides an economical and efficient solution for sharing group resources among cloud users. Unfortunately, sharing data in a multi-owner manner while preserving data and identity privacy from an untrusted cloud is still a challenging issue, due to the frequent change of the membership. In this paper, we propose a secure multi-owner data sharing scheme, named Mona, for dynamic groups in the cloud. By leveraging group signature and dynamic broadcast encryption techniques, any cloud user can anonymously share data with others. Meanwhile, the storage overhead and encryption computation cost of our scheme are independent of the number of revoked users. In addition, we analyze the security of our scheme with rigorous proofs, and demonstrate the efficiency of our scheme in experiments.
With the advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to the commercial public cloud for great flexibility and economic savings. But for protecting data privacy, sensitive data has to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search. Thus, enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and documents in the cloud, it is crucial for the search service to allow multi-keyword queries and provide result similarity ranking to meet the effective data retrieval need. Related works on searchable encryption focus on single keyword search or Boolean keyword search, and rarely differentiate the search results. In this paper, for the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE), and establish a set of strict privacy requirements for such a secure cloud data utilization system to become a reality. Among various multi-keyword semantics, we choose the efficient principle of "coordinate matching", i.e., as many matches as possible, to capture the similarity between the search query and data documents, and further use "inner product similarity" to quantitatively formalize such a principle for similarity measurement. We first propose a basic MRSE scheme using secure inner product computation, and then significantly improve it to meet different privacy requirements in two levels of threat models. Thorough analysis investigating the privacy and efficiency guarantees of the proposed schemes is given, and experiments on a real-world dataset further show that the proposed schemes indeed introduce low overhead on computation and communication.
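As a plaintext illustration of the ranking principle only (MRSE additionally protects the vectors with secure inner-product computation), the sketch below scores documents by "coordinate matching", i.e., the inner product of binary keyword vectors; the vocabulary and documents are assumptions for the example.

```python
# Plaintext illustration of coordinate matching: the inner product of binary
# keyword vectors counts how many query keywords each document contains.
vocabulary = ["cloud", "encryption", "search", "ranking", "privacy"]

def to_vector(keywords):
    return [1 if w in keywords else 0 for w in vocabulary]

def coordinate_matching(query_vec, doc_vec):
    # inner product = number of query keywords the document contains
    return sum(q * d for q, d in zip(query_vec, doc_vec))

query = to_vector({"cloud", "search", "privacy"})
documents = {
    "doc_a": to_vector({"cloud", "encryption", "search"}),
    "doc_b": to_vector({"privacy", "ranking"}),
}
ranked = sorted(documents, key=lambda d: coordinate_matching(query, documents[d]), reverse=True)
print(ranked)  # ['doc_a', 'doc_b']: doc_a matches 2 query keywords, doc_b matches 1
```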
Security challenges are still among the biggest obstacles when considering the adoption of cloud services. This has triggered a lot of research activity, resulting in a quantity of proposals targeting the various cloud security threats. Alongside these security issues, the cloud paradigm comes with a new set of unique features, which open the path toward novel security approaches, techniques, and architectures. This paper provides a survey of the security merits achievable by making use of multiple distinct clouds simultaneously. Various distinct architectures are introduced and discussed according to their security and privacy capabilities and prospects.
Cloud computing, with its promise of (almost) unlimited computation, storage, and bandwidth, is increasingly becoming the infrastructure of choice for many organizations. As cloud offerings mature, service-based applications need to dynamically recompose themselves to self-adapt to changing QoS requirements. In this paper, we present a decentralized mechanism for such self-adaptation, using market-based heuristics. We use a continuous double-auction to allow applications to decide which services to choose, among the many on offer. We view an application as a multi-agent system and the cloud as a marketplace where many such applications self-adapt. We show through a simulation study that our mechanism is effective for the individual application as well as from the collective perspective of all applications adapting at the same time.
Cloud computing is becoming popular. Building high-quality cloud applications is a critical research problem. QoS rankings provide valuable information for making optimal cloud service selection from a set of functionally equivalent service candidates. To obtain QoS values, real-world invocations on the service candidates are usually required. To avoid the time-consuming and expensive real-world service invocations, this paper proposes a QoS ranking prediction framework for cloud services by taking advantage of the past service usage experiences of other consumers. Our proposed framework requires no additional invocations of cloud services when making QoS ranking prediction. Two personalized QoS ranking prediction approaches are proposed to predict the QoS rankings directly. Comprehensive experiments are conducted employing real-world QoS data, including 300 distributed users and 500 real-world web services all over the world. The experimental results show that our approaches outperform other competing approaches.
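A loose sketch of the underlying idea, not the paper's algorithms: rank candidate services for a user from the QoS observed by similar users, so no additional invocations are needed; the users, response times, and similarity weighting below are invented for illustration.

```python
# Hedged sketch of ranking prediction from other consumers' observations:
# weight each neighbor by how well their pairwise service ordering agrees with
# the active user's, then predict response times for untried services.
observed_rt = {   # response times (s) each user has measured; lower is better
    "alice": {"s1": 0.4, "s2": 1.2},
    "bob":   {"s1": 0.5, "s2": 1.1, "s3": 0.3},
    "carol": {"s1": 2.0, "s2": 0.2, "s3": 1.8},
}

def similarity(u, v):
    """Fraction of commonly invoked service pairs ordered the same way."""
    common = set(observed_rt[u]) & set(observed_rt[v])
    pairs = [(a, b) for a in common for b in common if a < b]
    if not pairs:
        return 0.0
    agree = sum(
        (observed_rt[u][a] - observed_rt[u][b]) * (observed_rt[v][a] - observed_rt[v][b]) > 0
        for a, b in pairs
    )
    return agree / len(pairs)

def predicted_ranking(user, candidates):
    def score(svc):
        votes = [(similarity(user, v), observed_rt[v][svc])
                 for v in observed_rt if v != user and svc in observed_rt[v]]
        total = sum(w for w, _ in votes)
        if total == 0:
            return float("inf")          # no similar user has tried this service
        return sum(w * rt for w, rt in votes) / total
    return sorted(candidates, key=score)  # ascending predicted response time

print(predicted_ranking("alice", ["s1", "s2", "s3"]))  # e.g. ['s3', 's1', 's2']
```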