Artificial intelligence (AI) and machine learning (ML) have caused a paradigm shift in healthcare that can be used for decision support and forecasting by exploring medical data. Recent studies have shown that AI and ML can be used to fight COVID-19. The objective of this article is to summarize the recent AI- and ML-based studies that have addressed the pandemic. From an initial set of 634 articles, a total of 49 articles were finally selected through an inclusion-exclusion process. In this article, we have explored the objectives of the existing studies (i.e., the role of AI/ML in fighting the COVID-19 pandemic); the context of the studies (i.e., whether they focused on a specific country or took a global perspective); the type and volume of the datasets; and the methodology, algorithms, and techniques adopted in the prediction or diagnosis processes. We have mapped the algorithms and techniques to the data types, highlighting their prediction/classification accuracy. From our analysis, we categorized the objectives of the studies into four groups: disease detection, epidemic forecasting, sustainable development, and disease diagnosis. We observed that most of these studies used deep learning algorithms on image data, more specifically on chest X-rays and CT scans. We have identified six future research opportunities, which we summarize in this paper.
Pain sensation is essential for survival, since it draws attention to physical threat to the body. Pain assessment is usually done through self-reports. However, self-assessment of pain is not available in the case of noncommunicative patients, and therefore, observer reports should be relied upon. Observer reports of pain could be prone to errors due to subjective biases of observers. Moreover, continuous monitoring by humans is impractical. Therefore, automatic pain detection technology could be deployed to assist human caregivers and complement their service, thereby improving the quality of pain management, especially for noncommunicative patients. Facial expressions are a reliable indicator of pain, and are used in all observer-based pain assessment tools. Following the advancements in automatic facial expression analysis, computer vision researchers have tried to use this technology for developing approaches for automatically detecting pain from facial expressions. This paper surveys the literature published in this field over the past decade, categorizes it, and identifies future research directions. The survey covers the pain datasets used in the reviewed literature, the learning tasks targeted by the approaches, the features extracted from images and image sequences to represent pain-related information, and finally, the machine learning methods used.
With the continuous development of eHealthcare systems, medical service recommendation has received great attention. However, although such systems can recommend doctors to users, there are still challenges in ensuring the accuracy and privacy of recommendation. In this paper, to ensure the accuracy of the recommendation, we consider doctors' reputation scores and the similarities between users' demands and doctors' information as the basis of the medical service recommendation. The doctors' reputation scores are measured from multiple feedbacks from users. We propose two concrete algorithms to compute the similarity and the reputation scores in a privacy-preserving way based on the modified Paillier cryptosystem, truth discovery technology, and the Dirichlet distribution. A detailed security analysis is given to show its security properties. In addition, extensive experiments demonstrate the efficiency in terms of computational time for the truth discovery and recommendation processes.
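As background, the sketch below shows the standard Dirichlet-based reputation aggregation that such a reputation component typically draws on: discrete feedback levels are treated as multinomial observations with a Dirichlet prior, and the reputation score is the posterior expectation over the rating levels. The feedback counts and level values are illustrative placeholders, and the privacy-preserving (Paillier-based) aggregation itself is not shown.

# Dirichlet-based reputation score from multi-level feedback (illustrative values).
feedback_counts = [2, 5, 30, 63]          # counts of ratings at levels 1..4 for one doctor
levels          = [0.25, 0.5, 0.75, 1.0]  # numeric value attached to each level
prior           = [1, 1, 1, 1]            # uniform Dirichlet prior (Laplace smoothing)

total = sum(c + a for c, a in zip(feedback_counts, prior))
posterior_mean = [(c + a) / total for c, a in zip(feedback_counts, prior)]
reputation = sum(p * v for p, v in zip(posterior_mean, levels))
print(round(reputation, 3))               # weighted expected rating in (0, 1]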
The Internet of Things (IoT) is a new technology which offers enormous applications that make people's lives more convenient and enhances cities' development. In particular, smart healthcare applications in the IoT have been receiving increasing attention in industrial and academic research. However, due to the sensitiveness of medical information, security and privacy issues in IoT healthcare systems are very important. Designing an efficient secure scheme with low computation time and energy consumption is a critical challenge in IoT healthcare systems. In this paper, a lightweight online/offline certificateless signature (L-OOCLS) is proposed, and then a heterogeneous remote anonymous authentication protocol (HRAAP) is designed to enable remote wireless body area network (WBAN) users to anonymously enjoy healthcare services based on IoT applications. The proposed L-OOCLS scheme is proven secure in the random oracle model, and the proposed HRAAP can resist various types of attacks. Compared with the existing relevant schemes, the proposed HRAAP achieves less computation overhead as well as less power consumption on the WBAN client. In addition, to illustrate its applicability in the IoT, an application scenario is given.
With the soaring development of large-scale online social networks, online information sharing is becoming ubiquitous. Various information propagates through online social networks, both positive and negative. In this paper, we focus on negative information problems such as online rumors. Rumor blocking is a serious problem in large-scale social networks. Malicious rumors could cause chaos in society and hence need to be blocked as soon as possible after being detected. In this paper, we propose a model of dynamic rumor influence minimization with user experience (DRIMUX). Our goal is to minimize the influence of the rumor (i.e., the number of users that have accepted and sent the rumor) by blocking a certain subset of nodes. A dynamic Ising propagation model considering both the global popularity and individual attraction of the rumor is presented based on a realistic scenario. In addition, different from existing influence minimization problems, we take into account the constraint of user experience utility. Specifically, each node is assigned a tolerance time threshold: if the blocking time of a user exceeds that threshold, the utility of the network will decrease. Under this constraint, we then formulate the problem as a network inference problem with survival theory, and propose solutions based on the maximum likelihood principle. Experiments are implemented on large-scale real-world networks and validate the effectiveness of our method.
IoT (Internet of Things) devices often collect data and store the data in the cloud for sharing and further processing; this collection, sharing, and processing will inevitably encounter secure access and authentication issues. Attribute-based signature (ABS), which utilizes the signer's attributes to generate private keys, plays a competent role in data authentication and identity privacy preservation. In ABS, there are multiple authorities that issue different private keys for signers based on their various attributes, and a central authority is usually established to manage all these attribute authorities. However, one security concern is that if the central authority is compromised, the whole system will be broken. In this paper, we present an outsourced decentralized multi-authority attribute-based signature (ODMA-ABS) scheme. The proposed ODMA-ABS achieves attribute privacy and stronger authority-corruption resistance than existing multi-authority attribute-based signature schemes can achieve. In addition, the overhead to generate a signature is further reduced by outsourcing expensive computation to a signing cloud server. We present extensive security analysis and experimental simulation of the proposed scheme. We also propose an access control scheme that is based on ODMA-ABS.
The use of digital games in education has gained considerable popularity in recent years because these games are considered excellent tools for teaching and learning and offer students an engaging and interesting way of participating and learning. In this study, the design and implementation of educational activities that include game creation and use in elementary and secondary education is presented. The proposed educational activities' content covers the parts of the curricula of all the informatics courses, for each education level separately, that include the learning of programming principles. The educational activities were implemented and evaluated by teachers through a discussion session. The findings indicate that the teachers think that learning through creating and using games is more interesting, and that they also like the idea of using various programming environments to create games in order to teach basic programming principles to students.
In this paper, we consider a scenario where a user queries a user profile database, maintained by a social networking service provider, to identify users whose profiles match the profile specified by the querying user. A typical example of this application is online dating. Most recently, an online dating website, Ashley Madison, was hacked, which resulted in a disclosure of a large number of dating user profiles. This data breach has urged researchers to explore practical privacy protection for user profiles in a social network. In this paper, we propose a privacy-preserving solution for profile matching in social networks by using multiple servers. Our solution is built on homomorphic encryption and allows a user to find out matching users with the help of multiple servers without revealing to anyone the query and the queried user profiles in clear. Our solution achieves user profile privacy and user query privacy as long as at least one of the multiple servers is honest. Our experiments demonstrate that our solution is practical.
Privacy is one of the friction points that emerges when communications get mediated in online social networks (OSNs). Different communities of computer science researchers have framed the 'OSN privacy problem' as one of surveillance, institutional privacy, or social privacy. In tackling these problems they have also treated them as if they were independent. We argue that the different privacy problems are entangled and that research on privacy in OSNs would benefit from a more holistic approach. In this article, we first provide an introduction to the surveillance and social privacy perspectives, emphasizing the narratives that inform them, as well as their assumptions, goals, and methods. We then juxtapose the differences between these two approaches in order to understand their complementarity, and to identify potential integration challenges as well as research questions that so far have been left unanswered.
Online reviews regarding different products or services have become the main source for determining public opinion. Unfortunately, to gain profit or fame, spam reviews are written to promote or demote targeted products or services. This practice is known as review spamming, and detecting it can help to analyze the impact of widespread opinion spam in online reviews. In this work, two different spam review detection methods have been proposed: (1) spam review detection using the behavioral method (SRD-BM), which utilizes thirteen different spammer behavioral features to calculate a review spam score that is then used to identify spammers and spam reviews, and (2) spam review detection using the linguistic method (SRD-LM), which works on the content of the reviews and utilizes transformation, feature selection, and classification to identify spam reviews. Experimental evaluations are conducted on a real-world Amazon review dataset covering 26.7 million reviews and 15.4 million reviewers. The evaluations show that both proposed models significantly improve the detection of spam reviews. Comparatively, SRD-BM achieves better accuracy because it utilizes a rich set of spammer behavioral features from the review dataset, which provides an in-depth analysis of spammer behavior. To the best of our knowledge, this is the first study of its kind that uses a large-scale review dataset to analyze different spammers' behavioral features together with a linguistic method utilizing different available classifiers.
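To make the linguistic pipeline concrete, the following is a hedged scikit-learn sketch of the transformation, feature selection, and classification stages in the SRD-LM style. The labeled reviews are invented placeholders, and the paper's actual feature set and classifiers are not reproduced.

# Text transformation -> feature selection -> classification, with toy labeled reviews.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

reviews = ["best product ever buy now amazing deal",       # illustrative spam
           "great value buy buy limited offer",
           "arrived on time, build quality is decent",      # illustrative genuine
           "battery life is shorter than advertised"]
labels  = [1, 1, 0, 0]                                      # 1 = spam review

pipeline = Pipeline([
    ("transform", TfidfVectorizer(ngram_range=(1, 2))),     # text -> weighted n-gram features
    ("select",    SelectKBest(chi2, k=10)),                 # keep the most informative features
    ("classify",  LogisticRegression(max_iter=1000)),
])
pipeline.fit(reviews, labels)
print(pipeline.predict(["unbeatable deal buy now"]))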
Finger-vein biometrics has been extensively investigated for personal verification. Despite recent advances in finger-vein verification, current solutions depend completely on domain knowledge and still lack the robustness to extract finger-vein features from raw images. This paper proposes a deep learning model to extract and recover vein features using limited a priori knowledge. Firstly, based on a combination of known state-of-the-art handcrafted finger-vein image segmentation techniques, we automatically identify two regions: a clear region with high separability between finger-vein patterns and background, and an ambiguous region with low separability between them. The first is associated with pixels on which all the segmentation techniques above assign the same segmentation label (either foreground or background), while the second corresponds to all the remaining pixels. This scheme is used to automatically discard the ambiguous region and to label the pixels of the clear region as foreground or background. A training dataset is constructed from the patches centered on the labeled pixels. Secondly, a convolutional neural network (CNN) is trained on the resulting dataset to predict the probability of each pixel being foreground (i.e., a vein pixel) given a patch centered on it. The CNN learns what a finger-vein pattern is by learning the difference between vein patterns and background. The pixels in any region of a test image can then be classified effectively. Thirdly, as a further contribution, we develop and investigate a fully convolutional network (FCN) to recover missing finger-vein patterns in the segmented image. The experimental results on two public finger-vein databases show a significant improvement in terms of finger-vein verification accuracy.
Data outsourcing is a promising technical paradigm to facilitate cost-effective real-time data storage, processing, and dissemination. In such a system, a data owner proactively pushes a stream of data records to a third-party cloud server for storage, which in turn processes various types of queries from end users on the data owner's behalf. This paper considers outsourced multi-version key-value stores, which have gained increasing popularity in recent years, where a critical security challenge is to ensure that the cloud server returns both authentic and fresh data in response to end users' queries. Despite several recent attempts at authenticating data freshness in outsourced key-value stores, they either incur excessively high communication cost or can only offer very limited real-time guarantees. To fill this gap, this paper introduces KV-Fresh, a novel freshness authentication scheme for outsourced key-value stores that offers a strong real-time guarantee. KV-Fresh is designed based on a novel data structure, the linked key span Merkle hash tree, which enables highly efficient freshness proofs by embedding chaining relationships among records generated at different times. Detailed simulation studies using a synthetic dataset generated from real data confirm the efficacy and efficiency of KV-Fresh.
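For readers unfamiliar with the underlying structure, here is a minimal Merkle hash tree sketch using only Python's standard library. The chaining of key spans across time that KV-Fresh adds is specific to the paper and is not reproduced; this only shows how a root commits to a set of records and how a membership proof is checked. All names are illustrative.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_merkle(leaves):
    """Return tree levels: level[0] = hashed leaves, last level = [root]."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2 == 1:              # duplicate last node on odd-sized levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def merkle_proof(levels, index):
    """Collect sibling hashes from leaf to root for the leaf at `index`."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        sibling = index ^ 1                  # sibling of node i is i XOR 1
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

records = [b"key1:v1@t1", b"key2:v7@t1", b"key3:v2@t1", b"key4:v9@t1"]
levels = build_merkle(records)
root = levels[-1][0]
assert verify(records[2], merkle_proof(levels, 2), root)   # freshness/authenticity proof check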
Deduplication enables us to store only one copy of identical data and becomes unprecedentedly significant with the dramatic increase in data stored in the cloud. For the purpose of ensuring data confidentiality, data are usually encrypted before being outsourced. Traditional encryption will inevitably result in multiple different ciphertexts produced from the same plaintext by different users' secret keys, which hinders data deduplication. Convergent encryption makes deduplication possible since it naturally encrypts the same plaintexts into the same ciphertexts. One attendant problem is how to effectively manage a huge number of convergent keys. Several deduplication schemes have been proposed to deal with this problem. However, they either need to introduce key management servers or require interaction between data owners. In this paper, we design a novel client-side deduplication protocol named KeyD without such an independent key management server by utilizing the identity-based broadcast encryption (IBBE) technique. Users only interact with the cloud service provider (CSP) during the process of data upload and download. Security analysis demonstrates that KeyD ensures data confidentiality and convergent key security, and simultaneously protects ownership privacy well. A thorough performance comparison shows that our scheme makes a better tradeoff among storage cost, communication overhead, and computation overhead.
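The sketch below illustrates plain convergent encryption, the building block referred to above, not the KeyD protocol or its IBBE-based key management: the key is derived from the plaintext itself, so identical files encrypt to identical ciphertexts and the cloud can deduplicate without seeing content. It assumes the third-party cryptography package; names and file contents are illustrative, and the deterministic nonce is for demonstration only.

import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(plaintext: bytes):
    key = hashlib.sha256(plaintext).digest()                       # convergent key K = H(M)
    nonce = hashlib.sha256(b"nonce|" + plaintext).digest()[:12]    # deterministic nonce (demo only)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    tag = hashlib.sha256(ciphertext).hexdigest()                   # dedup tag sent to the CSP
    return key, nonce, ciphertext, tag

store = {}                                                         # cloud-side: tag -> ciphertext
for user_file in [b"quarterly-report.pdf bytes", b"quarterly-report.pdf bytes"]:
    key, nonce, ct, tag = convergent_encrypt(user_file)
    if tag in store:
        print("duplicate detected, nothing new stored")
    else:
        store[tag] = (nonce, ct)

# Any owner holding the same plaintext can re-derive K and decrypt the single stored copy.
key = hashlib.sha256(b"quarterly-report.pdf bytes").digest()
nonce, ct = next(iter(store.values()))
assert AESGCM(key).decrypt(nonce, ct, None) == b"quarterly-report.pdf bytes"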
This paper addresses the problem of sharing person-specific genomic sequences without violating the privacy of their data subjects to support large-scale biomedical research projects. The proposed method builds on the framework proposed by Kantarcioglu et al. [1] but extends the results in a number of ways. One improvement is that our scheme is deterministic, with zero probability of a wrong answer (as opposed to a low probability). We also provide a new operating point in the space-time tradeoff, by offering a scheme that is twice as fast as theirs but uses twice the storage space. This point is motivated by the fact that storage is cheaper than computation in current cloud computing pricing plans. Moreover, our encoding of the data makes it possible for us to handle a richer set of queries than exact matching between the query and each sequence of the database, including: (i) counting the number of matches between the query symbols and a sequence; (ii) logical OR matches where a query symbol is allowed to match a subset of the alphabet, thereby making it possible to handle (as a special case) a "not equal to" requirement for a query symbol (e.g., "not a G"); (iii) support for the extended alphabet of nucleotide base codes that encompasses ambiguities in DNA sequences (this happens on the DNA sequence side instead of the query side); (iv) queries that specify the number of occurrences of each kind of symbol in the specified sequence positions (e.g., two 'A' and four 'C' and one 'G' and three 'T', occurring in any order in the query-specified sequence positions); (v) a threshold query whose answer is 'yes' if the number of matches exceeds a query-specified threshold (e.g., "7 or more matches out of the 15 query-specified positions"); (vi) for all query types, we can hide the answers from the decrypting server, so that only the client learns the answer; and (vii) in all cases, the client deterministically learns only the query's answer, except for query type (v), where we quantify the (very small) statistical leakage to the client of the actual count.
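To ground the query semantics in (i)-(v), here they are expressed over plaintext for clarity; the paper's contribution is evaluating these queries against encrypted sequences, which this sketch deliberately does not attempt. The sequence, query, and positions are illustrative.

from collections import Counter

seq   = "ACGTGGTCA"                   # a stored DNA sequence
query = "ACKTGGTCA"                   # K is an IUPAC ambiguity code meaning "G or T"

IUPAC = {"A": {"A"}, "C": {"C"}, "G": {"G"}, "T": {"T"},
         "K": {"G", "T"}, "N": {"A", "C", "G", "T"}}     # (iii) extended alphabet

# (i) count matches between query symbols and the sequence,
# where (ii) a symbol may match a subset of the alphabet (OR / "not equal to")
matches = sum(1 for q, s in zip(query, seq) if s in IUPAC[q])

# (iv) occurrence counts of each kind of symbol at query-specified positions
positions = [0, 1, 2, 3]
counts = Counter(seq[p] for p in positions)

# (v) threshold query: yes iff the number of matches exceeds a query-specified threshold
threshold_answer = matches >= 7

print(matches, dict(counts), threshold_answer)   # 9 {'A': 1, 'C': 1, 'G': 1, 'T': 1} True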
Temporary keyword search on confidential data in a cloud environment is the main focus of this research. Cloud providers are not fully trusted, so it is necessary to outsource data in encrypted form. In attribute-based keyword search (ABKS) schemes, authorized users can generate search tokens and send them to the cloud for running the search operation. These search tokens can be used to extract all the ciphertexts that were produced at any time and contain the corresponding keyword. Since this may lead to some information leakage, it is more secure to propose a scheme in which the search tokens can only extract the ciphertexts generated in a specified time interval. To this end, in this paper, we introduce a new cryptographic primitive called key-policy attribute-based temporary keyword search (KP-ABTKS), which provides this property. To evaluate the security of our scheme, we formally prove that our proposed scheme achieves the keyword secrecy property and is secure against selectively chosen keyword attack (SCKA), in the random oracle model and under the hardness of the decisional bilinear Diffie-Hellman (DBDH) assumption. Furthermore, we show that the complexity of the encryption algorithm is linear with respect to the number of involved attributes. Performance evaluation shows our scheme's practicality.
Recent advancements in technology have led to a deluge of big data streams that require real-time analysis with strict latency constraints. A major challenge, however, is determining the amount of resources required by applications processing these streams given their high volume, velocity, and variety. The majority of research efforts on resource scaling in the cloud are investigated from the cloud provider's perspective, with little consideration for multiple resource bottlenecks. We aim at analyzing the resource scaling problem from an application provider's point of view such that efficient scaling decisions can be made. This paper provides two contributions to the study of resource scaling for big data streaming applications in the cloud. First, we present a layered multi-dimensional hidden Markov model (LMD-HMM) for managing time-bounded streaming applications. Second, to cater to unbounded streaming applications, we propose a framework based on a layered multi-dimensional hidden semi-Markov model (LMD-HSMM). The parameters in our models are evaluated using modified forward and backward algorithms. Our detailed experimental evaluation results show that LMD-HMM is very effective with respect to cloud resource prediction for bounded streaming applications running for shorter periods, while LMD-HSMM accurately predicts the resource usage for streaming applications running for longer periods.
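As background for the forward/backward machinery mentioned above, here is a plain discrete-HMM forward pass in NumPy; the layered multi-dimensional HMM/HSMM variants of the paper are not reproduced. It computes P(observations | model), the quantity the parameter-estimation algorithms work against. All matrices and observations are made-up illustrations.

import numpy as np

A   = np.array([[0.7, 0.3],            # state transition matrix (e.g. low/high load)
                [0.4, 0.6]])
B   = np.array([[0.6, 0.3, 0.1],       # emission matrix: P(observed usage level | state)
                [0.1, 0.3, 0.6]])
pi  = np.array([0.8, 0.2])             # initial state distribution
obs = [0, 1, 2, 2, 1]                  # discretized resource-usage observations

alpha = pi * B[:, obs[0]]              # forward variable at t = 0
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]      # alpha_t(j) = sum_i alpha_{t-1}(i) * A[i, j] * B[j, o_t]
print("P(obs | model) =", alpha.sum())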
With the pervasiveness of mobile devices and the development of biometric technology, biometric identification, which achieves individual authentication by relying on personal biological or behavioral characteristics, has attracted considerable interest. However, privacy issues of biometric data raise increasing concerns due to the high sensitivity of biometric data. Aiming at this challenge, in this paper, we present a novel privacy-preserving online fingerprint authentication scheme, named E-Finga, over encrypted outsourced data. In the proposed E-Finga scheme, the user's fingerprint registered with a trusted authority can be outsourced to different servers with the user's authorization, and secure, accurate, and efficient authentication service can be provided without leakage of fingerprint information. Specifically, an improved homomorphic encryption technique for secure Euclidean distance calculation is used to achieve an efficient online fingerprint matching algorithm over encrypted FingerCode data in outsourcing scenarios. Through detailed security analysis, we show that E-Finga can resist various security threats. In addition, we implement E-Finga on a workstation with a real fingerprint database, and extensive simulation results demonstrate that the proposed E-Finga scheme can offer efficient and accurate online fingerprint authentication.
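The following toy sketch shows the generic building block the abstract refers to, not the E-Finga protocol itself: a textbook Paillier cryptosystem (tiny primes, no padding, never usable as-is) plus the standard trick for computing a squared Euclidean distance over encrypted feature vectors. The client uploads E(x_i) and E(x_i^2); a server holding a plaintext template y combines them homomorphically into E(||x - y||^2) without decrypting. All numbers and vectors are illustrative.

import math, random

p, q = 1789, 1861                       # toy primes; real deployments need ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                    # with g = n + 1, mu = lam^{-1} mod n

def enc(m):
    r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

add  = lambda c1, c2: (c1 * c2) % n2        # E(a) * E(b)  = E(a + b)
smul = lambda c, k:  pow(c, k % n, n2)      # E(a) ^ k     = E(k * a)  (k taken mod n)

x = [3, 8, 5]                           # client-side FingerCode-like feature vector
y = [1, 9, 2]                           # server-side plaintext template

cx  = [enc(v) for v in x]               # client uploads E(x_i) ...
cx2 = [enc(v * v) for v in x]           # ... and E(x_i^2)

# Server: E(d^2) = E(sum y_i^2) * prod E(x_i^2) * prod E(x_i)^(-2 * y_i)
c = enc(sum(v * v for v in y))
for cxi, cx2i, yi in zip(cx, cx2, y):
    c = add(c, cx2i)
    c = add(c, smul(cxi, -2 * yi))

assert dec(c) == sum((a - b) ** 2 for a, b in zip(x, y))   # 4 + 1 + 9 = 14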
Cloud storage is in widespread use nowadays, alleviating users' burden of local data storage. Meanwhile, how to ensure the security and integrity of outsourced data stored in a cloud storage server has also attracted enormous attention from researchers. Proofs of storage (POS) is the main technique introduced to address this problem. Publicly verifiable POS, which allows a third party to verify data integrity on behalf of the data owner, significantly improves the scalability of cloud services. However, most existing publicly verifiable POS schemes are extremely slow to compute authentication tags for all data blocks due to many expensive group exponentiation operations, even much slower than typical network uploading speed, and thus this becomes the bottleneck of the setup phase of the POS scheme. In this article, we propose a new variant formulation called "delegatable proofs of storage" (DPOS). We then construct a lightweight privacy-preserving DPOS scheme, which on one side is as efficient as private POS schemes, and on the other side can support a third-party auditor and can switch auditors at any time, close to the functionality of publicly verifiable POS schemes. Compared to traditional publicly verifiable POS schemes, we speed up the tag generation process by at least several hundred times, without sacrificing efficiency in any other aspect. In addition, we extend our scheme to support fully dynamic operations with high efficiency, reducing the computation of any data update to O(log n) and simultaneously requiring only constant communication costs. We prove that our scheme is sound and privacy preserving against the auditor in the standard model. Experimental results verify the efficient performance of our scheme.
With the popularity of wearable devices, along with the development of cloud and cloudlet technology, there has been increasing need to provide better medical care. The processing chain of medical data mainly includes data collection, data storage, and data sharing, etc.
Cloud computing is a scalable and efficient technology for providing different services. For better reconfigurability and other purposes, users build virtual networks in cloud environments. Since some applications bring heavy pressure to cloud datacenter networks, it is necessary to recognize and optimize virtual networks with different applications. In some cloud environments, cloud providers are not allowed to monitor user private information in cloud instances. Therefore, in this paper, we present a virtual network recognition and optimization method to improve the quality of service (QoS) of cloud services. We first introduce a community detection method to recognize virtual networks from the cloud datacenter network. Then, we design a scheduling strategy that combines SDN-based network management and instance placement to improve service-level agreement (SLA) fulfillment. Our experimental results show that we can achieve a recognition accuracy as high as 80 percent in finding the virtual networks, and that the scheduling strategy increases the number of SLA-fulfilled virtual networks.
Ciphertext-policy attribute-based encryption (CP-ABE) is a very promising encryption technique for secure data sharing in the context of cloud computing. A data owner is allowed to fully control the access policy associated with the data to be shared. However, CP-ABE is limited by a potential security risk known as the key escrow problem, whereby the secret keys of users have to be issued by a trusted key authority. Besides, most of the existing CP-ABE schemes cannot support attributes with arbitrary states. In this paper, we revisit attribute-based data sharing in order to not only solve the key escrow issue but also improve the expressiveness of attributes, so that the resulting scheme is friendlier to cloud computing applications. We propose an improved two-party key issuing protocol that can guarantee that neither the key authority nor the cloud service provider can compromise the whole secret key of a user individually. Moreover, we introduce the concept of attributes with weights, provided to enhance the expression of attributes, which can not only extend the expression from binary to arbitrary states, but also lighten the complexity of the access policy. Therefore, both the storage cost and the encryption complexity for a ciphertext are relieved. The performance analysis and the security proof show that the proposed scheme is able to achieve efficient and secure data sharing in cloud computing.
In order to realize the sharing of data by multiple users on the blockchain, this paper proposes an attribute-based searchable encryption with verifiable ciphertext scheme via blockchain. The scheme uses a public key algorithm to encrypt the keyword, an attribute-based encryption algorithm to encrypt the symmetric key, and the symmetric key to encrypt the file. The keyword index is stored on the blockchain, and the ciphertexts of the symmetric key and the file are stored on the cloud server. The scheme uses searchable encryption technology to achieve secure search on the blockchain, uses the immutability of the blockchain to ensure the security of the keyword ciphertext, and uses a verification algorithm to guarantee the integrity of the data on the cloud. When a user's attributes need to be changed or the ciphertext access structure is changed, the scheme uses proxy re-encryption technology to implement the user's attribute revocation, and the authority center is responsible for the whole attribute revocation process. The security proof shows that the scheme can achieve ciphertext security, keyword security, and collusion resistance. In addition, the numerical results show that the proposed scheme is effective.
Elasticity has now become the elemental feature of cloud computing as it enables the ability to dynamically add or remove virtual machine instances when workload changes. However, effective virtualized resource management is still one of the most challenging tasks. When the workload of a service increases rapidly, existing approaches cannot respond to the growing performance requirement efficiently because of either inaccuracy of adaptation decisions or the slow process of adjustments, both of which may result in insufficient resource provisioning. As a consequence, the quality of service (QoS) of the hosted applications may degrade and the service level objective (SLO) will thus be violated. In this paper, we introduce SPRNT, a novel resource management framework, to ensure high-level QoS in cloud computing systems. SPRNT utilizes an aggressive resource provisioning strategy which encourages SPRNT to substantially increase the resource allocation in each adaptation cycle when workload increases. This strategy first provisions resources which are possibly more than actual demands, and then reduces the over-provisioned resources if needed. By applying the aggressive strategy, SPRNT can satisfy the increasing performance requirement in the first place so that the QoS can be kept at a high level. The experimental results show that SPRNT achieves up to 7.7× speedup in adaptation time, compared with existing efforts. By enabling quick adaptation, SPRNT limits the SLO violation rate to at most 1.3 percent even when dealing with rapidly increasing workload.
The emergence of cloud computing services has led to increased interest in the technology among the general public and the enterprises marketing these services. Although there is a need for studies with managerial relevance for this emerging market, the lack of market analysis hampers such investigations. Therefore, this study focuses on the end-user market for cloud computing in Korea. We conduct a quantitative analysis to show consumer adoption behavior for these services, particularly infrastructure as a service (IaaS). A Bayesian mixed logit model and the multivariate probit model are used to analyze the data collected through a conjoint survey. From this analysis, we find that the service fee and stability are the most critical adoption factors. We also present an analysis of the relationship between terminal devices and IaaS, classified by core attributes such as price, stability, and storage capacity. From these relationships, we find that larger storage capacity is more important for mobile devices such as laptops than for desktops. Based on the results of the analysis, this study also recommends useful strategies to enable enterprise managers to focus on more appropriate service attributes, and to target suitable terminal device markets matching the features of the service.
Cloud computing enables enterprises and individuals to outsource and share their data. This way, cloud computing eliminates the heavy workload of local information infrastructure. Attribute-based encryption has become a promising solution for encrypted data access control in clouds due to its ability to achieve one-to-many encrypted data sharing. Revocation is a critical requirement for encrypted data access control systems. After outsourcing the encrypted attribute-based ciphertext to the cloud, the data owner may want to revoke some recipients that were authorized previously, which means that the outsourced attribute-based ciphertext needs to be updated to a new one that is under the revoked policy. The integrity issue arises when the revocation is executed. When a new ciphertext with the revoked access policy is generated by the cloud server, the data recipient cannot be sure that the newly generated ciphertext guarantees to be decrypted to the same plaintext as the originally encrypted data, since the cloud server is provided by a third party, which is not fully trusted. In this paper, we consider a new security requirement for revocable attribute-based encryption schemes: integrity. We introduce a formal definition and security model for revocable attribute-based encryption with data integrity protection (RABE-DI). Then, we propose a concrete RABE-DI scheme and prove its confidentiality and integrity under the defined security model. Finally, we present an implementation result and provide a performance evaluation which shows that our scheme is efficient and practical.
Nowadays, a large amount of data is stored in the cloud, which needs to be protected from unauthorized users. Various algorithms are used to maintain the privacy and security of this data. The objective of every system is to achieve confidentiality, integrity, and availability (CIA). However, existing centralized cloud storage fails to provide these CIA properties. So, to enhance the security of data and of storage techniques, decentralized cloud storage is used along with blockchain technology. It effectively helps to protect data from tampering or deletion of a part of the data. The data stored in the blockchain are linked to each other by a chain of blocks: each block has its own hash value, which is stored in the next block, and this reduces the chances of data being altered. For this purpose, the SHA-512 hashing algorithm is used. Hashing algorithms are used in many settings where the security of data is required, such as message digests, password verification, digital certificates, and blockchains. By the combination of these methods and algorithms, data becomes more secure and reliable. In addition, the Advanced Encryption Standard (AES) is used to encrypt and decrypt the data due to the significant features of this algorithm.
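A minimal sketch of the hash chaining described above follows: each block stores the SHA-512 digest of the previous block, so altering any stored record changes every later hash and is immediately detectable. AES encryption of the payload and the decentralized storage layer are omitted; names and payloads are illustrative.

import hashlib, json, time

def make_block(data: str, prev_hash: str) -> dict:
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha512(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain) -> bool:
    for prev, curr in zip(chain, chain[1:]):
        body = {k: curr[k] for k in ("timestamp", "data", "prev_hash")}
        recomputed = hashlib.sha512(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if curr["prev_hash"] != prev["hash"] or curr["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", "0" * 128)]
chain.append(make_block("ciphertext-of-record-1", chain[-1]["hash"]))
chain.append(make_block("ciphertext-of-record-2", chain[-1]["hash"]))
print(chain_is_valid(chain))           # True
chain[1]["data"] = "tampered"
print(chain_is_valid(chain))           # False: tampering breaks the chain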
With the ever-increasing amount of data residing in the cloud, how to provide users with secure and practical query services has become key to improving the quality of cloud services. Fuzzy searchable encryption (FSE) is identified as one of the most promising approaches for enabling secure query services, since it allows searching encrypted data by using keywords with spelling errors. However, existing FSE schemes are far from practical use for the following reasons: (1) inflexibility: it is hard for them to simultaneously support AND and OR semantics in a multi-keyword query; (2) inefficiency: they require sequentially scanning a whole dataset to find matched files, and thus are difficult to apply to a large-scale dataset; (3) limited robustness: it is difficult for them to resist the linear analysis attack in the known-background model. To fix the above problems, this article proposes matrix-based multi-keyword fuzzy search (M2FS) schemes, which support approximate keyword matching by exploiting the indecomposable property of primes. Specifically, we first present a basic scheme, called M2FS-B, where multiple keywords in a query or a file are constructed as prime-related matrices such that the result of matrix multiplication can be employed to determine the level of matching for different query semantics. Then, we construct an advanced scheme, named M2FS-E, which builds a searchable index as a keyword balanced binary (KBB) tree for dynamic and parallel searches, while adding random noise into a query matrix for enhanced robustness. Extensive analyses and experiments demonstrate the validity of our M2FS schemes.
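As a hedged intuition aid for how the "indecomposable property of primes" can encode multi-keyword semantics, consider mapping each keyword to a prime and a keyword set to the product of its primes: an AND query matches when every query prime divides the file's product, and an OR query matches when they share at least one prime. This is only a plaintext toy; M2FS embeds such values inside matrices with random noise, which is not reproduced here.

import math

vocab  = ["cloud", "privacy", "audit", "encryption", "index"]
primes = [2, 3, 5, 7, 11]                 # any injective keyword -> prime mapping works
P = dict(zip(vocab, primes))

def encode(keywords):                     # file or query -> product of its keyword primes
    v = 1
    for w in keywords:
        v *= P[w]
    return v

file_code = encode(["cloud", "privacy", "encryption"])
and_query = encode(["cloud", "encryption"])
or_query  = encode(["audit", "privacy"])

matches_all = file_code % and_query == 0             # AND: every query prime divides the file code
matches_any = math.gcd(file_code, or_query) > 1      # OR: at least one shared prime
print(matches_all, matches_any)                      # True True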
The contemporary literature on cloud resource allocation is mostly focused on studying the interactions between customers and cloud managers. Nevertheless, the recent growth in customers' demands and the emergence of private cloud providers (CPs) entice cloud managers to rent extra resources from the CPs so as to handle their backlogged tasks and attract more customers. This also renders the interactions between the cloud managers and the CPs an important problem to study. In this paper, we investigate both interactions through a two-stage auction mechanism. For the interactions between customers and cloud managers, we adopt options-based sequential auctions (OBSAs) to design the cloud resource allocation paradigm. Compared to existing works, our framework can handle customers with heterogeneous demands, provides truthfulness as the dominant strategy, enjoys a simple winner determination procedure, and precludes the delayed entrance issue. We also provide a performance analysis of the OBSAs, which is among the first in the literature. Regarding the interactions between cloud managers and CPs, we propose two parallel markets for resource gathering, and capture the selfishness of the CPs through their offered prices. We conduct a comprehensive analysis of the two markets and identify the bidding strategies of the cloud managers.
People endorse the great power of cloud computing, but cannot fully trust cloud providers to host privacy-sensitive data, due to the absence of user-to-cloud controllability. To ensure confidentiality, data owners outsource encrypted data instead of plaintexts. To share the encrypted files with other users, ciphertext-policy attribute-based encryption (CP-ABE) can be utilized to conduct fine-grained and owner-centric access control. However, this alone does not secure the system against other attacks. Many previous schemes did not grant the cloud provider the capability to verify whether a downloader can decrypt. As a result, these files are accessible to anyone who can access the cloud storage. A malicious attacker can download thousands of files to launch economic denial of sustainability (EDoS) attacks, which will largely consume the cloud resources, while the payer of the cloud service bears the expense. Besides, the cloud provider serves as both the accountant and the payee of the resource consumption fee, lacking transparency to data owners. These concerns should be resolved in real-world public cloud storage. In this paper, we propose a solution to secure encrypted cloud storage against EDoS attacks and provide resource consumption accountability. It uses CP-ABE schemes in a black-box manner and complies with arbitrary access policies of the CP-ABE. We present two protocols for different settings, followed by performance and security analysis.
In current healthcare systems, electronic medical records (EMRs) are always located in different hospitals and controlled by a centralized cloud provider. However, this leads to a single point of failure, as patients, being the real owners, lose track of their private and sensitive EMRs. Hence, this article aims to build an access control framework based on smart contracts, built on top of a distributed ledger (blockchain), to secure the sharing of EMRs among the different entities involved in a smart healthcare system. For this, we propose four forms of smart contracts for user verification, access authorization, misbehavior detection, and access revocation, respectively. In this framework, considering the block size of the ledger and the huge amount of patient data, the EMRs are stored in the cloud after being encrypted through the cryptographic functions of elliptic curve cryptography (ECC) and the Edwards-curve digital signature algorithm (EdDSA), while their corresponding hashes are packed into the blockchain. A performance evaluation based on a private Ethereum system is used to verify the efficiency of the proposed access control framework in a real-time smart healthcare system.
Cloud computing provisions scalable resources for high performance industrial applications. Cloud providers usually offer two types of usage plans: reserved and on-demand. Reserved plans offer cheaper resources for long-term contracts, while on-demand plans are available for short or long periods but are more expensive. To satisfy incoming user demands with reasonable costs, cloud resources should be allocated efficiently. Most existing works focus on either cheaper solutions with reserved resources, which may lead to under-provisioning or over-provisioning, or costly solutions with on-demand resources. Since inefficient allocation of cloud resources can cause huge provisioning costs and fluctuation in cloud demand, resource allocation becomes a highly challenging problem. In this paper, we propose a hybrid method to allocate cloud resources according to dynamic user demands. This method is developed as a two-phase algorithm that consists of reservation and dynamic provisioning phases. In this way, we minimize the total deployment cost by formulating each phase as an optimization problem while satisfying quality of service. Due to the uncertain nature of cloud demands, we develop a stochastic optimization approach by modeling user demands as random variables. Our algorithm is evaluated using different experiments and the results show its efficiency in dynamically allocating cloud resources.
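To illustrate the reservation-phase trade-off in a generic way (this is a textbook newsvendor-style sizing rule, not the optimization model formulated in the paper): with reserved price r per instance-hour, on-demand price o > r, and random demand D, the expected cost r*R + o*E[(D - R)+] is minimized at the critical fractile P(D <= R) = 1 - r/o. The demand distribution and prices below are made up.

import numpy as np

rng = np.random.default_rng(0)
demand = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)   # simulated hourly instance demand
r, o = 0.06, 0.10                                           # reserved / on-demand price per hour

def expected_cost(R):
    """Expected hourly cost when R instances are reserved and excess demand goes on-demand."""
    return r * R + o * np.mean(np.maximum(demand - R, 0.0))

R = np.quantile(demand, 1 - r / o)                          # critical-fractile reservation level
print(f"reserve {R:.1f} instances: {expected_cost(R):.3f} $/h "
      f"vs all on-demand: {expected_cost(0):.3f} $/h")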
The key is distributed to both sender and receiver to avoid key hacking, so this architecture provides a high level of security. This work presents key distribution to safeguard high-level security in large networks, combining new directions in classical cryptography with symmetric cryptography. Two three-party key distribution protocols, one with implicit user authentication and the other with explicit trusted-center authentication, are proposed to demonstrate the merits of the new combination. The project, titled "Efficient Provable Secure Key Distribution Management", is built using Microsoft Visual Studio .NET 2005 as the front end and Microsoft SQL Server 2000 as the back end, running on .NET Framework version 2.0; the coding language used is C#. Three parties are authenticated in this project.
A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party's cloud system causes serious concern over data confidentiality. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system because only a few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formulated. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow more flexible adjustment between the number of storage servers and robustness.
With its low-maintenance character, cloud computing provides an economical and efficient solution for sharing group resources among cloud users. Unfortunately, sharing data in a multi-owner manner while preserving data and identity privacy from an untrusted cloud is still a challenging issue, due to the frequent change of membership. In this paper, we propose a secure multi-owner data sharing scheme, named Mona, for dynamic groups in the cloud. By leveraging group signature and dynamic broadcast encryption techniques, any cloud user can anonymously share data with others. Meanwhile, the storage overhead and encryption computation cost of our scheme are independent of the number of revoked users. In addition, we analyze the security of our scheme with rigorous proofs, and demonstrate the efficiency of our scheme in experiments.
A personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been wide privacy concerns as personal health information could be exposed to those third-party servers and to unauthorized parties. To assure the patients' control over access to their own PHRs, it is a promising method to encrypt the PHRs before outsourcing. Yet, issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semitrusted servers. To achieve fine-grained and scalable data access control for PHRs, we leverage attribute-based encryption (ABE) techniques to encrypt each patient's PHR file. Different from previous works in secure data outsourcing, we focus on the multiple data owner scenario, and divide the users in the PHR system into multiple security domains, which greatly reduces the key management complexity for owners and users. A high degree of patient privacy is guaranteed simultaneously by exploiting multiauthority ABE. Our scheme also enables dynamic modification of access policies or file attributes, and supports efficient on-demand user/attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability, and efficiency of our proposed scheme.
The design of secure authentication protocols is quite challenging, considering that various kinds of rootkits reside in personal computers (PCs) to observe users' behavior and to make PCs untrusted devices. Involving humans in authentication protocols, while promising, is not easy because of their limited capability of computation and memorization. Therefore, relying on users to enhance security necessarily degrades usability. On the other hand, relaxing assumptions and rigorous security design to improve the user experience can lead to security breaches that can harm the users' trust. In this paper, we demonstrate how careful visualization design can enhance not only the security but also the usability of authentication. To that end, we propose two visual authentication protocols: one is a one-time-password protocol, and the other is a password-based authentication protocol. Through rigorous analysis, we verify that our protocols are immune to many of the challenging authentication attacks considered in the literature. Furthermore, using an extensive case study on a prototype of our protocols, we highlight the potential of our approach for real-world deployment: we were able to achieve a high level of usability while satisfying stringent security requirements.
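The paper's protocols rely on a human-verifiable visual channel and are not reproduced here; the sketch below only shows the standard HOTP one-time-password computation (in the style of RFC 4226) as the kind of OTP building block such a protocol can be layered on. The secret and counters are the well-known RFC test values.

import hashlib, hmac, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"                              # RFC 4226 test secret
print(hotp(secret, 0), hotp(secret, 1))                       # 755224 287082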
Recently, many enterprises have moved their data into the cloud by using file syncing and sharing (FSS) services, which have been deployed for mobile users. However, bring-your-own-device (BYOD) solutions for increasingly deployed mobile devices have in fact raised a new challenge: how to prevent users from abusing the FSS service. In this paper, we address this issue with a new system model involving anomaly detection, tracing, and revocation approaches. The presented solution applies a new threshold public-key-based cryptosystem, called partially-ordered hierarchical encryption (PHE), which implements a partial-order key hierarchy similar to the role hierarchy widely used in RBAC. PHE provides two main security mechanisms, i.e., traitor tracing and key revocation, which can greatly improve efficiency compared to previous approaches. The security and performance analysis shows that PHE is a provably secure threshold encryption and provides the following salient management and performance benefits: it can efficiently trace all possible traitor coalitions and supports public revocation not only for individual users but also for specified groups.
With the fast development of cloud computing and its wide application, data security plays an important role in cloud computing. This paper presents a novel data security strategy based on an artificial immune algorithm on the HDFS architecture for cloud computing. Firstly, we explain the main factors that influence data security in the cloud environment. Then we introduce the HDFS architecture and its data security model, and put forward an improved security model for cloud computing. In the third section, the artificial immune algorithm adopted in our system, based on negative selection and a dynamic selection algorithm, and how it is applied to cloud computing are described in detail. Finally, simulations are carried out in two steps: the former simulations demonstrate the performance of the artificial immune algorithm proposed in this paper, while the latter simulations run on the CloudSim platform to verify that the data security strategy based on the artificial immune algorithm for cloud computing is efficient.
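A minimal negative-selection sketch in the spirit of the artificial immune approach described above follows: random detectors are generated, any detector that matches "self" (normal) patterns is discarded, and the survivors flag non-self (potentially malicious) patterns. Matching here is a simple Hamming-distance rule over fixed-length bit strings; the thresholds and data are illustrative, not the paper's configuration.

import random

random.seed(1)
L, THRESHOLD, N_DETECTORS = 16, 3, 200           # pattern length, match radius, detector pool size

def rand_bits():
    return tuple(random.randint(0, 1) for _ in range(L))

def matches(a, b):
    return sum(x != y for x, y in zip(a, b)) <= THRESHOLD    # close in Hamming distance

self_set = {rand_bits() for _ in range(50)}      # profiles of normal behaviour

detectors = []
while len(detectors) < N_DETECTORS:
    d = rand_bits()
    if not any(matches(d, s) for s in self_set): # negative selection: keep only non-self detectors
        detectors.append(d)

def is_anomalous(sample):
    return any(matches(d, sample) for d in detectors)

print(sum(is_anomalous(s) for s in self_set), "alarms on self patterns")   # 0 by construction
print(is_anomalous(rand_bits()), "for a random sample")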
Cloud computing may be defined as the delivery of computing as a service rather than a product; it is Internet-based computing which enables the sharing of services. Many users place their data in the cloud. However, the fact that users no longer have physical possession of the possibly large volume of outsourced data makes data integrity protection in cloud computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. So the correctness and security of data is a prime concern. This article studies the problem of ensuring the integrity and security of data storage in cloud computing. Security in the cloud is achieved by signing the data blocks before sending them to the cloud. Signing is performed using the Boneh-Lynn-Shacham (BLS) algorithm, which is more secure compared to other algorithms. To ensure the correctness of data, we consider an external auditor, called a third-party auditor (TPA), to verify the integrity of the data stored in the cloud on behalf of the cloud user. By utilizing a public-key-based homomorphic authenticator with random masking, privacy-preserving public auditing can be achieved. The technique of bilinear aggregate signatures is used to achieve batch auditing, which reduces the computation overhead. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.
Remote data integrity checking is a fundamental technique in distributed and cloud computing. In recent times, numerous works have focused on providing data dynamics as well as public verifiability for this kind of protocol. Existing protocols can support both features with the help of a third-party auditor. An earlier work proposed a remote data integrity checking protocol that supports data dynamics; in this work, we adapt it to support public verifiability. The proposed protocol supports public verifiability without the help of a third-party auditor, and it does not reveal any private data to third-party verifiers. Through a formal analysis, we show the correctness and security of the protocol. Thereafter, through theoretical analysis and experimental results, we show that the proposed protocol has good performance.
In cloud storage services, deduplication technology is commonly used to reduce the space and bandwidth requirements of services by eliminating redundant data and storing only a single copy of it. Deduplication is most effective when multiple users outsource the same data to the cloud storage, but it raises issues relating to security and ownership. Proof-of-ownership schemes allow any owner of the same data to prove to the cloud storage server that he owns the data in a robust way. However, many users are likely to encrypt their data before outsourcing it to the cloud storage to preserve privacy, but this hampers deduplication because of the randomization property of encryption. Recently, several deduplication schemes have been proposed to solve this problem by allowing each owner to share the same encryption key for the same data. However, most of the schemes suffer from security flaws, since they do not consider the dynamic changes in the ownership of outsourced data that occur frequently in a practical cloud storage service. In this paper, we propose a novel server-side deduplication scheme for encrypted data. It allows the cloud server to control access to outsourced data even when the ownership changes dynamically by exploiting randomized convergent encryption and secure ownership group key distribution. This prevents data leakage not only to revoked users, even though they previously owned that data, but also to an honest-but-curious cloud storage server. In addition, the proposed scheme guarantees data integrity against any tag inconsistency attack. Thus, security is enhanced in the proposed scheme. The efficiency analysis results demonstrate that the proposed scheme is almost as efficient as the previous schemes, while the additional computational overhead is negligible.
The infrastructure cloud (IaaS) service model offers improved resource flexibility and availability, where tenants - insulated from the minutiae of hardware maintenance - rent computing resources to deploy and operate complex systems. Large-scale services running on IaaS platforms demonstrate the viability of this model; nevertheless, many organizations operating on sensitive data avoid migrating operations to IaaS platforms due to security concerns. In this paper, we describe a framework for data and operation security in IaaS, consisting of protocols for a trusted launch of virtual machines and domain-based storage protection. We continue with an extensive theoretical analysis with proofs about protocol resistance against attacks in the defined threat model. The protocols allow trust to be established by remotely attesting host platform configuration prior to launching guest virtual machines and ensure confidentiality of data in remote storage, with encryption keys maintained outside of the IaaS domain. Presented experimental results demonstrate the validity and efficiency of the proposed protocols. The framework prototype was implemented on a test bed operating a public electronic health record system, showing that the proposed protocols can be integrated into existing cloud environments.
We propose a new design for large-scale multimedia content protection systems. Our design leverages cloud infrastructures to provide cost efficiency, rapid deployment, scalability, and elasticity to accommodate varying workloads. The proposed system can be used to protect different multimedia content types, including videos, images, audio clips, songs, and music clips. The system can be deployed on private and/or public clouds. Our system has two novel components: (i) a method to create signatures of videos, and (ii) a distributed matching engine for multimedia objects. The signature method creates robust and representative signatures of videos that capture the depth signals in these videos; it is computationally efficient to compute and compare, and requires small storage. The distributed matching engine achieves high scalability and is designed to support different multimedia objects. We implemented the proposed system and deployed it on two clouds: Amazon cloud and our private cloud. Our experiments with more than 11,000 videos and 1 million images show the high accuracy and scalability of the proposed system. In addition, we compared our system to the protection system used by YouTube, and our results show that the YouTube protection system fails to detect most copies of videos, while our system detects more than 98% of them.
This paper proposes a service operator-aware trust scheme (SOTS) for resource matchmaking across multiple clouds. Through analyzing the built-in relationships between the users, the broker, and the service resources, this paper proposes a middleware framework for trust management that can effectively reduce user burden and improve system dependability. Based on multidimensional resource service operators, we model the problem of trust evaluation as a process of multi-attribute decision-making, and develop an adaptive trust evaluation approach based on information entropy theory. This adaptive approach can overcome the limitations of traditional trust schemes, whereby the trusted operators are weighted manually or subjectively. As a result, using SOTS, the broker can efficiently and accurately prepare the most trusted resources in advance, and thus provide more dependable resources to users. Our experiments yield interesting and meaningful observations that can facilitate the effective utilization of SOTS in a large-scale multi-cloud environment.
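The sketch below shows the classical entropy-weight method for multi-attribute decision making, the standard technique such an adaptive, entropy-based weighting builds on: attributes whose observed values vary more across candidate resources carry more information and therefore receive larger weights. The operator matrix is an invented illustration, not data from the paper.

import numpy as np

# Rows: candidate service resources; columns: trust operators
# (e.g. availability, response-time score, past success rate).
X = np.array([[0.90, 0.70, 0.95],
              [0.85, 0.90, 0.60],
              [0.95, 0.40, 0.95],
              [0.60, 0.80, 0.95]])

P = X / X.sum(axis=0)                            # column-wise normalization
m = X.shape[0]
E = -(P * np.log(P)).sum(axis=0) / np.log(m)     # entropy of each attribute, in [0, 1]
w = (1 - E) / (1 - E).sum()                      # higher dispersion -> larger weight

scores = X @ w                                   # aggregated trust score per resource
print(np.round(w, 3), np.round(scores, 3))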
Data Deduplication Is One Of The Important Data Compression Techniques For Eliminating Duplicate Copies Of Repeating Data, And Has Been Widely Used In Cloud Storage To Reduce The Amount Of Storage Space And Save Bandwidth. To Protect The Confidentiality Of Sensitive Data While Supporting Deduplication, The Convergent Encryption Technique Has Been Proposed To Encrypt The Data Before Outsourcing. To Better Protect Data Security, This Paper Makes The First Attempt To Formally Address The Problem Of Authorized Data Deduplication. Different From Traditional Deduplication Systems, The Differential Privileges Of Users Are Further Considered In Duplicate Check Besides The Data Itself. We Also Present Several New Deduplication Constructions Supporting Authorized Duplicate Check In A Hybrid Cloud Architecture. Security Analysis Demonstrates That Our Scheme Is Secure In Terms Of The Definitions Specified In The Proposed Security Model. As A Proof Of Concept, We Implement A Prototype Of Our Proposed Authorized Duplicate Check Scheme And Conduct Testbed Experiments Using Our Prototype. We Show That Our Proposed Authorized Duplicate Check Scheme Incurs Minimal Overhead Compared To Normal Operations.
Cloud Computing Is Becoming Popular As The Next Infrastructure Of The Computing Platform. Despite The Promising Model And The Surrounding Hype, Security Has Become The Major Concern That Makes People Hesitate To Transfer Their Applications To Clouds. Concretely, Cloud Platforms Are Under Numerous Attacks. As A Result, It Is Necessary To Establish A Firewall To Protect The Cloud From These Attacks. However, Setting Up A Centralized Firewall For A Whole Cloud Data Center Is Infeasible From Both Performance And Financial Aspects. In This Paper, We Propose A Decentralized Cloud Firewall Framework For Individual Cloud Customers. We Investigate How To Dynamically Allocate Resources To Optimize The Resource Provisioning Cost, While Simultaneously Satisfying The QoS Requirements Specified By Individual Customers. Moreover, We Establish Novel Queuing-theory-based Models, M/Geo/1 And M/Geo/m, For Quantitative System Analysis, Where The Service Times Follow A Geometric Distribution. By Employing Z-transform And Embedded Markov Chain Techniques, We Obtain A Closed-form Expression Of The Mean Packet Response Time. Through Extensive Simulations And Experiments, We Conclude That An M/Geo/1 Model Reflects The Real Cloud Firewall System Much Better Than A Traditional M/M/1 Model. Our Numerical Results Also Indicate That We Are Able To Set Up A Cloud Firewall With Affordable Cost To Cloud Customers.
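A Minimal Sketch, Assuming Poisson Arrivals And Geometrically Distributed Service Times, Of How The Mean Packet Response Time Of Such A Queue Could Be Estimated By Simulation (A Toy Discrete-event Simulation, Not The Paper's Closed-form Z-transform Derivation; The Load Parameters Below Are Hypothetical):

```python
import numpy as np

def simulate_m_geo_1(arrival_rate, service_success_prob, num_packets=200_000, seed=1):
    """Toy FIFO single-server queue with Poisson arrivals (rate lambda) and
    geometric service times (mean 1/p time slots); returns the empirical
    mean packet response time (waiting time + service time)."""
    rng = np.random.default_rng(seed)
    interarrivals = rng.exponential(1.0 / arrival_rate, num_packets)
    arrivals = np.cumsum(interarrivals)
    services = rng.geometric(service_success_prob, num_packets).astype(float)
    finish = np.empty(num_packets)
    server_free = 0.0
    for i in range(num_packets):
        start = max(arrivals[i], server_free)   # wait until the server is idle
        finish[i] = start + services[i]
        server_free = finish[i]
    return float(np.mean(finish - arrivals))

# Hypothetical load: 0.5 packets per slot, mean service time 1/0.8 = 1.25 slots.
print("mean response time ~", simulate_m_geo_1(arrival_rate=0.5, service_success_prob=0.8))
```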
Interconnected Systems, Such As Web Servers, Database Servers, Cloud Computing Servers, And So On, Are Now Under Threats From Network Attackers. As One Of The Most Common And Aggressive Means, Denial-of-service (DoS) Attacks Cause A Serious Impact On These Computing Systems. In This Paper, We Present A DoS Attack Detection System That Uses Multivariate Correlation Analysis (MCA) For Accurate Network Traffic Characterization By Extracting The Geometrical Correlations Between Network Traffic Features. Our MCA-based DoS Attack Detection System Employs The Principle Of Anomaly-based Detection In Attack Recognition. This Makes Our Solution Capable Of Detecting Known And Unknown DoS Attacks Effectively By Learning The Patterns Of Legitimate Network Traffic Only. Furthermore, A Triangle-area-based Technique Is Proposed To Enhance And Speed Up The Process Of MCA. The Effectiveness Of Our Proposed Detection System Is Evaluated Using The KDD Cup 99 Data Set, And The Influences Of Both Non-normalized Data And Normalized Data On The Performance Of The Proposed Detection System Are Examined. The Results Show That Our System Outperforms Two Other Previously Developed State-of-the-art Approaches In Terms Of Detection Accuracy.
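A Minimal Sketch Of The General Idea Of Triangle-area-based Traffic Characterization Combined With Anomaly-based Detection (The Feature Layout, Thresholding Rule, And Synthetic Traffic Below Are Assumptions, Not The Paper's Exact MCA Formulation):

```python
import numpy as np

def triangle_area_map(x):
    """For each pair of features (i, j), the area of the triangle formed by the
    origin, (x_i, 0) and (0, x_j): a simple geometric correlation representation."""
    x = np.asarray(x, dtype=float)
    areas = 0.5 * np.abs(np.outer(x, x))
    iu = np.triu_indices(len(x), k=1)
    return areas[iu]                      # flattened upper triangle

def build_normal_profile(normal_records):
    tams = np.array([triangle_area_map(r) for r in normal_records])
    mean = tams.mean(axis=0)
    cov = np.cov(tams, rowvar=False) + 1e-6 * np.eye(tams.shape[1])
    return mean, np.linalg.inv(cov)

def anomaly_score(record, mean, inv_cov):
    d = triangle_area_map(record) - mean
    return float(np.sqrt(d @ inv_cov @ d))   # Mahalanobis distance to the normal profile

rng = np.random.default_rng(0)
normal = rng.normal(loc=[5, 2, 1, 0.5], scale=0.3, size=(500, 4))   # hypothetical legitimate traffic features
mean, inv_cov = build_normal_profile(normal)
threshold = np.percentile([anomaly_score(r, mean, inv_cov) for r in normal], 99)
attack = np.array([5, 2, 9, 6.0])                                   # hypothetical DoS-like record
print("attack flagged:", anomaly_score(attack, mean, inv_cov) > threshold)
```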
Cloud Computing Is An Emerging Data-interactive Paradigm That Realizes Users' Data Remotely Stored In An Online Cloud Server. Cloud Services Provide Great Conveniences For The Users To Enjoy On-demand Cloud Applications Without Considering The Local Infrastructure Limitations. During Data Accessing, Different Users May Be In A Collaborative Relationship, And Thus Data Sharing Becomes Significant To Achieve Productive Benefits. The Existing Security Solutions Mainly Focus On Authentication To Realize That A User's Private Data Cannot Be Illegally Accessed, But Neglect A Subtle Privacy Issue During A User Challenging The Cloud Server To Request Other Users For Data Sharing. The Challenged Access Request Itself May Reveal The User's Privacy, No Matter Whether Or Not It Can Obtain The Data Access Permissions. In This Paper, We Propose A Shared Authority Based Privacy-preserving Authentication Protocol (SAPA) To Address The Above Privacy Issue For Cloud Storage. In The SAPA, 1) Shared Access Authority Is Achieved By An Anonymous Access Request Matching Mechanism With Security And Privacy Considerations (e.g., Authentication, Data Anonymity, User Privacy, And Forward Security); 2) Attribute-based Access Control Is Adopted To Realize That The User Can Only Access Its Own Data Fields; 3) Proxy Re-encryption Is Applied To Provide Data Sharing Among The Multiple Users. Meanwhile, The Universal Composability (UC) Model Is Established To Prove That The SAPA Theoretically Has Design Correctness. It Indicates That The Proposed Protocol Is Attractive For Multi-user Collaborative Cloud Applications.
Cloud Storage Services Have Become Commercially Popular Due To Their Overwhelming Advantages. To Provide Ubiquitous Always-on Access, A Cloud Service Provider (CSP) Maintains Multiple Replicas For Each Piece Of Data On Geographically Distributed Servers. A Key Problem Of Using The Replication Technique In Clouds Is That It Is Very Expensive To Achieve Strong Consistency On A Worldwide Scale. In This Paper, We First Present A Novel Consistency As A Service (CaaS) Model, Which Consists Of A Large Data Cloud And Multiple Small Audit Clouds. In The CaaS Model, A Data Cloud Is Maintained By A CSP, And A Group Of Users That Constitute An Audit Cloud Can Verify Whether The Data Cloud Provides The Promised Level Of Consistency Or Not. We Propose A Two-level Auditing Architecture, Which Only Requires A Loosely Synchronized Clock In The Audit Cloud. Then, We Design Algorithms To Quantify The Severity Of Violations With Two Metrics: The Commonality Of Violations, And The Staleness Of The Value Of A Read. Finally, We Devise A Heuristic Auditing Strategy (HAS) To Reveal As Many Violations As Possible. Extensive Experiments Were Performed Using A Combination Of Simulations And Real Cloud Deployments To Validate HAS.
With The Increasing Popularity Of Cloud Computing As A Solution For Building High-quality Applications On Distributed Components, Efficiently Evaluating User-side Quality Of Cloud Components Becomes An Urgent And Crucial Research Problem. However, Invoking All The Available Cloud Components From User-side For Evaluation Purpose Is Expensive And Impractical. To Address This Critical Challenge, We Propose A Neighborhood-based Approach, Called CloudPred, For Collaborative And Personalized Quality Prediction Of Cloud Components. CloudPred Is Enhanced By Feature Modeling On Both Users And Components. Our Approach CloudPred Requires No Additional Invocation Of Cloud Components On Behalf Of The Cloud Application Designers. The Extensive Experimental Results Show That CloudPred Achieves Higher QoS Prediction Accuracy Than Other Competing Methods. We Also Publicly Release Our Large-scale QoS Dataset For Future Related Research In Cloud Computing.
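A Minimal Neighborhood-based Sketch Of Personalized QoS Prediction (Plain User-based Collaborative Filtering With Pearson Similarity; It Omits CloudPred's Feature Modeling, And The QoS Matrix Below Is Hypothetical):

```python
import numpy as np

def predict_qos(R, user, item, k=2):
    """User-based collaborative filtering on a QoS matrix R (0 = not invoked).
    Predicts R[user, item] from the k most similar users who invoked the item."""
    mask = R > 0
    sims = []
    for u in range(R.shape[0]):
        if u == user or not mask[u, item]:
            continue
        common = mask[user] & mask[u]
        if common.sum() < 2:
            continue
        a, b = R[user, common], R[u, common]
        denom = np.linalg.norm(a - a.mean()) * np.linalg.norm(b - b.mean())
        if denom > 0:
            sims.append((np.dot(a - a.mean(), b - b.mean()) / denom, u))
    top = sorted(sims, reverse=True)[:k]
    if not top:
        return R[user, mask[user]].mean()
    num = sum(s * (R[u, item] - R[u, mask[u]].mean()) for s, u in top)
    den = sum(abs(s) for s, _ in top)
    return R[user, mask[user]].mean() + num / den

# Hypothetical response-time matrix (rows = users, columns = cloud components).
R = np.array([[0.30, 1.2, 0.0, 2.1],
              [0.40, 1.0, 0.9, 2.0],
              [2.00, 0.2, 1.5, 0.0],
              [0.35, 1.1, 0.8, 0.0]])
print("predicted QoS for user 0, component 2:", round(predict_qos(R, 0, 2), 3))
```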
Data Sharing Is An Important Functionality In Cloud Storage. In This Paper, We Show How To Securely, Efficiently, And Flexibly Share Data With Others In Cloud Storage. We Describe New Public-key Cryptosystems That Produce Constant-size Ciphertexts Such That Efficient Delegation Of Decryption Rights For Any Set Of Ciphertexts Is Possible. The Novelty Is That One Can Aggregate Any Set Of Secret Keys And Make Them As Compact As A Single Key, But Encompassing The Power Of All The Keys Being Aggregated. In Other Words, The Secret Key Holder Can Release A Constant-size Aggregate Key For Flexible Choices Of Ciphertext Set In Cloud Storage, But The Other Encrypted Files Outside The Set Remain Confidential. This Compact Aggregate Key Can Be Conveniently Sent To Others Or Be Stored In A Smart Card With Very Limited Secure Storage. We Provide Formal Security Analysis Of Our Schemes In The Standard Model. We Also Describe Other Applications Of Our Schemes. In Particular, Our Schemes Give The First Public-key Patient-controlled Encryption For Flexible Hierarchy, Which Was Yet To Be Known.
We Present Anchor, A General Resource Management Architecture That Uses The Stable Matching Framework To Decouple Policies From Mechanisms When Mapping Virtual Machines To Physical Servers. In Anchor, Clients And Operators Are Able To Express A Variety Of Distinct Resource Management Policies As They Deem Fit, And These Policies Are Captured As Preferences In The Stable Matching Framework. The Highlight Of Anchor Is A New Many-to-one Stable Matching Theory That Efficiently Matches VMs With Heterogeneous Resource Needs To Servers, Using Both Offline And Online Algorithms. Our Theoretical Analyses Show The Convergence And Optimality Of The Algorithm. Our Experiments With A Prototype Implementation On A 20-node Server Cluster, As Well As Large-scale Simulations Based On Real-world Workload Traces, Demonstrate That The Architecture Is Able To Realize A Diverse Set Of Policy Objectives With Good Performance And Practicality.
Cloud Computing Promises To Increase The Velocity With Which Applications Are Deployed, Increase Innovation, And Lower Costs, All While Increasing Business Agility, And Is Hence Envisioned As The Next-generation Architecture Of The IT Enterprise. The Nature Of Cloud Computing Builds On An Established Trend Of Driving Cost Out Of The Delivery Of Services While Increasing The Speed And Agility With Which Services Are Deployed. Cloud Computing Incorporates Virtualization, On-demand Deployment, Internet Delivery Of Services, And Open Source Software. From Another Perspective, Everything Is New Because Cloud Computing Changes How We Invent, Develop, Deploy, Scale, Update, Maintain, And Pay For Applications And The Infrastructure On Which They Run. Because Of These Benefits Of Cloud Computing, An Effective And Flexible Dynamic Security Scheme Is Required To Ensure The Correctness Of Users' Data In The Cloud. Quality Of Service Is An Important Aspect, And Hence Extensive Cloud Data Security And Performance Are Required.
Cloud Computing Has Emerged As A Promising Paradigm For Data Outsourcing And High-quality Data Services. However, Concerns Over Sensitive Information On The Cloud Potentially Cause Privacy Problems. Data Encryption Protects Data Security To Some Extent, But At The Cost Of Compromised Efficiency. Searchable Symmetric Encryption (SSE) Allows Retrieval Of Encrypted Data Over The Cloud. In This Paper, We Focus On Addressing Data Privacy Issues Using SSE. For The First Time, We Formulate The Privacy Issue From The Aspect Of Similarity Relevance And Scheme Robustness. We Observe That Server-side Ranking Based On Order-preserving Encryption (OPE) Inevitably Leaks Data Privacy. To Eliminate The Leakage, We Propose A Two-round Searchable Encryption (TRSE) Scheme That Supports Top-k Multikeyword Retrieval. In TRSE, We Employ A Vector Space Model And Homomorphic Encryption. The Vector Space Model Helps To Provide Sufficient Search Accuracy, And The Homomorphic Encryption Enables Users To Participate In The Ranking While The Majority Of The Computing Work Is Done On The Server Side By Operations Only On Ciphertext. As A Result, Information Leakage Can Be Eliminated And Data Security Is Ensured. Thorough Security And Performance Analysis Shows That The Proposed Scheme Guarantees High Security And Practical Efficiency.
Cloud Computing Has Emerged As One Of The Most Influential Paradigms In The IT Industry In Recent Years. Since This New Computing Technology Requires Users To Entrust Their Valuable Data To Cloud Providers, There Have Been Increasing Security And Privacy Concerns On Outsourced Data. Several Schemes Employing Attribute-based Encryption (ABE) Have Been Proposed For Access Control Of Outsourced Data In Cloud Computing; However, Most Of Them Suffer From Inflexibility In Implementing Complex Access Control Policies. In Order To Realize Scalable, Flexible, And Fine-grained Access Control Of Outsourced Data In Cloud Computing, In This Paper, We Propose Hierarchical Attribute-set-based Encryption (HASBE) By Extending Ciphertext-policy Attribute-set-based Encryption (ASBE) With A Hierarchical Structure Of Users. The Proposed Scheme Not Only Achieves Scalability Due To Its Hierarchical Structure, But Also Inherits Flexibility And Fine-grained Access Control In Supporting Compound Attributes Of ASBE. In Addition, HASBE Employs Multiple Value Assignments For Access Expiration Time To Deal With User Revocation More Efficiently Than Existing Schemes. We Formally Prove The Security Of HASBE Based On Security Of The Ciphertext-policy Attribute-based Encryption (CP-ABE) Scheme By Bethencourt And Analyze Its Performance And Computational Complexity. We Implement Our Scheme And Show That It Is Both Efficient And Flexible In Dealing With Access Control For Outsourced Data In Cloud Computing With Comprehensive Experiments.
Encryption Is The Technique Of Hiding Private Or Sensitive Information Within Something That Appears To Be Nothing Out Of The Usual. If A Person Views The Cipher Text, He Or She Will Have No Idea That There Is Any Secret Information. What Encryption Essentially Does Is Exploit Human Perception; Human Senses Are Not Trained To Look For Files That Have Information Hidden Inside Them. This System Lets The User Send Text As A Secret Message And Provides A Key Or A Password To Lock The Text; The Key Encrypts The Text, So That Even If It Is Intercepted By A Hacker, The Hacker Will Not Be Able To Read It. The Receiver Needs The Key To Decrypt The Hidden Text. The User Sends The Key To The Receiver, Who Then Enters The Key Or Password And Presses The Decrypt Button To Recover The Secret Text From The Sender. Diffie-Hellman Key Exchange Offers The Best Of Both Worlds, As It Uses Public-key Techniques To Allow The Exchange Of A Private Encryption Key. By Using This Method, You Can Doubly Ensure That Your Secret Message Is Sent Secretly Without Outside Interference From Hackers Or Crackers. If The Sender Sends This Cipher Text In Public, Others Will Not Know What It Is, And It Will Be Received By The Receiver. The System Uses An Online Database To Store All Related Information. As The Project Files And A Database File Are Stored In The Azure Cloud, The Project Is Accessed In The Web Browser Through The Azure Link.
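A Minimal Sketch Of The Diffie-Hellman Exchange Described Above, Using Deliberately Small Toy Parameters (A Real Deployment Would Use A Standardized Large Prime Group; The Key Derivation Step Is Also An Assumption For Illustration):

```python
import hashlib
import secrets

# Toy parameters for illustration only: a real deployment would use a
# standardized large prime group, not these small values.
P = 2_147_483_647      # a small prime (2**31 - 1)
G = 7                  # toy generator

def dh_keypair():
    private = secrets.randbelow(P - 2) + 2        # private exponent, kept secret
    public = pow(G, private, P)                   # value that can be shared openly
    return private, public

# Sender and receiver each generate a key pair and exchange only the public parts.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# Both sides compute the same shared secret without ever transmitting it.
a_secret = pow(b_pub, a_priv, P)
b_secret = pow(a_pub, b_priv, P)
assert a_secret == b_secret

# Derive a symmetric key from the shared secret to lock/unlock the message text.
key = hashlib.sha256(str(a_secret).encode()).digest()
print("shared symmetric key:", key.hex())
```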
With The Advent Of The Internet, Various Online Attacks Have Increased, And Among Them The Most Popular Attack Is Phishing. Phishing Is An Attempt By An Individual Or A Group To Get Personal Confidential Information Such As Passwords And Credit Card Information From Unsuspecting Victims For Identity Theft, Financial Gain, And Other Fraudulent Activities. Fake Websites That Appear Very Similar To The Original Ones Are Hosted To Achieve This. In This Paper We Propose A New Approach Named "A Novel Anti-phishing Framework Based On Visual Cryptography" To Solve The Problem Of Phishing. Here, An Image-based Authentication Using Visual Cryptography Is Implemented. The Use Of Visual Cryptography Is Explored To Preserve The Privacy Of An Image Captcha By Decomposing The Original Image Captcha Into Two Shares (known As Sheets) That Are Stored In Separate Database Servers (one With The User And One With The Server) Such That The Original Image Captcha Can Be Revealed Only When Both Are Simultaneously Available; The Individual Sheet Images Do Not Reveal The Identity Of The Original Image Captcha. Once The Original Image Captcha Is Revealed To The User, It Can Be Used As The Password. Using This, The Website Cross-verifies Its Identity And Proves That It Is A Genuine Website To The End Users.
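A Minimal Sketch Of (2, 2) Visual Cryptography On A Tiny Binary Image (The Pixel-expansion Construction Below Is The Textbook Scheme, Not Necessarily The Exact One Used In The Proposed Framework; The Sample "captcha" Is Hypothetical):

```python
import numpy as np

def make_shares(secret, seed=0):
    """(2, 2) visual secret sharing of a binary image (1 = black, 0 = white).
    Each pixel expands to two subpixels; either share alone looks like random
    noise, while overlaying (OR-ing) both shares reveals the secret."""
    rng = np.random.default_rng(seed)
    h, w = secret.shape
    share1 = np.zeros((h, 2 * w), dtype=np.uint8)
    share2 = np.zeros((h, 2 * w), dtype=np.uint8)
    patterns = np.array([[1, 0], [0, 1]], dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]
            share1[i, 2 * j:2 * j + 2] = p
            # White pixel: identical subpixels; black pixel: complementary subpixels.
            share2[i, 2 * j:2 * j + 2] = p if secret[i, j] == 0 else 1 - p
    return share1, share2

secret = np.array([[1, 0, 1],
                   [0, 1, 0]], dtype=np.uint8)      # tiny stand-in for an image captcha
s1, s2 = make_shares(secret)
stacked = np.maximum(s1, s2)                        # overlaying transparencies = pixelwise OR
# Black secret pixels become fully black subpixel pairs; white ones stay half black.
recovered = (stacked.reshape(secret.shape[0], -1, 2).sum(axis=2) == 2).astype(np.uint8)
print(recovered)
```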
Nowadays, Large Amounts Of Data Are Stored With Cloud Service Providers. Third-party Auditors (TPAs), With The Help Of Cryptography, Are Often Used To Verify This Data. However, Most Auditing Schemes Don't Protect Cloud User Data From TPAs. A Review Of The State Of The Art And Research In Cloud Data Auditing Techniques Highlights Integrity And Privacy Challenges, Current Solutions, And Future Research Directions.
With The Recent Advancement In Cloud Computing Technology, Users Can Upgrade And Downgrade Their Resource Usage Based On Their Needs. Most Of These Benefits Are Achieved From Resource Multiplexing Through Virtualization Technology In The Cloud Model. Using Virtualization Technology, Data Center Resources Can Be Dynamically Allocated Based On Application Demands. The Concepts Of "green Computing" And Skewness Are Introduced To Optimize The Number Of Servers In Use And To Measure The Unevenness In The Multi-dimensional Resource Utilization Of A Server, Respectively. By Minimizing Skewness, Different Types Of Workloads Can Be Combined Effectively And The Overall Utilization Of Server Resources Can Be Improved.
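A Minimal Sketch Of A Skewness-style Unevenness Measure For Multi-dimensional Resource Utilization (The Exact Formula Used By The Authors May Differ; The Utilization Vectors And Placement Heuristic Below Are Hypothetical):

```python
import numpy as np

def skewness(utilization):
    """Unevenness of a server's multi-dimensional resource utilization
    (e.g., CPU, memory, network shares in [0, 1]); lower means the
    different resource types are used more evenly."""
    r = np.asarray(utilization, dtype=float)
    return float(np.sqrt(np.sum((r / r.mean() - 1.0) ** 2)))

balanced = [0.60, 0.55, 0.58]      # hypothetical well-balanced server
skewed   = [0.90, 0.15, 0.20]      # CPU hot spot, memory and network mostly idle
print(round(skewness(balanced), 3), round(skewness(skewed), 3))

# A placement heuristic can prefer the target server whose skewness
# increases least when a new VM's demand vector is added.
def skew_after_placement(utilization, vm_demand):
    return skewness(np.asarray(utilization) + np.asarray(vm_demand))
```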
Real-world Applications Of Record Linkage Often Require Matching To Be Robust In Spite Of Small Variations In String Fields. For Example, Two Health Care Providers Should Be Able To Detect A Patient In Common, Even If One Record Contains A Typo Or Transcription Error. In The Privacy-preserving Setting, However, The Problem Of Approximate String Matching Has Been Cast As A Trade-off Between Security And Practicality, And The Literature Has Mainly Focused On Bloom Filter Encodings, An Approach Which Can Leak Significant Information About The Underlying Records. We Present A Novel Public-key Construction For Secure Two-party Evaluation Of Threshold Functions In Restricted Domains Based On Embeddings Found In The Message Spaces Of Additively Homomorphic Encryption Schemes. We Use This To Construct An Efficient Two-party Protocol For Privately Computing The Threshold Dice Coefficient. Relative To The Approach Of Bloom Filter Encodings, Our Proposal Offers Formal Security Guarantees And Greater Matching Accuracy. We Implement The Protocol And Demonstrate The Feasibility Of This Approach In Linking Medium-sized Patient Databases With Tens Of Thousands Of Records.
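A Minimal Sketch Of The Plaintext Computation The Protocol Evaluates, I.e., Threshold Matching With The Dice Coefficient Over Character Bigrams (The Secure Two-party Evaluation Itself Is Beyond This Sketch; The Names And Threshold Below Are Hypothetical):

```python
def bigrams(s):
    """Character bigrams of a normalized string field."""
    s = s.strip().lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice_coefficient(a, b):
    """Dice similarity of two bigram sets: 2 * |A intersect B| / (|A| + |B|)."""
    A, B = bigrams(a), bigrams(b)
    if not A and not B:
        return 1.0
    return 2.0 * len(A & B) / (len(A) + len(B))

def threshold_match(a, b, threshold=0.8):
    return dice_coefficient(a, b) >= threshold

# A typo-tolerant match of the kind record linkage needs.
print(dice_coefficient("Jonathan Smith", "Jonathon Smith"))   # high similarity despite the typo
print(threshold_match("Jonathan Smith", "Jane Doe"))          # False
```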
Cyber-attacks Are Increasing Exponentially With The Advancement Of Technology. Therefore, The Detection And Prediction Of Cyber-attacks Are Very Important For Every Organization That Deals With Sensitive Data For Business Purposes. In This Paper, We Present A Cyber Security Framework That Uses A Data Mining Technique To Predict Cyber-attacks, Which Can Help In Taking Proper Interventions To Reduce Them. The Two Main Components Of The Framework Are The Detection And Prediction Of Cyber-attacks. The Framework First Extracts The Patterns Related To Cyber-attacks From Historical Data Using A J48 Decision Tree Algorithm And Then Builds A Prediction Model To Predict Future Cyber-attacks. We Then Apply The Framework On Publicly Available Cyber Security Datasets Provided By The Canadian Institute For Cybersecurity. In The Datasets, Several Kinds Of Cyber-attacks Are Present, Including DDoS, Port Scan, Bot, Brute Force, SQL Injection, And Heartbleed. The Proposed Framework Correctly Detects The Cyber-attacks And Provides The Patterns Related To Them. The Overall Accuracy Of The Proposed Prediction Model In Detecting Cyber-attacks Is Around 99%. The Extracted Patterns Of The Prediction Model On Historical Data Can Be Applied To Predict Future Cyber-attacks. The Experimental Results Of The Prediction Model Indicate The Superiority Of The Model In Detecting Future Cyber-attacks.
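A Minimal Sketch Of The Detection Step Using An Entropy-based Decision Tree As A Stand-in For J48/C4.5 (scikit-learn's DecisionTreeClassifier Approximates J48, And The Synthetic Flow Features Below Are Hypothetical, Not The CIC Datasets):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical flow features standing in for real records:
# [duration, packets per second, bytes per second, distinct ports contacted]
rng = np.random.default_rng(42)
benign = np.column_stack([rng.uniform(1, 60, 500), rng.uniform(1, 50, 500),
                          rng.uniform(1e3, 1e5, 500), rng.integers(1, 5, 500)])
ddos   = np.column_stack([rng.uniform(0.1, 5, 500), rng.uniform(500, 5000, 500),
                          rng.uniform(1e6, 1e8, 500), rng.integers(1, 3, 500)])
X = np.vstack([benign, ddos])
y = np.array([0] * 500 + [1] * 500)          # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# criterion="entropy" gives information-gain splits, in the spirit of J48/C4.5.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
# The learned split rules are the human-readable "patterns" of attacks.
print(export_text(clf, feature_names=["duration", "pps", "Bps", "ports"]))
```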
Fraud Detection From Massive User Behaviors Is Often Regarded As Trying To Find A Needle In A Haystack. In This Paper, We Suggest Abnormal Behavioral Patterns Can Be Better Revealed If Both Sequential And Interaction Behaviors Of Users Can Be Modeled Simultaneously, Which However Has Rarely Been Addressed In Prior Work. Along This Line, We Propose A COllective Sequence And INteraction (COSIN) Model, In Which The Behavioral Sequences And Interactions Between Source And Target Users In A Dynamic Interaction Network Are Modeled Uniformly In A Probabilistic Graphical Model. More Specifically, The Sequential Schema Is Modeled With A Hierarchical Hidden Markov Model, And Meanwhile It Is Shifted To The Interaction Schema To Generate The Interaction Counts Through Poisson Factorization. A Hybrid Gibbs-Variational Algorithm Is Then Proposed For Efficient Parameter Estimation Of The COSIN Model. We Conduct Extensive Experiments On Both Synthetic And Real-world Telecom Datasets In Different Scales, And The Results Show That The Proposed Model Outperforms Some Competitive Baseline Methods And Is Scalable. A Case Study Is Further Presented To Show The Explainability Of The Model.
The Protection Of Sensitive And Confidential Data Becomes A Challenging Task In The Present Scenario, As More And More Digital Data Is Stored And Transmitted Between End Users. Privacy Is Vitally Necessary In The Case Of Medical Data, Which Contains Important Information About The Patients. In This Article, A Novel Biometric-inspired Medical Encryption Technique Is Proposed Based On The Newly Introduced Parameterized All Phase Orthogonal Transformation (PR-APBST), Singular Value Decomposition, And QR Decomposition. The Proposed Technique Utilizes The Biometrics Of The Patient/owner To Generate A Key Management System To Obtain The Parameters Involved In The Proposed Technique. The Medical Image Is Then Encrypted Employing PR-APBST, QR, And Singular Value Decomposition And Is Ready For Secure Transmission Or Storage. Finally, A Reliable Decryption Process Is Employed To Reconstruct The Original Medical Image From The Encrypted Image. The Validity And Feasibility Of The Proposed Framework Have Been Demonstrated Through Extensive Experiments On Various Medical Images And A Security Analysis.
Image-based Social Networks Are Among The Most Popular Social Networking Services In Recent Years. With A Tremendous Amount Of Images Uploaded Everyday, Understanding Users' Preferences On User-generated Images And Making Recommendations Have Become An Urgent Need. In Fact, Many Hybrid Models Have Been Proposed To Fuse Various Kinds Of Side Information (e.g., Image Visual Representation, Social Network) And User-item Historical Behavior For Enhancing Recommendation Performance. However, Due To The Unique Characteristics Of The User-generated Images In Social Image Platforms, The Previous Studies Failed To Capture The Complex Aspects That Influence Users' Preferences In A Unified Framework. Moreover, Most Of These Hybrid Models Relied On Predefined Weights In Combining Different Kinds Of Information, Which Usually Resulted In Sub-optimal Recommendation Performance. To This End, In This Paper, We Develop A Hierarchical Attention Model For Social Contextual Image Recommendation. In Addition To Basic Latent User Interest Modeling In The Popular Matrix Factorization Based Recommendation, We Identify Three Key Aspects (i.e., Upload History, Social Influence, And Owner Admiration) That Affect Each User's Latent Preferences, Where Each Aspect Summarizes A Contextual Factor From The Complex Relationships Between Users And Images. After That, We Design A Hierarchical Attention Network That Naturally Mirrors The Hierarchical Relationship (elements In Each Aspect Level, And The Aspect Level) Of Users' Latent Interests With The Identified Key Aspects. Specifically, By Taking Embeddings From State-of-the-art Deep Learning Models That Are Tailored For Each Kind Of Data, The Hierarchical Attention Network Could Learn To Attend Differently To More Or Less Content. Finally, Extensive Experimental Results On Real-world Datasets Clearly Show The Superiority Of Our Proposed Model.
The COVID-19 Epidemic Has Caused A Large Number Of Human Losses And Havoc In The Economic, Social, Societal, And Health Systems Around The World. Controlling Such Epidemic Requires Understanding Its Characteristics And Behavior, Which Can Be Identified By Collecting And Analyzing The Related Big Data. Big Data Analytics Tools Play A Vital Role In Building Knowledge Required In Making Decisions And Precautionary Measures. However, Due To The Vast Amount Of Data Available On COVID-19 From Various Sources, There Is A Need To Review The Roles Of Big Data Analysis In Controlling The Spread Of COVID-19, Presenting The Main Challenges And Directions Of COVID-19 Data Analysis, As Well As Providing A Framework On The Related Existing Applications And Studies To Facilitate Future Research On COVID-19 Analysis. Therefore, In This Paper, We Conduct A Literature Review To Highlight The Contributions Of Several Studies In The Domain Of COVID-19-based Big Data Analysis. The Study Presents As A Taxonomy Several Applications Used To Manage And Control The Pandemic. Moreover, This Study Discusses Several Challenges Encountered When Analyzing COVID-19 Data. The Findings Of This Paper Suggest Valuable Future Directions To Be Considered For Further Research And Applications.
Most People Require Reviews About A Product Before Spending Their Money On It. People Come Across Various Reviews On Websites, But Whether These Reviews Are Genuine Or Fake Is Not Identified By The User. On Some Review Websites, Good Reviews Are Added By The Product Company's Own People In Order To Produce Falsely Positive Product Reviews; They Give Good Reviews For Many Different Products Manufactured By Their Own Firm. The User Is Not Able To Find Out Whether A Review Is Genuine Or Fake. To Detect Fake Reviews On The Website, This "Fake Product Review Monitoring And Removal For Genuine Online Product Reviews Using Opinion Mining" System Is Introduced. This System Finds Fake Reviews Posted As Fake Comments About A Product By Identifying The IP Address Along With Review Posting Patterns. The User Logs In To The System Using His User Id And Password, Views Various Products, And Gives Reviews About Them. To Determine Whether A Review Is Fake Or Genuine, The System Checks The IP Address Of The User; If The System Observes Fake Reviews Sent From The Same IP Address Many Times, It Informs The Admin To Remove That Review From The System. This System Uses A Data Mining Methodology And Helps The User To Find Correct Reviews Of The Product.
Classification-based Data Mining Plays An Important Role In Various Healthcare Services. In The Healthcare Field, The Important And Challenging Task Is To Diagnose Health Conditions And Provide Proper Treatment Of Disease At The Early Stage. There Are Various Diseases That Can Be Diagnosed Early And Treated At The Early Stage, For Example, Thyroid Diseases. The Traditional Way Of Diagnosing Thyroid Diseases Depends On Clinical Examination And Many Blood Tests. The Main Task Is To Detect Disease At The Early Stages With Higher Accuracy. Data Mining Techniques Play An Important Role In The Healthcare Field For Making Decisions, Diagnosing Disease, And Providing Better Treatment For Patients At Low Cost. Thyroid Disease Classification Is An Important Task. The Purpose Of This Study Is The Prediction Of Thyroid Disease Using Different Classification Techniques, And Also To Find The TSH, T3, And T4 Correlation Towards Hyperthyroidism And Hypothyroidism, As Well As The TSH, T3, And T4 Correlation With Gender Towards Hyperthyroidism And Hypothyroidism.
General Health Examination Is An Integral Part Of Healthcare In Many Countries. Identifying The Participants At Risk Is Important For Early Warning And Preventive Intervention. The Fundamental Challenge Of Learning A Classification Model For Risk Prediction Lies In The Unlabeled Data That Constitutes The Majority Of The Collected Dataset. Particularly, The Unlabeled Data Describes The Participants In Health Examinations Whose Health Conditions Can Vary Greatly From Healthy To Very-ill. There Is No Ground Truth For Differentiating Their States Of Health. In This Paper, We Propose A Graph-based, Semi-supervised Learning Algorithm Called SHG-Health (Semi-supervised Heterogeneous Graph On Health) For Risk Predictions To Classify A Progressively Developing Situation With The Majority Of The Data Unlabeled. An Efficient Iterative Algorithm Is Designed And The Proof Of Convergence Is Given. Extensive Experiments Based On Both Real Health Examination Datasets And Synthetic Datasets Are Performed To Show The Effectiveness And Efficiency Of Our Method.
Ranking Of Association Rules Is Currently An Interesting Topic In Data Mining And Bioinformatics. The Huge Number Of Rules Of Items (or, Genes) Evolved By Association Rule Mining (ARM) Algorithms Confuses The Decision Maker. In This Article, We Propose A Weighted Rule-mining Technique (RANWAR, Or Rank-based Weighted Association Rule-mining) To Rank The Rules Using Two Novel Rule-interestingness Measures, Viz., The Rank-based Weighted Condensed Support (wcs) And Weighted Condensed Confidence (wcc) Measures, To Bypass The Problem. These Measures Basically Depend On The Rank Of Items (genes). Using The Rank, We Assign A Weight To Each Item. RANWAR Generates A Much Smaller Number Of Frequent Itemsets Than The State-of-the-art Association Rule Mining Algorithms. Thus, It Saves The Execution Time Of The Algorithm. We Run RANWAR On Gene Expression And Methylation Datasets. The Genes Of The Top Rules Are Biologically Validated By Gene Ontologies (GOs) And KEGG Pathway Analyses. Many Top-ranked Rules Extracted From RANWAR That Hold Poor Ranks In Traditional Apriori Are Highly Biologically Significant To The Related Diseases. Finally, The Top Rules Evolved From RANWAR That Are Not In Apriori Are Reported.
The Task Of Outlier Detection Is To Identify Data Objects That Are Markedly Different From Or Inconsistent With The Normal Set Of Data. Most Existing Solutions Typically Build A Model Using The Normal Data And Identify Outliers That Do Not Fit The Represented Model Very Well. However, In Addition To Normal Data, There Also Exist Limited Negative Examples Or Outliers In Many Applications, And Data May Be Corrupted Such That The Outlier Detection Data Is Imperfectly Labeled. These Make Outlier Detection Far More Difficult Than The Traditional Ones. This Paper Presents A Novel Outlier Detection Approach To Address Data With Imperfect Labels And Incorporate Limited Abnormal Examples Into Learning. To Deal With Data With Imperfect Labels, We Introduce Likelihood Values For Each Input Data Which Denote The Degree Of Membership Of An Example Toward The Normal And Abnormal Classes Respectively. Our Proposed Approach Works In Two Steps. In The First Step, We Generate A Pseudo Training Dataset By Computing Likelihood Values Of Each Example Based On Its Local Behavior. We Present Kernel K-means Clustering Method And Kernel LOF-based Method To Compute The Likelihood Values. In The Second Step, We Incorporate The Generated Likelihood Values And Limited Abnormal Examples Into SVDD-based Learning Framework To Build A More Accurate Classifier For Global Outlier Detection. By Integrating Local And Global Outlier Detection, Our Proposed Method Explicitly Handles Data With Imperfect Labels And Enhances The Performance Of Outlier Detection. Extensive Experiments On Real Life Datasets Have Demonstrated That Our Proposed Approaches Can Achieve A Better Tradeoff Between Detection Rate And False Alarm Rate As Compared To State-of-the-art Outlier Detection Approaches.
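A Minimal Sketch Of The Two-step Idea, Using A Local Outlier Factor To Derive Likelihood-style Weights And A One-class SVM (A Close Relative Of SVDD With The RBF Kernel) Trained With Those Weights (The Data, Weighting Rule, And Parameters Below Are Assumptions, Not The Paper's Exact Formulation):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(300, 2))                     # imperfectly labeled "normal" data
hidden_outliers = rng.uniform(4, 6, size=(10, 2))            # corrupted points inside the normal set
X_train = np.vstack([normal, hidden_outliers])

# Step 1: local behavior -> likelihood of being normal (LOF close to 1 = normal).
lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(X_train)
lof_score = -lof.negative_outlier_factor_                    # ~1 for inliers, larger for outliers
likelihood = 1.0 / lof_score                                 # pseudo "membership toward normal"

# Step 2: weight a one-class SVM (an SVDD-like learner) by the likelihood values,
# so suspect training points contribute less to the global boundary.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm.fit(X_train, sample_weight=likelihood)

X_test = np.array([[0.1, -0.2], [5.0, 5.0]])
print(ocsvm.predict(X_test))                                 # +1 = normal, -1 = outlier
```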
Keyword Queries On Databases Provide Easy Access To Data, But Often Suffer From Low Ranking Quality, I.e., Low Precision And/or Recall, As Shown In Recent Benchmarks. It Would Be Useful To Identify Queries That Are Likely To Have Low Ranking Quality To Improve The User Satisfaction. For Instance, The System May Suggest To The User Alternative Queries For Such Hard Queries. In This Paper, We Analyze The Characteristics Of Hard Queries And Propose A Novel Framework To Measure The Degree Of Difficulty For A Keyword Query Over A Database, Considering Both The Structure And The Content Of The Database And The Query Results. We Evaluate Our Query Difficulty Prediction Model Against Two Effectiveness Benchmarks For Popular Keyword Search Ranking Methods. Our Empirical Results Show That Our Model Predicts The Hard Queries With High Accuracy. Further, We Present A Suite Of Optimizations To Minimize The Incurred Time Overhead.
Our Proposed System Identifies Reliable Information In The Medical Domain That Stands As A Building Block For A Healthcare System That Is Up-to-date With The Latest Discoveries, By Using Tools Such As NLP And ML Techniques. In This Research, We Focus On Disease And Treatment Information, And The Relation That Exists Between These Two Entities. The Main Goal Of This Research Is To Identify The Disease Name From The Symptoms Specified, Extract The Relevant Sentence From The Article, Obtain The Relation That Exists Between Disease And Treatment, And Classify The Information Into Cure, Prevent, Or Side Effect For The User.
Feature Selection Involves Identifying A Subset Of The Most Useful Features That Produces Compatible Results As The Original Entire Set Of Features. A Feature Selection Algorithm May Be Evaluated From Both The Efficiency And Effectiveness Points Of View. While The Efficiency Concerns The Time Required To Find A Subset Of Features, The Effectiveness Is Related To The Quality Of The Subset Of Features. Based On These Criteria, A Fast Clustering-based Feature Selection Algorithm (FAST) Is Proposed And Experimentally Evaluated In This Paper. The FAST Algorithm Works In Two Steps. In The First Step, Features Are Divided Into Clusters By Using Graph-theoretic Clustering Methods. In The Second Step, The Most Representative Feature That Is Strongly Related To Target Classes Is Selected From Each Cluster To Form A Subset Of Features. Since Features In Different Clusters Are Relatively Independent, The Clustering-based Strategy Of FAST Has A High Probability Of Producing A Subset Of Useful And Independent Features. To Ensure The Efficiency Of FAST, We Adopt The Efficient Minimum-spanning Tree (MST) Clustering Method. The Efficiency And Effectiveness Of The FAST Algorithm Are Evaluated Through An Empirical Study. Extensive Experiments Are Carried Out To Compare FAST And Several Representative Feature Selection Algorithms, Namely, FCBF, ReliefF, CFS, Consist, And FOCUS-SF, With Respect To Four Types Of Well-known Classifiers, Namely, The Probability-based Naive Bayes, The Tree-based C4.5, The Instance-based IB1, And The Rule-based RIPPER Before And After Feature Selection. The Results, On 35 Publicly Available Real-world High-dimensional Image, Microarray, And Text Data, Demonstrate That The FAST Not Only Produces Smaller Subsets Of Features But Also Improves The Performances Of The Four Types Of Classifiers.
Pattern Classification Systems Are Commonly Used In Adversarial Applications, Like Biometric Authentication, Network Intrusion Detection, And Spam Filtering, In Which Data Can Be Purposely Manipulated By Humans To Undermine Their Operation. As This Adversarial Scenario Is Not Taken Into Account By Classical Design Methods, Pattern Classification Systems May Exhibit Vulnerabilities, Whose Exploitation May Severely Affect Their Performance, And Consequently Limit Their Practical Utility. Extending Pattern Classification Theory And Design Methods To Adversarial Settings Is Thus A Novel And Very Relevant Research Direction, Which Has Not Yet Been Pursued In A Systematic Way. In This Paper, We Address One Of The Main Open Issues: Evaluating At Design Phase The Security Of Pattern Classifiers, Namely, The Performance Degradation Under Potential Attacks They May Incur During Operation. We Propose A Framework For Empirical Evaluation Of Classifier Security That Formalizes And Generalizes The Main Ideas Proposed In The Literature, And Give Examples Of Its Use In Three Real Applications. Reported Results Show That Security Evaluation Can Provide A More Complete Understanding Of The Classifier’s Behavior In Adversarial Environments, And Lead To Better Design Choices.
Over The Past Few Years There Has Been Tremendous Advancement In Electronic Commerce Technology, And The Use Of Credit Cards Has Dramatically Increased. As The Credit Card Becomes The Most Popular Mode Of Payment For Both Online And Regular Purchases, Cases Of Fraud Associated With It Are Also Rising. In This Paper We Present The Necessary Theory To Detect Fraud In Credit Card Transaction Processing Using A Hidden Markov Model (HMM). An HMM Is Initially Trained With The Normal Behavior Of A Cardholder. If An Incoming Credit Card Transaction Is Not Accepted By The Trained HMM With Sufficiently High Probability, It Is Considered To Be Fraudulent. At The Same Time, We Try To Ensure That Genuine Transactions Are Not Rejected By Using An Enhancement To It (a Hybrid Model). In Further Sections We Compare Different Methods For Fraud Detection And Show Why HMM Is Preferred Over Other Methods.
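A Minimal Sketch Of HMM-based Scoring Of A New Transaction Against A Cardholder's Spending History (The HMM Parameters, Spending Bands, And Acceptance Threshold Below Are Assumed Toy Values, Not A Trained Model Of The Paper):

```python
import numpy as np

# Assumed toy HMM for a cardholder's spending profile: 2 hidden states
# (routine / occasional big spender), observations = spending bands 0/1/2.
A = np.array([[0.9, 0.1],
              [0.3, 0.7]])                  # state transition probabilities
B = np.array([[0.7, 0.25, 0.05],
              [0.1, 0.40, 0.50]])           # emission probabilities per spending band
pi = np.array([0.8, 0.2])                   # initial state distribution

def log_likelihood(obs):
    """Forward algorithm (with scaling) for the observation log-likelihood."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

def is_fraudulent(history, new_band, log_prob_threshold=-2.0):
    """Flag the new transaction if its conditional log-probability given the
    cardholder's history falls below the acceptance threshold."""
    delta = log_likelihood(history + [new_band]) - log_likelihood(history)
    return delta < log_prob_threshold

history = [0, 0, 1, 0, 0, 1, 0]             # mostly low-band purchases
print(is_fraudulent(history, 0))            # another routine purchase -> False
print(is_fraudulent(history, 2))            # sudden high-band purchase -> True
```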
Several Anonymization Techniques, Such As Generalization And Bucketization, Have Been Designed For Privacy Preserving Microdata Publishing. Recent Work Has Shown That Generalization Loses Considerable Amount Of Information, Especially For High-dimensional Data. Bucketization, On The Other Hand, Does Not Prevent Membership Disclosure And Does Not Apply For Data That Do Not Have A Clear Separation Between Quasi-identifying Attributes And Sensitive Attributes. In This Paper, We Present A Novel Technique Called Slicing, Which Partitions The Data Both Horizontally And Vertically. We Show That Slicing Preserves Better Data Utility Than Generalization And Can Be Used For Membership Disclosure Protection. Another Important Advantage Of Slicing Is That It Can Handle High-dimensional Data. We Show How Slicing Can Be Used For Attribute Disclosure Protection And Develop An Efficient Algorithm For Computing The Sliced Data That Obey The ℓ-diversity Requirement. Our Workload Experiments Confirm That Slicing Preserves Better Utility Than Generalization And Is More Effective Than Bucketization In Workloads Involving The Sensitive Attribute. Our Experiments Also Demonstrate That Slicing Can Be Used To Prevent Membership Disclosure.
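A Minimal Sketch Of Slicing On A Toy Microdata Table (The Column Grouping And Bucket Size Below Are Hypothetical; The Published Algorithm Additionally Enforces The ℓ-diversity Requirement When Forming Buckets):

```python
import pandas as pd

def slice_table(df, column_groups, bucket_size, seed=0):
    """Minimal slicing: vertically partition attributes into column groups and,
    within each horizontal bucket, independently permute the tuples of each
    column group so that cross-group linkage inside a bucket is broken."""
    out = df.copy().reset_index(drop=True)
    for start in range(0, len(out), bucket_size):
        idx = out.index[start:start + bucket_size]
        for g, cols in enumerate(column_groups):
            permuted = out.loc[idx, cols].sample(frac=1, random_state=seed + start + g)
            out.loc[idx, cols] = permuted.to_numpy()
    return out

# Hypothetical microdata; one quasi-identifier group and one sensitive group.
df = pd.DataFrame({
    "age":     [23, 27, 35, 59, 61, 65],
    "zip":     ["47906", "47906", "47905", "47302", "47304", "47304"],
    "disease": ["flu", "cancer", "flu", "gastritis", "flu", "cancer"],
})
sliced = slice_table(df, column_groups=[["age", "zip"], ["disease"]], bucket_size=3)
print(sliced)
```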
In Recent Years, We Have Witnessed A Flourish Of Review Websites. It Presents A Great Opportunity To Share Our Viewpoints For Various Products We Purchase. However, We Face An Information Overloading Problem. How To Mine Valuable Information From Reviews To Understand A User's Preferences And Make An Accurate Recommendation Is Crucial. Traditional Recommender Systems (RS) Consider Some Factors, Such As User's Purchase Records, Product Category, And Geographic Location. In This Work, We Propose A Sentiment-based Rating Prediction Method (RPS) To Improve Prediction Accuracy In Recommender Systems. Firstly, We Propose A Social User Sentimental Measurement Approach And Calculate Each User's Sentiment On Items/products. Secondly, We Not Only Consider A User's Own Sentimental Attributes But Also Take Interpersonal Sentimental Influence Into Consideration. Then, We Consider Product Reputation, Which Can Be Inferred By The Sentimental Distributions Of A User Set That Reflect Customers' Comprehensive Evaluation. At Last, We Fuse Three Factors (User Sentiment Similarity, Interpersonal Sentimental Influence, And Item's Reputation Similarity) Into Our Recommender System To Make An Accurate Rating Prediction. We Conduct A Performance Evaluation Of The Three Sentimental Factors On A Real-world Dataset Collected From Yelp. Our Experimental Results Show The Sentiment Can Well Characterize User Preferences, Which Helps To Improve The Recommendation Performance.
Association Rule Mining And Frequent Itemset Mining Are Two Popular And Widely Studied Data Analysis Techniques For A Range Of Applications. In This Paper, We Focus On Privacy-preserving Mining On Vertically Partitioned Databases. In Such A Scenario, Data Owners Wish To Learn The Association Rules Or Frequent Itemsets From A Collective Data Set And Disclose As Little Information About Their (sensitive) Raw Data As Possible To Other Data Owners And Third Parties. To Ensure Data Privacy, We Design An Efficient Homomorphic Encryption Scheme And A Secure Comparison Scheme. We Then Propose A Cloud-aided Frequent Itemset Mining Solution, Which Is Used To Build An Association Rule Mining Solution. Our Solutions Are Designed For Outsourced Databases That Allow Multiple Data Owners To Efficiently Share Their Data Securely Without Compromising On Data Privacy. Our Solutions Leak Less Information About The Raw Data Than Most Existing Solutions. In Comparison To The Only Known Solution Achieving A Similar Privacy Level As Our Proposed Solutions, The Performance Of Our Proposed Solutions Is Three To Five Orders Of Magnitude Higher. Based On Our Experiment Findings Using Different Parameters And Data Sets, We Demonstrate That The Run Time In Each Of Our Solutions Is Only One Order Higher Than That In The Best Non-privacy-preserving Data Mining Algorithms. Since Both Data And Computing Work Are Outsourced To The Cloud Servers, The Resource Consumption At The Data Owner End Is Very Low.
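As A Generic Illustration Of Additively Homomorphic Encryption For Privately Aggregating Itemset Supports Across Data Owners (A Textbook Paillier Toy With Small Primes; The Paper Designs Its Own, More Efficient Scheme, And The Owner Data Below Is Hypothetical):

```python
import math
import random

# Textbook Paillier with toy primes (illustration only; real use needs large primes).
p, q = 1009, 1013
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)      # lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)                    # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

def add_encrypted(c1, c2):
    """Homomorphic addition: the product of ciphertexts decrypts to the sum."""
    return (c1 * c2) % n2

# Two data owners report whether an itemset appears in each of their transactions;
# the aggregator sums encrypted 0/1 indicators without seeing the raw data.
owner1 = [1, 0, 1, 1]
owner2 = [1, 1, 0, 1]
total = encrypt(0)
for bit in owner1 + owner2:
    total = add_encrypted(total, encrypt(bit))
print("global support of the itemset:", decrypt(total))    # 6
```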
Hierarchical Tensor Geospatial Data: A Hierarchical Tensor-based Approach To Compressing, Updating, And Querying Geospatial Data.
The Label Powerset (LP) Method Is One Category Of Multi-label Learning Algorithms. This Paper Presents A Basis Expansions Model For Multi-label Classification, Where A Basis Function Is An LP Classifier Trained On A Random K-labelset. The Expansion Coefficients Are Learned To Minimize The Global Error Between The Prediction And The Ground Truth. We Derive An Analytic Solution To Learn The Coefficients Efficiently. We Further Extend This Model To Handle The Cost-sensitive Multi-label Classification Problem, And Apply It In Social Tagging To Handle The Issue Of A Noisy Training Set By Treating The Tag Counts As The Misclassification Costs. We Have Conducted Experiments On Several Benchmark Datasets And Compared Our Method With Other State-of-the-art Multi-label Learning Methods. Experimental Results On Both Multi-label Classification And Cost-sensitive Social Tagging Demonstrate That Our Method Has Better Performance Than Other Methods.
Computer Vision-based Food Recognition Could Be Used To Estimate A Meal's Carbohydrate Content For Diabetic Patients. This Study Proposes A Methodology For Automatic Food Recognition, Based On The Bag-of-features (BoF) Model. An Extensive Technical Investigation Was Conducted For The Identification And Optimization Of The Best Performing Components Involved In The BoF Architecture, As Well As The Estimation Of The Corresponding Parameters. For The Design And Evaluation Of The Prototype System, A Visual Dataset With Nearly 5000 Food Images Was Created And Organized Into 11 Classes. The Optimized System Computes Dense Local Features, Using The Scale-invariant Feature Transform On The HSV Color Space, Builds A Visual Dictionary Of 10000 Visual Words By Using The Hierarchical K-means Clustering And Finally Classifies The Food Images With A Linear Support Vector Machine Classifier. The System Achieved Classification Accuracy Of The Order Of 78%, Thus Proving The Feasibility Of The Proposed Approach In A Very Challenging Image Dataset.
We Propose A Novel Method For Automatic Annotation, Indexing And Annotation-based Retrieval Of Images. The New Method, That We Call Markovian Semantic Indexing (MSI), Is Presented In The Context Of An Online Image Retrieval System. Assuming Such A System, The Users' Queries Are Used To Construct An Aggregate Markov Chain (AMC) Through Which The Relevance Between The Keywords Seen By The System Is Defined. The Users' Queries Are Also Used To Automatically Annotate The Images. A Stochastic Distance Between Images, Based On Their Annotation And The Keyword Relevance Captured In The AMC, Is Then Introduced. Geometric Interpretations Of The Proposed Distance Are Provided And Its Relation To A Clustering In The Keyword Space Is Investigated. By Means Of A New Measure Of Markovian State Similarity, The Mean First Cross Passage Time (CPT), Optimality Properties Of The Proposed Distance Are Proved. Images Are Modeled As Points In A Vector Space And Their Similarity Is Measured With MSI. The New Method Is Shown To Possess Certain Theoretical Advantages And Also To Achieve Better Precision Versus Recall Results When Compared To Latent Semantic Indexing (LSI) And Probabilistic Latent Semantic Indexing (pLSI) Methods In Annotation-Based Image Retrieval (ABIR) Tasks.
We Propose A Protocol For Secure Mining Of Association Rules In Horizontally Distributed Databases. The Current Leading Protocol Is That Of Kantarcioglu And Clifton. Our Protocol, Like Theirs, Is Based On The Fast Distributed Mining (FDM) Algorithm Of Cheung Et Al., Which Is An Unsecured Distributed Version Of The Apriori Algorithm. The Main Ingredients In Our Protocol Are Two Novel Secure Multi-party Algorithms: One That Computes The Union Of Private Subsets That Each Of The Interacting Players Hold, And Another That Tests The Inclusion Of An Element Held By One Player In A Subset Held By Another. Our Protocol Offers Enhanced Privacy With Respect To That Earlier Protocol. In Addition, It Is Simpler And Is Significantly More Efficient In Terms Of Communication Rounds, Communication Cost, And Computational Cost.
Web-page Recommendation Plays An Important Role In Intelligent Web Systems. Useful Knowledge Discovery From Web Usage Data And Satisfactory Knowledge Representation For Effective Web-page Recommendations Are Crucial And Challenging. This Paper Proposes A Novel Method To Efficiently Provide Better Web-page Recommendation Through Semantic-enhancement By Integrating The Domain And Web Usage Knowledge Of A Website. Two New Models Are Proposed To Represent The Domain Knowledge. The First Model Uses An Ontology To Represent The Domain Knowledge. The Second Model Uses One Automatically Generated Semantic Network To Represent Domain Terms, Web-pages, And The Relations Between Them. Another New Model, The Conceptual Prediction Model, Is Proposed To Automatically Generate A Semantic Network Of The Semantic Web Usage Knowledge, Which Is The Integration Of Domain Knowledge And Web Usage Knowledge. A Number Of Effective Queries Have Been Developed To Query About These Knowledge Bases. Based On These Queries, A Set Of Recommendation Strategies Have Been Proposed To Generate Web-page Candidates. The Recommendation Results Have Been Compared With The Results Obtained From An Advanced Existing Web Usage Mining (WUM) Method. The Experimental Results Demonstrate That The Proposed Method Produces Significantly Higher Performance Than The WUM Method.
Visualization Of Massively Large Datasets Presents Two Significant Problems. First, The Dataset Must Be Prepared For Visualization, And Traditional Dataset Manipulation Methods Fail Due To Lack Of Temporary Storage Or Memory. The Second Problem Is The Presentation Of The Data In The Visual Media, Particularly Real-time Visualization Of Streaming Time Series Data. An Ongoing Research Project Addresses Both These Problems, Using Data From Two National Repositories. This Work Is Presented Here, With The Results Of The Current Effort Summarized And Future Plans, Including 3D Visualization, Outlined.
Improving The Quality Of Service (QoS) Of Low Power And Lossy Networks (LLNs) In The Internet Of Things (IoT) Is A Major Challenge. Cluster-based Routing Is An Effective Approach To Achieve This Goal. This Paper Proposes A QoS-aware Clustering-based Routing (QACR) Mechanism For LLNs In Fog-enabled IoT Which Provides A Clustering, A Cluster Head (CH) Election, And A Routing Path Selection Technique. The Clustering Adopts A Community Detection Algorithm That Partitions The Network Into Clusters Based On The Available Nodes' Connectivity. The CH Election And Relay Node Selection Are Both Weighted By The Rank Of The Nodes, Which Takes A Node's Energy, Received Signal Strength, Link Quality, And Number Of Cluster Members Into Consideration As The Ranking Metrics. The Number Of CHs In A Cluster Is Adaptive And Varies According To The Cluster State To Balance The Energy Consumption Of Nodes. Besides, The Protocol Uses A CH Role Handover Technique During CH Election That Decreases The Control Messages Required For Periodic Election And Cluster Formation. An Evaluation Of The QACR Has Been Performed Through Simulations For Various Scenarios. The Obtained Results Show That The QACR Improves The QoS In Terms Of Packet Delivery Ratio, Latency, And Network Lifetime Compared To The Existing Protocols.
Rapid Growth Of Internet Of Things (IoT) Devices Dealing With Sensitive Data Has Led To The Emergence Of New Access Control Technologies In Order To Keep This Data Safe From Unauthorized Use. In Particular, A Dynamic IoT Environment, Characterized By A High Signaling Overhead Caused By Subscribers' Mobility, Presents A Significant Concern In Ensuring Secure Data Distribution To Legitimate Subscribers. Hence, For Such Dynamic Environments, Group Key Management (GKM) Represents The Fundamental Mechanism For Managing The Dissemination Of Keys For Access Control And Secure Data Distribution. However, Existing Access Control Schemes Based On GKM And Dedicated To IoT Are Mainly Based On Centralized Models, Which Fail To Address The Scalability Challenge Introduced By The Massive Scale Of IoT Devices And The Increased Number Of Subscribers. Besides, None Of The Existing GKM Schemes Supports The Independence Of The Members In The Same Group; They Focus Only On Dependent Symmetric Group Keys Per Subgroup Communication, Which Is Inefficient For Subscribers With Highly Dynamic Behavior. To Deal With These Challenges, We Introduce A Novel Decentralized Lightweight Group Key Management Architecture For Access Control In The IoT Environment (DLGKM-AC). Based On A Hierarchical Architecture, Composed Of One Key Distribution Center (KDC) And Several Sub Key Distribution Centers (SKDCs), The Proposed Scheme Enhances The Management Of Subscribers' Groups And Alleviates The Rekeying Overhead On The KDC. Moreover, A New Master Token Management Protocol For Managing Key Dissemination Across A Group Of Subscribers Is Introduced. This Protocol Reduces Storage, Computation, And Communication Overheads During Join/leave Events. The Proposed Approach Accommodates A Scalable IoT Architecture, Which Mitigates The Single Point Of Failure By Reducing The Load Caused By Rekeying At The Core Network. DLGKM-AC Guarantees Secure Group Communication By Preventing Collusion Attacks And Ensuring Backward/forward Secrecy. Simulation Results And Analysis Of The Proposed Scheme Show Considerable Resource Gains In Terms Of Storage, Computation, And Communication Overheads.
In A Distributed Software Defined Networking (SDN) Architecture, The Quality Of Service (QoS) Experienced By A Traffic Flow Through An SDN Switch Is Primarily Dependent On The SDN Controller To Which That Switch Is Mapped. We Propose A New Controller-quality Metric Known As The Quality Of Controller (QoC) Which Is Defined Based On The Controller's Reliability And Response Time. We Model The Controller Reliability Based On Bayesian Inference While Its Response Time Is Modelled As A Linear Approximation Of The M/M/1 Queue. We Develop A QoC-aware Approach For Solving (i) The Switch-controller Mapping Problem And, (ii) Control Traffic Distribution Among The Mapped Controllers. Each Switch Is Mapped To Multiple Controllers To Enable Resilience, With The Switch-controller Mapping And Control Traffic Distribution Based On The QoC Metric, Which Is The Combined Cost Of Controller Reliability And Response Time. We First Develop An Optimization Programming Formulation That Maximizes The Minimum QoC Among The Set Of Controllers To Solve The Above Problem. Since The Optimization Problem Is Computationally Prohibitive For Large Networks, We Develop A Heuristic Algorithm, Qoc-Aware Switch-coNTroller Mapping (QuANTuM), That Solves The Problem Of Switch-controller Mapping And Control Traffic Distribution In Two Stages Such That The Minimum Of The Controller QoC Is Maximized. Through Simulations, We Show That The Heuristic Results Are Within 18% Of The Optimum While Achieving A Fair Control Traffic Distribution With A QoC Min-max Ratio Of Up To 95%.
The Traditional Rumor Diffusion Model Primarily Studies The Rumor Itself And User Behavior As The Entry Points. The Complexity Of User Behavior, Multidimensionality Of The Communication Space, Imbalance Of The Data Samples, And Symbiosis And Competition Between Rumor And Anti-rumor Are Challenges Associated With The In-depth Study Of Rumor Communication. Given These Challenges, This Study Proposes A Group Behavior Model For Rumor And Anti-rumor. First, This Study Considers The Diversity And Complexity Of The Rumor Propagation Feature Space And The Advantages Of Representation Learning In The Feature Extraction Of Data. Further, We Adopt The Corresponding Representation Learning Methods For The Content And Structure Of The Rumor And Anti-rumor To Reduce The Spatial Feature Dimension Of The Rumor-spreading Data And To Uniformly And Densely Express The Full-featured Information Feature Representation. Second, This Paper Introduces Evolutionary Game Theory, Combined With The User-influenced Rumor And Anti-rumor, To Reflect The Conflict And Symbiotic Relationship Between Rumor And Anti-rumor. We Obtain A Network Structural Feature Expression Of The Degree Of User Influence On Rumor And Anti-rumor When Expressing The Structural Characteristics Of Group Communication Relationships. Finally, Aiming At The Timeliness Of Rumor Topic Evolution, The Whole Model Is Proposed; Time Slicing And Discretization Of The Rumor Life Cycle Are Used To Synthesize The Full-featured Information Feature Representation Of Rumor And Anti-rumor. The Experiments Denote That The Model Can Not Only Effectively Analyze User Group Behavior Regarding Rumor But Also Accurately Reflect The Competition And Symbiotic Relation Between Rumor And Anti-rumor Diffusion.
Wireless Sensor Networks (WSNs) Will Be Integrated Into The Future Internet As One Of The Components Of The Internet Of Things, And Will Become Globally Addressable By Any Entity Connected To The Internet. Despite The Great Potential Of This Integration, It Also Brings New Threats, Such As The Exposure Of Sensor Nodes To Attacks Originating From The Internet. In This Context, Lightweight Authentication And Key Agreement Protocols Must Be In Place To Enable End-to-end Secure Communication. Recently, Amin Et Al. Proposed A Three-factor Mutual Authentication Protocol For WSNs. However, We Identified Several Flaws In Their Protocol. We Found That Their Protocol Suffers From Smart Card Loss Attack Where The User Identity And Password Can Be Guessed Using Offline Brute Force Techniques. Moreover, The Protocol Suffers From Known Session-specific Temporary Information Attack, Which Leads To The Disclosure Of Session Keys In Other Sessions. Furthermore, The Protocol Is Vulnerable To Tracking Attack And Fails To Fulfill User Untraceability. To Address These Deficiencies, We Present A Lightweight And Secure User Authentication Protocol Based On The Rabin Cryptosystem, Which Has The Characteristic Of Computational Asymmetry. We Conduct A Formal Verification Of Our Proposed Protocol Using ProVerif In Order To Demonstrate That Our Scheme Fulfills The Required Security Properties. We Also Present A Comprehensive Heuristic Security Analysis To Show That Our Protocol Is Secure Against All The Possible Attacks And Provides The Desired Security Features. The Results We Obtained Show That Our New Protocol Is A Secure And Lightweight Solution For Authentication And Key Agreement For Internet-integrated WSNs.
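A Minimal Sketch Of The Rabin Cryptosystem's Computational Asymmetry, Where Encryption Is A Single Modular Squaring And Decryption Requires The Factorization Of The Modulus (Toy Primes Only; The Paper's Full Authentication And Key Agreement Protocol Is Beyond This Sketch):

```python
# Toy Rabin parameters: p and q are primes congruent to 3 (mod 4).
# Real deployments use large primes; these are for illustration only.
p, q = 1019, 1031
n = p * q                         # public key; (p, q) is the private key

def encrypt(m):
    """Encryption is a single modular squaring: cheap for constrained devices."""
    return pow(m, 2, n)

def decrypt(c):
    """Decryption needs the factorization of n: compute square roots mod p and q
    and combine them with the CRT, yielding four candidate plaintexts."""
    mp = pow(c, (p + 1) // 4, p)
    mq = pow(c, (q + 1) // 4, q)
    yp = pow(p, -1, q)            # p^{-1} mod q
    yq = pow(q, -1, p)            # q^{-1} mod p
    r = (yp * p * mq + yq * q * mp) % n
    s = (yp * p * mq - yq * q * mp) % n
    return {r, n - r, s, n - s}

message = 123456 % n
cipher = encrypt(message)
roots = decrypt(cipher)
print(message in roots)           # True: one of the four roots is the message
```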
Underwater Wireless Sensor Networks (UWSNs) Have Been Shown To Be A Promising Technology To Monitor And Explore The Oceans In Lieu Of Traditional Undersea Wireline Instruments. Nevertheless, The Data Gathering Of UWSNs Is Still Severely Limited Because Of The Acoustic Channel Communication Characteristics. One Way To Improve The Data Collection In UWSNs Is Through The Design Of Routing Protocols Considering The Unique Characteristics Of The Underwater Acoustic Communication And The Highly Dynamic Network Topology. In This Paper, We Propose The GEDAR Routing Protocol For UWSNs. GEDAR Is An Anycast, Geographic And Opportunistic Routing Protocol That Routes Data Packets From Sensor Nodes To Multiple Sonobuoys (sinks) At The Sea's Surface. When The Node Is In A Communication Void Region, GEDAR Switches To The Recovery Mode Procedure Which Is Based On Topology Control Through The Depth Adjustment Of The Void Nodes, Instead Of The Traditional Approaches Using Control Messages To Discover And Maintain Routing Paths Along Void Regions. Simulation Results Show That GEDAR Significantly Improves The Network Performance When Compared With The Baseline Solutions, Even In Hard And Difficult Mobile Scenarios Of Very Sparse And Very Dense Networks And For High Network Traffic Loads.
Uploading Data Streams To A Resource-rich Cloud Server For Inner Product Evaluation, An Essential Building Block In Many Popular Stream Applications (e.g., Statistical Monitoring), Is Appealing To Many Companies And Individuals. On The Other Hand, Verifying The Result Of The Remote Computation Plays A Crucial Role In Addressing The Issue Of Trust. Since The Outsourced Data Collection Likely Comes From Multiple Data Sources, It Is Desired For The System To Be Able To Pinpoint The Originator Of Errors By Allotting Each Data Source A Unique Secret Key, Which Requires The Inner Product Verification To Be Performed Under Any Two Parties' Different Keys. However, The Present Solutions Either Depend On A Single Key Assumption Or Powerful Yet Practically-inefficient Fully Homomorphic Cryptosystems. In This Paper, We Focus On The More Challenging Multi-key Scenario Where Data Streams Are Uploaded By Multiple Data Sources With Distinct Keys. We First Present A Novel Homomorphic Verifiable Tag Technique To Publicly Verify The Outsourced Inner Product Computation On The Dynamic Data Streams, And Then Extend It To Support The Verification Of Matrix Product Computation. We Prove The Security Of Our Scheme In The Random Oracle Model. Moreover, The Experimental Result Also Shows The Practicability Of Our Design.
The Diffusion LMS Algorithm Has Been Extensively Studied In Recent Years. This Efficient Strategy Makes It Possible To Address Distributed Optimization Problems Over Networks In The Case Where Nodes Have To Collaboratively Estimate A Single Parameter Vector. Nevertheless, There Are Several Problems In Practice That Are Multitask-oriented In The Sense That The Optimum Parameter Vector May Not Be The Same For Every Node. This Brings Up The Issue Of Studying The Performance Of The Diffusion LMS Algorithm When It Is Run, Either Intentionally Or Unintentionally, In A Multitask Environment. In This Paper, We Conduct A Theoretical Analysis Of The Stochastic Behavior Of Diffusion LMS In The Case Where The Single-task Hypothesis Is Violated. We Analyze The Competing Factors That Influence The Performance Of Diffusion LMS In The Multitask Environment, And Which Allow The Algorithm To Continue To Deliver Performance Superior To Non-cooperative Strategies In Some Useful Circumstances. We Also Propose An Unsupervised Clustering Strategy That Allows Each Node To Select, Via Adaptive Adjustments Of Combination Weights, The Neighboring Nodes With Which It Can Collaborate To Estimate A Common Parameter Vector. Simulations Are Presented To Illustrate The Theoretical Results And To Demonstrate The Efficiency Of The Proposed Clustering Strategy.
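As a concrete illustration of the adapt-then-combine form of diffusion LMS described above, the following minimal Python/NumPy sketch simulates a small network in the single-task case; the five-node topology, uniform combination weights, and noise level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: adjacency list of a 5-node graph (each node includes itself).
neighbors = {0: [0, 1, 2], 1: [0, 1, 3], 2: [0, 2, 4], 3: [1, 3, 4], 4: [2, 3, 4]}
N, M, mu, T = 5, 4, 0.01, 2000          # nodes, filter length, step size, iterations
w_true = rng.standard_normal(M)          # common parameter vector (single-task case)
w = np.zeros((N, M))                     # local estimates

for _ in range(T):
    # Adaptation step: each node runs one LMS update on its own data.
    psi = np.empty_like(w)
    for k in range(N):
        x = rng.standard_normal(M)                    # regressor
        d = x @ w_true + 0.1 * rng.standard_normal()  # noisy measurement
        psi[k] = w[k] + mu * (d - x @ w[k]) * x
    # Combination step: average the intermediate estimates of the neighborhood.
    for k in range(N):
        w[k] = psi[neighbors[k]].mean(axis=0)

print("mean squared deviation:", np.mean((w - w_true) ** 2))
```

In a multitask setting, the clustering strategy the paper proposes would adapt those uniform combination weights so that each node only averages with neighbors estimating the same vector.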
In This Project We Investigate IT Auditing For Assuring Security In Cloud Computing. During This Investigation, We Implement An IT Auditing Mechanism Over A Cloud Computing Framework In Order To Assure The Desired Level Of Security.
The Delay-tolerant-network (DTN) Model Is Becoming A Viable Communication Alternative To The Traditional Infrastructural Model For Modern Mobile Consumer Electronics Equipped With Short-range Communication Technologies Such As Bluetooth, NFC, And Wi-Fi Direct. Proximity Malware Is A Class Of Malware That Exploits The Opportunistic Contacts And Distributed Nature Of DTNs For Propagation. Behavioral Characterization Of Malware Is An Effective Alternative To Pattern Matching In Detecting Malware, Especially When Dealing With Polymorphic Or Obfuscated Malware. In This Paper, We First Propose A General Behavioral Characterization Of Proximity Malware Based On The Naive Bayesian Model, Which Has Been Successfully Applied In Non-DTN Settings Such As Filtering Email Spam And Detecting Botnets. We Identify Two Unique Challenges For Extending Bayesian Malware Detection To DTNs ("insufficient Evidence Versus Evidence Collection Risk" And "filtering False Evidence Sequentially And Distributedly"), And Propose A Simple Yet Effective Method, Look Ahead, To Address The Challenges. Furthermore, We Propose Two Extensions To Look Ahead, Dogmatic Filtering And Adaptive Look Ahead, To Address The Challenge Of "malicious Nodes Sharing False Evidence." Real Mobile Network Traces Are Used To Verify The Effectiveness Of The Proposed Methods.
Overlay Network Topology Together With Peer/data Organization And Search Algorithm Are The Crucial Components Of Unstructured Peer-to-peer (P2P) Networks As They Directly Affect The Efficiency Of Search On Such Networks. Scale-free (power-law) Overlay Network Topologies Are Among Structures That Offer High Performance For These Networks. A Key Problem For These Topologies Is The Existence Of Hubs, Nodes With High Connectivity. Yet, The Peers In A Typical Unstructured P2P Network May Not Be Willing Or Able To Cope With Such High Connectivity And Its Associated Load. Therefore, Some Hard Cutoffs Are Often Imposed On The Number Of Edges That Each Peer Can Have, Restricting Feasible Overlays To Limited Or Truncated Scale-free Networks. In This Paper, We Analyze The Growth Of Such Limited Scale-free Networks And Propose Two Different Algorithms For Constructing Perfect Scale-free Overlay Network Topologies At Each Instance Of Such Growth. Our Algorithms Allow The User To Define The Desired Scale-free Exponent (gamma). They Also Induce Low Communication Overhead When The Network Grows From One Size To Another. Using Extensive Simulations, We Demonstrate That These Algorithms Indeed Generate Perfect Scale-free Networks (at Each Step Of Network Growth) That Provide Better Search Efficiency In Various Search Algorithms Than The Networks Generated By The Existing Solutions.
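The hard-cutoff growth that the paper analyzes can be illustrated with a simple preferential-attachment sketch in Python; the function below (limited_scale_free, with its parameters m and cutoff) is a hypothetical toy showing how a degree cap truncates the scale-free structure, not the authors' construction algorithm for perfect scale-free overlays.

```python
import random

def limited_scale_free(n, m=3, cutoff=20, seed=1):
    """Grow a graph by preferential attachment, but never let a peer's
    degree exceed `cutoff` (a limited/truncated scale-free overlay)."""
    random.seed(seed)
    edges = set()
    degree = {i: 0 for i in range(n)}
    # Start from a small clique of m+1 nodes.
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            edges.add((i, j)); degree[i] += 1; degree[j] += 1
    for new in range(m + 1, n):
        # Candidate targets: existing nodes whose degree is still below the cutoff.
        candidates = [v for v in range(new) if degree[v] < cutoff]
        weights = [degree[v] + 1 for v in candidates]   # richer nodes attract more edges
        chosen = set()
        while len(chosen) < min(m, len(candidates)):
            chosen.add(random.choices(candidates, weights=weights, k=1)[0])
        for v in chosen:
            edges.add((v, new)); degree[v] += 1; degree[new] += 1
    return edges, degree

edges, degree = limited_scale_free(500)
print("max degree:", max(degree.values()))   # bounded by the cutoff, unlike a pure power law
```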
Routing Protocols Play A Vital Role In The Modern Internet Era. A Routing Protocol Determines How The Routers Communicate With Each Other To Forward Packets By Taking The Optimal Path From A Source Node To A Destination Node. In This Paper We Explore Two Eminent Protocols, Namely The Enhanced Interior Gateway Routing Protocol (EIGRP) And The Open Shortest Path First (OSPF) Protocol. Evaluation Of These Routing Protocols Is Performed Based On Quantitative Metrics Such As Convergence Time, Jitter, End-to-end Delay, Throughput And Packet Loss Through Simulated Network Models. The Evaluation Results Show That EIGRP Provides Better Performance Than OSPF For Real-time Applications. Through Network Simulations We Show That EIGRP Is Less CPU Intensive Than OSPF And Hence Uses Less System Power. Therefore EIGRP Is A Greener Routing Protocol And Provides For Greener Internetworking.
Message Authentication Is One Of The Most Effective Ways To Thwart Unauthorized And Corrupted Messages From Being Forwarded In Wireless Sensor Networks (WSNs). For This Reason, Many Message Authentication Schemes Have Been Developed, Based On Either Symmetric-key Cryptosystems Or Public-key Cryptosystems. Most Of Them, However, Have The Limitations Of High Computational And Communication Overhead In Addition To Lack Of Scalability And Resilience To Node Compromise Attacks. To Address These Issues, A Polynomial-based Scheme Was Recently Introduced. However, This Scheme And Its Extensions All Have The Weakness Of A Built-in Threshold Determined By The Degree Of The Polynomial: When The Number Of Messages Transmitted Is Larger Than This Threshold, The Adversary Can Fully Recover The Polynomial. In This Paper, We Propose A Scalable Authentication Scheme Based On Elliptic Curve Cryptography (ECC). While Enabling Intermediate Nodes Authentication, Our Proposed Scheme Allows Any Node To Transmit An Unlimited Number Of Messages Without Suffering The Threshold Problem. In Addition, Our Scheme Can Also Provide Message Source Privacy. Both Theoretical Analysis And Simulation Results Demonstrate That Our Proposed Scheme Is More Efficient Than The Polynomial-based Approach In Terms Of Computational And Communication Overhead Under Comparable Security Levels While Providing Message Source Privacy.
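The scheme above additionally provides message source privacy, which is not reproduced here; the following sketch only shows the basic ECC primitive that such schemes build on, signing a message at the source and verifying it at an intermediate node, using ECDSA from the pyca/cryptography library.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Key pair generated for a source node; the public key is distributed to verifiers.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b"sensor reading: 23.5C, node 17, seq 1042"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Any intermediate node holding the public key can check the message en route,
# and there is no polynomial-degree threshold limiting how many messages can be signed.
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("message accepted")
except InvalidSignature:
    print("message dropped: authentication failed")
```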
Wireless Sensor Networks (WSNs) Are Increasingly Used In Many Applications, Such As Volcano And Fire Monitoring, Urban Sensing, And Perimeter Surveillance. In A Large WSN, In-network Data Aggregation (i.e., Combining Partial Results At Intermediate Nodes During Message Routing) Significantly Reduces The Amount Of Communication Overhead And Energy Consumption. The Research Community Proposed A Loss-resilient Aggregation Framework Called Synopsis Diffusion, Which Uses Duplicate-insensitive Algorithms On Top Of Multipath Routing Schemes To Accurately Compute Aggregates (e.g., Predicate Count Or Sum). However, This Aggregation Framework Does Not Address The Problem Of False Subaggregate Values Contributed By Compromised Nodes. This Attack May Cause Large Errors In The Aggregate Computed At The Base Station, Which Is The Root Node In The Aggregation Hierarchy. In This Paper, We Make The Synopsis Diffusion Approach Secure Against The Above Attack Launched By Compromised Nodes. In Particular, We Present An Algorithm To Enable The Base Station To Securely Compute Predicate Count Or Sum Even In The Presence Of Such An Attack. Our Attack-resilient Computation Algorithm Computes The True Aggregate By Filtering Out The Contributions Of Compromised Nodes In The Aggregation Hierarchy. Extensive Analysis And Simulation Study Show That Our Algorithm Outperforms Other Existing Approaches.
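A duplicate-insensitive count synopsis of the Flajolet-Martin kind used by synopsis diffusion can be sketched as follows; the bitmap size and the single-synopsis estimator are simplifications (real deployments average many synopses for accuracy), and the function names are illustrative.

```python
import hashlib

SYNOPSIS_BITS = 32
PHI = 0.77351  # Flajolet-Martin correction constant

def _rho(item: str) -> int:
    """Index of the lowest-order 1 bit of the item's hash (geometrically distributed)."""
    h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")
    i = 0
    while i < SYNOPSIS_BITS - 1 and not (h >> i) & 1:
        i += 1
    return i

def local_synopsis(items):
    """Bitmap contributed by one sensor node (one bit set per item)."""
    s = 0
    for it in items:
        s |= 1 << _rho(it)
    return s

def merge(a, b):
    """Duplicate-insensitive combination: bitwise OR, safe over multipath routing."""
    return a | b

def estimate_count(synopsis):
    r = 0
    while (synopsis >> r) & 1:   # position of the lowest zero bit
        r += 1
    return (2 ** r) / PHI

# Two nodes report overlapping items; OR-merging does not double count them.
s1 = local_synopsis([f"reading-{i}" for i in range(0, 600)])
s2 = local_synopsis([f"reading-{i}" for i in range(300, 900)])
print("estimated distinct count:", round(estimate_count(merge(s1, s2))))
```

The attack-resilient computation in the paper then filters out false sub-synopses contributed by compromised nodes before this kind of estimate is taken at the base station.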
Mobile Nodes In Military Environments Such As A Battlefield Or A Hostile Region Are Likely To Suffer From Intermittent Network Connectivity And Frequent Partitions. Disruption-tolerant Network (DTN) Technologies Are Becoming Successful Solutions That Allow Wireless Devices Carried By Soldiers To Communicate With Each Other And Access The Confidential Information Or Command Reliably By Exploiting External Storage Nodes. Some Of The Most Challenging Issues In This Scenario Are The Enforcement Of Authorization Policies And The Policies Update For Secure Data Retrieval. Ciphertext-policy Attribute-based Encryption (CP-ABE) Is A Promising Cryptographic Solution To The Access Control Issues. However, The Problem Of Applying CP-ABE In Decentralized DTNs Introduces Several Security And Privacy Challenges With Regard To The Attribute Revocation, Key Escrow, And Coordination Of Attributes Issued From Different Authorities. In This Paper, We Propose A Secure Data Retrieval Scheme Using CP-ABE For Decentralized DTNs Where Multiple Key Authorities Manage Their Attributes Independently. We Demonstrate How To Apply The Proposed Mechanism To Securely And Efficiently Manage The Confidential Data Distributed In The Disruption-tolerant Military Network.
Distributed Denial-of-service (DDoS) Attacks Are A Major Security Threat. Controlling And Resolving DDoS Attacks Is Difficult In A Distributed Network. The Primary Problem To Date Is That The Attacks Are Detected Close To The Victim And Hence Cannot Be Resolved Effectively. It Is Essential To Detect Them Early In Order To Protect Vulnerable Resources Or Potential Victims. FireCol Comprises Multiple Intrusion Prevention Systems (IPSs) Located At The Internet Service Provider (ISP) Level. These IPSs Act As Traffic Filters And Pass Information Based On Threshold Values. FireCol Is Demonstrated To Be A Scalable System With Low Overhead.
In This Paper, We Study User Profile Matching With Privacy-preservation In Mobile Social Networks (MSNs) And Introduce A Family Of Novel Profile Matching Protocols. We First Propose An Explicit Comparison-based Profile Matching Protocol (eCPM) Which Runs Between Two Parties, An Initiator And A Responder. The ECPM Enables The Initiator To Obtain The Comparison-based Matching Result About A Specified Attribute In Their Profiles, While Preventing Their Attribute Values From Disclosure. We Then Propose An Implicit Comparison-based Profile Matching Protocol (iCPM) Which Allows The Initiator To Directly Obtain Some Messages Instead Of The Comparison Result From The Responder. The Messages Unrelated To User Profile Can Be Divided Into Multiple Categories By The Responder. The Initiator Implicitly Chooses The Interested Category Which Is Unknown To The Responder. Two Messages In Each Category Are Prepared By The Responder, And Only One Message Can Be Obtained By The Initiator According To The Comparison Result On A Single Attribute. We Further Generalize The ICPM To An Implicit Predicate-based Profile Matching Protocol (iPPM) Which Allows Complex Comparison Criteria Spanning Multiple Attributes. The Anonymity Analysis Shows All These Protocols Achieve The Confidentiality Of User Profiles. In Addition, The ECPM Reveals The Comparison Result To The Initiator And Provides Only Conditional Anonymity; The ICPM And The IPPM Do Not Reveal The Result At All And Provide Full Anonymity. We Analyze The Communication Overhead And The Anonymity Strength Of The Protocols. We Then Present An Enhanced Version Of The ECPM, Called ECPM+, By Combining The ECPM With A Novel Prediction-based Adaptive Pseudonym Change Strategy. The Performance Of The ECPM And The ECPM+ Are Comparatively Studied Through Extensive Trace-based Simulations. Simulation Results Demonstrate That The ECPM+ Achieves Significantly Higher Anonymity Strength With Slightly Larger Number Of Pseudonyms Than The ECPM.
We Investigate An Underlying Mathematical Model And Algorithms For Optimizing The Performance Of A Class Of Distributed Systems Over The Internet. Such A System Consists Of A Large Number Of Clients Who Communicate With Each Other Indirectly Via A Number Of Intermediate Servers. Optimizing The Overall Performance Of Such A System Then Can Be Formulated As A Client-server Assignment Problem Whose Aim Is To Assign The Clients To The Servers In Such A Way As To Satisfy Some Prespecified Requirements On The Communication Cost And Load Balancing. We Show That 1) The Total Communication Load And Load Balancing Are Two Opposing Metrics, And Consequently, Their Tradeoff Is Inherent In This Class Of Distributed Systems; 2) In General, Finding The Optimal Client-server Assignment For Some Prespecified Requirements On The Total Load And Load Balancing Is NP-hard; And Therefore 3) We Propose A Heuristic Via Relaxed Convex Optimization For Finding The Approximate Solution. Our Simulation Results Indicate That The Proposed Algorithm Outperforms Other Heuristics, Including The Popular Normalized Cuts Algorithm.
With The Advent Of The Internet Of Things (IoT) Era, The Proliferation Of Sensors Coupled With The Increasing Usage Of Wireless Spectrum, Especially The ISM Band, Makes It Difficult To Deploy Real-life IoT. Currently, Cognitive Radio Technology Enables Sensors To Transmit Data Packets Over The Licensed Spectrum Bands As Well As The Free ISM Bands. Dynamic Spectrum Access Technology Enables Secondary Users (SUs) To Access Wireless Channel Bands That Are Originally Licensed To Primary Users. Due To The High Dynamics Of Spectrum Availability, It Is Challenging To Design An Efficient Routing Approach For SUs In Cognitive Sensor Networks. We Estimate The Spectrum Availability And Spectrum Quality From The View Of Both The Global Statistical Spectrum Usage And The Local Instant Spectrum Status, And Then Introduce Novel Routing Metrics That Incorporate This Estimation. In Our Routing Metrics, One Retransmission Is Allowed To Restrict The Number Of Reroutings And Thus Improve The Routing Performance. Two Routing Algorithms Based On The Proposed Routing Metrics Are Then Designed. Finally, We Implement Our Routing Algorithms In Extensive Simulations To Evaluate The Routing Performance, And We Find That The Proposed Algorithms Achieve A Significant Performance Improvement Compared With The Reference Algorithm.
Clustering Is A Useful Technique That Organizes A Large Quantity Of Unordered Text Documents Into A Small Number Of Meaningful And Coherent Clusters, Thereby Providing A Basis For Intuitive And Informative Navigation And Browsing Mechanisms. Clustering Methods Have To Assume Some Cluster Relationship Among The Data Objects That They Are Applied On. Similarity Between A Pair Of Objects Can Be Defined Either Explicitly Or Implicitly. The Major Difference Between A Traditional Dissimilarity/similarity Measure And Ours Is That The Former Uses Only A Single Viewpoint, Which Is The Origin, While The Latter Utilizes Many Different Viewpoints, Which Are Objects Assumed To Not Be In The Same Cluster As The Two Objects Being Measured. Using Multiple Viewpoints, A More Informative Assessment Of Similarity Can Be Achieved. Theoretical Analysis And An Empirical Study Are Conducted To Support This Claim. Two Criterion Functions For Document Clustering Are Proposed Based On This New Measure. We Compare Them With Several Well-known Clustering Algorithms That Use Other Popular Similarity Measures On Various Document Collections To Verify The Advantages Of Our Proposal.
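The multi-viewpoint idea can be written down in a few lines of NumPy: instead of measuring the pair of documents from the origin alone, the similarity is averaged over many viewpoint documents assumed to lie outside the pair's cluster. The sketch below is a simplified reading of that measure, not the paper's exact criterion functions.

```python
import numpy as np

def multi_viewpoint_similarity(d_i, d_j, viewpoints):
    """Average over viewpoints d_h of (d_i - d_h) . (d_j - d_h),
    instead of the single origin-based cosine d_i . d_j.
    All document vectors are assumed L2-normalised."""
    diffs_i = d_i - viewpoints          # shape (H, D)
    diffs_j = d_j - viewpoints
    return float(np.mean(np.sum(diffs_i * diffs_j, axis=1)))

rng = np.random.default_rng(0)
docs = rng.random((6, 10))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)

# Compare documents 0 and 1, using documents 2..5 as viewpoints
# (objects assumed not to be in the same cluster as the pair).
print("multi-viewpoint similarity:", multi_viewpoint_similarity(docs[0], docs[1], docs[2:]))
print("single-viewpoint cosine:   ", float(docs[0] @ docs[1]))
```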
In This Paper, We Propose A Secret Group-key Generation Scheme In The Physical Layer, Where An Arbitrary Number Of Multi-antenna Legitimate Nodes (LNs) Exist In A Mesh Topology With A Multi-antenna Passive Eavesdropper. In The First Phase Of The Scheme, Pilot Signals Are Transmitted From Selected Antennas Of All Nodes And Each Node Estimates The Channels Linked To It. In The Second Phase, Each Node Sequentially Broadcasts A Weighted Combination Of The Estimated Channel Information Using Selected Coefficients. The Other LNs Can Obtain The Channel Information Used For Group-key Generation While The Eavesdropper Cannot. Each Node Then Can Generate A Group Key By Quantizing And Encoding The Estimated Channels Into Keys. We Apply Well-known Quantization Schemes, Such As Scalar And Vector Quantization, And Compare Their Performance. To Further Enhance The Key-generation Performance, We Also Show How To Determine The Antennas At Each Node Used For Group-key Generation And The Coefficients Used In The Broadcast Phase. The Simulation Results Verify The Performance Of The Proposed Secret Group-key Generation Scheme Using Various Key-related Metrics. We Also Verify The Practical Robustness Of Our Scheme By Implementing A Testbed Using The Universal Software Radio Peripheral. After Generating A Common Secret Key Among Three Nodes, We Also Test It Using The National Institute Of Standards And Technology Test Suite. The Generated Key Passes The Test And Is Random Enough For Communication Secrecy.
Asymmetric Application Layer DDoS Attacks Using Computationally Intensive HTTP Requests Are An Extremely Dangerous Class Of Attacks Capable Of Taking Down Web Servers With Relatively Few Attacking Connections. These Attacks Consume Limited Network Bandwidth And Are Similar To Legitimate Traffic, Which Makes Their Detection Difficult. Existing Detection Mechanisms For These Attacks Use Indirect Representations Of Actual User Behaviour And Complex Modelling Techniques, Which Leads To A Higher False Positive Rate (FPR) And Longer Detection Time, Which Makes Them Unsuitable For Real Time Use. There Is A Need For Simple, Efficient And Adaptable Detection Mechanisms For Asymmetric DDoS Attacks. In This Work, An Attempt Is Made To Model The Actual Behavioural Dynamics Of Legitimate Users Using A Simple Annotated Probabilistic Timed Automata (PTA) Along With A Suspicion Scoring Mechanism For Differentiating Between Legitimate And Malicious Users. This Allows The Detection Mechanism To Be Extremely Fast And Have A Low FPR. In Addition, The Model Can Incrementally Learn From Run-time Traces, Which Makes It Adaptable And Reduces The FPR Further. Experiments On Public Datasets Reveal That Our Proposed Approach Has A High Detection Rate And Low FPR And Adds Negligible Overhead To The Web Server, Which Makes It Ideal For Real Time Use.
DNA Fingerprinting Can Offer Remarkable Benefits, Especially For Point-of-care Diagnostics, Information Forensics, And Analysis. However, The Pressure To Drive Down Costs Is Likely To Lead To Cheap Untrusted Solutions And A Multitude Of Unprecedented Risks. These Risks Will Especially Emerge At The Frontier Between The Cyberspace And DNA Biology. To Address These Risks, We Perform A Forensic-security Assessment Of A Typical DNA-fingerprinting Flow. We Demonstrate, For The First Time, Benchtop Analysis Of Biochemical-level Vulnerabilities In Flows That Are Based On A Standard Quantification Assay Known As Polymerase Chain Reaction (PCR). After Identifying Potential Vulnerabilities, We Realize Attacks Using Benchtop Techniques To Demonstrate Their Catastrophic Impact On The Outcome Of The DNA Fingerprinting. We Also Propose A Countermeasure, In Which DNA Samples Are Each Uniquely Barcoded (using Synthesized DNA Molecules) In Advance Of PCR Analysis, Thus Demonstrating The Feasibility Of Our Approach Using Benchtop Techniques. We Discuss How Molecular Barcoding Could Be Utilized Within A Cyber-biological Framework To Improve DNA-fingerprinting Security Against A Wide Range Of Threats, Including Sample Forgery. We Also Present A Security Analysis Of The DNA Barcoding Mechanism From A Molecular Biology Perspective.
An Adversary Can Deploy Parasitic Sensor Nodes Into Wireless Sensor Networks To Collect Radio Traffic Distributions And Trace Back Messages To Their Source Nodes. Then, He Can Locate The Monitored Targets Around The Source Nodes With A High Probability. In This Paper, A Source-location Privacy Protection Scheme Based On Anonymity Cloud (SPAC) Is Proposed. We First Design A Light-weight (t,n)-threshold Message Sharing Scheme And Map The Original Message To A Set Of Message Shares Which Are Shorter In Length And Can Be Processed And Delivered With Minimal Energy Consumption. Based On The Shares, The Source Node Constructs An Anonymity Cloud With An Irregular Shape Around Itself To Protect Its Location Privacy. Specifically, An Anonymity Cloud Is A Set Of Active Nodes With Similar Radio Actions That Are Statistically Indistinguishable From Each Other. The Size Of The Cloud Is Controlled By The Preset Number Of Hops That The Shares Can Walk In The Cloud. At The Border Of The Cloud, The Fake Source Nodes Independently Send The Shares To The Sink Node Through Proper Routing Algorithms. Finally, The Original Message Can Be Recovered By The Sink Node Once At Least T Shares Are Received. Simulation Results Demonstrate That SPAC Can Strongly Protect The Source-location Privacy In An Efficient Manner. Moreover, The Message Sharing Mechanism Of SPAC Increases The Confidentiality Of Network Data And Also Brings High Tolerance For The Failures Of Sensor Nodes To The Data Transmission Process.
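A minimal (t,n)-threshold sharing primitive of the Shamir kind, of which the light-weight scheme above is a variant, can be sketched as follows; the field size, block encoding, and function names are illustrative assumptions rather than the SPAC construction itself.

```python
import random

P = 2 ** 127 - 1  # a Mersenne prime, large enough for short message blocks

def make_shares(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):          # Horner evaluation of the random polynomial
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

block = int.from_bytes(b"temp=37C", "big")
shares = make_shares(block, t=3, n=5)           # five shares walk through the cloud
assert recover(random.sample(shares, 3)) == block   # any three suffice at the sink
print("recovered:", recover(shares[:3]).to_bytes(8, "big"))
```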
In Recent Years, There Has Been An Increase In The Number Of Phishing Attacks Targeting People In The Fields Of Defense, Security, And Diplomacy Around The World. In Particular, The Hacking Attack Group Kimsuky Has Been Conducting Phishing Attacks To Collect Key Information From Public Institutions Since 2013. The Main Features Of The Attack Techniques Used By The Kimsuky Attack Group Are To Conceal Malicious Code In Phishing E-mails Disguised As Normal E-mails To Spread A Document File With Security Vulnerabilities, Such As A Hangul File, Or To Induce Interest Through A Social Engineering Attack Technique To Collect Account Information. This Study Classified The Types Of Phishing E-mail Attacks Into Spoofed E-mails, E-mail Body Vulnerability Use, And Attached File Spoofing, And Detailed Analyses Of Their Attack Methods, Such As Commonality And Characteristic Analyses, Were Performed To Analyze The Profile Of This Phishing E-mail Attack Group. Based On The Results, The Purpose Of The Attacking Group Was Determined To Be Intelligence Gathering Because It Focused On Phishing Attacks Targeting Korean Diplomatic And Defense Public Institutions And Related Foreign Institutions. Finally, A Countermeasure That Can Be Used By Mail Service Providers And Mail Users To Respond To Phishing E-mails Is Suggested.
Quantum Key Distribution (QKD) Has Demonstrated A Great Potential To Provide Future-proofed Security, Especially For 5G And Beyond Communications. As The Critical Infrastructure For 5G And Beyond Communications, Optical Networks Can Offer A Cost-effective Solution To QKD Deployment Utilizing The Existing Fiber Resources. In Particular, Measurement-device-independent QKD Shows Its Ability To Extend The Secure Distance With The Aid Of An Untrusted Relay. Compared To The Trusted Relay, The Untrusted Relay Has Obviously Better Security, Since It Does Not Rely On Any Assumption On Measurement And Even Allows To Be Accessed By An Eavesdropper. However, It Cannot Extend QKD To An Arbitrary Distance Like The Trusted Relay, Such That It Is Expected To Be Combined With The Trusted Relay For Large-scale QKD Deployment. In This Work, We Study The Hybrid Trusted/untrusted Relay Based QKD Deployment Over Optical Backbone Networks And Focus On Cost Optimization During The Deployment Phase. A New Network Architecture Of Hybrid Trusted/untrusted Relay Based QKD Over Optical Backbone Networks Is Described, Where The Node Structures Of The Trusted Relay And Untrusted Relay Are Elaborated. The Corresponding Network, Cost, And Security Models Are Formulated. To Optimize The Deployment Cost, An Integer Linear Programming Model And A Heuristic Algorithm Are Designed. Numerical Simulations Verify That The Cost-optimized Design Can Significantly Outperform The Benchmark Algorithm In Terms Of Deployment Cost And Security Level. Up To 25% Cost Saving Can Be Achieved By Deploying QKD With The Hybrid Trusted/untrusted Relay Scheme While Keeping Much Higher Security Level Relative To The Conventional Point-to-point QKD Protocols That Are Only With The Trusted Relays.
Social Networks Pervade Nearly Every Aspect Of Human Life. The Vast Amount Of Sensitive Data That Users Produce And Exchange On These Platforms Calls For Serious Concern About Information And Privacy Protection. Moreover, The Users' Statistical Usage Data Collected For Analysis Is Also Subject To Leakage And Therefore Requires Protection. Although Privacy-preserving Methods Are Available, They Are Not Scalable Or Tend To Underperform When It Comes To Data Utility And Efficiency. Thus, In This Paper, We Develop A Novel Approach For Anonymizing Users' Statistical Data. The Data Is Collected From The Users' Behavior Patterns In Social Networks. In Particular, We Collect Specific Points From The Users' Behavior Patterns Rather Than The Entire Data Stream To Be Fed Into Local Differential Privacy (LDP). After The Statistical Data Has Been Anonymized, We Reconstruct The Original Points Using Nonlinear Techniques. The Results From This Approach Provide Significant Accuracy When Compared With The Straightforward Anonymization Approach.
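As a rough sketch of feeding sampled behavior points into an LDP mechanism, the snippet below perturbs each clamped point with Laplace noise calibrated to its range; the paper's actual mechanism and its nonlinear reconstruction step are not reproduced here, and the range and epsilon are illustrative.

```python
import numpy as np

def perturb_points(points, epsilon, lower, upper):
    """Clamp each sampled point to [lower, upper] and add Laplace noise
    calibrated to the range, so each individual report satisfies epsilon-LDP."""
    points = np.clip(np.asarray(points, dtype=float), lower, upper)
    scale = (upper - lower) / epsilon          # sensitivity of a bounded value
    noise = np.random.default_rng(0).laplace(0.0, scale, size=points.shape)
    return points + noise

# A user's behaviour pattern sampled at a few characteristic points
# (e.g., daily activity counts) instead of the full stream.
sampled = [12, 30, 7, 45]
noisy = perturb_points(sampled, epsilon=1.0, lower=0, upper=50)
print(np.round(noisy, 2))
```

Sampling only characteristic points keeps the number of noisy reports per user small, which is what lets the downstream reconstruction recover useful statistics despite the per-point noise.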
Today's Organizations Raise An Increasing Need For Information Sharing Via On-demand Access. Information Brokering Systems (IBSs) Have Been Proposed To Connect Large-scale Loosely Federated Data Sources Via A Brokering Overlay, In Which The Brokers Make Routing Decisions To Direct Client Queries To The Requested Data Servers. Many Existing IBSs Assume That Brokers Are Trusted And Thus Only Adopt Server-side Access Control For Data Confidentiality. However, Privacy Of Data Location And Data Consumer Can Still Be Inferred From Metadata (such As Query And Access Control Rules) Exchanged Within The IBS, But Little Attention Has Been Put On Its Protection. In This Paper, We Propose A Novel Approach To Preserve Privacy Of Multiple Stakeholders Involved In The Information Brokering Process. We Are Among The First To Formally Define Two Privacy Attacks, Namely Attribute-correlation Attack And Inference Attack, And Propose Two Countermeasure Schemes Automaton Segmentation And Query Segment Encryption To Securely Share The Routing Decision-making Responsibility Among A Selected Set Of Brokering Servers. With Comprehensive Security Analysis And Experimental Results, We Show That Our Approach Seamlessly Integrates Security Enforcement With Query Routing To Provide System-wide Security With Insignificant Overhead.
In Cloud Service Over Crowd-sensing Data, The Data Owner (DO) Publishes The Sensing Data Through The Cloud Server, So That The User Can Obtain The Information Of Interest On Demand. But The Cloud Service Providers (CSP) Are Often Untrustworthy. Privacy And Security Concerns Emerge Over The Authenticity Of The Query Answer And The Leakage Of The DO's Identity. To Solve These Issues, Many Researchers Study Query Answer Authentication Schemes For Cloud Service Systems. The Traditional Technique Is Providing The DO's Signature For The Published Data. But The Signature Would Always Reveal The DO's Identity. To Deal With This Disadvantage, This Paper Proposes A Cooperative Query Answer Authentication Scheme, Based On The Ring Signature, The Merkle Hash Tree (MHT) And The Non-repudiable Service Protocol. Through The Cooperation Among The Entities In The Cloud Service System, The Proposed Scheme Can Not Only Verify The Query Answer, But Also Protect The DO's Identity. First, It Picks The Internal Nodes Of The MHT To Sign, As Well As The Root Node. Thus, The Verification Computation Complexity Can Be Significantly Reduced From O(log_2 N) To O(log_2 N^0.5) In The Best Case. Then, It Improves An Existing Ring Signature To Sign The Selected Nodes. Furthermore, The Proposed Scheme Employs The Non-repudiation Protocol During The Transmission Of The Query Answer And Verification Object To Protect Trading Behavior Between The CSP And Users. The Security And Performance Analysis Prove The Security And Feasibility Of The Proposed Scheme. Extensive Experimental Results Demonstrate Its Superiority In Terms Of Verification Efficiency And Communication Overhead.
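The Merkle-hash-tree part of such a scheme can be illustrated with a small sketch that builds a tree over the published records, produces a membership proof for one record, and verifies it against the root; the padding rule and helper names are illustrative, and the ring-signature and non-repudiation components are omitted.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of the tree, from leaf hashes up to the root."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof(levels, index):
    """Sibling hashes (with their side) needed to verify one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = index ^ 1                    # sibling index at this level
        path.append((level[sib], "left" if sib < index else "right"))
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sib, side in path:
        node = h(sib + node) if side == "left" else h(node + sib)
    return node == root

records = [b"rec-%d" % i for i in range(7)]
levels = build_tree(records)
root = levels[-1][0]                       # the value the DO would sign
p = proof(levels, 5)
print(verify(records[5], p, root))         # True
print(verify(b"tampered", p, root))        # False
```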
Fraudulent Behaviors In Google Play, The Most Popular Android App Market, Fuel Search Rank Abuse And Malware Proliferation. To Identify Malware, Previous Work Has Focused On App Executable And Permission Analysis. In This Paper, We Introduce FairPlay, A Novel System That Discovers And Leverages Traces Left Behind By Fraudsters, To Detect Both Malware And Apps Subjected To Search Rank Fraud. FairPlay Correlates Review Activities And Uniquely Combines Detected Review Relations With Linguistic And Behavioral Signals Gleaned From Google Play App Data (87 K Apps, 2.9 M Reviews, And 2.4M Reviewers, Collected Over Half A Year), In Order To Identify Suspicious Apps. FairPlay Achieves Over 95 Percent Accuracy In Classifying Gold Standard Datasets Of Malware, Fraudulent And Legitimate Apps. We Show That 75 Percent Of The Identified Malware Apps Engage In Search Rank Fraud. FairPlay Discovers Hundreds Of Fraudulent Apps That Currently Evade Google Bouncer's Detection Technology. FairPlay Also Helped The Discovery Of More Than 1,000 Reviews, Reported For 193 Apps, That Reveal A New Type Of “coercive” Review Campaign: Users Are Harassed Into Writing Positive Reviews, And Install And Review Other Apps.
With 20 Million Installs A Day , Third-party Apps Are A Major Reason For The Popularity And Addictiveness Of Facebook. Unfortunately, Hackers Have Realized The Potential Of Using Apps For Spreading Malware And Spam. The Problem Is Already Significant, As We Find That At Least 13% Of Apps In Our Dataset Are Malicious. So Far, The Research Community Has Focused On Detecting Malicious Posts And Campaigns. In This Paper, We Ask The Question: Given A Facebook Application, Can We Determine If It Is Malicious? Our Key Contribution Is In Developing FRAppE-Facebook's Rigorous Application Evaluator-arguably The First Tool Focused On Detecting Malicious Apps On Facebook. To Develop FRAppE, We Use Information Gathered By Observing The Posting Behavior Of 111K Facebook Apps Seen Across 2.2 Million Users On Facebook. First, We Identify A Set Of Features That Help Us Distinguish Malicious Apps From Benign Ones. For Example, We Find That Malicious Apps Often Share Names With Other Apps, And They Typically Request Fewer Permissions Than Benign Apps. Second, Leveraging These Distinguishing Features, We Show That FRAppE Can Detect Malicious Apps With 99.5% Accuracy, With No False Positives And A High True Positive Rate (95.9%). Finally, We Explore The Ecosystem Of Malicious Facebook Apps And Identify Mechanisms That These Apps Use To Propagate. Interestingly, We Find That Many Apps Collude And Support Each Other; In Our Dataset, We Find 1584 Apps Enabling The Viral Propagation Of 3723 Other Apps Through Their Posts. Long Term, We See FRAppE As A Step Toward Creating An Independent Watchdog For App Assessment And Ranking, So As To Warn Facebook Users Before Installing Apps
Malwares Are Becoming Increasingly Stealthy; More And More Malwares Are Using Cryptographic Algorithms To Protect Themselves From Being Analyzed. To Enable More Effective Malware Analysis, Forensics And Reverse Engineering, We Have Developed CipherXRay, A Novel Binary Analysis Framework That Can Automatically Identify And Recover The Cryptographic Operations And Transient Secrets From The Execution Of Potentially Obfuscated Binary Executables. Based On The Avalanche Effect Of Cryptographic Functions, CipherXRay Is Able To Accurately Pinpoint The Boundary Of A Cryptographic Operation And Recover Truly Transient Cryptographic Secrets That Only Exist In Memory For One Instant In Between Multiple Nested Cryptographic Operations. Existing Mechanisms Do Not Fully Detect Such Malwares. Our Proposed Method, CipherXRay, Can Further Identify Certain Operation Modes Of The Identified Block Cipher And Tell Whether The Identified Block Cipher Operation Is Encryption Or Decryption In Certain Cases.
Malwares Are Becoming Increasingly Stealthy, More And More Malwares Are Using Cryptographic Algorithms (e.g., Packing, Encrypting C&C Communication) To Protect Themselves From Being Analyzed. The Use Of Cryptographic Algorithms And Truly Transient Cryptographic Secrets Inside The Malware Binary Imposes A Key Obstacle To Effective Malware Analysis And Defense. To Enable More Effective Malware Analysis, Forensics, And Reverse Engineering, We Have Developed CipherXRay - A Novel Binary Analysis Framework That Can Automatically Identify And Recover The Cryptographic Operations And Transient Secrets From The Execution Of Potentially Obfuscated Binary Executables. Based On The Avalanche Effect Of Cryptographic Functions, CipherXRay Is Able To Accurately Pinpoint The Boundary Of Cryptographic Operation And Recover Truly Transient Cryptographic Secrets That Only Exist In Memory For One Instant In Between Multiple Nested Cryptographic Operations. CipherXRay Can Further Identify Certain Operation Modes (e.g., ECB, CBC, CFB) Of The Identified Block Cipher And Tell Whether The Identified Block Cipher Operation Is Encryption Or Decryption In Certain Cases. We Have Empirically Validated CipherXRay With OpenSSL, Popular Password Safe KeePassX, The Ciphers Used By Malware Stuxnet, Kraken And Agobot, And A Number Of Third Party Softwares With Built-in Compression And Checksum. CipherXRay Is Able To Identify Various Cryptographic Operations And Recover Cryptographic Secrets That Exist In Memory For Only A Few Microseconds. Our Results Demonstrate That Current Software Implementations Of Cryptographic Algorithms Hardly Achieve Any Secrecy If Their Execution Can Be Monitored.
Cloud Security Is One Of Most Important Issues That Has Attracted A Lot Of Research And Development Effort In Past Few Years. Particularly, Attackers Can Explore Vulnerabilities Of A Cloud System And Compromise Virtual Machines To Deploy Further Large-scale Distributed Denial-of-Service (DDoS). DDoS Attacks Usually Involve Early Stage Actions Such As Multi-step Exploitation, Low Frequency Vulnerability Scanning, And Compromising Identified Vulnerable Virtual Machines As Zombies, And Finally DDoS Attacks Through The Compromised Zombies. Within The Cloud System, Especially The Infrastructure-as-a-Service (IaaS) Clouds, The Detection Of Zombie Exploration Attacks Is Extremely Difficult. This Is Because Cloud Users May Install Vulnerable Applications On Their Virtual Machines. To Prevent Vulnerable Virtual Machines From Being Compromised In The Cloud, We Propose A Multi-phase Distributed Vulnerability Detection, Measurement, And Countermeasure Selection Mechanism Called NICE, Which Is Built On Attack Graph Based Analytical Models And Reconfigurable Virtual Network-based Countermeasures. The Proposed Framework Leverages OpenFlow Network Programming APIs To Build A Monitor And Control Plane Over Distributed Programmable Virtual Switches In Order To Significantly Improve Attack Detection And Mitigate Attack Consequences. The System And Security Evaluations Demonstrate The Efficiency And Effectiveness Of The Proposed Solution.
Every Customer Has Confidential Information That Needs To Be Maintained In A Secure Manner. Online Banking Can Be Considered One Of The Great Tools Supporting Many Customers As Well As Banks And Financial Institutions In Carrying Out Banking Activities. Every Day, Banks Need To Perform Many Activities Related To Users, Which Requires A Huge Infrastructure With Many Staff Members. The Online Banking System Allows Banks To Perform These Activities In A Simpler Way Without Involving Employees; Examples Include Online Banking, Mobile Banking And ATM Banking. However, The Banking System Needs To Be More Secure And Reliable Because Each And Every Task Performed Is Related To Customers' Money. In Particular, Authentication And Validation Of User Access Is A Major Task In Banking Systems. Usable Security Has Unique Usability Challenges Because The Need For Security Often Means That Standard Human-computer-interaction Approaches Cannot Be Directly Applied. An Important Usability Goal For Authentication Systems Is To Support Users In Selecting Better Passwords. Users Often Create Memorable Passwords That Are Easy For Attackers To Guess, But Strong System-assigned Passwords Are Difficult For Users To Remember. Researchers Have Therefore Turned To Alternative Methods In Which Graphical Pictures Are Used As Passwords. The Major Goal Of This Work Is To Reduce Guessing Attacks As Well As To Encourage Users To Select More Random Passwords That Are Difficult To Guess. Well-known Security Threats Such As Brute Force Attacks And Dictionary Attacks Can Be Successfully Mitigated Using This Method.
Distributed Systems Without Trusted Identities Are Particularly Vulnerable To Sybil Attacks, Where An Adversary Creates Multiple Bogus Identities To Compromise The Running Of The System. This Paper Presents SybilDefender, A Sybil Defense Mechanism That Leverages The Network Topologies To Defend Against Sybil Attacks In Social Networks. Based On Performing A Limited Number Of Random Walks Within The Social Graphs, SybilDefender Is Efficient And Scalable To Large Social Networks. Our Experiments On Two 3,000,000 Node Real-world Social Topologies Show That SybilDefender Outperforms The State Of The Art By More Than 10 Times In Both Accuracy And Running Time. SybilDefender Can Effectively Identify The Sybil Nodes And Detect The Sybil Community Around A Sybil Node, Even When The Number Of Sybil Nodes Introduced By Each Attack Edge Is Close To The Theoretically Detectable Lower Bound. Besides, We Propose Two Approaches To Limiting The Number Of Attack Edges In Online Social Networks. The Survey Results Of Our Facebook Application Show That The Assumption Made By Previous Work That All The Relationships In Social Networks Are Trusted Does Not Apply To Online Social Networks, And It Is Feasible To Limit The Number Of Attack Edges In Online Social Networks By Relationship Rating.
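The random-walk primitive underlying this kind of detection can be sketched as follows: many short walks are launched from a suspect node and visit frequencies are counted, since walks started inside a Sybil region tend to stay trapped behind the few attack edges. The toy graph and parameters below are illustrative only, not SybilDefender's full identification and community-detection procedure.

```python
import random
from collections import Counter

def walk_frequencies(graph, start, num_walks=1000, walk_len=10, seed=7):
    """Run many short random walks from `start` and count how often each node
    is visited; a Sybil region concentrates visits on its few internal nodes."""
    rng = random.Random(seed)
    freq = Counter()
    for _ in range(num_walks):
        node = start
        for _ in range(walk_len):
            node = rng.choice(graph[node])
            freq[node] += 1
    return freq

# Toy graph: a well-connected honest region {0..4} and a Sybil region {5,6,7}
# attached through the single attack edge (4, 5).
graph = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 3], 3: [0, 2, 4],
         4: [1, 3, 5], 5: [4, 6, 7], 6: [5, 7], 7: [5, 6]}
print("from honest node:", walk_frequencies(graph, 0).most_common(3))
print("from sybil node: ", walk_frequencies(graph, 6).most_common(3))
```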
Adaptively-secure Key Exchange Allows The Establishment Of Secure Channels Even In The Presence Of An Adversary That Can Corrupt Parties Adaptively And Obtain Their Internal States. In This Paper, We Give A Formal Definition Of Contributory Protocols And Define An Ideal Functionality For Password-based Group Key Exchange With Explicit Authentication And Contributiveness In The UC Framework. As With Previous Definitions In The Same Framework, Our Definitions Do Not Assume Any Particular Distribution On Passwords Or Independence Between Passwords Of Different Parties. We Also Provide The First Steps Toward Realizing This Functionality In The Above Strong Adaptive Setting By Analyzing An Efficient Existing Protocol And Showing That It Realizes The Ideal Functionality In The Random-oracle And Ideal-cipher Models Based On The CDH Assumption.
Passwords Are The Most Commonly Used Means Of Authentication As Passwords Are Very Convenient For Users, Easier To Implement And User Friendly. Password-based Systems Suffer From Two Types Of Attacks: I) Offline Attacks Ii) Online Attacks. Eavesdropping On The Communication Channel And Recording The Conversations Taking Place On It Is An Example Of An Offline Attack. Brute Force And Dictionary Attacks Are The Two Types Of Online Attacks Which Are Widespread And Increasing. Enabling Convenient Login For Legitimate Users While Preventing Such Attacks Is A Difficult Problem. The Proposed Protocol, Called Password Guessing Resistant Protocol (PGRP), Helps In Preventing Such Attacks And Provides A Pleasant Login Experience For Legitimate Users. PGRP Limits The Number Of Login Attempts For Unknown Users To One, And Then Challenges The Unknown User With An Automated Turing Test (ATT). There Are Different Kinds Of ATT Tests Such As CAPTCHA (Completely Automated Public Turing Test To Tell Computers And Humans Apart), Security Questions Etc. In This System, A Distorted Text-based CAPTCHA Is Used. If The ATT Test Is Correctly Answered, The User Is Granted Access; Else The User Is Denied Access. The Proposed Algorithm Analyzes The Efficiency Of PGRP Based On Three Conditions: I) Number Of Successful Login Attempts Ii) Number Of Failed Login Attempts With Invalid Password Iii) Number Of Failed Login Attempts With Invalid Password And ATT Test. PGRP Log Files Are Used As Data Sets. The Analysis Helps In Determining The Efficiency Of The PGRP Protocol.
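A heavily simplified version of the login-throttling decision (one free attempt for unknown username/source pairs, a small quota for pairs with a recent successful login, then an ATT such as a CAPTCHA) might look like the following sketch; the class name, quotas, and time window are illustrative assumptions, not the full PGRP specification.

```python
import time

class LoginThrottle:
    """Reduced PGRP-like policy: decide whether an ATT must be solved
    before the password is even checked."""

    def __init__(self, known_quota=3, unknown_quota=1, window=3600 * 24):
        self.known_quota, self.unknown_quota, self.window = known_quota, unknown_quota, window
        self.successes = {}   # (user, source) -> last successful login time
        self.failures = {}    # (user, source) -> failed-attempt count

    def requires_att(self, user, source):
        key = (user, source)
        known = time.time() - self.successes.get(key, 0) < self.window
        quota = self.known_quota if known else self.unknown_quota
        return self.failures.get(key, 0) >= quota

    def record(self, user, source, success):
        key = (user, source)
        if success:
            self.successes[key] = time.time()
            self.failures[key] = 0
        else:
            self.failures[key] = self.failures.get(key, 0) + 1

throttle = LoginThrottle()
throttle.record("alice", "10.0.0.5", success=False)
print(throttle.requires_att("alice", "10.0.0.5"))   # True: unknown pair used its one attempt
```

Legitimate users rarely hit the quota, so they almost never see an ATT, while automated guessing from unknown sources is forced through one after a single failure.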
In This Paper, We Introduce A Novel Roadside Unit (RSU)-aided Message Authentication Scheme Named RAISE, Which Makes RSUs Responsible For Verifying The Authenticity Of Messages Sent From Vehicles And For Notifying The Results Back To Vehicles. In Addition, RAISE Adopts The K-anonymity Property For Preserving User Privacy, Where A Message Cannot Be Associated With A Common Vehicle. In The Case Of The Absence Of An RSU, We Further Propose A Supplementary Scheme, Where Vehicles Would Cooperatively Work To Probabilistically Verify Only A Small Percentage Of These Message Signatures Based On Their Own Computing Capacity. Extensive Simulations Are Conducted To Validate The Proposed Scheme. It Is Demonstrated That RAISE Yields A Much Better Performance Than Previously Reported Counterparts In Terms Of Message Loss Ratio (LR) And Delay.
A Mobile Ad Hoc Network Is A Network That Does Not Rely On Fixed Infrastructure. It Is A Collection Of Independent Mobile Nodes That Can Communicate With Each Other Via Radio Waves. These Networks Are Fully Distributed And Can Work At Any Place Without The Help Of Any Fixed Infrastructure Such As Access Points Or Base Stations. Since The Communication Medium In An Ad Hoc Network Is Air, It Is Easy For An Attacker To Capture Information From The Air Medium Using A Sniffing Software Tool. One Attack That Causes Considerable Destruction To A Network Is The Sybil Attack, In Which A Single Node Presents Multiple Fake Identities To Other Nodes In The Network. In This Research, We Implemented A Sybil Attack Detection Technique That Detects The Sybil Nodes In The Network And Also Prevents The Attack. The Simulation Tool Used For The Implementation Is NS2.35.
With The Popularity Of Voting Systems In Cyberspace, There Is Growing Evidence That Current Voting Systems Can Be Manipulated By Fake Votes. This Problem Has Attracted Many Researchers Working On Guarding Voting Systems In Two Areas: Relieving The Effect Of Dishonest Votes By Evaluating The Trust Of Voters, And Limiting The Resources That Can Be Used By Attackers, Such As The Number Of Voters And The Number Of Votes. In This Paper, We Argue That Powering Voting Systems With Trust And Limiting Attack Resources Are Not Enough. We Present A Novel Attack Named As Reputation Trap (RepTrap). Our Case Study And Experiments Show That This New Attack Needs Much Less Resources To Manipulate The Voting Systems And Has A Much Higher Success Rate Compared With Existing Attacks. We Further Identify The Reasons Behind This Attack And Propose Two Defense Schemes Accordingly. In The First Scheme, We Hide Correlation Knowledge From Attackers To Reduce Their Chance To Affect The Honest Voters. In The Second Scheme, We Introduce Robustness-of-evidence, A New Metric, In Trust Calculation To Reduce Their Effect On Honest Voters. We Conduct Extensive Experiments To Validate Our Approach. The Results Show That Our Defense Schemes Not Only Can Reduce The Success Rate Of Attacks But Also Significantly Increase The Amount Of Resources An Adversary Needs To Launch A Successful Attack.
Data Compression Is An Important Part Of Information Security Because Compressed Data Is More Secure And Easier To Handle. Effective Data Compression Technology Creates Efficient, Secure, And Easy-to-connect Data. There Are Two Types Of Compression Algorithm Techniques, Lossy And Lossless. These Technologies Can Be Used On Any Data Format Such As Text, Audio, Video, Or Image Files. The Main Objective Of This Study Was To Reduce The Physical Space Occupied On Various Storage Media And To Reduce The Time Needed To Send Data Over The Internet, With A Complete Guarantee Of Encrypting This Data And Hiding It From Intruders. Two Techniques Are Implemented: With Data Loss (Lossy) And Without Data Loss (Lossless). The Proposed Paper Presents A Hybrid Data Compression Algorithm In Which The Input Data Is Encrypted By The RSA (Rivest-Shamir-Adleman) Cryptography Method To Enhance The Security Level, And It Can Be Used In Executing Lossy And Lossless Compression-based Steganography Methods. This Technique Can Be Used To Decrease The Amount Of Transmitted Data, Aiding Fast Transmission Over Slow Internet Connections Or Taking Up Less Space On Different Storage Media. The Plain Text Is Compressed By The Huffman Coding Algorithm, And The Cover Image Is Compressed By A Discrete Wavelet Transform (DWT)-based Method That Compacts The Cover Image Through Lossy Compression In Order To Reduce The Cover Image's Dimensions.
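The Huffman-coding stage mentioned above can be sketched in a few lines of Python; this is a generic textbook implementation for illustration, not the paper's full hybrid pipeline (the RSA encryption, DWT image compression, and steganographic embedding are omitted).

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Build a Huffman code table for the characters of `text`."""
    freq = Counter(text)
    if len(freq) == 1:                              # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: [weight, tie-breaker, [[symbol, code], ...]]
    heap = [[w, i, [[ch, ""]]] for i, (ch, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2]:
            pair[1] = "0" + pair[1]                 # left branch gets a 0 prefix
        for pair in hi[2]:
            pair[1] = "1" + pair[1]                 # right branch gets a 1 prefix
        heapq.heappush(heap, [lo[0] + hi[0], tie, lo[2] + hi[2]])
        tie += 1
    return {ch: code for ch, code in heap[0][2]}

def encode(text, codes):
    return "".join(codes[ch] for ch in text)

def decode(bits, codes):
    rev = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in rev:
            out.append(rev[cur]); cur = ""
    return "".join(out)

msg = "the plain text is compressed before encryption"
codes = huffman_codes(msg)
bits = encode(msg, codes)
assert decode(bits, codes) == msg
print(f"{len(msg) * 8} bits -> {len(bits)} bits")
```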
Fingerprint Biometrics Is The Most Widely Deployed And Publicized Biometric For Identification. This Is Largely Due To Its Easy And Cost-effective Integration In Existing And Upcoming Technologies. The Integration Of Biometrics With An Electronic Voting Machine Undoubtedly Requires Less Manpower, Saves Much Time Of Voters And Personnel, Eliminates Rigging, And Ensures Accuracy, Transparency And Fast Results In An Election. In This Paper, A Framework For An Electronic Voting Machine Based On Biometric Verification Is Proposed And Implemented. The Proposed Framework Ensures Secured Identification And Authentication Processes For The Voters And Candidates Through The Use Of Fingerprint Biometrics. It Deals With The Design And Development Of An Electronic Voting System Using Fingerprint Recognition. It Allows The Voter To Scan Their Fingerprint, Which Is Then Matched With An Already Saved Image Within A Database. Upon Completion Of Voter Identification, Voters Are Allowed To Cast Their Vote Using An LCD And Keypad Interface. The Cast Vote Will Be Updated Immediately, Making The System Fast, Efficient And Fraud-free.
Today, It Is Almost Impossible To Implement Teaching Processes Without Using Information And Communication Technologies (ICT), Especially In Higher Education.
This Method Is An Alternative To Cryptography Techniques And Provides More Security Than Existing Techniques. Steganography Is The Practice Of Hiding Private Or Sensitive Information Within Something That Appears To Be Nothing Out Of The Usual. Steganography Is Often Confused With Cryptology Because The Two Are Similar In That They Are Both Used To Protect Important Information. The Difference Between The Two Is That Steganography Involves Hiding Information So That It Appears No Information Is Hidden At All. If A Person Views The Object In Which The Information Is Hidden, He Or She Will Have No Idea That There Is Any Hidden Information And Therefore Will Not Attempt To Decrypt It. Steganography In The Modern-day Sense Of The Word Usually Refers To Information Or A File That Has Been Concealed Inside A Digital Picture, Video Or Audio File. What Steganography Essentially Does Is Exploit Human Perception: Human Senses Are Not Trained To Look For Files That Have Information Hidden Inside Them, Although There Are Programs Available That Can Perform What Is Called Steganalysis (Detecting The Use Of Steganography). The Most Common Use Of Steganography Is To Hide A File Inside Another File. When Information Or A File Is Hidden Inside A Carrier File, The Data Is Usually Encrypted With A Password.
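A minimal example of the most common steganographic trick, hiding a payload in the least-significant bits of a cover file's pixel values, is sketched below; the flat bytearray standing in for an image and the 32-bit length prefix are illustrative choices, and a real system would encrypt the payload first as the passage notes.

```python
def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide `payload` in the least-significant bits of a cover image's pixel
    values (given here as a flat bytearray of 8-bit intensities). A 32-bit
    length prefix is stored first so the extractor knows when to stop."""
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for this payload")
    stego = bytearray(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit      # overwrite only the LSB
    return stego

def extract(pixels: bytearray) -> bytes:
    def read_bytes(start, count):
        out = bytearray()
        for b in range(count):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (pixels[start + b * 8 + i] & 1)
            out.append(byte)
        return bytes(out)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length)               # payload bits start after the prefix

cover = bytearray(range(256)) * 40              # stand-in for grayscale pixel data
stego = embed(cover, b"meet at dawn")
print(extract(stego))                           # b'meet at dawn'
```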
We Present An Efficient And Noise Robust Template Matching Method Based On Asymmetric Correlation (ASC). The ASC Similarity Function Is Invariant To Affine Illumination Changes And Robust To Extreme Noise. It Correlates The Given Non-normalized Template With A Normalized Version Of Each Image Window In The Frequency Domain. We Show That This Asymmetric Normalization Is More Robust To Noise Than Other Cross Correlation Variants, Such As The Correlation Coefficient. Direct Computation Of ASC Is Very Slow, As A DFT Needs To Be Calculated For Each Image Window Independently. To Make The Template Matching Efficient, We Develop A Much Faster Algorithm, Which Carries Out A Prediction Step In Linear Time And Then Computes DFTs For Only A Few Promising Candidate Windows. We Extend The Proposed Template Matching Scheme To Deal With Partial Occlusion And Spatially Varying Light Change. Experimental Results Demonstrate The Robustness Of The Proposed ASC Similarity Measure Compared To State-of-the-art Template Matching Methods.
Cloud Computing Has Become A Popular Buzzword And Has Been Widely Used To Refer To Different Technologies, Services, And Concepts. With The Use Of Cloud Computing, We Aim To Deliver Location-based Video Information Efficiently To Mobile Users. A Location-based Service (LBS) Is An Information Service With A Number Of Uses In Social Networking Today As An Entertainment Service, Which Is Accessible With Mobile Devices Through The Mobile Network And Which Uses Information On The Geographical Position Of The Mobile Device. While The Demand For Video Streaming Services Over Mobile Networks Has Been Soaring Over These Years, The Wireless Link Capacity Cannot Practically Keep Up With The Growing Traffic Load. In This Project, We Propose And Discuss An Adaptive Video Streaming Framework To Improve The Quality Of Video Services In A Location-based Manner. Through This System, Video Content Can Be Segmented By An Automatic Shot/scene Retrieval Technology And Stored In The Database (DB). On The Client Side, Two Threads Are Formed: One For Video Streaming And Another For Location Searching And Updating. For Security Purposes, We Use A Self-destruction Algorithm Whereby The Uploaded Video Is Destroyed Automatically After The User-defined Time. Thus, Location-based Video Information Can Be Streamed Efficiently And Securely By Mobile Users.
Educational Process Mining Is One Of The Research Domains That Utilizes Students' Learning Behavior To Match Students' Actual Courses Taken And The Designed Curriculum. While Most Works Attempt To Deal With The Case Perspective (i.e., Traces Of The Cases), The Temporal Case Perspective Has Not Been Discussed. The Temporal Case Perspective Aims To Understand The Temporal Patterns Of Cases (e.g., Students' Learning Behavior In A Semester). This Study Proposes A Modified Cluster Evolution Analysis, Called Profile-based Cluster Evolution Analysis, For Students' Learning Behavior Based On Profiles. The Results Show Three Salient Features: (1) Cluster Generation; (2) Within-cluster Generation; And (3) Time-based Between-cluster Generation. The Cluster Evolution Phase Modifies The Existing Cluster Evolution Analysis With A Dynamic Profiler. The Model Was Tested On Actual Educational Data Of An Information Systems Department In Indonesia. The Results Showed The Learning Behavior Of Students Who Graduated On Time, The Learning Behavior Of Students Who Graduated Late, And The Learning Behavior Of Students Who Dropped Out. Changes In Students' Learning Behavior Were Identified By Observing The Migration Of Students From Cluster To Cluster For Each Semester. Furthermore, There Were Distinct Learning Behavior Migration Patterns For Each Category Of Students Based On Their Performance. The Migration Patterns Can Help Academic Stakeholders Understand Which Students Are Likely To Drop Out, Graduate On Time, Or Graduate Late. These Results Can Be Used As Recommendations To Academic Stakeholders For Curriculum Assessment And Development And Dropout Prevention.
Due To Its Cost Efficiency The Controller Area Network (CAN) Is Still The Most Wide-spread In-vehicle Bus And The Numerous Reported Attacks Demonstrate The Urgency In Designing New Security Solutions For CAN. In This Work We Propose An Intrusion Detection Mechanism That Takes Advantage Of Bloom Filtering To Test Frame Periodicity Based On Message Identifiers And Parts Of The Data-field Which Facilitates Detection Of Potential Replay Or Modification Attacks. This Proves To Be An Effective Approach Since Most Of The Traffic From In-vehicle Buses Is Cyclic In Nature And The Format Of The Data-field Is Fixed Due To Rigid Signal Allocation. Bloom Filters Provide An Efficient Time-memory Tradeoff Which Is Beneficial For The Constrained Resources Of Automotive Grade Controllers. We Test The Correctness Of Our Approach And Obtain Good Results On An Industry-standard CANoe Based Simulation For A J1939 Commercial-vehicle Bus And Also On CAN-FD Traces Obtained From A Real-world High-end Vehicle. The Proposed Filtering Mechanism Is Straight-forward To Adapt For Any Other Time-triggered In-vehicle Bus, E.g., FlexRay, Since It Is Built On Time-driven Characteristics.
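As A Minimal Sketch Of The Underlying Idea (Not The Paper's Exact Construction), The Python Code Below Stores Legitimate Combinations Of CAN Identifier, Data-field Prefix, And Arrival Slot Within The Expected Period In A Bloom Filter, Then Flags Frames Whose Combination Was Never Seen During Training. The Hash Construction, Filter Size, Slot Width, And The frame_key Helper Are Illustrative Assumptions.

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits=4096, num_hashes=4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def contains(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def frame_key(can_id: int, data: bytes, timestamp_ms: int, period_ms: int) -> bytes:
    # Key combines the identifier, a fixed data-field prefix, and the
    # arrival slot within the expected period (illustrative choice).
    slot = (timestamp_ms % period_ms) // 10
    return can_id.to_bytes(4, "big") + data[:2] + slot.to_bytes(2, "big")

if __name__ == "__main__":
    bf = BloomFilter()
    # Training phase: learn legitimate (ID, data prefix, slot) combinations.
    bf.add(frame_key(0x1A0, b"\x01\x02\x03", 1000, 100))
    # Detection phase: a replayed frame arriving off-schedule is flagged.
    suspicious = not bf.contains(frame_key(0x1A0, b"\x01\x02\x03", 1057, 100))
    print("potential replay/modification" if suspicious else "ok")
```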
The Complexity And Dynamics Of The Manufacturing Environment Are Growing Due To The Shift Of Manufacturing Demand From Mass Production To Mass Customization, Which Requires Variable Product Types, Small Lot Sizes, And A Short Lead-time To Market. Current Automated Manufacturing Systems Are Suitable For Mass Production. To Cope With These Changes In The Manufacturing Environment, The Paper Proposes A Model And Technologies For Developing A Smart Cyber-physical Manufacturing System (Smart-CPMS). The Transformation Of Actual Manufacturing Systems Into The Smart-CPMS Is Considered The Next Generation Of Manufacturing Development In Industry 4.0. The Smart-CPMS Has Advanced Characteristics Inspired By Biology, Such As Self-organization, Self-diagnosis, And Self-healing. These Characteristics Ensure That The Smart-CPMS Is Able To Adapt To Continuously Changing Manufacturing Requirements. The Model Of The Smart-CPMS Is Inherited From The Organization Of Living Systems In Biology And Nature. Consequently, In The Smart-CPMS, Each Resource On The Shop Floor, Such As Machines, Robots, And Transporters, Is An Autonomous Entity, Namely A Cyber-physical System (CPS), Which Is Equipped With Cognitive Capabilities Such As Perception, Reasoning, Learning, And Cooperation. The Smart-CPMS Adapts To Changes In The Manufacturing Environment Through Interaction Among CPSs Without External Intervention. The CPS Implementation Uses Cognitive Agent Technology. The Internet Of Things (IoT) With Wireless Networks, Radio Frequency Identification (RFID), And Sensor Networks Is Used As The Information And Communication Technology (ICT) Infrastructure For Carrying Out The Smart-CPMS.
The Cascading Of Sensitive Information Such As Private Contents And Rumors Is A Severe Issue In Online Social Networks. One Approach To Limiting The Cascading Of Sensitive Information Is To Constrain The Diffusion Among Social Network Users. However, Diffusion-constraining Measures Limit The Diffusion Of Non-sensitive Information As Well, Resulting In A Poor User Experience. To Tackle This Issue, In This Paper We Study The Problem Of How To Minimize Sensitive Information Diffusion While Preserving The Diffusion Of Non-sensitive Information, And Formulate It As A Constrained Minimization Problem In Which The Intention Of Preserving Non-sensitive Information Diffusion Is Characterized As The Constraint. We Study The Problem Over The Fully-known Network, Where The Diffusion Abilities Of All Users Are Known, And Over The Semi-known Network, Where The Diffusion Abilities Of Some Users Remain Unknown In Advance. By Modeling The Sensitive Information Diffusion Size As The Reward Of A Bandit, We Utilize The Bandit Framework To Jointly Design Solutions With Polynomial Complexity In Both Scenarios. Moreover, The Unknown Diffusion Abilities Over The Semi-known Network Make It Difficult To Quantify The Information Diffusion Size In Algorithm Design. To Address This, We Propose To Learn The Unknown Diffusion Abilities From The Diffusion Process In Real Time And Then Adaptively Apply The Diffusion-constraining Measures Based On The Learned Diffusion Abilities, Relying On The Bandit Framework. Extensive Experiments On Real And Synthetic Datasets Demonstrate That Our Solutions Can Effectively Constrain The Sensitive Information Diffusion And Incur About 40 Percent Less Diffusion Loss Of Non-sensitive Information Compared With Four Baseline Algorithms.
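To Illustrate The Bandit-style Learning Of Unknown Diffusion Abilities In The Semi-known Setting, The Following Is A Minimal Sketch (Not The Paper's Algorithm) Of A UCB-based Loop That, In Each Round, Constrains The Users With The Largest Upper Confidence Bound On Their Estimated Sensitive-diffusion Ability. The simulate_spread Callback Is A Hypothetical Stand-in For The Observed Diffusion Feedback, And The Budget And Confidence Term Are Assumptions.

```python
import math
import random

def ucb_constrain(num_users, rounds, budget, simulate_spread):
    """Each round, constrain the `budget` users with the largest upper
    confidence bound on their (unknown) sensitive-diffusion ability.

    simulate_spread(user) must return the observed sensitive diffusion
    attributable to that user in [0, 1]; here it is a hypothetical oracle.
    """
    counts = [0] * num_users
    means = [0.0] * num_users
    for t in range(1, rounds + 1):
        ucb = [
            means[u] + math.sqrt(2 * math.log(t) / counts[u]) if counts[u] else float("inf")
            for u in range(num_users)
        ]
        chosen = sorted(range(num_users), key=lambda u: ucb[u], reverse=True)[:budget]
        for u in chosen:
            reward = simulate_spread(u)          # observed diffusion feedback
            counts[u] += 1
            means[u] += (reward - means[u]) / counts[u]
    return means

if __name__ == "__main__":
    true_ability = [random.random() for _ in range(20)]
    est = ucb_constrain(20, 500, 3, lambda u: random.random() < true_ability[u])
    # Compare the estimated and true highest-ability users.
    print(max(range(20), key=lambda u: est[u]), max(range(20), key=lambda u: true_ability[u]))
```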
Cryptography Is Essential For Computer And Network Security. When Cryptosystems Are Deployed In Computing Or Communication Systems, It Is Extremely Critical To Protect The Cryptographic Keys. In Practice, Keys Are Loaded Into Memory As Plaintext During Cryptographic Computations. Therefore, The Keys Are Subject To Memory Disclosure Attacks That Read Unauthorized Data From RAM. Such Attacks Can Be Performed Through Software Exploitation, Such As OpenSSL Heartbleed, Even When The Integrity Of The Victim System's Binaries Is Maintained. They Can Also Be Carried Out Through Physical Methods, Such As Cold-boot Attacks, Even If The System Is Free Of Software Vulnerabilities. This Paper Presents Mimosa, Which Protects RSA Private Keys Against Both Software-based And Physical Memory Disclosure Attacks. Mimosa Uses Hardware Transactional Memory (HTM) To Ensure That (a) Whenever A Malicious Thread Other Than Mimosa Attempts To Read The Plaintext Private Key, The Transaction Aborts And All Sensitive Data Are Automatically Cleared By Hardware, Due To The Strong Atomicity Guarantee Of HTM; And (b) All Sensitive Data, Including Private Keys And Intermediate States, Appear As Plaintext Only Within CPU-bound Caches And Are Never Loaded Into RAM Chips. To The Best Of Our Knowledge, Mimosa Is The First Solution To Use Transactional Memory To Protect Sensitive Data Against Memory Attacks. However, The Fragility Of TSX Transactions Introduces Extra Cache-clogging Denial-of-service (DoS) Threats, And Attackers Could Sharply Degrade Performance Through Concurrent Memory-intensive Tasks. To Mitigate The DoS Threats, We Further Partition An RSA Private-key Computation Into Multiple Transactional Parts By Analyzing The Distribution Of Aborts, While (sensitive) Intermediate Results Are Still Protected Across Transactional Parts. Through Extensive Experiments, We Show That Mimosa Effectively Protects Cryptographic Keys Against Attacks That Attempt To Read Sensitive Data In Memory.
Network Traffic Analysis Has Been Increasingly Used In Various Applications To Either Protect Or Threaten People, Information, And Systems. Website Fingerprinting Is A Passive Traffic Analysis Attack That Threatens Web Navigation Privacy. It Is A Set Of Techniques Used To Discover Patterns In The Sequence Of Network Packets Generated While A User Accesses Different Websites. Internet Users (such As Online Activists Or Journalists) May Wish To Hide Their Identity And Online Activity To Protect Their Privacy. Typically, An Anonymity Network Is Utilized For This Purpose. Anonymity Networks Such As Tor (The Onion Router) Provide Layers Of Data Encryption, Which Poses A Challenge To Traffic Analysis Techniques. Although Various Defenses Have Been Proposed To Counteract This Passive Attack, They Have Been Penetrated By New Attacks That Proved The Ineffectiveness And/or Impracticality Of Such Defenses. In This Work, We Introduce A Novel Defense Algorithm To Counteract Website Fingerprinting Attacks. The Proposed Defense Obfuscates Original Website Traffic Patterns Through The Use Of Double Sampling And Mathematical Optimization Techniques To Deform Packet Sequences And Destroy The Traffic Flow Dependency Characteristics Used By Attackers To Identify Websites. We Evaluate Our Defense Against State-of-the-art Studies And Show Its Effectiveness With Minimal Overhead And Zero Added Transmission Delay For The Real Traffic.
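For Intuition Only, The Sketch Below Shows A Much Simpler Trace-deformation Idea Than The Paper's Double-sampling And Optimization Defense: Packet Sizes Are Padded To A Fixed Multiple And Dummy Packets Are Injected At Randomly Sampled Positions, Which Blurs The Size And Ordering Patterns That Fingerprinting Classifiers Rely On. The Padding Unit, Dummy Rate, And Sign Convention For Direction Are Illustrative Assumptions.

```python
import random

def deform_trace(sizes, pad_to=512, dummy_rate=0.3, seed=None):
    """Obfuscate a packet-size sequence by (a) padding every packet up to a
    multiple of `pad_to` bytes and (b) randomly sampling positions at which
    dummy packets are injected. Simplified stand-in for an optimization-based
    website fingerprinting defense.
    """
    rng = random.Random(seed)
    out = []
    for s in sizes:
        padded = ((abs(s) + pad_to - 1) // pad_to) * pad_to
        out.append(padded if s >= 0 else -padded)     # keep direction sign
        if rng.random() < dummy_rate:
            out.append(pad_to * rng.choice([1, -1]))  # dummy packet
    return out

if __name__ == "__main__":
    original = [120, -60, 980, 1460, -52, 333]        # +: outgoing, -: incoming
    print(deform_trace(original, seed=7))
```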
This Paper Addresses The Co-design Problem Of A Fault Detection Filter And Controller For A Network-based Unmanned Surface Vehicle (USV) System Subject To Communication Delays, External Disturbance, Faults, And Aperiodic Denial-of-service (DoS) Jamming Attacks. First, An Event-triggered Communication Scheme Is Proposed To Enhance The Efficiency Of Network Resource Utilization While Counteracting The Impact Of Aperiodic DoS Attacks On The USV Control System Performance. Second, An Event-based Switched USV Control System Is Presented To Account For The Simultaneous Presence Of Communication Delays, Disturbance, Faults, And DoS Jamming Attacks. Third, By Using The Piecewise Lyapunov Functional (PLF) Approach, Criteria For Exponential Stability Analysis And Co-design Of The Desired Observer-based Fault Detection Filter And Event-triggered Controller Are Derived And Expressed In Terms Of Linear Matrix Inequalities (LMIs). Finally, Simulation Results Verify The Effectiveness Of The Proposed Co-design Method. The Results Show That This Method Not Only Ensures The Safe And Stable Operation Of The USV But Also Reduces The Amount Of Data Transmissions.
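The Event-triggered Communication Scheme Mentioned Above Typically Transmits A Newly Sampled State Only When A State-dependent Threshold Is Violated. A Common Form Of Such A Triggering Condition, Given Here As A Representative Sketch Rather Than The Exact Condition Used In The Paper, Is:

\[
\big[x(i_k h) - x(t_k h)\big]^{\top} \Omega \,\big[x(i_k h) - x(t_k h)\big] \;>\; \sigma\, x^{\top}(i_k h)\, \Omega\, x(i_k h),
\]

Where h Is The Sampling Period, t_k h Is The Latest Transmission Instant, i_k h Is The Current Sampling Instant, \(\Omega \succ 0\) Is A Weighting Matrix, And \(\sigma \in (0, 1)\) Is The Triggering Threshold; A Larger \(\sigma\) Yields Fewer Transmissions At The Cost Of Control Performance.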
Wireless Ad Hoc Networks Are Widely Useful In Locations Where The Existing Infrastructure Is Difficult To Use, Especially During Situations Such As Floods, Earthquakes, And Other Natural Or Man-made Calamities. Lack Of Centralized Management And Absence Of Secure Boundaries Make These Networks Vulnerable To Various Types Of Attacks. Moreover, The Mobile Nodes Used In These Networks Have Limited Computational Capability, Memory, And Battery Backup. A Flooding-based Denial-of-service (DoS) Attack, Which Leads To A Denial-of-sleep Attack, Targets The Mobile Node's Constrained Resources And Results In Excess Consumption Of Battery Backup. In A SYN Flooding-based DoS Attack, The Attacker Sends A Large Number Of Spoofed SYN Packets That Not Only Overflow The Target Buffer But Also Create Network Congestion. The Present Article Is Divided Into Three Parts: 1) Mathematical Modeling Of SYN Traffic In The Network Using Bayesian Inference; 2) Proving The Equivalence Of The Bayesian Inference With The Exponentially Weighted Moving Average; And 3) Developing An Efficient Algorithm For The Detection Of SYN Flooding Attacks Using Bayesian Inference. Based On A Comprehensive Evaluation Using Mathematical Modeling And Simulation, The Proposed Method Can Successfully Defend Against Any Type Of Flooding-based DoS Attack In Wireless Ad Hoc Networks With Higher Detection Accuracy And An Extremely Low False Detection Rate.
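To Illustrate The Exponentially Weighted Moving Average View Of Such A Detector, Below Is A Minimal Python Sketch (Not The Paper's Algorithm): An EWMA Baseline Of Per-interval SYN Counts Is Maintained, And An Interval Is Flagged When Its Count Exceeds The Baseline By A Multiple Of The Smoothed Deviation. The Smoothing Factor, Threshold Multiplier, And Warm-up Length Are Illustrative Assumptions, And The Paper's Bayesian Derivation Yields An Update Of Equivalent Form.

```python
def detect_syn_flood(syn_counts, alpha=0.2, k=3.0, warmup=5):
    """Flag intervals whose SYN count exceeds the EWMA baseline by k times
    the EWMA of the absolute deviation. alpha, k, warmup are illustrative.
    Returns indices of flagged intervals.
    """
    mean = None
    dev = 0.0
    alerts = []
    for i, c in enumerate(syn_counts):
        if mean is None:
            mean = float(c)
            continue
        if i >= warmup and c > mean + k * max(dev, 1.0):
            alerts.append(i)
        # EWMA updates (same recursive form as the Bayesian posterior-mean update)
        dev = (1 - alpha) * dev + alpha * abs(c - mean)
        mean = (1 - alpha) * mean + alpha * c
    return alerts

if __name__ == "__main__":
    traffic = [10, 12, 9, 11, 10, 13, 11, 300, 280, 12, 10]  # intervals 7-8 are an attack
    print(detect_syn_flood(traffic))  # expected: [7, 8]
```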
As Web Technologies Advance Rapidly, Most People Carry Out Their Transactions On The Web, For Example, Searching For Information, Banking, Shopping, And Managing And Overseeing Business Exchanges. Web Applications Have Become Part Of Many People's Daily Activities. Threats To Web Applications Have Grown Considerably: Even As The Number Of Vulnerabilities Is Reduced, The Number Of Threats Continues To Increase. The Structured Query Language Injection Attack (SQLIA) Is One Of The Most Serious Threats To Web Applications. Input Validation Vulnerabilities Are The Main Cause Of SQL Injection Attacks On The Web. SQLIA Is A Malicious Activity That Uses Crafted SQL Statements To Misuse Data-driven Applications. This Vulnerability Allows An Attacker To Supply Crafted Input That Interferes With The Application's Interaction With Back-end Databases. Therefore, The Attacker Can Gain Access To The Database By Inserting, Modifying, Or Deleting Critical Information Without Legitimate Approval. The Paper Presents An Approach That Checks Query Tokens Against A Reserved-words Lexicon To Detect SQLIA. The Approach Consists Of Two Steps: The First Creates The Lexicon, And The Second Tokenizes The Input Query Statement And Checks Each String Token Against The Predefined Lexicon To Prevent SQLIA. In This Paper, Detection And Prevention Techniques For SQL Injection Attacks Are Evaluated Experimentally, And The Results Are Satisfactory.
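The Following Is A Minimal Python Sketch Of The Two-step Idea Described Above, Not The Paper's Implementation: Step One Builds A (Partial, Illustrative) Lexicon Of SQL Reserved Words And Meta-characters, And Step Two Tokenizes The User-supplied Input And Flags It When Any Token Hits The Lexicon. The Lexicon Contents And Tokenization Rules Are Assumptions; Including Words Like "or" And "and" Will Also Flag Some Benign Inputs, So A Real System Needs Additional Context.

```python
import re

# Step 1: a partial, illustrative lexicon of SQL reserved words and
# meta-characters commonly abused in injection payloads.
RESERVED_LEXICON = {
    "select", "union", "insert", "update", "delete", "drop", "or", "and",
    "where", "from", "exec", "--", ";", "'", "\"",
}

def tokenize(user_input: str):
    # Step 2a: split the input into word tokens and SQL meta-characters.
    return re.findall(r"[A-Za-z_]+|--|[;'\"]", user_input)

def is_suspected_sqlia(user_input: str) -> bool:
    # Step 2b: flag the input if any token matches the reserved-word lexicon.
    return any(tok.lower() in RESERVED_LEXICON for tok in tokenize(user_input))

if __name__ == "__main__":
    print(is_suspected_sqlia("john.smith"))           # False
    print(is_suspected_sqlia("' OR '1'='1' --"))      # True
    print(is_suspected_sqlia("1; DROP TABLE users"))  # True
```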