As an industrial application of the Internet of Things (IoT), the Internet of Vehicles is one of the most crucial technologies for intelligent transportation systems, which are a basic element of smart cities.
DNS, one of the most critical elements of the Internet, is among these protocols. It is vulnerable to DDoS attacks mainly because all exchanges in this protocol use the User Datagram Protocol (UDP).
Since smart islands (SIs) with advanced cyber-infrastructure are highly vulnerable to cyber-attacks, increasing attention needs to be paid to their cyber-security. False data injection attacks (FDIAs), by manipulating measurements, may cause wrong state estimation (SE) solutions or interfere with the performance of the central control system. Conventional attack detection methods may fail to detect many cyber-attacks, and system operation can therefore be disrupted. Existing research focuses mainly on detecting cyber-attacks that target DC SE; however, because AC SIs are more widely used, investigating cyber-attack detection in AC systems is more crucial. In this regard, this paper proposes a new mechanism, based on a signal processing technique, to detect the injection of any false data into AC SE. Malicious data injected into the state vectors may cause their temporal and spatial data correlations to deviate from those of ordinary operation. The suggested detection method analyzes temporally consecutive system states via wavelet singular entropy (WSE). In this method, switching surfaces based on the sliding mode controller are decomposed to obtain the singular value matrices and the detail coefficients of the wavelet transform; then, by applying a stochastic process, the expected entropy values are calculated. Indices based on the WSE of the current and voltage switching levels are characterized for cyber-attack detection. The proposed detection method is applied to different case studies to detect cyber-attacks with various types of false data injection, such as amplitude and vector deviation signals. The simulation results confirm the high performance of the proposed FDIA detection method. A significant characteristic of this detection method is its fast detection (10 ms from attack initiation); in addition, the technique achieves an accuracy rate of over 96.5%.
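A minimal sketch of the wavelet-singular-entropy idea described above is given below, assuming PyWavelets for the decomposition; the `db4` wavelet, decomposition level, baseline value, and detection threshold are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
import pywt  # PyWavelets, assumed available for the wavelet decomposition

def wavelet_singular_entropy(signal, wavelet="db4", level=4):
    """Compute a wavelet singular entropy (WSE) value for one signal window.

    Detail coefficients from a multilevel DWT are stacked into a matrix,
    its singular values are normalized, and a Shannon-type entropy is taken.
    This is a generic WSE sketch, not the authors' exact formulation.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    details = coeffs[1:]                        # detail coefficients D_level..D1
    n = min(len(d) for d in details)
    matrix = np.vstack([d[:n] for d in details])
    s = np.linalg.svd(matrix, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def detect_fdia(window, baseline_wse, threshold=0.3):
    """Flag a window as anomalous if its WSE deviates from the expected value.

    `baseline_wse` and `threshold` are hypothetical calibration values that
    would be learned from attack-free operation.
    """
    return abs(wavelet_singular_entropy(window) - baseline_wse) > threshold
```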
With a storage space limit on the sensors, WSNs have drawbacks related to bandwidth and computational capability. These limited resources reduce the amount of data transmitted across the network. For this reason, data aggregation is considered as a new process. Among the present data aggregation algorithms, iterative filtering (IF) algorithms, which provide a trust assessment of the various sources from which the data aggregation is performed, are efficient. Trust assessment assigns weights to sources, in contrast to the simple averaging approach to aggregation, which is susceptible to attacks. Iterative filtering algorithms are stronger than ordinary averaging, but they do not handle recent advanced attacks that exploit false information injected through many compromised nodes. The iterative filters are therefore strengthened by an initial confidence estimate to track new and complex attacks, improving the robustness and accuracy of the IF algorithm. This enhancement, however, is mainly concerned with attacks against the cluster members and not against the aggregator. In this process, if an aggregator is attacked, the current system fails, since the information is ultimately transmitted to the aggregator by the cluster members. This problem can be detected when both cluster members and aggregators are being targeted. It is proposed to choose a new aggregator, according to the remaining maximum energy and the distance to the base station, when an aggregator attack is detected. This also saves time and energy compared with the current scheme's handling of a corrupted aggregator node.
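The basic iterative filtering loop referred to above can be sketched as follows: sensor weights are updated from each sensor's distance to the current aggregate, and the aggregate is recomputed until the weights stabilize. This is a generic IF sketch with a reciprocal discriminant function, not the paper's enhanced version with the initial confidence estimate or aggregator re-selection.

```python
import numpy as np

def iterative_filtering(readings, iterations=20, eps=1e-6):
    """Robust data aggregation by iterative filtering (IF).

    `readings` has shape (num_sensors, num_samples). Each round, a sensor's
    weight is the reciprocal of its squared distance to the current aggregate,
    so sensors reporting false data receive progressively smaller weights.
    """
    num_sensors = readings.shape[0]
    weights = np.ones(num_sensors) / num_sensors   # uniform initial trust
    for _ in range(iterations):
        aggregate = weights @ readings / weights.sum()
        distances = ((readings - aggregate) ** 2).mean(axis=1)
        weights = 1.0 / (distances + eps)          # reciprocal discriminant
    return aggregate, weights / weights.sum()

# toy usage: 5 honest sensors around 20.0, one compromised sensor reporting 35.0
rng = np.random.default_rng(0)
data = rng.normal(20.0, 0.5, size=(6, 10))
data[5] = 35.0
estimate, trust = iterative_filtering(data)   # the outlier gets near-zero trust
```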
Efficient bot detection is a crucial security matter that has been widely explored in the past years. Recent approaches supplant flow-based detection techniques and exploit graph-based features, however incurring scalability issues, with high time and space complexity. Bots exhibit specific communication patterns: they use particular protocols and contact specific domains, hence they can be identified by analyzing their communication with the outside. The way we follow to simplify the communication graph and avoid scalability issues is to look at frequency distributions of protocol attributes capturing the specificity of botnet behaviour. We propose a bot detection technique named BotFP, for Bot FingerPrinting, which acts by (i) characterizing host behaviour with attribute frequency distribution signatures, (ii) learning benign host and bot behaviours through either clustering or supervised machine learning (ML), and (iii) classifying new hosts as either bots or benign, using distances to labelled clusters or relying on an ML algorithm. We validate BotFP on the CTU-13 dataset, which contains 13 scenarios of bot infections, connecting to a command-and-control (C&C) channel and launching malicious actions such as port scanning or distributed denial-of-service (DDoS) attacks. Compared to state-of-the-art techniques, we show that BotFP is more lightweight, can handle large amounts of data, and achieves better accuracy.
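A minimal sketch of the signature-plus-clustering pipeline described above: BotFP concatenates frequency histograms over several protocol attributes, whereas this sketch keeps a single hypothetical attribute (destination ports) and uses k-means as one possible clustering choice, on toy data.

```python
import numpy as np
from sklearn.cluster import KMeans

BIN_EDGES = np.arange(0, 65536 + 4096, 4096)   # hypothetical port-histogram bins

def host_signature(port_values):
    """Frequency-distribution signature of one host (here: destination ports)."""
    hist, _ = np.histogram(port_values, bins=BIN_EDGES, density=True)
    return hist

# toy data: benign hosts spread over many ports, bots hammering one C&C port
rng = np.random.default_rng(1)
benign = [rng.integers(0, 65535, size=200) for _ in range(20)]
bots = [np.full(200, 6667) for _ in range(5)]          # e.g. an IRC-style C&C port
signatures = np.array([host_signature(h) for h in benign + bots])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(signatures)

def classify(port_values):
    """Assign a new host to the nearest learned cluster (bot or benign)."""
    sig = host_signature(np.asarray(port_values)).reshape(1, -1)
    return model.predict(sig)[0]
```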
A fundamental premise of SMS one-time password (OTP) is that the used pseudo-random numbers (PRNs) are uniquely unpredictable for each login session. Hence, the process of generating PRNs is the most critical step in the OTP authentication. An improper implementation of the pseudorandom number generator (PRNG) will result in predictable or even static OTP values, making them vulnerable to potential attacks. In this paper, we present a vulnerability study against PRNGs implemented for Android apps. A key challenge is that PRNGs are typically implemented on the server-side, and thus the source code is not accessible. To resolve this issue, we build an analysis tool, OTP-Lint, to assess implementations of the PRNGs in an automated manner without the source code requirement. Through reverse engineering, OTP-Lint identifies the apps using SMS OTP and triggers each app's login functionality to retrieve OTP values. It further assesses the randomness of the OTP values to identify vulnerable PRNGs. By analyzing 6,431 commercially used Android apps downloaded from Google Play and Tencent Myapp, OTP-Lint identified 399 vulnerable apps that generate predictable OTP values. Even worse, 194 vulnerable apps use the OTP authentication alone without any additional security mechanisms, leading to insecure authentication against guessing attacks and replay attacks.
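The abstract does not spell out OTP-Lint's randomness tests; the sketch below is a hypothetical stand-in showing two simple checks on a collected OTP sequence (repeated values and low digit entropy), with illustrative thresholds.

```python
from collections import Counter
import math

def looks_predictable(otps, min_unique_ratio=0.9, min_bits_per_digit=3.0):
    """Heuristically flag a sequence of collected OTP strings as predictable.

    Two simple checks stand in for a full statistical test battery:
    (1) repeated OTP values across login sessions, and
    (2) low empirical entropy of the digits used.
    Thresholds are illustrative, not the ones used by OTP-Lint.
    """
    if len(set(otps)) / len(otps) < min_unique_ratio:
        return True                                   # many repeated values
    digit_counts = Counter("".join(otps))
    total = sum(digit_counts.values())
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in digit_counts.values())
    return entropy < min_bits_per_digit

# example: a static OTP is flagged immediately
print(looks_predictable(["123456"] * 10))             # True
```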
Spam and phishing emails are very troublesome problems for mailbox users, and many enterprises, departments, and individuals are harmed by them. Moreover, the senders of these malicious emails stay hidden and hold the initiative. Existing mailbox services can only filter and block some malicious mails, which makes it difficult to reverse the users' disadvantage. To solve these problems, we propose a secure mail system using the k-nearest neighbor (KNN) algorithm and an improved long short-term memory (LSTM) algorithm (the Bi-LSTM-Attention algorithm). The KNN classifier can effectively distinguish normal emails, spam, and phishing emails with high accuracy. The Bi-LSTM-Attention classifier groups phishing emails, to some extent, according to the similarity of the malicious mail text coming from the same attacker. By classifying and identifying the sources of malicious emails, we can grasp the characteristics of the attacker, provide material for further research, and improve the passive position of users. Experiments show that the classification accuracy for attack sources reaches 90%, which indicates the value of further research and promotion.
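A minimal sketch of the first stage only (KNN-based mail triage), assuming TF-IDF features and scikit-learn on a toy corpus; the Bi-LSTM-Attention attack-source stage is omitted, and the feature choice is an assumption rather than the paper's exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# toy labelled corpus; a real system would train on a large mail dataset
emails = [
    "meeting agenda attached, see you monday",                     # normal
    "win a free prize now, click here to claim",                   # spam
    "your account is locked, verify your password at this link",   # phishing
]
labels = ["normal", "spam", "phishing"]

# KNN over TF-IDF features, a minimal stand-in for the paper's first stage
knn = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
knn.fit(emails, labels)

print(knn.predict(["please verify your password using the link below"]))
# expected: ['phishing']
```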
Today's organizations raise an increasing need for information sharing via on-demand access. Information brokering systems (IBSs) have been proposed to connect large-scale, loosely federated data sources via a brokering overlay, in which the brokers make routing decisions to direct client queries to the requested data servers. Many existing IBSs assume that brokers are trusted and thus only adopt server-side access control for data confidentiality. However, the privacy of data location and data consumers can still be inferred from metadata (such as queries and access control rules) exchanged within the IBS, yet little attention has been paid to its protection. In this paper, we propose a novel approach to preserve the privacy of the multiple stakeholders involved in the information brokering process. We are among the first to formally define two privacy attacks, namely the attribute-correlation attack and the inference attack, and propose two countermeasure schemes, automaton segmentation and query segment encryption, to securely share the routing decision-making responsibility among a selected set of brokering servers. With comprehensive security analysis and experimental results, we show that our approach seamlessly integrates security enforcement with query routing to provide system-wide security with insignificant overhead.
The link flooding attack (LFA) arises as a new class of distributed denial of service (DDoS) attacks in recent years. By aggregating low-rate, protocol-conforming traffic to congest selected links, LFAs can degrade connectivity or saturate target servers indirectly. Due to the fast proliferation of insecure Internet of Things (IoT) devices, the deployment of botnets is getting easier, which dramatically increases the risk of LFAs. Since the attacking traffic may not reach the victims directly and is usually legitimate, LFAs are extremely difficult to detect and defend against by traditional methods. In this work, we model the interaction between LFA attackers and defenders as a two-person extensive-form Bayesian game with incomplete information. By using action abstraction and the divide-and-conquer method, we analyze the Nash equilibrium on each link, which reveals the rational behavior of attackers and the optimal strategy of defenders. Furthermore, we concretely expound how to adopt local optimal strategies in the Internet-wide scenario. Experimental results show the effectiveness and robustness of our proposed decision-making method in explicit LFA defending scenarios.
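As a toy illustration of the per-link analysis (not the paper's extensive-form equilibrium computation), the sketch below computes a defender's expected-utility best response on one link under a belief about the attacker's type; the payoff numbers and prior are entirely hypothetical.

```python
import numpy as np

# Hypothetical per-link defender payoffs U[attacker_type, defender_action]:
# rows = attacker type (0: does not target this link, 1: targets it),
# columns = defender action (0: keep routing, 1: pre-emptively reroute).
payoff = np.array([[ 0.0, -1.0],    # no attack: rerouting only costs overhead
                   [-5.0, -1.5]])   # attack: congestion loss vs. rerouting cost

def defender_best_response(prior_attack_prob):
    """Expected-utility best response on one link, given a belief about the
    attacker's type; a toy stand-in for the per-link equilibrium analysis."""
    belief = np.array([1.0 - prior_attack_prob, prior_attack_prob])
    expected = belief @ payoff           # expected payoff of each defender action
    return int(expected.argmax()), expected

action, utilities = defender_best_response(prior_attack_prob=0.4)
# With these numbers, rerouting becomes optimal once the attack belief
# exceeds roughly 0.22 (= 1.0 / 4.5).
```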
It is well known that physical-layer group secret-key (GSK) generation techniques allow multiple nodes of a wireless network to synthesize a common secret key, which can subsequently be used to keep their group messages confidential. As one of its salient features, the wireless nodes involved in physical-layer GSK generation extract randomness from a subset of their wireless channels, referred to as the common source of randomness (CSR). Unlike two-user key generation, in GSK generation some nodes must act as facilitators by broadcasting quantized versions of linear combinations of the channel realizations, so as to assist all the nodes in observing a CSR. However, we note that broadcasting a linear combination of channel realizations incurs non-zero leakage of the CSR to an eavesdropper, and moreover, quantizing the linear combination also reduces the overall key rate. Identifying these issues, we propose a practical GSK generation protocol, referred to as the algebraic symmetrically quantized GSK (A-SQGSK) protocol, in a network of three nodes, wherein, due to the quantization of symbols at the facilitator, the other two nodes also quantize their channel realizations and use them appropriately over algebraic rings to generate the keys. First, we prove that the A-SQGSK protocol incurs zero leakage to an eavesdropper. Subsequently, on the CSR provided by the A-SQGSK protocol, we propose a consensus algorithm among the three nodes, called the entropy-maximization error-minimization (EM-EM) algorithm, which maximizes the entropy of the secret key subject to an upper bound on the mismatch rate. We use extensive analysis and simulation results to lay out guidelines for jointly choosing the parameters of the A-SQGSK protocol and the EM-EM algorithm.
Malicious users can attack web applications by exploiting injection vulnerabilities in the source code. This work addresses the challenge of detecting injection vulnerabilities in the server-side code of Java web applications in a scalable and effective way. We propose an integrated approach that seamlessly combines security slicing with hybrid constraint solving; the latter orchestrates automata-based solving with meta-heuristic search. We use static analysis to extract minimal program slices relevant to security from web programs and to generate attack conditions. We then apply hybrid constraint solving to determine the satisfiability of attack conditions and thus detect vulnerabilities. The experimental results, using a benchmark comprising a set of diverse and representative web applications/services as well as security benchmark applications, show that our approach (implemented in the JOACO tool) is significantly more effective at detecting injection vulnerabilities than state-of-the-art approaches, achieving 98 percent recall, without producing any false alarm. We also compared the constraint solving module of our approach with state-of-the-art constraint solvers, using six different benchmark suites; our approach correctly solved the highest number of constraints (665 out of 672), without producing any incorrect result, and was the one with the least number of time-out/failing cases. In both scenarios, the execution time was practically acceptable, given the offline nature of vulnerability detection.
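To illustrate what "determining the satisfiability of an attack condition" means in practice, the sketch below encodes a hypothetical SQL-injection condition as a string constraint and checks it with an off-the-shelf solver (Z3's string theory); JOACO uses its own hybrid constraint solver, so this is only an illustration of the satisfiability-based detection idea, and the query template is made up.

```python
from z3 import String, StringVal, Concat, Contains, Solver, sat

# attack condition sketch: can some user input make the generated SQL
# contain a classic tautology?  The query template is hypothetical.
user_input = String("user_input")
query = Concat(StringVal("SELECT * FROM users WHERE name='"),
               user_input,
               StringVal("'"))

solver = Solver()
solver.add(Contains(query, StringVal("' OR '1'='1")))

if solver.check() == sat:
    # a satisfying assignment is a concrete injection payload
    print(solver.model()[user_input])
```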
This letter proposes to use an intelligent reflecting surface (IRS) as a green jammer to attack a legitimate communication without using any internal energy to generate jamming signals. In particular, the IRS is used to intelligently reflect the signals from the legitimate transmitter to the legitimate receiver (LR) so that the received signals from the direct and reflecting links are added destructively, which thus diminishes the signal-to-interference-plus-noise ratio (SINR) at the LR. To minimize the received signal power at the LR, we consider the joint optimization of the magnitudes of the reflection coefficients and the discrete phase shifts at the IRS. Based on the block coordinate descent, semidefinite relaxation, and Gaussian randomization techniques, the solution can be obtained efficiently. Through simulation results, we show that by using the IRS-based jammer, we can reduce the signal power received at the LR by up to 99%. Interestingly, the performance of the proposed IRS-based jammer is even better than that of conventional active jamming attacks in some scenarios.
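A compact statement of the received-power minimization described above, written with assumed notation (single-antenna transmitter, N reflecting elements, L-level discrete phase shifts), since the letter's exact symbols are not given here:

```latex
% h_d : direct transmitter-to-receiver channel,
% g_n, h_{r,n} : transmitter-to-IRS and IRS-to-receiver channels of element n,
% \beta_n \in [0,1] : reflection magnitude, \theta_n : discrete phase shift.
\begin{align}
\min_{\{\beta_n,\,\theta_n\}} \quad
  & \Big| h_d + \sum_{n=1}^{N} \beta_n e^{j\theta_n}\, g_n\, h_{r,n} \Big|^2 \\
\text{s.t.} \quad
  & 0 \le \beta_n \le 1, \qquad
    \theta_n \in \Big\{ 0, \tfrac{2\pi}{L}, \dots, \tfrac{2\pi(L-1)}{L} \Big\},
    \quad n = 1,\dots,N .
\end{align}
```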
Brute force and dictionary attacks on password-only remote login services are now widespread and ever increasing. Enabling convenient login for legitimate users while preventing such attacks is a difficult problem. Automated Turing tests (ATTs) continue to be an effective, easy-to-deploy approach to identify automated malicious login attempts with reasonable cost of inconvenience to users. In this paper, we discuss the inadequacy of existing and proposed login protocols designed to address large-scale online dictionary attacks (e.g., from a botnet of hundreds of thousands of nodes). We propose a new Password Guessing Resistant Protocol (PGRP), derived upon revisiting prior proposals designed to restrict such attacks. While PGRP limits the total number of login attempts from unknown remote hosts to as low as a single attempt per username, legitimate users in most cases (e.g., when attempts are made from known, frequently-used machines) can make several failed login attempts before being challenged with an ATT. We analyze the performance of PGRP with two real-world data sets and find it more promising than existing proposals.
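The core idea (few free failures for unknown sources, several for known machines) can be sketched as below; the full PGRP state machine also uses cookies, white-lists, and time windows, which are not shown, and the thresholds here are illustrative.

```python
def requires_att(username, source_ip, failed_from_ip, failed_for_user,
                 known_pairs, k_known=3, k_unknown=1):
    """Decide whether a login attempt must first solve an ATT (e.g. a CAPTCHA).

    A simplified PGRP-style rule: attempts from (user, IP) pairs seen in
    previous successful logins get `k_known` free failures; unknown sources
    get at most `k_unknown`.  Thresholds and bookkeeping are illustrative.
    """
    if (username, source_ip) in known_pairs:
        return failed_from_ip.get((username, source_ip), 0) >= k_known
    return failed_for_user.get(username, 0) >= k_unknown

# usage sketch: an unknown host that already failed once for "alice" gets an ATT
print(requires_att("alice", "203.0.113.7",
                   failed_from_ip={}, failed_for_user={"alice": 1},
                   known_pairs=set()))    # True
```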
Emerging computing technologies such as web services, service-oriented architecture, and cloud computing have enabled us to perform business services more efficiently and effectively. However, we still suffer from unintended security leakages caused by unauthorized actions in business services while providing more convenient services to Internet users through such cutting-edge technological growth. Furthermore, designing and managing web access control policies is often error-prone due to the lack of effective analysis mechanisms and tools. In this paper, we present an innovative policy anomaly analysis approach for web access control policies, focusing on eXtensible Access Control Markup Language (XACML) policies. We introduce a policy-based segmentation technique to accurately identify policy anomalies and derive effective anomaly resolutions, along with an intuitive visualization of the analysis results. We also discuss a proof-of-concept implementation of our method, called XAnalyzer, and demonstrate how our approach can efficiently discover and resolve policy anomalies.
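As a simplified illustration of what a conflicting policy segment looks like (not XAnalyzer itself, and ignoring XACML combining algorithms and conditions), the sketch below models rules as attribute sets and reports overlaps with different effects.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """Simplified XACML-style rule: each attribute maps to a set of allowed values."""
    subjects: frozenset
    resources: frozenset
    actions: frozenset
    effect: str            # "Permit" or "Deny"

def overlap(r1, r2):
    """Two rules overlap if some request matches both (non-empty intersection
    on every attribute); overlaps with different effects are conflict segments."""
    return (bool(r1.subjects & r2.subjects) and
            bool(r1.resources & r2.resources) and
            bool(r1.actions & r2.actions))

rules = [
    Rule(frozenset({"doctor"}), frozenset({"record"}),
         frozenset({"read", "write"}), "Permit"),
    Rule(frozenset({"doctor", "nurse"}), frozenset({"record"}),
         frozenset({"write"}), "Deny"),
]

for i, a in enumerate(rules):
    for b in rules[i + 1:]:
        if overlap(a, b) and a.effect != b.effect:
            print("conflicting segment between rules:", a, b)
```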
The open nature of peer-to-peer systems exposes them to malicious activity. Building trust relationships among peers can mitigate the attacks of malicious peers. This paper presents distributed algorithms that enable a peer to reason about the trustworthiness of other peers based on past interactions and recommendations. Peers create their own trust network in their proximity by using locally available information and do not try to learn global trust information. Two contexts of trust, the service and recommendation contexts, are defined to measure trustworthiness in providing services and giving recommendations. Interactions and recommendations are evaluated based on importance, recentness, and peer satisfaction parameters. Additionally, the recommender's trustworthiness and confidence about a recommendation are considered when evaluating recommendations. Simulation experiments on a file sharing application show that the proposed model can mitigate attacks under 16 different malicious behavior models. In the experiments, good peers were able to form trust relationships in their proximity and isolate malicious peers.
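One plausible way to combine importance, recentness, and satisfaction into a service-trust score is sketched below; the exponential decay, half-life, and weighting are illustrative choices, not the paper's exact formulas.

```python
import math
import time

def interaction_trust(interactions, half_life=86_400.0, now=None):
    """Aggregate past interactions into a service-trust score in [0, 1].

    Each interaction is (timestamp, satisfaction in [0, 1], importance > 0).
    Recentness is modelled with exponential decay; the half-life and the
    weighting scheme are illustrative, not the paper's exact formulas.
    """
    now = time.time() if now is None else now
    num = den = 0.0
    for ts, satisfaction, importance in interactions:
        recentness = math.exp(-math.log(2) * (now - ts) / half_life)
        weight = importance * recentness
        num += weight * satisfaction
        den += weight
    return num / den if den else 0.5       # neutral prior when no history

# usage: a recent satisfying download outweighs an old unsatisfying one
history = [(time.time() - 3600, 1.0, 2.0), (time.time() - 7 * 86_400, 0.2, 1.0)]
print(round(interaction_trust(history), 2))
```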
In 2011, Sun et al. proposed a security architecture to ensure unconditional anonymity for honest users and traceability of misbehaving users for network authorities in wireless mesh networks (WMNs). It strives to resolve the conflict between the anonymity and traceability objectives. In this paper, we attack the traceability of Sun et al.'s scheme. Our analysis shows that the trusted authority (TA) cannot trace a misbehaving client (CL) even if it deposits the same ticket twice.
A firewall is a system acting as an interface between a network and one or more external networks. It implements the security policy of the network by deciding which packets to let through based on rules defined by the network administrator. Any error in defining the rules may compromise the system security by letting unwanted traffic pass or blocking desired traffic. Manual definition of rules often results in a set that contains conflicting, redundant, or overshadowed rules, resulting in anomalies in the policy. Manually detecting and resolving these anomalies is a critical but tedious and error-prone task. Existing research on this problem has focused on the analysis and detection of anomalies in firewall policies: previous works define the possible relations between rules, define anomalies in terms of these relations, and present algorithms to detect the anomalies by analyzing the rules. In this paper, we discuss some necessary modifications to the existing definitions of the relations. We present a new algorithm that simultaneously detects and resolves any anomaly present in the policy rules through the necessary reorder and split operations, generating a new anomaly-free rule set, and we present a proof of correctness of the algorithm. We then present an algorithm to merge rules where possible in order to reduce the number of rules and hence increase the efficiency of the firewall.
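To make the rule relations concrete, the sketch below detects one anomaly type (a later rule fully covered by an earlier rule with a different action, i.e. shadowing) on a simplified rule representation; the reorder, split, and merge operations of the proposed algorithm are not shown.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """Simplified filtering rule: protocol, source/destination port ranges, action."""
    proto: str
    src: tuple      # (low, high) source-port range
    dst: tuple      # (low, high) destination-port range
    action: str     # "accept" or "deny"

def covers(outer, inner):
    """True if every packet matched by `inner` is also matched by `outer`."""
    return (outer.proto in (inner.proto, "any") and
            outer.src[0] <= inner.src[0] and inner.src[1] <= outer.src[1] and
            outer.dst[0] <= inner.dst[0] and inner.dst[1] <= outer.dst[1])

def shadowed(rules):
    """Report rules that can never fire because an earlier rule with a
    different action already covers them (one anomaly type among several)."""
    anomalies = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            if covers(earlier, later) and earlier.action != later.action:
                anomalies.append((earlier, later))
    return anomalies

policy = [
    Rule("tcp", (0, 65535), (0, 65535), "deny"),
    Rule("tcp", (1024, 65535), (80, 80), "accept"),   # shadowed by the first rule
]
print(shadowed(policy))
```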
Client-side watermark embedding systems have been proposed as a possible solution for copyright protection in large-scale content distribution environments. In this framework, we propose a new look-up-table-based secure client-side embedding scheme properly designed for the spread transform dither modulation watermarking method. A theoretical analysis of the detector performance under the most known attack models is presented, and the agreement between theoretical and experimental results is verified through several simulations. The experimental results also prove that the advantages of the informed embedding technique in comparison to the spread-spectrum watermarking approach, which are well known in the classical embedding schemes, are preserved in the client-side scenario. The proposed approach permits us to successfully combine the security of client-side embedding with the robustness of informed embedding methods.
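For readers unfamiliar with the underlying primitive, the sketch below shows plain spread transform dither modulation (ST-DM) embedding and detection of a single bit; it illustrates only the basic informed-embedding primitive, not the secure look-up-table-based client-side scheme of the paper, and the step size and block length are illustrative.

```python
import numpy as np

def stdm_embed(x, bit, u, delta):
    """Spread transform dither modulation (ST-DM) embedding of one bit.

    The host block `x` is projected onto the unit-norm spreading vector `u`;
    the projection is quantized with a dithered quantizer of step `delta`
    (dither 0 for bit 0, delta/2 for bit 1) and the correction is spread back.
    """
    z = x @ u
    dither = 0.0 if bit == 0 else delta / 2.0
    z_q = delta * np.round((z - dither) / delta) + dither
    return x + (z_q - z) * u

def stdm_detect(y, u, delta):
    """Decode the embedded bit by minimum distance to the two dither lattices."""
    z = y @ u
    d0 = abs(z - delta * np.round(z / delta))
    d1 = abs(z - (delta * np.round((z - delta / 2) / delta) + delta / 2))
    return 0 if d0 <= d1 else 1

rng = np.random.default_rng(2)
u = rng.normal(size=64); u /= np.linalg.norm(u)
host = rng.normal(size=64)
marked = stdm_embed(host, bit=1, u=u, delta=1.0)
print(stdm_detect(marked + rng.normal(scale=0.05, size=64), u, delta=1.0))  # 1
```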
A fair contract-signing protocol allows two potentially mistrusting parties to exchange their commitments (i.e., digital signatures) to an agreed contract over the Internet in a fair way, so that either each of them obtains the other's signature, or neither party does. Based on the RSA signature scheme, a new digital contract-signing protocol is proposed in this paper. Like the existing RSA-based solutions for the same problem, our protocol is not only fair, but also optimistic, since the trusted third party is involved only in situations where one party is cheating or the communication channel is interrupted. Furthermore, the proposed protocol satisfies a new property, abuse-freeness: if the protocol is executed unsuccessfully, neither of the two parties can show the validity of intermediate results to others. Technical details are provided to analyze the security and performance of the proposed protocol. In summary, we present the first abuse-free fair contract-signing protocol based on the RSA signature and show that it is both secure and efficient.
This paper presents a spam zombie detection system, an online system operating within the network that detects spam and the sender of the spam (the zombie) before the receiver receives it; all the detection work is thus done at the sender level. The system is based on a powerful statistical tool called the sequential probability ratio test (SPRT), which has bounded false positive and false negative error rates. The system is mainly implemented over a private mailing system. It also provides an enhanced security mechanism in which a machine that has been compromised, i.e., has become a zombie, is blocked within the network.
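A minimal sketch of the SPRT decision procedure the system builds on: the log-likelihood ratio of "zombie" versus "normal" is accumulated over observed spam/non-spam indicators and compared with thresholds derived from the target error rates. The per-hypothesis spam probabilities below are illustrative, not the paper's calibrated values.

```python
import math

def sprt(observations, p0=0.2, p1=0.8, alpha=0.01, beta=0.01):
    """Sequential probability ratio test over a stream of spam indicators.

    H0: the sending machine is normal (spam probability p0);
    H1: it is a compromised zombie (spam probability p1).
    alpha/beta bound the false positive/negative rates.
    Returns "zombie", "normal", or "undecided" if the stream ends first.
    """
    upper = math.log((1 - beta) / alpha)     # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))     # accept H0 at or below this
    llr = 0.0
    for is_spam in observations:
        if is_spam:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "zombie"
        if llr <= lower:
            return "normal"
    return "undecided"

# a run of spam messages drives the ratio over the upper threshold quickly
print(sprt([1, 1, 1, 1, 0, 1, 1]))    # "zombie"
```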