Best Project Center | Best Final Year Project Center in Chennai (T. Nagar, Mambalam, Vadapalani, Ashok Nagar, Anna Nagar)


POTHOLE DETECTION- A Modern Pothole Detection Technique Using Deep Learning

Road accident detection and avoidance is a difficult and challenging problem in India, where poor-quality construction materials are often used in road and drainage construction. As a result, roads deteriorate early and potholes appear, causing accidents. According to a 2017 report by the Transport Research Wing of the Ministry of Road Transport and Highways, New Delhi, approximately 4,64,910 accidents happen per year in India. This paper proposes a deep learning-based model that detects potholes early from images and videos, which can reduce the chance of an accident. The model is based on transfer learning with a Faster Region-based Convolutional Neural Network (Faster R-CNN) and Inception-V2. Many existing pothole detection models use accelerometer data (without images or videos) together with machine learning techniques, but few models detect potholes from images alone. The results of this work show that the proposed model outperforms existing pothole detection techniques.

IMAGE ENHANCEMENT- An Effectual Underwater Image Enhancement Using Deep Learning Algorithm

The digital image processing domain grows day by day, introducing novel technologies that assist applications such as robotic activities and underwater network formation. Underwater image processing in particular is considered a crucial task in the image processing industry, because light waves under water do not propagate in the specific, expected range. While image restoration techniques can adequately remove haze from source images, they need several images of the same scene, which prevents their use in real-time systems. To overcome this issue, a deep learning approach is developed, building on the excellent results deep learning has achieved in other image analysis problems such as image colorization and object identification. A convolutional neural network (CNN) is trained to de-haze individual images with restoration quality, after which further image improvement is performed. The proposed approach produces restoration-quality images from a single standard input image, and the network is evaluated on images and features obtained from separate areas to prove its capacity to generalize. The efficiency of the proposed approach is high compared to other existing methods.

PLANT DISEASE DETECTION USING CNN- Disease Detection Of Plant Leaf Using Image Processing And CNN With Preventive Measures Via Flask Web Framework

One of the most important and tedious tasks in agricultural practice is detecting disease on crops. It requires substantial time as well as skilled labor. This paper proposes a smart and efficient technique for crop disease detection that uses computer vision and machine learning. The proposed system detects 20 different diseases of 5 common plants with 93% accuracy.
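
Since the project pairs the CNN with a Flask web framework, a minimal sketch of the serving side is given below; the model file name, class labels, input size, and form field are illustrative assumptions, not the paper's artifacts.

```python
# Minimal Flask inference endpoint for a trained Keras leaf-disease model.
# Assumptions: a saved model "plant_disease_cnn.h5" trained on 224x224 RGB
# images, and a CLASS_NAMES list matching the model's output layer.
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model("plant_disease_cnn.h5")   # hypothetical model file
CLASS_NAMES = ["healthy", "early_blight", "late_blight"]  # example labels

@app.route("/predict", methods=["POST"])
def predict():
    # Expect an image file in the "leaf" form field.
    img = Image.open(io.BytesIO(request.files["leaf"].read())).convert("RGB")
    img = img.resize((224, 224))
    x = np.asarray(img, dtype="float32")[None, ...] / 255.0  # batch of 1
    probs = model.predict(x)[0]
    return jsonify({"disease": CLASS_NAMES[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(debug=True)
```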

DOG BREED CLASSIFICATION- Breakthrough Conventional Based Approach For Dog Breed Classification Using CNN With Transfer Learning

Dogs are among the most common domestic animals. Their large numbers raise several issues, such as population control, reducing outbreaks such as rabies, vaccination control, and legal ownership. At present there are over 180 dog breeds, each with specific characteristics and health conditions, so identifying individual dogs and their breeds is essential for providing appropriate treatment and training. This paper presents classification methods for dog breeds using two image processing approaches: 1) conventional approaches based on the Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG), and 2) a deep learning approach using convolutional neural networks (CNNs) with transfer learning. The results show that the retrained CNN model performs better at classifying dog breeds, achieving 96.75% accuracy compared with 79.25% for the HOG descriptor.
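
To make the conventional baseline concrete, the following sketch extracts HOG and LBP features with scikit-image and trains an SVM on them; the image size, descriptor parameters, and classifier are illustrative assumptions rather than the paper's exact configuration.

```python
# HOG + LBP feature extraction for a conventional dog-breed classifier.
# Assumes grayscale images already resized to 128x128 and integer labels.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def extract_features(gray_img):
    # HOG: gradient-orientation histograms over local cells.
    hog_vec = hog(gray_img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    # LBP: a texture code per pixel, summarized as a histogram.
    lbp = local_binary_pattern(gray_img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])

# X_imgs: list of 128x128 grayscale arrays; y: breed labels.
def train(X_imgs, y):
    X = np.array([extract_features(img) for img in X_imgs])
    return SVC(kernel="rbf").fit(X, y)
```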

PLANT LEAF DISEASE RECOGNITION- Leaf Disease Detection And Classification Based On Machine Learning

Detecting diseases in plants is a significant task in agriculture, on which the economy profoundly depends, and diseases in plants are very common. Recognizing diseases in leaves requires continuous observation of the plants, which takes a lot of human effort and is time-consuming. Some automated strategy is therefore required to monitor the plants. Program-based identification of plant diseases makes it easier to detect damaged leaves, reducing human effort and saving time. The proposed algorithm distinguishes disease in plants and classifies it more accurately than existing techniques.

SCRATCH REMOVAL FROM OLD IMAGES- Research On Repairing Historical Photos Of Damaged Scratches Based On Computer Technology

Historical photographs record the true face of moments in the development of human history; they have authenticity, vividness, and unique value. However, due to various factors, they age and become damaged. With the development of computer technology, restoration techniques are increasingly used for photo restoration and the virtual restoration of cultural relics. This paper first analyzes the principles of repairing photo archives with computer technology, then combines statistics with computer image processing to detect and repair the scratches in historical photographs. It also establishes a model repair framework, which provides a new approach to restoring such historical photos. Experimental results show that the method has a significant restorative effect.

SKIN CANCER CLASSIFICATION- Skin Cancer Classification Using Image Processing And Machine Learning

Skin cancer is one of the most rapidly spreading cancers among the various cancers known to humans. Melanoma is the most dangerous type of skin cancer; it usually appears on the skin surface and then extends deeper into the layers of skin. However, if diagnosed at an early stage, the survival rate of melanoma patients is 96% with simple and economical treatment. Conventional melanoma diagnosis involves expert dermatologists, equipment, and biopsies. To avoid expensive diagnosis and to assist dermatologists, machine learning has proven to provide state-of-the-art solutions for skin cancer detection at an earlier stage with high accuracy. In this paper, a method for skin lesion segmentation and classification as benign or malignant is proposed using image processing and machine learning. A novel contrast stretching method for dermoscopic images, based on the mean and standard deviation of pixel values, is proposed, after which the Otsu thresholding algorithm is applied for image segmentation. After segmentation, features are extracted from the segmented images: gray-level co-occurrence matrix (GLCM) features for texture identification, histogram of oriented gradients (HOG) features, and color identification features. Principal component analysis (PCA) is applied to the HOG features for dimensionality reduction, and synthetic minority oversampling (SMOTE) is performed to deal with the class imbalance problem. The feature vector is then standardized and scaled, and a novel wrapper-based feature selection approach is applied before classification. Quadratic discriminant, SVM (medium Gaussian), and random forest classifiers are used for classification. The proposed approach is verified on the publicly accessible ISIC-ISBI 2016 dataset; maximum accuracy, 93.89%, is achieved with the random forest classifier. Contrast stretching before segmentation gives satisfactory segmentation results, and the proposed wrapper-based feature selection combined with the random forest classifier gives promising results compared to other commonly used classifiers.
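
A condensed sketch of the core of this pipeline (Otsu segmentation, GLCM texture features, SMOTE balancing, random forest) is shown below; the parameter values are illustrative, and the paper's contrast stretching, HOG/color features, PCA, and wrapper-based selection steps are omitted for brevity.

```python
# Sketch of the pipeline: Otsu segmentation -> GLCM texture features ->
# SMOTE balancing -> random forest. Parameter values are illustrative,
# not the paper's exact settings.
import numpy as np
from imblearn.over_sampling import SMOTE
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in skimage < 0.19
from skimage.filters import threshold_otsu
from sklearn.ensemble import RandomForestClassifier

def lesion_features(gray_img):
    # Otsu threshold separates the (darker) lesion from surrounding skin.
    mask = gray_img < threshold_otsu(gray_img)
    lesion = np.where(mask, gray_img, 0).astype(np.uint8)
    # GLCM texture descriptors computed on the segmented region.
    glcm = graycomatrix(lesion, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

def train(images, labels):
    X = np.array([lesion_features(img) for img in images])
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, labels)  # fix imbalance
    return RandomForestClassifier(n_estimators=200).fit(X_bal, y_bal)
```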

SKIN CANCER DETECTION USING FLASK API- A Smartphone Based Application For Skin Cancer Classification Using Deep Learning With Clinical Images And Lesion Information

Over the last decades, the incidence of skin cancer, melanoma and non-melanoma, has increased at a continuous rate. For melanoma in particular, the deadliest type of skin cancer, early detection is important to improve patient prognosis. Recently, deep neural networks (DNNs) have become a viable option for skin cancer detection. In this work, we present a smartphone-based application to assist in skin cancer detection. The application is based on a convolutional neural network (CNN) trained on clinical images and patient demographics, both collected from smartphones. As skin cancer datasets are imbalanced, we also present an approach based on the mutation operator of the differential evolution (DE) algorithm to balance the data. Beyond providing a flexible tool to assist doctors in the skin cancer screening phase, the method obtains promising results, with a balanced accuracy of 85% and a recall of 96%.

ACTIVITY RECOGNITION OF THE SPORTS PERSON- Computer Vision-based Survey On Human Activity Recognition System Challenges And Applications

Researchers have used many methods for surveillance, but computer vision-based human activity recognition (HAR) systems have received the most interest because they automatically distinguish human behaviors and movements from video data recorded by cameras. However, extracting accurate and timely information about human activities and behaviors from video is a most important and difficult task in pervasive computing environments. Because HAR systems have many applications, such as in medicine, security, visual monitoring, video retrieval, entertainment, and abnormal behavior detection, system accuracy is a most important factor for researchers. This review presents a brief survey of existing video- or vision-based HAR systems and their challenges and applications in three aspects: recognition of activities, activity analysis, and decisions from visual content representation. In many applications, recognition time and accuracy are critical, and both are affected by the increasing use of simple or low-quality cameras in automated systems. To obtain better accuracy and fast responses, computationally intelligent classification techniques such as deep learning and machine learning are a better option for researchers. In this survey, we examine research on computationally intelligent classification techniques for HAR from 2010 to 2020 to analyze the benefits and drawbacks of systems, the challenges faced, and applications with future directions for HAR. We also present open problems and ideas that should be addressed in future research on HAR systems utilizing machine learning and deep learning, given their strong relevance.

AIR CANVAS APPLICATION USING OPENCV AND NUMPY IN PYTHON

Writing in air has been one of the most fascinating and challenging research areas in image processing and pattern recognition in recent years. It contributes immensely to the advancement of automation and can improve the interface between man and machine in numerous applications. Several research works have focused on new techniques and methods that reduce processing time while providing higher recognition accuracy. Object tracking is considered an important task within the field of computer vision. The invention of faster computers, the availability of inexpensive, good-quality video cameras, and the demand for automated video analysis have made object tracking techniques popular. Generally, video analysis has three major steps: detecting the object, tracking its movement from frame to frame, and analyzing the object's behavior. Object tracking must address four issues: selecting a suitable object representation, selecting features for tracking, detecting the object, and tracking the object. In the real world, object tracking algorithms are a primary part of applications such as automatic surveillance, video indexing, and vehicle navigation.
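
A minimal air-canvas sketch in OpenCV is shown below: the marker is isolated with an HSV color mask, its largest contour is tracked frame to frame, and the tip's trail is drawn onto a virtual canvas. The HSV range assumes a blue marker and would need tuning for other pen colors.

```python
# Air-canvas sketch: track a colored marker in the webcam feed and draw
# its trail on a virtual canvas. The HSV range below assumes a blue
# marker; tune it for your pen color.
import cv2
import numpy as np

LOWER, UPPER = np.array([100, 150, 50]), np.array([130, 255, 255])

cap = cv2.VideoCapture(0)
canvas = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)
    if canvas is None:
        canvas = np.zeros_like(frame)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)          # isolate the marker color
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)     # largest blob = marker tip
        (x, y), r = cv2.minEnclosingCircle(c)
        if r > 10:
            cv2.circle(canvas, (int(x), int(y)), 5, (255, 0, 0), -1)
    cv2.imshow("air canvas", cv2.add(frame, canvas))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```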

BLIND PEOPLE GUIDANCE- Object Detection For Visually Impaired People Using SSD Algorithm

Visually impaired people are often unaware of the dangers they face and may encounter many challenges while performing daily activities, even in familiar environments. Vision is an essential human sense and plays the most important role in human perception of the surrounding environment. A variety of computer vision products and services are therefore used in the development of new electronic aids for blind people. In this paper we design a system that provides navigation for these users: it informs them about nearby objects and provides the distance to each object, which the algorithm itself calculates, delivered as audio through an audio jack. We use the SSD algorithm for object detection and the monodepth algorithm for calculating the distance of the object.

CROWD SOCIAL DISTANCE MEASUREMENT AND MASK DETECTION- Monitoring Pandemic Precautionary Protocols Using Real-time Surveillance And Artificial Intelligence

COVID-19, one of the worst situations faced by humanity, has proliferated across more than 180 countries, with about 37,000,000 confirmed cases and 1,000,000 deaths worldwide as of October 2020. The absence of medical and strategic expertise is a colossal problem, and lack of immunity against the virus increases the risk of being affected by it. Since no vaccine is available, social distancing and face covering are the primary precautionary methods in this situation. This study proposes automating the monitoring of social distancing and face mask wearing in public and crowded places, as mandated by pandemic rules, using surveillance video footage and a deep learning framework built on computer vision. The proposed framework is based on the YOLO object detection model, which separates the background from human beings with bounding boxes and assigned identifications; a trained module in the same framework checks for any unmasked individual. The automation yields useful data and understanding for evaluating the current state of the pandemic, and this data helps analyze which individuals do not follow health protocol norms.
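
Downstream of the YOLO detector, the distance check itself is simple geometry: flag any pair of person boxes whose centroids are closer than a threshold. The sketch below uses a pixel threshold for illustration; a deployed system would calibrate pixels to real-world distance.

```python
# Social-distance check on detector output: flag pairs of people whose
# bounding-box centroids are closer than a pixel threshold. A deployed
# system would calibrate pixels to real-world metres; MIN_DIST here is
# an illustrative value.
from itertools import combinations

import numpy as np

MIN_DIST = 75  # pixels; assumption, needs camera calibration

def violations(boxes):
    """boxes: list of (x1, y1, x2, y2) person detections from e.g. YOLO."""
    centroids = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
    bad = set()
    for (i, a), (j, b) in combinations(enumerate(centroids), 2):
        if np.hypot(a[0] - b[0], a[1] - b[1]) < MIN_DIST:
            bad.update((i, j))   # both people in the pair are too close
    return bad  # indices of boxes to highlight

print(violations([(0, 0, 50, 100), (60, 0, 110, 100), (400, 0, 450, 100)]))
# -> {0, 1}: the first two boxes violate the distance rule
```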

DRIVER DROWSINESS DETECTION USING EYE GAZE DETECTION- Synchronous System For Driver Drowsiness Detection Using Convolutional Neural Network And Computer Vision

One of the major causes of car accidents is driver drowsiness. In this project, we aim to develop a real-time driver drowsiness detection system that detects the driver's fatigue status, such as dozing, eyelid flicker, and duration of eye closure, without attaching devices to the driver's body. The objective is to build a drowsiness detection system that detects when a person's eyes remain closed for a few seconds and alerts the driver when drowsiness is detected. The approach is based on a convolutional neural network that can be deployed in Android applications with high accuracy. Apart from the CNN, computer vision also plays a major role in detecting the drowsiness pattern of the driver, and a cloud architecture has proved beneficial for capturing and analyzing real-time video streams.
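
As a rough computer vision stand-in for the CNN described above, the sketch below counts consecutive webcam frames in which OpenCV's stock Haar eye cascade finds no eyes and raises an alert after a run of such frames; the frame threshold is an assumption.

```python
# Drowsiness heuristic: alert when no eyes are detected for a run of
# consecutive frames. This uses OpenCV's bundled Haar eye cascade as a
# simple stand-in for the paper's CNN; CLOSED_FRAMES is an assumption.
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")
CLOSED_FRAMES = 20   # roughly a few seconds at webcam frame rates

cap = cv2.VideoCapture(0)
closed = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, 1.3, 5)
    closed = 0 if len(eyes) else closed + 1   # reset on any detected eye
    if closed >= CLOSED_FRAMES:
        cv2.putText(frame, "DROWSINESS ALERT", (30, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 0, 255), 3)
    cv2.imshow("driver", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```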

DRONE MOVEMENT ANALYSIS WITH REINFORCEMENT LEARNING (Q-LEARNING)- Simulation Of Drone Controller Using Reinforcement Learning AI With Hyperparameter Optimization

Drones are one of the latest technologies, with multiple applications; one critical application is firefighting drones, for example those carrying a water hose. One of the main challenges in drone technology is non-linear dynamic movement caused by varying fire conditions. One solution is to use a non-linear controller such as reinforcement learning. In this paper, reinforcement learning is applied as the key control system, improving on the conventional approach: the agent (drone) interacts with the environment without needing a controller for the flying process. The paper introduces an optimization method for the hyperparameters in order to achieve a better reward; we concentrate only on the learning rate (alpha) and the discount factor for potential reward (gamma). With alpha = 0.1 and gamma = 0.8, the optimization produced the best performance and response, with a reward of 6100 and a learning process taking 49 seconds.
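
The tabular Q-learning update with the reported hyperparameters (alpha = 0.1, gamma = 0.8) is sketched below on a toy random-walk environment, which stands in for the drone simulator that the abstract does not specify in detail.

```python
# Tabular Q-learning with the paper's reported hyperparameters
# (alpha = 0.1, gamma = 0.8). The tiny random-walk environment below is
# a stand-in for the drone simulator.
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.8, 0.1
N_STATES, ACTIONS = 6, [0, 1]          # actions: 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)  # reward at right end

for _ in range(500):                    # training episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda act: Q[s][act])
        s2, r = step(s, a)
        # Core Q-learning update: move Q(s,a) toward r + gamma * max Q(s',.)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print([max(q) for q in Q])  # state values grow toward the goal state
```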

OBJECT DETECTION USING CAFFE CNN, YOLOV3 AND SSD- Object Detection And Tracking For Community Surveillance Using Transfer Learning

Automation of video surveillance is gaining extensive interest, considering public security issues. Systematic and exact detection of objects is a foremost problem in computer vision, and with the unfolding of recent deep learning techniques, detection precision has increased greatly, igniting interest in this area. With the advent of self-driving electric cars, accurate object detection has gained phenomenal importance. The main aim here is to apply a state-of-the-art deep learning method to pedestrian detection in real time with improved accuracy. One crucial problem is that traditional computer vision techniques tend to slow down the process with trivial performance. In this work, an improved YOLOv3 transfer learning-based deep learning technique is used for object detection. We also show that this approach can solve the object detection problem in a sustained manner, with the ability to further separate occluded objects, and that it enhances detection accuracy. The network is trained on a challenging dataset, and the output is fast and precise, which is helpful for applications that require object detection.

SELF DRIVING CAR USING COMPUTER VISION- Simulation Of Self-driving Car Using Deep Learning

The rapid development of artificial intelligence has revolutionized autonomous vehicles by incorporating complex models and algorithms. Self-driving cars are one of the biggest inventions in computer science and robotic intelligence, and highly robust algorithms that facilitate their functioning will reduce many problems associated with driving, such as drunk driving. In this paper our aim is to build a deep learning model that drives the car autonomously, adapts well to real-time tracks, and does not require manual feature extraction. This research work proposes a computer vision model that learns from video data, involving image processing, image augmentation, behavioral cloning, and a convolutional neural network model. The neural network architecture detects the path, road linings, and obstacle locations in a video segment, and behavioral cloning is used so the model learns from human actions in the video.

VIOLENCE DETECTION- Violence Detection From Video Under 2D Spatio-Temporal Representations

Action recognition in videos, especially for violence detection, is now a hot topic in computer vision. Interest in this task is driven by the multiplication of videos from surveillance cameras and live television content, producing complex 2D+t data. State-of-the-art methods rely on end-to-end learning with 3D neural networks, which must be trained on large amounts of data to obtain discriminating features. To face these limitations, this article presents a method for classifying videos for violence recognition using a classical 2D convolutional neural network (CNN). The strategy is two-fold: (1) several 2D spatio-temporal representations are built from an input video; (2) these new representations feed the CNN for the train/test process. The classification decision for a video is made by aggregating the individual decisions from its different 2D spatio-temporal representations. An experimental study on public datasets containing violent videos highlights the interest of the presented method.

RARE ITEM IDENTIFICATION USING APRIORI ALGORITHM- Modern Applications And Challenges For Rare Itemset Mining

Data mining is the process of extracting useful unknown knowledge from large datasets. Frequent itemset mining is the fundamental data mining task of discovering interesting itemsets that frequently appear together in a dataset. However, mining infrequent (rare) itemsets may be more interesting in many real-life applications, such as predicting telecommunication equipment failures, genetics, medical diagnosis, and anomaly detection. In this paper, we survey up-to-date methods of rare itemset mining. The main goal of this survey is to provide a comprehensive overview of state-of-the-art algorithms for rare itemset mining and its applications. Its contributions can be summarized as follows. In the first part, we define the task of rare itemset mining by explaining key concepts and terminology, giving motivating examples, and comparing it with underlying concepts. We then highlight state-of-the-art methods for rare itemset mining and present variations of the task to discuss the limitations of traditional rare itemset mining algorithms. After that, we highlight the fundamental applications of rare itemset mining. Finally, we point out research opportunities and challenges for future work.
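
A tiny sketch of the core idea follows: Apriori-style support counting over transactions, but keeping itemsets whose support falls at or below a maximum-support threshold (and above zero) instead of the frequent ones. The transactions and threshold are toy values.

```python
# Rare-itemset sketch: count support for candidate itemsets as Apriori
# does, but keep the itemsets *below* a maximum-support threshold
# (and above zero) instead of the frequent ones.
from itertools import combinations

transactions = [
    {"bread", "milk"}, {"bread", "milk", "eggs"},
    {"bread", "milk"}, {"bread", "milk", "caviar"},
]
MAX_SUPPORT = 0.3   # itemsets at or below this frequency count as "rare"

def support(itemset):
    hits = sum(itemset <= t for t in transactions)
    return hits / len(transactions)

items = sorted({i for t in transactions for i in t})
for size in (1, 2):
    for candidate in combinations(items, size):
        s = support(set(candidate))
        if 0 < s <= MAX_SUPPORT:
            print(candidate, round(s, 2))
# ('caviar',) and ('eggs',) are reported as rare at support 0.25, along
# with their pairings with the staple items.
```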

STREAMING DATA CLASSIFICATION- Diversity In Ensemble Model For Classification Of Data Streams With Concept Drift

Data streams can be defined as continuous streams of data in many forms coming from different sources. Data streams are usually non-stationary, with continually changing underlying structure, and predictive or classification tasks on such data must take this into account. Traditional machine learning models applied to drifting data may become invalid when a concept change appears. To tackle this problem, we must use special adaptive learning models with tools able to reflect the drifting data; one of the most popular groups of such methods is adaptive ensembles. This paper describes work focused on the design and implementation of a novel adaptive ensemble learning model, based on the construction of a robust ensemble consisting of a heterogeneous set of members. We used k-NN, naive Bayes, and Hoeffding trees as base learners and implemented an update mechanism that uses dynamic class weighting and Q-statistic diversity calculation to ensure the diversity of the ensemble. The model was experimentally evaluated on streaming datasets, and the effects of the diversity calculation were analyzed.

AIR AND WEATHER QUALITY- Machine Learning To Improve Numerical Weather Forecasting

This paper presents a brief overview of trends in numerical weather prediction, its difficulties and their nature, and existing and promising ways to overcome them. A neural network architecture is proposed as a promising approach to increase the accuracy of the 2m temperature forecast given by the COSMO regional model. This architecture predicts the errors of the atmospheric model forecasts so they can be corrected. Experiments are conducted with different histories of regional model errors, and the number of epochs after which the network overfits is determined. The proposed architecture is shown to improve the 2m temperature forecast in approximately 50% of cases.

COVID AND GDP GROWTH RATE PREDICTION- Forecasting The Impact Of COVID-19 On GDP Based On Adaboost

The COVID-19 pandemic has had a unique impact on the economy, as other infectious diseases have. Epidemics affect people's daily consumption activities, for example by causing them to shop less, travel less, consume less, and invest less. The reduction of a large number of economic activities suppresses social demand and reduces consumption levels, which further affects the GDP of countries around the world. Investigating and analyzing the impact of the epidemic on GDP is necessary in order to control and analyze the economic situation under its influence. In this paper, we treat the impact of COVID-19 on each country's GDP as a regression problem and propose to forecast GDP through feature engineering combined with an AdaBoost model. The model was tested on more than 50,000 data records from more than 200 countries provided by the Kaggle platform to prove its validity. The experiments show that AdaBoost is more robust than other methods such as random forest and SVR, improving MSE over random forest by 2.39 and over SVR by 0.38.
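
A template for the abstract's model comparison is sketched below with scikit-learn, scoring AdaBoost, random forest, and SVR by MSE; synthetic data stands in for the Kaggle COVID/GDP records, so the numbers will not match the paper's.

```python
# Comparison template for the experiment: AdaBoost vs random forest vs
# SVR on a regression task, scored by MSE. Synthetic data stands in for
# the Kaggle COVID/GDP records.
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

X, y = make_regression(n_samples=2000, n_features=10, noise=5.0,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("AdaBoost", AdaBoostRegressor(random_state=0)),
                    ("RandomForest", RandomForestRegressor(random_state=0)),
                    ("SVR", SVR())]:
    model.fit(X_tr, y_tr)
    print(name, mean_squared_error(y_te, model.predict(X_te)))
```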

DIABETIC PATIENT READMISSION PREDICTION- Prediction Of Diabetic Patient Readmission Using Machine Learning

Hospital readmissions impose additional costs and discomfort on the patient, and their occurrence is indicative of deficient health service quality, so medical professionals generally make efforts to prevent them. These efforts are especially critical in the case of chronic conditions such as diabetes. Recent developments in machine learning have been successful at predicting readmissions from the medical history of the diabetic patient. However, these approaches rely on a large number of clinical variables and thereby require deep learning techniques. This article presents the application of simpler machine learning models that achieve superior prediction performance while making the computations more tractable.

HOTEL BOOKING WITH DATA ANALYSIS- Performance Analysis Of Machine Learning Techniques To Predict Hotel Booking Cancellations In Hospitality Industry

Hotel booking cancellations have a substantial effect on demand management decisions in the hospitality industry. The goal of this work is to investigate the effects of different machine learning methods on the hotel booking cancellation problem. We gathered a hotel booking cancellation dataset from the Kaggle data repository, applied different feature transformation techniques to the primary dataset to generate transformed datasets, and then reduced insignificant variables using feature selection methods. Various classifiers were employed on the primary dataset and the generated subsets, and their effects were evaluated to identify the best approaches. Among all of these methods, we found that XGBoost most frequently gave the best results on these datasets, and that individual classifiers produced their highest results with the information gain feature selection method. This analysis can be used as a complementary tool to investigate hotel booking cancellation datasets more effectively.

SMART IRRIGATION USING PLANT DISEASE AND WEATHER CONDITION IDENTIFICATION- Smart Automated Irrigation System With Disease Prediction

Precision agriculture has gained wide popularity in recent years for high-value applications such as remote environment monitoring, disease detection, and insect and pest management. In addition, advances in the Internet of Things (IoT) let us connect real-world objects and obtain information such as physical phenomena through sensors in the field of agriculture. This paper reports on a smart automated irrigation system with disease detection. The system design includes soil moisture, temperature, and leaf wetness sensors deployed in the agricultural field; the sensed data are compared with predetermined threshold values for various soils and specific crops. The sensor data are fed to an Arduino Uno processor, which is linked wirelessly to the data centre via a GSM module. The data received by the data centre are stored and analyzed using a data mining technique, a Markov model, to detect the possible disease for the observed conditions. Finally, the analysis results and observed physical parameters are transmitted to an Android smartphone and displayed on the user interface, which allows a remote user to control the irrigation system by switching the motor pump on and off through the Arduino.

RAINFALL PREDICTION USING ANN- Rainfall Prediction Using Machine Learning And Deep Learning Techniques

In India, agriculture is key to survival, and rainfall is most important for agriculture. Rainfall prediction has become a major problem these days. Predicting rainfall gives people awareness in advance, so they can take precautions to protect their crops. Many techniques have been developed to predict rainfall, and machine learning algorithms are especially useful. Some of the major algorithms are the ARIMA (Auto-Regressive Integrated Moving Average) model, artificial neural networks, logistic regression, support vector machines, and self-organizing maps. Two kinds of models are commonly used to predict seasonal rainfall: linear models such as ARIMA, and non-linear models. With artificial neural networks (ANNs), rainfall prediction can be done using back-propagation networks, cascade networks, or layer-recurrent networks. Artificial neural networks are modeled on biological neural networks.
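
A minimal back-propagation network for this task can be sketched with scikit-learn's MLPRegressor, as below; the feature columns and random data are placeholders for a real historical weather dataset.

```python
# Back-propagation ANN sketch for rainfall prediction using
# scikit-learn's MLPRegressor. The feature columns (humidity,
# temperature, pressure) and random data are placeholders for a real
# historical weather dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((1000, 3))                     # humidity, temp, pressure
y = 50 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 1000)  # toy rainfall (mm)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)           # ANNs train better on scaled inputs

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0)
model.fit(scaler.transform(X_tr), y_tr)
print("R^2 on held-out data:", model.score(scaler.transform(X_te), y_te))
```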

RESTAURANT MANAGEMENT SYSTEM- An Android Based Restaurant Automation System With Touch Screen


PIZZA ORDERING SYSTEM USING TKINTER GUI- Online Crowdsourced Delivery For On-Demand Food

Online-to-offline (O2O) commerce connecting service providers and individuals to address daily human needs is expanding quickly. In particular, on-demand food, whereby customers place food orders online and couriers deliver them, is becoming popular. This novel urban food application requires highly efficient and scalable real-time delivery services, yet it is difficult to recruit enough couriers and route them to support such food ordering systems. This paper presents an online crowdsourced delivery (OCD) approach for on-demand food. Facilitated by Internet of Things and 3G/4G/5G technologies, public riders can be attracted to act as crowdsourced workers delivering food on shared bicycles or electric motorbikes. An online dynamic optimization framework comprising order collection, solution generation, and sequential delivery processes is presented. A hybrid metaheuristic solution process integrating adaptive large neighborhood search and tabu search is developed to assign food delivery tasks and generate high-quality delivery routes in real time, with crowdsourced riders dynamically shared among different food providers. Simulated small-scale and real-world large-scale on-demand food delivery instances are used to evaluate the performance of the proposed approach. The results indicate that the presented crowdsourced food delivery approach outperforms traditional urban logistics, that the hybrid optimization mechanism produces high-quality crowdsourced delivery routes in less than 120 s, and that the OCD approach can support city-scale on-demand food delivery.

E-HEALTH DATA- Interoperable And Discrete EHealth Data Exchange Between Hospital And Patient

In order to prevent health risks and provide a better service to patients who have visited the hospital, patients need to be monitored after release, and the data submitted by the patients' eHealth enablers must be provided to the medical personnel. This article proposes an architecture for the secure exchange of data between the patient's mobile application and the hospital infrastructure. The implemented solution is validated on a laboratory testbed.

VOICE CHATTING AND VIDEO CONFERENCING- WebRTC Role In Real-time Communication And Video Conferencing

Real-time communication (RTC) is a new standard and industry-wide effort that expands the web browsing model, allowing access to information in areas like social media, chat, video conferencing, television over the Internet, and unified communication. Users of these systems can view, record, remark on, or edit video and audio content flows using time-critical cloud infrastructures that enforce quality of service. However, many of the available proprietary protocols and codecs are not easily interoperable or scalable for implementing multipoint videoconference systems. WebRTC (Web Real-Time Communication) is a state-of-the-art open technology that enables real-time audio, video, and data transmission in web browsers through JavaScript APIs (application programming interfaces), without plug-ins. This paper introduces a P2P video conferencing system based on WebRTC. We propose a web-based peer-to-peer real-time communication system using Mozilla Firefox together with the ScaleDrone service, which enables users to communicate with high-speed data transmission over the communication channel using WebRTC technology, HTML5, and a Node.js server. Our experiments show that WebRTC is a capable building block for scalable live video conferencing within a web browser.

VIDEO STEGANOGRAPHY- A New Video Steganography Scheme Based On Shi-Tomasi Corner Detector

Recent increases in Internet speed and developments in information technology have made the rapid exchange of multimedia information possible, but they also lead to violations of information security and private information. Digital steganography provides the ability to protect private information, which has become essential in the current Internet age. Among all digital media, digital video has attracted many researchers due to its high capacity for hiding sensitive data. Numerous video steganography methods have recently been proposed to prevent secret data from being stolen; nevertheless, these methods have multiple issues related to visual imperceptibility, robustness, and embedding capacity. To tackle these issues, this paper proposes a new approach to video steganography based on the corner point principle and the LSB algorithm. The proposed method first uses the Shi-Tomasi algorithm to detect regions of corner points within the cover video frames, then uses the 4-LSB algorithm to hide confidential data inside the identified corner points. In addition, before the embedding process, the proposed method encrypts the confidential data using Arnold's cat map to boost the security level.
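
A single-frame, grayscale sketch of the embedding idea follows: Shi-Tomasi corners are found with cv2.goodFeaturesToTrack and 4 bits of the secret are written into the low nibble of each selected pixel. The Arnold cat map encryption step is omitted, and the cover image path is a placeholder.

```python
# Single-frame sketch of the scheme: find Shi-Tomasi corners, then hide
# 4 bits of secret data per selected pixel in the low nibble (4-LSB).
# Omits the paper's Arnold cat map step and works on grayscale for brevity.
import cv2
import numpy as np

def embed(frame_gray, secret: bytes):
    corners = cv2.goodFeaturesToTrack(frame_gray, maxCorners=len(secret) * 2,
                                      qualityLevel=0.01, minDistance=5)
    nibbles = []
    for byte in secret:                       # split each byte into 2 nibbles
        nibbles += [byte >> 4, byte & 0x0F]
    stego = frame_gray.copy()
    for (x, y), nib in zip(corners.reshape(-1, 2).astype(int), nibbles):
        stego[y, x] = (stego[y, x] & 0xF0) | nib   # overwrite the 4 LSBs
    return stego, corners[: len(nibbles)]

def extract(stego, corners, n_bytes):
    nibbles = [int(stego[y, x]) & 0x0F
               for x, y in corners.reshape(-1, 2).astype(int)[: n_bytes * 2]]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder cover frame
stego, pts = embed(frame, b"key")
print(extract(stego, pts, 3))   # -> b'key'
```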

SECRET KEY GENERATION- Securing Private Key Using New Transposition Cipher Technique

The security of any public key cryptosystem depends on the private key; it is therefore important that only an authorized person has access to it. This paper presents a new algorithm that protects the private key using the transposition cipher technique. The performance of the proposed technique is evaluated by applying it to RSA-generated private keys of 512, 1024, and 2048 bits. The results show that the technique is practical and efficient for securing private keys in storage, as it produces a high avalanche effect.
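
For illustration, a plain columnar transposition over a key string is sketched below; the paper's exact transposition variant may differ.

```python
# Columnar transposition sketch: the private-key string is written into
# rows and read out column-by-column in keyword order. This illustrates
# the general technique; the paper's exact variant may differ.

def encrypt(text, key="3142"):
    cols = len(key)
    rows = -(-len(text) // cols)
    padded = text.ljust(rows * cols, "#")      # pad the final row
    grid = [padded[r * cols:(r + 1) * cols] for r in range(rows)]
    order = sorted(range(cols), key=lambda c: key[c])
    return "".join("".join(row[c] for row in grid) for c in order)

def decrypt(cipher, key="3142"):
    cols = len(key)
    rows = len(cipher) // cols
    order = sorted(range(cols), key=lambda c: key[c])
    grid = [[""] * cols for _ in range(rows)]
    for i, c in enumerate(order):              # refill columns in read order
        col = cipher[i * rows:(i + 1) * rows]
        for r in range(rows):
            grid[r][c] = col[r]
    return "".join("".join(row) for row in grid).rstrip("#")

c = encrypt("MIIBVwIBADANBgkq")               # start of a PEM private key
assert decrypt(c) == "MIIBVwIBADANBgkq"
print(c)
```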

QR CODE GENERATION AND RECOGNITION FOR DATA SECURITY- A Desktop Application Of QR Code For Data Security And Authentication

Barcodes were initially widely used for the unique identification of products. Quick Response (QR) codes are a 2D evolution of barcodes that can embed text, audio, video, web URLs, phone contacts, credentials, and much more. This paper primarily deals with the generation of QR codes for question papers. We propose encrypting the question paper data with the AES encryption algorithm; the workflow encrypts the data into a QR code and decrypts it on scanning. Furthermore, we reduce memory storage by redirecting to a webpage for the transmission and online acceptance of data.

MALICIOUS APPLICATION DETECTION- Detecting The Malicious Application Using FRAppE

Communication technology has reached all areas of application, and the last decade has witnessed a drastic evolution in information and communication technology due to the introduction of social media networks, through which further business growth is achieved. Nevertheless, the increased use of online social networks (OSNs) such as Facebook, Twitter, and Instagram has led to increased privacy and security concerns. Third-party applications are one of the many reasons for Facebook's attractiveness; regrettably, users are unaware that many malicious Facebook applications feed on their profiles. These third-party applications are so popular that there are almost 20 million installations per day, and cyber criminals have noticed this popularity and the possibility of using these apps to distribute malware and spam. This paper proposes a method to categorize a given application as malicious or safe using FRAppE (Facebook's Rigorous Application Evaluator), possibly one of the first tools for detecting malicious apps on Facebook. To develop FRAppE, data is gathered from the MyPageKeeper application, which provides significant information about various third-party applications and insight into their behavior.

STUDENT ACADEMIC PERFORMANCE PREDICTION- A Review On Machine Learning Based Students Academic Performance Prediction Systems

Predicting students' academic performance beforehand gives universities scope to lower their dropout rates and helps students improve their performance. Research in this field seeks to find which algorithms are best to use and which features should be considered when predicting the academic performance of students, and such work has been increasing over the years. This paper surveys the techniques used in various research papers for academic performance prediction and points out limitations, if any, in the methodologies used.

REVIEW PREDICTION FOR AMAZON DATASET- Opinion Mining Based Fake Product Review Monitoring And Removal System

Detecting fake reviews and eliminating them from a dataset using natural language processing (NLP) techniques is important in several respects. In this article, a fake review dataset is used to train two different machine learning (ML) models that predict how genuine the reviews in a given dataset are. The rate of fake reviews in the e-commerce industry and on other platforms is increasing, as people depend on product reviews for items found online on different websites and applications, and a company's products must be trusted before a purchase is made. This fake review problem must be addressed so that large e-commerce companies such as Flipkart and Amazon can eliminate fake reviewers and spammers and prevent users from losing trust in online shopping platforms. The model can be used by websites and applications with a few thousand users, where it predicts the authenticity of a review so that site owners can take the necessary action. The model is developed using naive Bayes and random forest methods; applying these models, one can instantly estimate the number of spam reviews on a website or application. Countering such spammers at scale requires a more sophisticated model trained on millions of reviews. In this work the Amazon Yelp dataset is used to train the models on a very small scale, which can be scaled up for higher accuracy and flexibility.
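
A minimal baseline in the spirit of the naive Bayes model is sketched below with TF-IDF features and MultinomialNB; the four inline reviews stand in for the Amazon/Yelp dataset.

```python
# Baseline for the abstract's naive Bayes model: TF-IDF features plus
# MultinomialNB. The four inline reviews stand in for the Amazon/Yelp
# dataset, which is far too large to embed here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = [
    "Great product, works exactly as described after a month of use",
    "Arrived late but the build quality is solid and support helped",
    "Best ever!!! Buy now!!! Five stars!!! Amazing!!! Incredible!!!",
    "Perfect perfect perfect, changed my life, everyone must buy",
]
labels = ["genuine", "genuine", "fake", "fake"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(reviews, labels)
print(clf.predict(["Amazing!!! Best product ever, buy buy buy!!!"]))
```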

CHATBOT FOR EMOTION RECOGNITION- Model Of Multi-turn Dialogue In Emotional Chatbot

Intent recognition and natural language understanding in multi-turn dialogue are key to the commercialization of chatbots. Chatbots are mainly used for specific tasks, introducing products to customers or solving related problems, thus saving human resources. Text sentiment recognition lets a chatbot know the user's emotional state and select the best response, which is important in medical care. In this study, we combined a multi-turn dialogue model and a sentiment recognition model to develop a chatbot designed for daily conversation rather than specific tasks. The chatbot can provide the robot's emotions as feedback while talking with the user, and it can exhibit different emotional reactions based on the content of the user's conversation.

VIRTUAL ASSISTANT USING NATURAL LANGUAGE PROCESSING- Intelligent Personal Assistant - Implementing Voice Commands Enabling Speech Recognition

An intelligent personal assistant (IPA) is a software agent that performs tasks on behalf of a human based on commands or questions, similar to a chatbot. IPAs are also referred to as intelligent virtual assistants (IVAs); they interpret human speech and respond via synthesized voices. IPAs and IVAs are used in applications such as home automation, managing to-do tasks, and voice-controlled media playback. This paper proposes a speech recognition system for creating a virtual personal assistant. Existing systems run on the Internet and are maintained by third parties; this application protects personal data from others by using a local database, speech recognition, and a synthesizer. A parser named SURR (Semantic Unification and Reference Resolution) is employed to recognize the speech, and the synthesizer uses text-to-phoneme conversion.

CHATBOT FOR HEALTH CARE AND DIET PLAN- Artificial Intelligence AI Dietician For Personal Use

Poor nutrition can lead to reduced immunity, increased susceptibility to disease, impaired physical and mental development, and reduced productivity. A conversational agent can support people as a virtual coach, but building such systems still has associated challenges and limitations. This paper describes the background and motivation for chatbot systems in the context of healthy nutrition recommendation. We discuss the current challenges associated with chatbot applications, covering their technical, theoretical, behavioural, and social aspects, and we propose a pipeline to be used as a guideline by developers to implement theoretically and technically robust chatbot systems.

VOICE BASED ASCENDING AND DESCENDING ORDER EXECUTION- Voice Based Sorting And Searching Operations Using PyAudio

Voice control is a major growing feature that is changing the way people live, and voice assistants are commonly used in smartphones and laptops. AI-based voice assistants are systems that can recognize human voices and respond via integrated voices. This voice assistant gathers audio from the microphone and converts it into text, which is then sent through gTTS (Google Text-to-Speech). The gTTS engine converts the response text into an English audio file, which is played using the playsound package of the Python programming language.
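
An end-to-end sketch of this flow is given below using the speech_recognition, gTTS, and playsound packages; it assumes a working microphone and internet access (both recognize_google and gTTS call Google services), and omits error handling.

```python
# Voice-sorting sketch: capture speech, pull out the numbers, sort them,
# and speak the result back with gTTS + playsound. Needs a microphone
# and internet access; error handling is omitted for brevity.
import re

import speech_recognition as sr
from gtts import gTTS
from playsound import playsound

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    print("Say some numbers, e.g. 'sort 42 7 19 ascending'...")
    audio = recognizer.listen(source)

text = recognizer.recognize_google(audio)            # speech -> text
numbers = [int(n) for n in re.findall(r"\d+", text)]
ordered = sorted(numbers, reverse="descend" in text) # ascending by default

reply = "The sorted numbers are " + ", ".join(map(str, ordered))
gTTS(text=reply, lang="en").save("reply.mp3")        # text -> speech
playsound("reply.mp3")
```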

BLOOD CANCER CELL CLASSIFICATION USING DENSENET- Medical Image Classification Using A Light-Weighted Hybrid Neural Network Based On PCANet And DenseNet

Medical image classification plays an important role in disease diagnosis, since it can provide important reference information for doctors. Supervised convolutional neural networks (CNNs) such as DenseNet provide versatile and effective methods for medical image classification tasks, but they require large amounts of labeled data and involve a complex, time-consuming training process. Unsupervised CNNs such as the principal component analysis network (PCANet) need no labels for training but cannot provide desirable classification accuracy. To realize accurate medical image classification with a small training dataset, we propose a light-weight hybrid neural network consisting of a modified PCANet cascaded with a simplified DenseNet. The modified PCANet has two stages, in which the network produces effective feature maps at each stage by convolving inputs with various learned kernels. The simplified DenseNet, with a small number of weights, takes all feature maps produced by the PCANet as inputs and employs dense shortcut connections to realize accurate medical image classification. To assess the performance of the proposed method, experiments were done on mammography and osteosarcoma histology images. Experimental results show that the proposed hybrid neural network is easy to train and outperforms such popular CNN models as PCANet, ResNet, and DenseNet in terms of classification accuracy, sensitivity, and specificity.

BREAST CANCER CLASSIFICATION WITH FEATURE DATA- Breast Cancer Detection-Based Feature Optimization Using Firefly Algorithm And Ensemble Classifier

Accurate detection of malignant breast cancer cells decreases the mortality rate of women around the world, and feature optimization increases the quality of feature selection and data mapping. This paper proposes an ensemble-based classifier for the detection of breast cancer at an early stage, with a support vector machine as the base classifier and a boosting classifier as the other member. The firefly algorithm reduces the variance of breast cancer features when selecting feature components for the classification algorithm. The dominant work is the extraction of features from breast cancer images, for which the wavelet packet transform is applied; it overcomes the limitations of the wavelet transform and increases the diversity of the feature extraction process. The proposed algorithm was implemented in MATLAB and tested on the well-known MIAS and DDSM breast cancer image datasets, with performance measured by standard parameters such as accuracy, specificity, sensitivity, and MCC. The evaluated results indicate that the proposed algorithm is better than DWPT, SVM, and BMC, improving classification by 8% over existing algorithms.

HUMAN DISEASE PREDICTION USING SYMPTOMS BY MACHINE LEARNING- Symptoms Based Disease Prediction Using Machine Learning Techniques

Computer-aided diagnosis (CAD) is a quickly evolving, diverse field of study in medical analysis. Significant efforts have been made in recent years to develop computer-aided diagnostic applications, as failures in the medical diagnosis process can result in severely misleading medical therapies. Machine learning (ML) is important in computer-aided diagnosis: objects such as body organs cannot be identified correctly with a simple equation, so pattern recognition essentially requires training from examples. In the biomedical area, pattern detection and ML promise to improve the reliability of disease detection and the objectivity of the decision-making process. ML provides a respectable approach for building superior, automated algorithms for the analysis of high-dimensional, multimodal biomedical data. This survey paper gives a comparative study of various ML algorithms for the detection of diseases such as heart disease and diabetes, focusing on the collection of ML algorithms and techniques used for disease detection and decision-making processes.

COVID-19 PNEUMONIA CLASSIFICATION USING DENSENET- Deep Learning Based Detection And Segmentation Of COVID-19 Pneumonia On Chest X-ray Images

The COVID-19 outbreak has exceeded all expectations and broken all previous records of virus outbreaks. The coronavirus causes serious illness that may result in death as a consequence of substantial alveolar damage and progressive respiratory failure. Automatic detection and classification of this virus from chest X-ray images using computer vision can be a very useful complement to the less sensitive traditional detection process, reverse transcription polymerase chain reaction (RT-PCR). This automated process offers great potential to enhance conventional healthcare tactics for tackling COVID-19 and can mitigate the shortage of trained physicians in remote communities. Segmenting the infected regions from chest X-ray images also helps medical specialists view insights of the affected region. In this paper we use a deep learning-based ensemble model for classifying COVID-19, pneumonia, and normal X-ray images, and for segmentation we use a DenseNet-based U-Net architecture to segment the affected region. To make the ground truth mask images needed for segmentation, we used the Amazon SageMaker Ground Truth tool to manually crop the activation regions of the X-ray images (the discriminative image regions by which the CNN identifies a specific class, found with the Grad-CAM algorithm). We found a classification accuracy of 99.2% on the available X-ray dataset and a 92% average accuracy from the segmentation process.

FALL DETECTION USING MOTION SENSOR DATA- Human Falling Detection Algorithm Based On Multisensor Data Fusion With SVM

Falling is a common phenomenon in the life of the elderly and one of the 10 main causes of serious health injury and death among them. To prevent falls, a real-time fall prediction system can be installed on a wearable intelligent device to trigger an alarm in time and reduce accidental injury caused by falls. At present, most algorithms based on single-sensor data cannot accurately describe the fall state, while fall detection based on multisensor data fusion can improve the sensitivity and specificity of prediction. In this study, we design a fall detection system based on multisensor data fusion and analyze the four stages of falls using data from 100 volunteers simulating falls and daily activities. Data fusion is used to extract three characteristic parameters representing human body acceleration and posture change, and the effectiveness of the multisensor data fusion algorithm is verified: sensitivity is 96.67% and specificity is 97%. We find that the recognition rate is highest when the training set contains the largest number of samples, so training the model on a large amount of effective data improves its recognition ability and the chance of preventing falls. To compare the suitability of random forest and support vector machine (SVM) models for the development of wearable intelligent devices, two fall posture recognition models were built and their training and recognition times compared; the results show that SVM is more suitable for the development of wearable intelligent devices.
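
The classification stage can be sketched as below: each sensor window is summarized by statistics of the acceleration magnitude and fed to an SVM, mirroring the paper's SVM-versus-random-forest comparison; the synthetic windows stand in for the volunteer recordings.

```python
# Fall-detection classification stage: summarize each sensor window by
# acceleration-magnitude statistics and train an SVM. Synthetic windows
# stand in for the 100-volunteer recordings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def window_features(acc_xyz):
    """acc_xyz: (n_samples, 3) accelerometer window -> feature vector."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    return [mag.mean(), mag.std(), mag.max() - mag.min()]

# Simulate daily-activity (gentle) vs fall (sharp spike) windows.
def make_window(fall):
    base = rng.normal(9.8, 0.3, (50, 3)) / np.sqrt(3)
    if fall:
        base[25] += rng.normal(25, 3, 3)      # impact spike mid-window
    return base

X = np.array([window_features(make_window(f)) for f in [0, 1] * 200])
y = np.array([0, 1] * 200)                    # 0 = daily activity, 1 = fall

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```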

BREAST CANCER CLASSIFICATION- Breast Cancer Detection And Classification


LEUKEMIA CLASSIFICATION- Detection Of Blood Cancer (Leukemia) Using K-means Algorithm

Blood cancer (leukemia) is one of the leading causes of death among humans, and the pace of healing depends mainly on early detection and diagnosis of the disease. Leukemia occurs when bone marrow produces a large number of abnormal white blood cells. Hematologists perform microscopic studies of human blood samples, which leads to the need for methods such as microscopic color imaging, image segmentation, clustering, and classification that allow easy identification of patients suffering from the disease. Microscopic imaging supports various ways of detecting blood cancer in visible and immature white blood cells, and identifying leukemia early and quickly greatly helps practitioners provide appropriate treatment. The segmentation stage separates white blood cells from other blood components (erythrocytes and platelets) using statistical parameters such as mean and standard deviation. For diagnosing leukemia, geometrical features such as the area and perimeter of the white blood cell nuclei are investigated. The proposed methodology uses k-means for identifying cancerous stages and enabling early detection. Experiments gave promising results, identifying the cancer cells with an accuracy of 90%.
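
The clustering stage can be sketched with k-means over the pixel colours of a stained smear image, keeping the darkest cluster as the white-blood-cell nuclei; k = 3, the darkest-cluster heuristic, and the image path are assumptions for illustration.

```python
# Clustering stage of the pipeline: run k-means on the pixel colours of
# a stained blood-smear image and keep the darkest cluster, which in
# typical stains corresponds to white-blood-cell nuclei. k=3 and the
# "darkest cluster" heuristic are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

img = cv2.imread("blood_smear.png")            # placeholder smear image
pixels = img.reshape(-1, 3).astype(np.float32)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
darkest = np.argmin(km.cluster_centers_.sum(axis=1))   # nuclei stain darkest

mask = (km.labels_ == darkest).reshape(img.shape[:2]).astype(np.uint8) * 255
area = int(np.count_nonzero(mask))             # geometric feature: nucleus area
print("nucleus pixel area:", area)
cv2.imwrite("wbc_mask.png", mask)
```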

ORAL CANCER CLASSIFICATION USING VGG16- Automated Detection And Classification Of Oral Lesions Using Deep Learning For Early Detection Of Oral Cancer

Oral Cancer Is A Major Global Health Issue Accounting For 177,384 Deaths In 2018, And It Is Most Prevalent In Low- And Middle-income Countries. Enabling Automation In The Identification Of Potentially Malignant And Malignant Lesions In The Oral Cavity Would Potentially Lead To Low-cost And Early Diagnosis Of The Disease. Building A Large Library Of Well-annotated Oral Lesions Is Key. As Part Of The MeMoSA® (Mobile Mouth Screening Anywhere) Project, Images Are Currently Being Gathered From Clinical Experts From Across The World, Who Have Been Provided With An Annotation Tool To Produce Rich Labels. A Novel Strategy To Combine Bounding Box Annotations From Multiple Clinicians Is Provided In This Paper. Further To This, Deep Neural Networks Were Used To Build Automated Systems In Which Complex Patterns Were Derived For Tackling This Difficult Task. Using The Initial Data Gathered In This Study, Two Deep Learning Based Computer Vision Approaches Were Assessed For The Automated Detection And Classification Of Oral Lesions For The Early Detection Of Oral Cancer: Image Classification With ResNet-101 And Object Detection With Faster R-CNN. Image Classification Achieved An F1 Score Of 87.07% For Identification Of Images That Contained Lesions And 78.30% For The Identification Of Images That Required Referral. Object Detection Achieved An F1 Score Of 41.18% For The Detection Of Lesions That Required Referral. Further Performances Are Reported With Respect To Classifying According To The Type Of Referral Decision. Our Initial Results Demonstrate That Deep Learning Has The Potential To Tackle This Challenging Task.

EFFECTIVE HEART DISEASE PREDICTION USING HYBRID MACHINE LEARNING TECHNIQUES

Heart Disease Is One Of The Most Significant Causes Of Mortality In The World Today. Prediction Of Cardiovascular Disease Is A Critical Challenge In The Area Of Clinical Data Analysis. Machine Learning (ML) Has Been Shown To Be Effective In Assisting In Making Decisions And Predictions From The Large Quantity Of Data Produced By The Health Care Industry. We Have Also Seen ML Techniques Being Used In Recent Developments In Different Areas Of The Internet Of Things (IoT). Various Studies Give Only A Glimpse Into Predicting Heart Disease With ML Techniques. In This Paper, We Propose A Novel Method That Aims At Finding Significant Features By Applying Machine Learning Techniques, Resulting In Improved Accuracy In The Prediction Of Cardiovascular Disease. The Prediction Model Is Introduced With Different Combinations Of Features And Several Known Classification Techniques.
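
As A Rough Sketch Of The Feature-combination Idea Above (Not The Paper's Exact Method), The Following Scikit-learn Snippet Scores Several Known Classifiers Over Different Feature Subsets Of A Placeholder Dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder stand-in for a heart-disease table (rows: patients, cols: clinical features).
X, y = make_classification(n_samples=500, n_features=13, n_informative=6, random_state=0)

# Try different feature subsets with several known classifiers.
for k in (5, 8, 13):
    for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier(n_estimators=200)):
        pipe = make_pipeline(SelectKBest(f_classif, k=k), clf)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"k={k:2d} {clf.__class__.__name__:22s} accuracy={acc:.3f}")
```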

HUMAN SIGNATURE CLASSIFICATION- Handwritten Signature Recognition- A Convolutional Neural Network Approach

Handwritten Signature Recognition Is An Important Behavioral Biometric Used For Numerous Identification And Authentication Applications. There Are Two Fundamental Methods Of Signature Recognition: On-line And Off-line. On-line Recognition Is A Dynamic Form, Which Uses Parameters Like Writing Pace, Change In Stylus Direction And The Number Of Pen-ups And Pen-downs During The Writing Of The Signature. Off-line Signature Recognition Is A Static Form Where A Signature Is Handled As An Image And The Author Of The Signature Is Predicted Based On The Features Of The Signature. Current Methods Of Off-line Signature Recognition Predominantly Employ Template Matching, Where A Test Image Is Compared With Multiple Specimen Images To Guess The Author Of The Signature. This Takes Up A Lot Of Memory And Has A Higher Time Complexity. This Paper Proposes A Method Of Off-line Signature Recognition Using A Convolutional Neural Network. The Purpose Of This Paper Is To Obtain High-accuracy Multi-class Classification With A Few Training Signature Samples. Images Are Preprocessed To Isolate The Signature Pixels From The Background/noise Pixels Using A Series Of Image Processing Techniques. Initially, The System Is Trained With 27 Genuine Signatures From Each Of 10 Different Authors. A Convolutional Neural Network Is Used To Predict Which Of The 10 Given Authors A Test Signature Belongs To. Different Public Datasets Are Used To Demonstrate The Effectiveness Of The Proposed Solution.
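
A Minimal Keras Sketch Of A Small Convolutional Network For The 10-author Classification Task Described Above; The Input Size And Layer Sizes Are Assumptions, Not The Paper's Architecture.

```python
from tensorflow.keras import layers, models

# Small CNN for 10-author signature classification; input size is an assumption.
model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),       # preprocessed, binarized signature
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                     # guards against the small training set
    layers.Dense(10, activation="softmax"),  # one class per author
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=20, validation_split=0.1)
```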

ONLINE VOTING USING OTP BY DJANGO- Smart Online Voting System

Our Country, India, Is The Largest Democratic Country In The World, So It Is Essential To Make Sure That The Governing Body Is Elected Through A Fair Election. India Has Only An Offline Voting System, Which Is Not Effective And Up To The Mark, As It Requires A Large Workforce And More Time To Process And Publish The Results. Therefore, To Be Made Effective, The System Needs A Change That Overcomes These Disadvantages. The New Method Does Not Force The Person To Appear Physically To Vote, Which Makes Things Easier. This Paper Focuses On A System Where The User Can Vote Remotely From Anywhere Using His/her Computer Or Mobile Phone, Without Going To The Polling Station, Through Two-step Authentication Using Face Recognition And An OTP System. This Project Also Allows The User To Vote Offline If He/she Feels That Is More Comfortable. The Face Scanning System Is Used To Record The Voter's Face Prior To The Election And Is Useful At The Time Of Voting. The Offline Voting System Is Improvised With The Help Of RFID Tags Instead Of Voter ID. This System Also Enables Citizens To See The Results Anytime, Which Can Avoid Situations That Pave The Way For Vote Tampering.
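
For The OTP Step Of The Two-step Authentication Described Above, A Minimal Sketch Using The pyotp Library (An Assumption; The Project's Actual OTP Mechanism Is Not Specified) Might Look Like This:

```python
import pyotp

# One shared secret per registered voter would be stored server-side at enrolment.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()            # in practice, sent to the voter's phone/email
print("OTP sent to voter:", code)

# Second authentication step: verify the code the voter enters.
assert totp.verify(code)     # valid_window= can allow small clock drift if needed
print("OTP verified; proceed to the face-recognition step.")
```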

FACE RECOGNITION BASED ATTENDANCE SYSTEM- Automated Smart Attendance System Using Face Recognition

In The Human Body, The Face Is The Most Crucial Factor In Identifying Each Person, As It Contains Many Vital Details. There Are Different Prevailing Methods To Capture A Person's Presence, Like Biometrics, But Taking Attendance This Way Is A Time-consuming Process. This Paper Develops A Model To Classify Each Person's Face From A Captured Image Using A Collection Of Rules, I.e., The LBP Algorithm, To Record Student Attendance. LBP (Local Binary Pattern) Is A Popular And Effective Technique Used For Image Representation And Classification, And It Was Chosen For Its Robustness To Pose And Illumination Shifts. The Proposed ASAS (Automated Smart Attendance System) Will Capture The Image And Compare It To The Images Stored In The Database. The Database Is Updated Upon The Enrolment Of The Student Using An Automation Process That Also Includes Name And Roll Number. ASAS Marks Individual Attendance If The Captured Image Matches The Image In The Database, I.e., If Both Images Are Identical. The Proposed Algorithm Reduces The Effort Of Managing Each Student's Day-to-day Records And Also Makes It Simple To Mark The Presence.
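
A Minimal OpenCV Sketch Of The LBP-based Matching Step, Using The LBPH Recognizer That Ships With opencv-contrib-python; The Enrolment Images And Roll Numbers Below Are Placeholders.

```python
import cv2
import numpy as np

# The LBPH recognizer lives in the opencv-contrib-python package.
recognizer = cv2.face.LBPHFaceRecognizer_create()

# Hypothetical enrolment data: grayscale face crops and integer roll numbers.
faces = [np.random.randint(0, 255, (100, 100), dtype=np.uint8) for _ in range(4)]
rolls = np.array([1, 1, 2, 2])
recognizer.train(faces, rolls)

# At attendance time, predict the roll number for a captured face crop.
label, confidence = recognizer.predict(faces[0])
print(f"matched roll number {label} (distance {confidence:.1f}; lower is closer)")
```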

MENTAL HEALTH IDENTIFICATION USING FACE EMOTION RECOGNITION- Facial Expression Recognition And Recommendations Using Deep Neural Network With Transfer Learning

Human Facial Emotion Recognition (FER) Has Attracted The Attention Of The Research Community For Its Promising Applications. Mapping Different Facial Expressions To The Respective Emotional States Is The Main Task In FER. The Classical FER Consists Of Two Major Steps: Feature Extraction And Emotion Recognition. Currently, Deep Neural Networks, Especially The Convolutional Neural Network (CNN), Are Widely Used In FER By Virtue Of Their Inherent Feature Extraction Mechanism From Images. Several Works Have Been Reported On CNNs With Only A Few Layers To Resolve FER Problems. However, Standard Shallow CNNs With Straightforward Learning Schemes Have Limited Feature Extraction Capability To Capture Emotion Information From High-resolution Images. A Notable Drawback Of Most Existing Methods Is That They Consider Only Frontal Images (i.e., They Ignore Profile Views For Convenience), Although Profile Views Taken From Different Angles Are Important For A Practical FER System. For Developing A Highly Accurate FER System, This Study Proposes Very Deep CNN (DCNN) Modeling Through A Transfer Learning (TL) Technique, Where A Pre-trained DCNN Model Is Adopted By Replacing Its Dense Upper Layer(s) With Ones Compatible With FER, And The Model Is Fine-tuned With Facial Emotion Data. A Novel Pipeline Strategy Is Introduced, Where The Training Of The Dense Layer(s) Is Followed By Tuning Each Of The Pre-trained DCNN Blocks Successively, Which Has Led To Gradual Improvement Of FER Accuracy To A Higher Level. The Proposed FER System Is Verified On Eight Different Pre-trained DCNN Models (VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, ResNet-152, Inception-v3 And DenseNet-161) And The Well-known KDEF And JAFFE Facial Image Datasets. FER Is Very Challenging Even For Frontal Views Alone; FER On The KDEF Dataset Poses Further Challenges Due To The Diversity Of Images With Different Profile Views Together With Frontal Views. The Proposed Method Achieved Remarkable Accuracy On Both Datasets With Pre-trained Models. Using 10-fold Cross-validation, The Best Achieved FER Accuracies With DenseNet-161 On The Test Sets Of KDEF And JAFFE Are 96.51% And 99.52%, Respectively. The Evaluation Results Reveal The Superiority Of The Proposed FER System Over Existing Ones Regarding Emotion Detection Accuracy. Moreover, The Achieved Performance On The KDEF Dataset With Profile Views Is Promising, As It Clearly Demonstrates The Proficiency Required For Real-life Applications.
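
A Minimal Keras Sketch Of The Pipeline Idea Described Above, Training A New Dense Head On A Frozen Pre-trained Backbone And Then Successively Unfreezing Blocks; VGG-16 Is Used Here For Brevity, And The Class Count And Hyper-parameters Are Assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Stage 1: freeze the pre-trained backbone and train only the new dense head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(7, activation="softmax"),   # e.g., seven basic emotions
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=5)

# Stage 2 (repeated per block): unfreeze the deepest block, fine-tune at a lower rate.
base.trainable = True
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=5)  # then proceed to block4, block3, ... successively
```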

NON-CONTACT HEART RATE MONITORING USING MACHINE LEARNING

Health Monitoring Is An Important Means To Determine The Health Status Of A Person, And Measuring The Heart Rate Is An Easy Way To Gauge Our Health. The Normal Heart Rate May Vary From Person To Person, And An Unusually High Or Low Resting Heart Rate Can Be A Sign Of Trouble. There Are Several Methods For Heart Rate Monitoring, Such As ECG And PPG. Such Methods Have The Disadvantage Of Requiring Continuous Contact With The Human Body. In Order To Overcome This Problem, A New System Is Proposed Using A Camera. In This Method, A Blind Source Separation Algorithm Is Used For Extracting The Heart Rate Signal From The Face Image. The Viola-Jones Face Detection Algorithm Is Used To Track The Face. The FastICA Algorithm Is Exploited To Separate The Heart Rate Signal From Noise And Artefacts. A Machine Learning Algorithm Is Implemented To Standardize The Signal. The System Is Successfully Tested With Real-time Video.
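
A Minimal Sketch Of The FastICA Step, Assuming Per-frame Mean RGB Traces Of The Face Region Have Already Been Extracted (Random Data Stands In Here); The Component With The Strongest Spectral Peak In A Plausible Heart-rate Band Is Selected.

```python
import numpy as np
from sklearn.decomposition import FastICA

fps = 30.0
# Placeholder traces: mean R, G, B values of the face ROI in each video frame.
rgb_traces = np.random.randn(600, 3)  # 20 seconds of video

# Blind source separation; one component should carry the pulse signal.
ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(rgb_traces)

# Pick the component with the strongest peak in the 0.75-4 Hz (45-240 bpm) band.
freqs = np.fft.rfftfreq(sources.shape[0], d=1.0 / fps)
band = (freqs >= 0.75) & (freqs <= 4.0)
power = np.abs(np.fft.rfft(sources, axis=0)) ** 2
best = int(np.argmax(power[band].max(axis=0)))
bpm = 60.0 * freqs[band][np.argmax(power[band][:, best])]
print(f"estimated heart rate: {bpm:.1f} bpm")
```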

SMART DOOR USING WEBCAM AND FINGERPRINT- Image Processing Technique For Smart Home Security Based On The Principal Component Analysis PCA Methods

Smart Home Is One Application Of The Pervasive Computing Branch Of Science. Smart Homes Fall Into Three Categories, Namely Comfort, Healthcare, And Security. The Security System Is A Very Important Part Of Smart Home Technology Because The Intensity Of Crime Is Increasing, Especially In Residential Areas. The System Will Detect The Face Via The Webcam If The User Enters The Correct Password. Face Recognition Is Processed By The Raspberry Pi 3 Microcontroller With The Principal Component Analysis Method Using OpenCV And Python Software, Whose Outputs Are Actuators In The Form Of A Solenoid Door Lock And A Buzzer. The Test Results Show That The Webcam Can Perform Face Detection When The Password Input Is Successful; The Buzzer Turns On When The Database Does Not Match The Test Data Taken By The Webcam, And The Solenoid Door Lock Operates If The Database Matches The Test Data Taken By The Webcam. The Mean Response Time Of Face Detection Is 1.35 Seconds.

SIGN LANGUAGE RECOGNITION- Real-Time Recognition Of Indian Sign Language

The Real-time Sign Language Recognition System Is Developed For Recognising The Gestures Of Indian Sign Language (ISL). Generally, Sign Languages Consist Of Hand Gestures And Facial Expressions. For Recognising The Signs, The Regions Of Interest (ROI) Are Identified And Tracked Using The Skin Segmentation Feature Of OpenCV. The Training And Prediction Of Hand Gestures Are Performed By Applying The Fuzzy C-means Clustering Machine Learning Algorithm. Gesture Recognition Has Many Applications, Such As Gesture-controlled Robots And Automated Homes, Game Control, Human-Computer Interaction (HCI) And Sign Language Interpretation. The Proposed System Is Used To Recognize Signs In Real Time; Hence It Is Very Useful For Hearing- And Speech-impaired People To Communicate With Other People.

FACE RECOGNITION- Face Detection And Recognition System Using Digital Image Processing

While Recognizing Any Individual, The Most Important Attribute Is The Face. It Serves As Everyone's Individual Identity, And Therefore Face Recognition Helps In Authenticating A Person's Identity Using Their Personal Characteristics. The Whole Procedure For Authenticating Any Face Data Is Sub-divided Into Two Phases: In The First Phase, Face Detection Is Done Quickly, Except In Those Cases Where The Object Is Placed Quite Far Away; Following This, The Second Phase Is Initiated, In Which The Face Is Recognized As That Of A Specific Individual. The Whole Process Is Then Repeated, Thereby Helping To Develop A Face Recognition Model, Which Is Considered One Of The Most Extensively Studied Biometric Technologies. Basically, There Are Two Types Of Techniques Currently Followed In Face Recognition: The Eigenface Method And The Fisherface Method. The Eigenface Method Makes Use Of PCA (Principal Component Analysis) To Minimize The Dimensional Space Of The Facial Features. The Area Of Concern Of This Paper Is Using Digital Image Processing To Develop A Face Recognition System.
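
A Minimal Scikit-learn Sketch Of The Eigenface Idea, Using PCA To Reduce The Face Dimensional Space Before A Simple Nearest-neighbour Match; The Olivetti Faces Dataset Is Used Only As A Stand-in.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Olivetti faces: 400 images of 40 subjects (downloads on first use).
data = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, stratify=data.target, random_state=0)

# Eigenface step: PCA minimizes the dimensional space of the facial features.
pca = PCA(n_components=80, whiten=True).fit(X_train)
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)
print("recognition accuracy:", clf.score(pca.transform(X_test), y_test))
```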

FACE EXPRESSION RECOGNITION SYSTEM- Facial Expression Recognition With Convolutional Neural Networks

Emotions Are A Powerful Tool In Communication, And One Way That Humans Show Their Emotions Is Through Their Facial Expressions. Facial Expression Recognition Is One Of The Challenging And Powerful Tasks In Social Communication, As Facial Expressions Are Key In Non-verbal Communication. In The Field Of Artificial Intelligence, Facial Expression Recognition (FER) Is An Active Research Area, With Several Recent Studies Using Convolutional Neural Networks (CNNs). In This Paper, We Demonstrate The Classification Of FER Based On Static Images Using CNNs, Without Requiring Any Pre-processing Or Feature Extraction Tasks. The Paper Also Illustrates Techniques To Improve Future Accuracy In This Area By Using Pre-processing, Which Includes Face Detection And Illumination Correction. Feature Extraction Is Used To Extract The Most Prominent Parts Of The Face, Including The Jaw, Mouth, Eyes, Nose, And Eyebrows. Furthermore, We Also Discuss The Literature Review, Present Our CNN Architecture, And Describe The Challenges Of Using Max-pooling And Dropout, Which Eventually Aided In Better Performance. We Obtained A Test Accuracy Of 61.7% On FER2013 In A Seven-class Classification Task, Compared To 75.2% For State-of-the-art Classification.

Deep Learning And Audio Based Emotion Recognition

While The Tightening And Expansion Of Our Facial Muscles Cause Changes Called Facial Expressions As A Reaction To The Different Kinds Of Emotional Situations In Our Brain, There Are Similarly Physiological Changes Such As Tone, Loudness, Rhythm And Intonation In Our Voice. These Visual And Auditory Changes Have Great Importance For Human-human Interaction, Human-machine Interaction And Human-computer Interaction, As They Include Critical Information About Humans' Emotional Situations. Automatic Emotion Recognition Systems Are Defined As Systems That Can Analyze An Individual's Emotional Situation By Using This Distinctive Information. In This Study, An Automatic Emotion Recognition System In Which Auditory Information Is Analyzed And Classified In Order To Recognize Human Emotions Is Proposed. In The Study, Spectral Features And MFCC Coefficients, Which Are Commonly Used For Feature Extraction From Voice Signals, Are First Used, And Then The Deep Learning-based LSTM Algorithm Is Used For Classification. The Suggested Algorithm Is Evaluated Using Three Different Audio Data Sets (SAVEE, RAVDESS And RML).
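
A Minimal Sketch Of The MFCC-plus-LSTM Pipeline Using librosa And Keras; The File Name, MFCC Count And Class Count Are Assumptions.

```python
import librosa
from tensorflow.keras import layers, models

# Feature extraction: MFCCs from a (hypothetical) utterance file.
y, sr = librosa.load("utterance.wav", sr=None)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40).T  # shape: (time_steps, 40)

# LSTM classifier over the MFCC sequence; 7 emotion classes is an assumption.
model = models.Sequential([
    layers.Input(shape=(None, 40)),   # variable-length sequences
    layers.LSTM(128),
    layers.Dense(64, activation="relu"),
    layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(padded_mfcc_batches, emotion_labels, epochs=30)
```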

AN END-TO-END OPTICAL CHARACTER RECOGNITION PIPELINE FOR INDONESIAN IDENTITY CARD

Exponential Growth In The Generation Of Fake ID Cards Leads To An Increased Tendency Of Forgery, With Severe Security And Privacy Threats. University ID Cards Are Used To Authenticate Actual Employees And Students Of The University. Manual Examination Of ID Cards Is A Laborious Activity; Therefore, In This Paper, We Propose An Effective Automated Method For Employee/student Authentication Based On Analyzing The Cards. Additionally, Our Method Also Identifies The Department Of The Concerned Employee/student. For This Purpose, We Employ Different Image Enhancement And Morphological Operators To Make The Appearance Of The Input Image Better Suited For Recognition. More Specifically, We Employ Median Filtering To Remove Noise From The Given Input Image.

STONE INSCRIPTION IDENTIFICATION USING OPTICAL CHARACTER RECOGNITION- A Survey On Ancient Marathi Script Recognition From Stone Inscriptions

Rigorous Research Has Been Done On Ancient Indian Script Character Recognition, And Many Research Articles Have Been Published In The Last Few Decades. A Number Of OCR Techniques Are Available In The Market, But They Are Not Useful For Ancient Script Recognition, And More Research Work Is Required To Recognize Ancient Marathi Scripts. This Paper Presents Different Techniques Published By Different Researchers To Recognize Ancient Scripts. Challenges In The Recognition Of Ancient Marathi Scripts Are Also Discussed In This Paper.

LIE OR TRUTH DETECTION USING VGG-16

Deep Neural Networks Achieve The Best Classification Accuracy On Videos. However, Traditional Methods Or Shallow Architectures Remain Competitive, And Combinations Of Different Network Types Are The Usual Chosen Approach. One Reason For The Smaller Impact Of Deep Methods On Video Recognition Is The Motion Representation. Time Has A Stronger Redundancy And An Important Elasticity Compared To The Spatial Dimensions. The Temporal Redundancy Is Evident, But The Elasticity Within An Action Class Is Far Less Considered: Several Instances Of The Same Action Still Differ Widely In Their Style And Speed Of Execution.

ROAD ACCIDENT DETECTION USING VIDEO CLASSIFICATION- A Novel Approach For Road Accident Detection Using DETR Algorithm

Road Accidents Are Man-made Cataclysmic Phenomena And Are Not Generally Predictable. With Increasing Numbers Of Deaths Due To Accidents On The Roadways, A Smart And Fast Detection System For Road Accidents Is The Need Of The Hour. Often, The Precious Few Seconds After An Accident Make The Difference Between Life And Death. To Address This Problem More Efficiently, “A Novel Approach For Road Accident Detection In CCTV Videos Using DETR Algorithm” Has Been Developed To Aid In Notifying Hospitals And The Local Police At Places Where Instant Notification Is Seldom Feasible. This Paper Presents A Novel And Efficient Method For Detecting Road Accidents With DETR (Detection Transformers) And A Random Forest Classifier. Objects Such As Cars, Bikes, People, Etc. In The CCTV Footage Are Detected Using DETR, And The Features Are Fed To A Random Forest Classifier For Frame-wise Classification. Each Frame Of The Video Is Classified As An Accident Frame Or A Non-accident Frame. The Total Count Of Predicted Accident Frames From Any 60 Continuous Frames Of The Video Is Considered Using A Sliding Window Technique Before The Final Decision Is Made. Simulation Results Show That The Proposed System Achieves A 78.2% Detection Rate In CCTV Videos.
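
The Sliding-window Decision Described Above Can Be Sketched Independently Of The DETR And Random-forest Stages; In The Toy Snippet Below, The Per-frame Classifier Outputs Are Simulated, And The 30-frame Threshold Is Illustrative, Not The Paper's Value.

```python
from collections import deque

def accident_alarm(frame_flags, window=60, threshold=30):
    """Raise an alarm when enough of the last `window` frames were classified
    as accident frames by the per-frame classifier. The threshold of 30 is an
    illustrative choice, not taken from the paper."""
    recent = deque(maxlen=window)
    for i, flag in enumerate(frame_flags):
        recent.append(flag)
        if len(recent) == window and sum(recent) >= threshold:
            return i  # frame index at which the final decision fires
    return None

# Toy stream: 100 normal frames, then a burst of accident frames.
stream = [0] * 100 + [1] * 60
print("alarm at frame:", accident_alarm(stream))
```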

SPORT ACTION CLASSIFICATION USING GAN- Improving Human Pose Estimation With Self-Attention Generative Adversarial Networks

Human Pose Estimation In Images Is Challenging And Important For Many Computer Vision Applications. Large Improvements In Human Pose Estimation Have Been Achieved With The Development Of Convolutional Neural Networks. Even So, In Some Difficult Cases Even The State-of-the-art Models May Fail To Predict All The Body Joints Correctly. Some Recent Works Try To Refine The Pose Estimator. GAN (Generative Adversarial Networks) Has Been Proven Effective At Improving Human Pose Estimation. However, GAN Can Only Learn Local Body Joint Structural Constraints. In This Paper, We Propose To Apply A Self-Attention GAN To Further Improve The Performance Of Human Pose Estimation. With An Attention Mechanism In The GAN Framework, We Can Learn Long-range Body Joint Dependencies And Therefore Enforce Structural Constraints Over The Entire Body To Make All The Body Joints Consistent. Our Method Outperforms Other State-of-the-art Methods On Two Standard Benchmark Datasets, MPII And LSP, For Human Pose Estimation.

SPEECH EMOTION RECOGNITION- Speech Emotion Recognition With Multiscale Area Attention And Data Augmentation

In Speech Emotion Recognition (SER), Emotional Characteristics Often Appear In Diverse Forms Of Energy Patterns In Spectrograms. Typical Attention Neural Network Classifiers For SER Are Usually Optimized At A Fixed Attention Granularity. In This Paper, We Apply Multiscale Area Attention In A Deep Convolutional Neural Network To Attend To Emotional Characteristics With Varied Granularities, So That The Classifier Can Benefit From An Ensemble Of Attentions With Different Scales. To Deal With Data Sparsity, We Conduct Data Augmentation With Vocal Tract Length Perturbation (VTLP) To Improve The Generalization Capability Of The Classifier. Experiments Are Carried Out On The Interactive Emotional Dyadic Motion Capture (IEMOCAP) Dataset. We Achieved 79.34% Weighted Accuracy (WA) And 77.54% Unweighted Accuracy (UA), Which, To The Best Of Our Knowledge, Is The State Of The Art On This Dataset.

AUDIO OF SPEAKER RECOGNITION- Automatic Speaker Recognition System Based On Machine Learning Algorithms

Speaker Recognition Is A Technique Used To Automatically Recognize A Speaker From A Recording Of Their Voice Or Speech Utterance. Speaker Recognition Technology Has Improved Over Recent Years And Has Become An Inexpensive And Reliable Method For Person Identification And Verification. Research In The Field Of Speaker Recognition Has Now Spanned Over Five Decades And Has Shown Fruitful Results; However, There Is Not Much Work With Regard To South African Indigenous Languages. This Paper Presents The Development Of An Automatic Speaker Recognition System That Incorporates Classification And Recognition Of Sepedi Home Language Speakers. Four Classifier Models, Namely Support Vector Machines, K-Nearest Neighbors, Multilayer Perceptrons (MLP) And Random Forest (RF), Are Trained Using The WEKA Data Mining Tool. Auto-WEKA Is Applied To Determine The Best Classifier Model Together With Its Best Hyper-parameters. The Performance Of Each Model Is Evaluated In WEKA Using 10-fold Cross-validation. MLP And RF Yielded Good Accuracy, Surpassing The State Of The Art With Accuracies Of 97% And 99.9% Respectively; The RF Model Is Then Implemented In A Graphical User Interface For Development Testing.

MUSIC GENRE CLASSIFICATION USING DEEP LEARNING- Music Genre Classification Using Deep Learning And Neural Network Algorithms

Categorizing Music Files According To Their Genre Is A Challenging Task In The Area Of Music Information Retrieval (MIR). In This Study, We Compare The Performance Of Two Classes Of Models. The First Is A Deep Learning Approach Wherein A CNN Model Is Trained End-to-end To Predict The Genre Label Of An Audio Signal, Solely Using Its Spectrogram. The Second Approach Utilizes Hand-crafted Features, Both From The Time Domain And The Frequency Domain. We Train Four Traditional Machine Learning Classifiers With These Features And Compare Their Performance. The Features That Contribute The Most Towards This Multi-class Classification Task Are Identified. The Experiments Are Conducted On The AudioSet Dataset, And We Report An AUC Value Of 0.894 For An Ensemble Classifier Which Combines The Two Proposed Approaches.
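
A Minimal Sketch Of The Hand-crafted Feature Branch, Extracting Time- And Frequency-domain Descriptors With librosa For A Traditional Classifier; The File List And Classifier Choice Are Assumptions.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def handcrafted_features(path):
    """Time- and frequency-domain descriptors for one audio clip."""
    y, sr = librosa.load(path, sr=22050, duration=30)
    return np.hstack([
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1),  # frequency domain
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),
        librosa.feature.zero_crossing_rate(y).mean(),              # time domain
    ])

# Hypothetical file list and labels; one integer label per genre.
# X = np.vstack([handcrafted_features(p) for p in clip_paths])
# clf = RandomForestClassifier(n_estimators=300).fit(X, genre_labels)
```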

SPEECH TO TEXT CONVERSION USING PYTHON- Design Of Voice To Text Conversion And Management Program Based On Google Cloud Speech API

Sexual Crime, Including Sexual Harassment And Sexual Assault, Is Prevalent. In Particular, The Number Of Reported Cases Of Sexual Crimes Occurring In The Workplace Is Steadily Increasing. Victims Of Sexual Crime Are Required To Prove The Fact Of The Damage, But It Is Not Easy To Provide Evidence, So Sex Offenders Often Go Unpunished Because Of Insufficient Evidence. In This Paper, We Design A Recording Service Called CCVoice. It Uses Mobile Devices To Record Everyday Life. At The Same Time, It Converts The Recorded File To Text Using The Google Cloud Speech API And Saves The Text File. Therefore, It Is Possible To Easily Obtain Voice Evidence When A User Suddenly Suffers Sexual Abuse Such As Sexual Harassment Or Sexual Assault.
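
A Minimal Sketch Of The Transcription Step Using The google-cloud-speech Client Library; It Assumes GCP Credentials Are Configured, And The File Names Are Placeholders.

```python
from google.cloud import speech  # pip install google-cloud-speech

client = speech.SpeechClient()

with open("recording.wav", "rb") as f:       # hypothetical recorded file
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
with open("recording.txt", "w") as out:      # save the transcript as a text file
    for result in response.results:
        out.write(result.alternatives[0].transcript + "\n")
```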

BIG DATA ANALYSIS OF UK ACCIDENT DATASET- Data Analytics Factors Of Traffic Accidents In The UK

  • Domain: PYSPARK
  • Category: PYTHON
  • Year: 2022
  • Project Code: PYPYS2101

The Traffic And Accident Datasets For This Research Are Sourced From Data.gov.uk. The Data Analytics In This Paper Comprises Three Levels, Namely: Descriptive Statistical Analysis; Inferential Statistical Analysis; And Machine Learning. The Aim Of The Data Analytics Is To Explore The Factors That Could Have An Impact On The Number Of Accidents And Their Associated Fatalities. Some Of The Factors Investigated Are: Time Of The Day, Day Of The Week, Month Of The Year, Speed Limits, Etc. Machine Learning Approaches Have Also Been Employed To Predict The Types Of Accident Severity.
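
A Minimal PySpark Sketch Of The Descriptive Level Of This Analysis; The CSV Path And Column Names Are Assumptions About The Data.gov.uk Export.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("uk-accident-analytics").getOrCreate()

# Column names below are assumptions about the exported schema.
df = spark.read.csv("accidents.csv", header=True, inferSchema=True)

# Descriptive level: accident counts by day of week and by speed limit.
df.groupBy("Day_of_Week").count().orderBy("Day_of_Week").show()
df.groupBy("Speed_limit").agg(
    F.count("*").alias("accidents"),
    F.avg("Number_of_Casualties").alias("avg_casualties"),
).orderBy(F.desc("accidents")).show()
```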

A PRACTICAL INTERACTIVE CHESS BOARD WITH AUTOMATIC MOVEMENT CONTROL

The Interactive Chess Board Game Is Unlike Ordinary Games: The Board, Together With Tangible Movements Of All Pieces, Is Intended To Be A User Attraction. Therefore, A New Chessboard With An Automatic Moving Mechanism For Every Piece Is Chosen. Initially, We Designed And Developed An Aluminum Core Structure For Positioning Along The X And Y Axes. Furthermore, A Controllable Magnet Is Used For Holding And Moving An Individual Chess Piece According To Player Manipulations. The Purpose Of This Interactive Chess Board Is To Apply Technology To The Board Game For Excitement, Interest, Amazement, And Attraction. An Arduino Microcontroller Is Used For Controlling Every Step Of Piece Movement. The Microcontroller Receives Control Information Through The User Interface And Then Moves The Chess Piece To The Destination On The Board. Position Calculation Is Used To Identify The Chess Piece And Accurately Drive The Stepper Motors In The X And Y Axes.

SNAKE GAME- Solving The Classic Snake Game Using AI

In This Paper We Survey Various Papers Based On The Classic Snake Game And Compare Their Various Traits And Features. We Introduce An AI Bot To Enhance The Skills Of The Player; The AI Bot Uses The Algorithms Discussed Further In This Paper. The Player Can Follow The Simultaneously Running AI Bot To Play The Game Effectively. For The Classic Snake Game, We Present Different Algorithms And Methods For The AI Bot. These Include Three Searching Algorithms Related To Artificial Intelligence, Best First Search, A* Search And Improved A* Search With Forward Checking, And Two Baseline Methods, Random Move And Almighty Move.
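
A Minimal Sketch Of The A* Search Used By Such A Bot, Finding The Shortest Route To The Food While Avoiding The Snake's Body; The Grid And Blocked Cells Are A Toy Example.

```python
import heapq

def a_star(start, goal, blocked, width, height):
    """Shortest path on the snake grid, avoiding the snake's body cells."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in blocked:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no safe route to the food

print(a_star((0, 0), (3, 3), blocked={(1, 1), (2, 2)}, width=5, height=5))
```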

Property Selling Website Using Django Framework

In A Generation Led By Millennials, Technologies Are Becoming Redundant Each Year. Organizations Are Competing On A Global Scale, And Newer, More Innovative Strategies Are Being Introduced In The Field Of Marketing To Reach Out To Potential Buyers. Real Estate, Being One Of The Biggest Business Sectors, Needs More Efficiently Targeted Marketing Campaigns, As This Is A Very Niche And Unexplored Field In The Indian Scenario. Real Estate Projects Are Highly Priced Products Which Cannot Be Sold Efficiently Without A Well-strategized Marketing Campaign That Reaches Out To The Exact Targeted Market. The Unsold Inventories In Various Metro Cities Range From 15-60%. The Real Estate Sector Growth Trends Stipulated By The Government Do Not Go Hand In Hand With The On-ground Realities Of The Piled-up Inventories. Marketing Strategies Have Not Evolved With The Digitization Boom, And Real Estate Marketing Techniques Remain Conventional And Financially Heavy. Through This Study, Efforts Have Been Made To Pinpoint The Existing Supply-demand Problems In The Cities Of Ahmedabad And Mumbai, Specifically In The Affordable And HIG Housing Sector. Suitable Solutions For Marketing Campaigns Have Also Been Proposed Considering The Current Market Realities Of Both Cities.

EYE BLINK DETECTION- Eye Blink Detection Using OpenCV Computer Vision

The Basic Nonverbal Interaction That Is Now Evolving In The Upcoming Generation Is Eye Gaze. This Eye Blink System Builds A Bridge For Communication For People Affected By Disabilities. The Operation Is Simple: The User Blinks At Control Keys That Are Built Into The Screen. Such A System Can Synthesize Speech, Control The User's Environment, And Give A Major Boost To The Individual's Confidence. Our Paper Mainly Implements A Virtual Keyboard That Not Only Has Built-in Phrases But Can Also Provide Voice Notification/speech Assistance For People Who Are Speech Disabled. To Achieve This, We Have Used The Built-in PC/laptop Camera, Which Recognizes The Face And The Parts Of The Face, Making The Process Of Detecting The Face Much Easier. The Eye Blink Serves As The Alternative To A Mouse Click On The Virtual Interface. As Already Mentioned, Our Ultimate Goal Is To Provide Nonverbal Communication So That Physically Disabled People Get A Mode Of Communication Along With A Voice Assistant. This Type Of Innovation Is A Golden Fortune For People Who Have Lost Their Voice Or Are Affected By Paralytic Disorders. We Have Further Explained Each Stage With The Respective Flowcharts.
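
A Common Way To Detect Blinks From Facial Landmarks Is The Eye Aspect Ratio (EAR); The Sketch Below Assumes Six Eye Landmarks From A Detector Such As dlib's 68-point Predictor (An Assumption; The Paper Does Not Name Its Detector).

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, e.g. from dlib's 68-point
    shape predictor (an assumption; any landmark detector works)."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

# The EAR drops sharply when the eye closes; a dip below a threshold
# (commonly around 0.2) for a few consecutive frames counts as a blink,
# which can then stand in for a mouse click on the virtual keyboard.
open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
print("EAR:", eye_aspect_ratio(open_eye))
```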

CYBER THREAT DETECTION USING MACHINE LEARNING TECHNIQUES A PERFORMANCE EVALUATION PERSPECTIVE

The Present-day World Has Become Dependent On Cyberspace For Every Aspect Of Daily Living, And The Use Of Cyberspace Is Rising With Each Passing Day. The World Is Spending More Time On The Internet Than Ever Before. As A Result, The Risks Of Cyber Threats And Cybercrimes Are Increasing. The Term 'cyber Threat' Refers To Illegal Activity Performed Using The Internet. Cybercriminals Are Changing Their Techniques With Time To Pass Through The Wall Of Protection, And Conventional Techniques Are Not Capable Of Detecting Zero-day And Sophisticated Attacks. Thus Far, Numerous Machine Learning Techniques Have Been Developed To Detect Cybercrimes And Battle Cyber Threats. The Objective Of This Research Work Is To Present An Evaluation Of Some Of The Widely Used Machine Learning Techniques For Detecting Some Of The Most Threatening Cyber Attacks On Cyberspace. Three Primary Machine Learning Techniques Are Investigated: Deep Belief Network, Decision Tree And Support Vector Machine. We Present A Brief Exploration To Gauge The Performance Of These Machine Learning Techniques In Spam Detection, Intrusion Detection And Malware Detection Based On Frequently Used Benchmark Datasets.

DATA MANAGEMENT SERVICE

DECENTRALIZED MACHINE LEARNING AND APPLICATIONS

The Need For A Method To Create A Collaborative Machine Learning Model Which Can Utilize Data From Different Clients, Each With Privacy Constraints, Has Recently Emerged. This Is Due To Privacy Restrictions, Such As The General Data Protection Regulation, Together With The Fact That Machine Learning Models In General Need Large Amounts Of Data To Perform Well. Google Introduced Federated Learning In 2016 With The Aim Of Addressing This Problem. Federated Learning Can Further Be Divided Into Horizontal And Vertical Federated Learning, Depending On How The Data Is Structured At The Different Clients. Vertical Federated Learning Is Applicable When Many Different Features Are Obtained On Distributed Computation Nodes But Cannot Be Shared Between Them. The Aim Of This Thesis Is To Identify The Current State-of-the-art Methods In Vertical Federated Learning, Implement The Most Interesting Ones And Compare The Results In Order To Draw Conclusions About The Benefits And Drawbacks Of The Different Methods. From The Results Of The Experiments, A Method Called FedBCD Shows Very Promising Results: It Achieves Massive Improvements In The Number Of Communication Rounds Needed For Convergence, At The Cost Of More Computation At The Clients. A Comparison Between Synchronous And Asynchronous Approaches Shows Slightly Better Results For The Synchronous Approach In Scenarios With No Delay. Delay Refers To Slower Performance In One Of The Workers, Either Due To Lower Computational Resources Or Due To Communication Issues.

COVID-19 PANDEMIC- A Systematic Review On The Use Of AI And ML For Fighting The COVID-19 Pandemic

Artificial Intelligence (AI) And Machine Learning (ML) Have Caused A Paradigm Shift In Healthcare That Can Be Used For Decision Support And Forecasting By Exploring Medical Data. Recent Studies Have Shown That AI And ML Can Be Used To Fight COVID-19. The Objective Of This Article Is To Summarize The Recent AI- And ML-based Studies That Have Addressed The Pandemic. From An Initial Set Of 634 Articles, A Total Of 49 Articles Were Finally Selected Through An Inclusion-exclusion Process. In This Article, We Have Explored The Objectives Of The Existing Studies (i.e., The Role Of AI/ML In Fighting The COVID-19 Pandemic); The Context Of The Studies (i.e., Whether They Focused On A Specific Country Context Or Took A Global Perspective); The Type And Volume Of The Datasets; And The Methodology, Algorithms, And Techniques Adopted In The Prediction Or Diagnosis Processes. We Have Mapped The Algorithms And Techniques To The Data Types By Highlighting Their Prediction/classification Accuracy. From Our Analysis, We Categorized The Objectives Of The Studies Into Four Groups: Disease Detection, Epidemic Forecasting, Sustainable Development, And Disease Diagnosis. We Observed That Most Of These Studies Used Deep Learning Algorithms On Image Data, More Specifically On Chest X-rays And CT Scans. We Have Identified Six Future Research Opportunities That We Have Summarized In This Paper.

FACE EXPRESSION RECOGNITION- Automatic Detection Of Pain From Facial Expressions: A Survey

Pain Sensation Is Essential For Survival, Since It Draws Attention To Physical Threat To The Body. Pain Assessment Is Usually Done Through Self-reports. However, Self-assessment Of Pain Is Not Available In The Case Of Noncommunicative Patients, And Therefore, Observer Reports Should Be Relied Upon. Observer Reports Of Pain Could Be Prone To Errors Due To Subjective Biases Of Observers. Moreover, Continuous Monitoring By Humans Is Impractical. Therefore, Automatic Pain Detection Technology Could Be Deployed To Assist Human Caregivers And Complement Their Service, Thereby Improving The Quality Of Pain Management, Especially For Noncommunicative Patients. Facial Expressions Are A Reliable Indicator Of Pain, And Are Used In All Observer-based Pain Assessment Tools. Following The Advancements In Automatic Facial Expression Analysis, Computer Vision Researchers Have Tried To Use This Technology For Developing Approaches For Automatically Detecting Pain From Facial Expressions. This Paper Surveys The Literature Published In This Field Over The Past Decade, Categorizes It, And Identifies Future Research Directions. The Survey Covers The Pain Datasets Used In The Reviewed Literature, The Learning Tasks Targeted By The Approaches, The Features Extracted From Images And Image Sequences To Represent Pain-related Information, And Finally, The Machine Learning Methods Used.

CREDIT CARD FRAUDULENT TRANSACTION DETECTION- Supervised Machine Learning Algorithm For Credit Card Fraudulent Transaction Detection

The Goal Of Data Analytics Is To Delineate Hidden Patterns And Use Them To Support Informed Decisions In A Variety Of Situations. Credit Card Fraud Is Escalating Significantly With The Advancement Of Modern Technology And Has Become An Easy Target For Fraudsters. Credit Card Fraud Is A Severe Problem In Financial Services And Costs Billions Of Dollars Every Year. The Design Of A Fraud Detection Algorithm Is A Challenging Task, Given The Lack Of Real-world Transaction Datasets Due To Confidentiality And The Highly Imbalanced Publicly Available Datasets. In This Paper, We Apply Different Supervised Machine Learning Algorithms To Detect Credit Card Fraudulent Transactions Using A Real-world Dataset. Furthermore, We Employ These Algorithms To Implement A Super Classifier Using Ensemble Learning Methods. We Identify The Most Important Variables That May Lead To Higher Accuracy In Credit Card Fraudulent Transaction Detection. Additionally, We Compare And Discuss The Performance Of Various Supervised Machine Learning Algorithms That Exist In The Literature Against The Super Classifier That We Implemented In This Paper.
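
A Minimal Scikit-learn Sketch Of The Super-classifier Idea, Soft-voting Over Individually Trained Models On Imbalanced Placeholder Data; The Estimator Choices Are Assumptions, Not The Paper's Exact Ensemble.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Highly imbalanced placeholder data mimicking fraud (about 1% positive class).
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# "Super classifier": soft-voting ensemble of individually trained models.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000, class_weight="balanced")),
        ("rf", RandomForestClassifier(n_estimators=200, class_weight="balanced")),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te), digits=3))
```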

EHEALTHCARE RECOMMENDATION SCHEME- PPMR: A Privacy-preserving Online Medical Service Recommendation Scheme In EHealthcare System

With The Continuous Development Of EHealthcare Systems, Medical Service Recommendation Has Received Great Attention. However, Although Such Systems Can Recommend Doctors To Users, There Are Still Challenges In Ensuring The Accuracy And Privacy Of Recommendations. In This Paper, To Ensure The Accuracy Of The Recommendation, We Consider Doctors' Reputation Scores And The Similarities Between Users' Demands And Doctors' Information As The Basis Of The Medical Service Recommendation. The Doctors' Reputation Scores Are Measured By Multiple Feedbacks From Users. We Propose Two Concrete Algorithms To Compute The Similarity And The Reputation Scores In A Privacy-preserving Way Based On The Modified Paillier Cryptosystem, Truth Discovery Technology, And The Dirichlet Distribution. Detailed Security Analysis Is Given To Show Its Security Properties. In Addition, Extensive Experiments Demonstrate The Efficiency In Terms Of Computational Time For Truth Discovery And The Recommendation Process.
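
As A Hedged Illustration Of The Additive Homomorphism That Paillier-style Schemes Provide, The Following Sketch Uses The python-paillier (phe) Library To Aggregate Encrypted Feedback Scores; This Shows Plain Paillier, Not The Paper's Modified Construction.

```python
from phe import paillier  # pip install phe (python-paillier)

pub, priv = paillier.generate_paillier_keypair()

# Users submit encrypted feedback scores; the aggregator sums them
# without ever seeing individual values (additive homomorphism).
feedback = [4, 5, 3, 5]
encrypted = [pub.encrypt(s) for s in feedback]
encrypted_sum = sum(encrypted[1:], encrypted[0])

# Only the key holder can decrypt the aggregate reputation score.
print("mean reputation score:", priv.decrypt(encrypted_sum) / len(feedback))
```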

FAULT TOLERANT DATA PROCESSING IN HEALTHCARE- Adaptive And Fault-tolerant Data Processing In Healthcare IoT Based On Fog Computing Networks

In Recent Years, Healthcare IoT Systems Have Been Helpful In Mitigating, To A Large Extent, The Pressure On Hospital And Medical Resources Caused By The Aging Population. As A Safety-critical System, A Rapid Response From The Health Care System Is Extremely Important. To Fulfill The Low-latency Requirement, Fog Computing Is A Competitive Solution, Deploying Healthcare IoT Devices On The Edge Of Clouds. However, These Fog Devices Generate A Huge Amount Of Sensor Data. Designing A Specific Framework For Fog Devices To Ensure Reliable Data Transmission And Rapid Data Processing Thus Becomes A Topic Of Utmost Significance. In This Paper, A Reduced Variable Neighborhood Search (RVNS)-based SEnsor Data Processing Framework (REDPF) Is Proposed To Enhance The Reliability Of Data Transmission And The Processing Speed. The Functionalities Of REDPF Include Fault-tolerant Data Transmission, Self-adaptive Filtering And Data-load-reduction Processing. Specifically, A Reliable Transmission Mechanism, Managed By A Self-adaptive Filter, Will Recollect Lost Or Inaccurate Data Automatically. Then, A New Scheme Is Designed To Evaluate The Health Status Of Elderly People. Through Extensive Simulations, We Show That Our Proposed Scheme Improves Network Reliability And Provides A Faster Processing Speed.

REMOTE AUTHENTICATION SCHEMES- Remote Authentication Schemes For Wireless Body Area Networks Based On The Internet Of Things

Internet Of Things (IoT) Is A New Technology Which Offers Enormous Applications That Make People's Lives More Convenient And Enhance Cities' Development. In Particular, Smart Healthcare Applications In IoT Have Been Receiving Increasing Attention In Industrial And Academic Research. However, Due To The Sensitiveness Of Medical Information, Security And Privacy Issues In IoT Healthcare Systems Are Very Important. Designing An Efficient Secure Scheme With Less Computation Time And Energy Consumption Is A Critical Challenge In IoT Healthcare Systems. In This Paper, A Lightweight Online/offline Certificateless Signature (L-OOCLS) Is Proposed, Then A Heterogeneous Remote Anonymous Authentication Protocol (HRAAP) Is Designed To Enable Remote Wireless Body Area Network (WBAN) Users To Anonymously Enjoy Healthcare Services Based On IoT Applications. The Proposed L-OOCLS Scheme Is Proven Secure In The Random Oracle Model, And The Proposed HRAAP Can Resist Various Types Of Attacks. Compared With The Existing Relevant Schemes, The Proposed HRAAP Achieves Less Computation Overhead As Well As Less Power Consumption On The WBAN Client. In Addition, An Application Scenario Is Given To Show How The Scheme Fits Applications In The IoT.

ATTRIBUTE BASED SIGNATURE- Outsourced Decentralized Multi-Authority Attribute Based Signature And Its Application In Iot

IoT (Internet Of Things) Devices Often Collect Data And Store The Data In The Cloud For Sharing And Further Processing; This Collection, Sharing, And Processing Will Inevitably Encounter Secure Access And Authentication Issues. Attribute Based Signature (ABS), Which Utilizes The Signer’s Attributes To Generate Private Keys, Plays A Competent Role In Data Authentication And Identity Privacy Preservation. In ABS, There Are Multiple Authorities That Issue Different Private Keys For Signers Based On Their Various Attributes, And A Central Authority Is Usually Established To Manage All These Attribute Authorities. However, One Security Concern Is That If The Central Authority Is Compromised, The Whole System Will Be Broken. In This Paper, We Present An Outsourced Decentralized Multi-authority Attribute Based Signature (ODMA-ABS) Scheme. The Proposed ODMA-ABS Achieves Attribute Privacy And Stronger Authority-corruption Resistance Than Existing Multi-authority Attribute Based Signature Schemes Can Achieve. In Addition, The Overhead To Generate A Signature Is Further Reduced By Outsourcing Expensive Computation To A Signing Cloud Server. We Present Extensive Security Analysis And Experimental Simulation Of The Proposed Scheme. We Also Propose An Access Control Scheme That Is Based On ODMA-ABS.

TO ENGAGE STUDENTS IN SOCIAL MEDIA NETWORKS- Creating And Using Digital Games For Learning In Elementary And Secondary Education

The Use Of Digital Games In Education Has Gained Considerable Popularity In The Last Years Due To The Fact That These Games Are Considered To Be Excellent Tools For Teaching And Learning And Offer To Students An Engaging And Interesting Way Of Participating And Learning. In This Study, The Design And Implementation Of Educational Activities That Include Game Creation And Use In Elementary And Secondary Education Is Presented. The Proposed Educational Activities’ Content Covers The Parts Of The Curricula Of All The Informatics Courses, For Each Education Level Separately, That Include The Learning Of Programming Principles. The Educational Activities Were Implemented And Evaluated By Teachers Through A Discussion Session. The Findings Indicate That The Teachers Think That Learning Through Creating And Using Games Is More Interesting And That They Also Like The Idea Of Using Various Programming Environments To Create Games In Order To Teach Basic Programming Principles To Students.

DRIMUX Dynamic Rumor Influence Minimization With User Experience In Social Networks

With The Soaring Development Of Large-scale Online Social Networks, Online Information Sharing Is Becoming Ubiquitous. Various Information, Both Positive And Negative, Is Propagating Through Online Social Networks. In This Paper, We Focus On Negative Information Problems Such As Online Rumors. Rumor Blocking Is A Serious Problem In Large-scale Social Networks: Malicious Rumors Could Cause Chaos In Society And Hence Need To Be Blocked As Soon As Possible After Being Detected. In This Paper, We Propose A Model Of Dynamic Rumor Influence Minimization With User Experience (DRIMUX). Our Goal Is To Minimize The Influence Of The Rumor (i.e., The Number Of Users That Have Accepted And Sent The Rumor) By Blocking A Certain Subset Of Nodes. A Dynamic Ising Propagation Model Considering Both The Global Popularity And Individual Attraction Of The Rumor Is Presented Based On A Realistic Scenario. In Addition, Different From Existing Problems Of Influence Minimization, We Take Into Account The Constraint Of User Experience Utility. Specifically, Each Node Is Assigned A Tolerance Time Threshold; If The Blocking Time Of Any User Exceeds That Threshold, The Utility Of The Network Will Decrease. Under This Constraint, We Then Formulate The Problem As A Network Inference Problem With Survival Theory, And Propose Solutions Based On The Maximum Likelihood Principle. Experiments Are Implemented On Large-scale Real-world Networks And Validate The Effectiveness Of Our Method.

USER PROFILE MATCHING- Privacy-Preserving User Profile Matching In Social Networks

In This Paper, We Consider A Scenario Where A User Queries A User Profile Database, Maintained By A Social Networking Service Provider, To Identify Users Whose Profiles Match The Profile Specified By The Querying User. A Typical Example Of This Application Is Online Dating. Most Recently, An Online Dating Website, Ashley Madison, Was Hacked, Which Resulted In A Disclosure Of A Large Number Of Dating User Profiles. This Data Breach Has Urged Researchers To Explore Practical Privacy Protection For User Profiles In A Social Network. In This Paper, We Propose A Privacy-preserving Solution For Profile Matching In Social Networks By Using Multiple Servers. Our Solution Is Built On Homomorphic Encryption And Allows A User To Find Out Matching Users With The Help Of Multiple Servers Without Revealing To Anyone The Query And The Queried User Profiles In Clear. Our Solution Achieves User Profile Privacy And User Query Privacy As Long As At Least One Of The Multiple Servers Is Honest. Our Experiments Demonstrate That Our Solution Is Practical.

PHARMACOVIGILANCE FROM SOCIAL MEDIA- Pharmacovigilance From Social Media: An Improved Random Subspace Method For Identifying Adverse Drug Events

Social Media-based Pharmacovigilance Has Great Potential To Augment Current Efforts And Provide Regulatory Authorities With Valuable Decision Aids. Among Various Pharmacovigilance Activities, Identifying Adverse Drug Events (ADEs) Is Very Important For Patient Safety. However, In Health-related Discussion Forums, ADEs May Be Confounded With Drug Indications And Beneficial Effects, Etc. Therefore, The Focus Of This Study Is To Develop A Strategy To Distinguish ADEs From Other Semantic Types And, Meanwhile, To Determine The Drug That An ADE Is Associated With, And Then To Obtain The Id Of The User Who Shares The ADE On Medical Social Media. The User Id Is Detected Using The Naïve Bayes Algorithm.

OWNERSHIP IDENTIFICATION AND SIGNALING OF MULTIMEDIA- Security Of Multimedia Content For Ownership Identification Using Signaling Technique

Information Shared Over The Network, Such As Audio And Video Files, Faces Major Challenges Concerning Security Credentials. Large-scale Systems Like Cloud Infrastructure Have Been Used To Improve Security Over The Past Decade. Content Such As Pictures, Audio And Video Clips Is Shared Over Online Training Sessions In Recent Days; Therefore, Video Files Are Protected Using Digital Signatures And Digital Watermarking. The Contents Of Multimedia Files Require A Better Environment For Sharing Knowledge Using Private And Public Clouds. The Digital Signature Method Is Applied To Multimedia Components Such As 2D And 3D Video Clips, And Content Shared Among Users On Cloud Infrastructure Is Protected With Various Cloud-based Security Techniques.

PRIVACY-PRESERVING MULTI-KEYWORD RANKED SEARCH- Enhanced Semantic-Aware Multi-Keyword Ranked Search Scheme Over Encrypted

Traditional Searchable Encryption Schemes Based On The Term Frequency-Inverse Document Frequency (TF-IDF) Model Adopt The Presence Of Keywords To Measure The Relevance Of Documents To Queries, Which Ignores The Latent Semantic Meanings That Are Concealed In The Context. Latent Dirichlet Allocation (LDA) Topic Model Can Be Utilized For Modeling The Semantics Among Texts To Achieve Semantic-aware Multi-keyword Search. However, The LDA Topic Model Treats Queries And Documents From The Perspective Of Topics, And The Keywords Information Is Ignored. In This Paper, We Propose A Privacy-preserving Searchable Encryption Scheme Based On The LDA Topic Model And The Query Likelihood Model. We Extract The Feature Keywords From The Document Using The LDA-based Information Gain (IG) And Topic Frequency-Inverse Topic Frequency (TF-ITF) Model. With Feature Keyword Extraction And The Query Likelihood Model, Our Scheme Can Achieve A More Accurate Semantic-aware Keyword Search. A Special Index Tree Is Used To Enhance Search Efficiency. The Secure Inner Product Operation Is Utilized To Implement The Privacy-preserving Ranked Search. The Experiments On Real-world Datasets Demonstrate The Effectiveness Of Our Scheme.
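
A Minimal Scikit-learn Sketch Of The LDA Topic-modeling Step On Plaintext (The Encryption And Index Layers Are Omitted); The Toy Corpus And Topic Count Are Assumptions.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "encrypted cloud storage supports keyword search over documents",
    "topic models capture latent semantic meaning in text",
    "ranked search returns the most relevant encrypted documents",
]

# Bag-of-words counts, then LDA to expose the latent topics behind the keywords.
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Per-document topic mixtures, usable as a semantic-aware representation
# alongside keyword features when building the search index.
print(lda.transform(counts).round(2))
```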

SECURE SHARING OF PERSONAL HEALTH RECORDS IN THE CLOUD- Secure Outsourced Attribute-based Sharing Framework For Lightweight Devices In Smart Health Systems

Personal Health Record (PHR) Service Is An Emerging Model For Health Information Exchange. It Allows Patients To Create, Update And Manage Personal And Medical Information, And To Control And Share Their Medical Information With Other Users As Well As Health Care Providers. PHR Data Is Hosted By Third-party Cloud Service Providers In Order To Enhance Its Interoperability. However, There Have Been Serious Security And Privacy Issues In Outsourcing These Data To Cloud Servers; For Security, The PHRs Are Encrypted Before Outsourcing. Many Issues, Such As Risks Of Privacy Exposure, Scalability In Key Management, Flexible Access And Efficient User Revocation, Have Remained The Most Important Challenges Toward Achieving Fine-grained, Cryptographically Enforced Data Access Control. To Achieve Fine-grained And Scalable Data Access Control For Clients' Data, A Novel Patient-centric Framework Is Used. This Framework Mainly Focuses On The Multiple-data-owner Scenario. A High Degree Of Patient Privacy Is Simultaneously Guaranteed By Exploiting Multi-authority ABE. This Scheme Also Enables Dynamic Modification Of Access Policies Or File Attributes And Supports Efficient On-demand User/attribute Revocation. However, There Are Some Practical Limitations In Building A PHR System: In Workflow-based Access Control Scenarios, The Data Access Right May Need To Be Given Based On Users' Identities Rather Than Their Attributes, Which ABE Does Not Handle Efficiently. To Solve These Problems, This Thesis Proposes A PHR System Based On Attribute-based Broadcast Encryption (ABBE).

ATTRIBUTE-BASED ACCESS CONTROL WITH CONSTANT-SIZE CIPHERTEXT- Efficient Multi-Authority Attribute-Based Signcryption With Constant-Size Cipher Text

Efficient Fine-grained Access Mechanisms Have Been A Main Concern In The Cloud Storage Area For Several Years. Attribute-based Signcryption (ABSC), Which Is A Logical Combination Of Attribute-based Encryption (ABE) And Attribute-based Signature (ABS), Can Provide Confidentiality And Authenticity For Sensitive Data As Well As Anonymous Authentication. At The Same Time, It Is More Efficient Than The Previous “encrypt-then-sign” And “sign-then-encrypt” Patterns. However, Most Of The Existing ABSC Schemes Fail To Serve Real Scenarios With Multiple Authorities And Have Heavy Communication And Computing Overheads. Hence, We Construct A Novel ABSC Scheme Realizing Multi-authority Access Control With A Constant-size Ciphertext That Does Not Depend On The Number Of Attributes Or Authorities. Furthermore, Our Scheme Provides Public Verifiability Of The Ciphertext And Privacy Protection For The Signcryptor. Specially, It Is Proven To Be Secure In The Standard Model, Including Ciphertext Indistinguishability Under Adaptive Chosen-ciphertext Attacks And Existential Unforgeability Under Adaptive Chosen-message Attacks.

ENCRYPTION CLOUD BASED REVOCATION- Efficient Revocable Multi-Authority Attribute-Based Encryption For Cloud Storage

As Is Known, Attribute-based Encryption (ABE) Is Usually Adopted For Cloud Storage, Both For Its Achievement Of Fine-grained Access Control Over Data And For Its Guarantee Of Data Confidentiality. Nevertheless, Single-authority Attribute-based Encryption (SA-ABE) Has An Obvious Drawback In That Only One Attribute Authority Can Assign The Users' Attributes, Enabling The Data To Be Shared Only Within The Management Domain Of That Attribute Authority, While Rendering Multiple Attribute Authorities Unable To Share The Data. On The Other Hand, Multi-authority Attribute-based Encryption (MA-ABE) Has Its Advantages Over SA-ABE: It Can Not Only Satisfy The Need For Fine-grained Access Control And Confidentiality Of Data, But Also Allow The Data To Be Shared Among Different Attribute Authorities. However, Existing MA-ABE Schemes Are Unsuitable For Resource-constrained Devices, Because These Schemes Are All Based On Expensive Bilinear Pairing. Moreover, The Major Challenge Of MA-ABE Schemes Is Attribute Revocation, And So Far Many Solutions In This Respect Are Not Efficient Enough. In This Paper, On The Basis Of Elliptic Curve Cryptography, We Propose An Efficient Revocable Multi-authority Attribute-based Encryption (RMA-ABE) Scheme For Cloud Storage. The Security Analysis Indicates That The Proposed Scheme Satisfies Indistinguishability Under Adaptive Chosen-plaintext Attack, Assuming Hardness Of The Decisional Diffie-Hellman Problem. Compared With Other Schemes, The Proposed Scheme Has The Advantage Of Being More Economical In Computation And Storage.

DIPLOCLOUD- DistSim - Scalable Distributed In-Memory Semantic Similarity Estimation For RDF Knowledge Graphs

In This Paper, We Present DistSim, A Scalable Distributed In-Memory Semantic Similarity Estimation Framework For Knowledge Graphs. DistSim Provides A Multitude Of State-of-the-art Similarity Estimators. We Have Developed The Similarity Estimation Pipeline By Combining Generic Software Modules. For Large-scale RDF Data, DistSim Proposes MinHash With Locality Sensitive Hashing To Achieve Better Scalability Over All-pairs Similarity Estimation. The Modules Of DistSim Can Be Set Up Using A Multitude Of (hyper-)parameters, Allowing The Tradeoff Between The Information Taken Into Account And The Processing Time To Be Adjusted. Furthermore, The Output Of The Similarity Estimation Pipeline Is Native RDF. DistSim Is Integrated Into The SANSA Stack, Documented In Scala-docs, And Covered By Unit Tests. Additionally, The Variables And Provided Methods Follow The Apache Spark MLlib Namespace Conventions. The Performance Of DistSim Was Tested Over A Distributed Cluster, For The Dimensions Of Data Set Size And Processing Power Versus Processing Time, Which Shows The Scalability Of DistSim W.r.t. Increasing Data Set Sizes And Processing Power. DistSim Is Already In Use For Solving Several RDF Data Analytics Related Use Cases. Additionally, DistSim Is Available And Integrated Into The Open-source GitHub Project SANSA.
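
A Minimal Pure-Python Sketch Of MinHash Similarity Estimation As Used For All-pairs Scalability (The LSH Banding Step Is Omitted); The Token Sets Are Toy Examples.

```python
import random

def minhash_signature(tokens, num_hashes=64, prime=2_147_483_647, seed=13):
    """Compact signature whose per-position agreement estimates Jaccard similarity."""
    rnd = random.Random(seed)
    params = [(rnd.randrange(1, prime), rnd.randrange(prime)) for _ in range(num_hashes)]
    hashed = [hash(t) & 0x7FFFFFFF for t in tokens]
    # One minimum per random hash function (a*h + b) mod prime.
    return [min((a * h + b) % prime for h in hashed) for a, b in params]

def estimate_jaccard(sig1, sig2):
    return sum(a == b for a, b in zip(sig1, sig2)) / len(sig1)

# Toy RDF-style feature sets for two entities.
s1 = minhash_signature({"dbpedia:Berlin", "dbpedia:Germany", "dbpedia:City"})
s2 = minhash_signature({"dbpedia:Berlin", "dbpedia:Germany", "dbpedia:Capital"})
print("estimated similarity:", estimate_jaccard(s1, s2))
```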

SECURE ANTI-COLLUSION DATA SHARING SCHEME - Attribute-Based Keyword Search Encryption Scheme With Verifiable Ciphertext Via Blockchain

In Order To Realize The Sharing Of Data By Multiple Users On The Blockchain, This Paper Proposes An Attribute-based Searchable Encryption Scheme With Verifiable Ciphertext Via Blockchain. The Scheme Uses A Public Key Algorithm To Encrypt The Keyword, An Attribute-based Encryption Algorithm To Encrypt The Symmetric Key, And The Symmetric Key To Encrypt The File. The Keyword Index Is Stored On The Blockchain, While The Ciphertexts Of The Symmetric Key And The File Are Stored On The Cloud Server. The Scheme Uses Searchable Encryption Technology To Achieve Secure Search On The Blockchain, Uses The Immutability Of The Blockchain To Ensure The Security Of The Keyword Ciphertext, And Uses A Verification Algorithm To Guarantee The Integrity Of The Data On The Cloud. When The User's Attributes Need To Be Changed Or The Ciphertext Access Structure Is Changed, The Scheme Uses Proxy Re-encryption Technology To Implement The User's Attribute Revocation, With The Authority Center Responsible For The Whole Revocation Process. The Security Proof Shows That The Scheme Can Achieve Ciphertext Security, Keyword Security, And Anti-collusion. In Addition, The Numerical Results Show That The Proposed Scheme Is Effective.
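
The Layered Encryption Pattern Described Above Can Be Sketched In A Few Lines Of Python With The `cryptography` Package. Since No Standard Python Library Provides Attribute-based Encryption, An RSA Key Pair Stands In For The ABE Layer, And A Plain Dictionary Stands In For The On-chain Keyword Index; Both Substitutions, And All Names Below, Are For Illustration Only.

```python
import hashlib
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# --- Setup: RSA stands in for the ABE layer (illustration only). ---
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# --- Data owner: symmetric key encrypts file, "ABE" wraps the key. ---
file_key = Fernet.generate_key()
ciphertext_file = Fernet(file_key).encrypt(b"confidential report")
wrapped_key = user_key.public_key().encrypt(file_key, oaep)

# Keyword index: hashed keyword -> pointer (dict stands in for the chain).
chain_index = {hashlib.sha256(b"report").hexdigest(): "cloud://obj/42"}
cloud_store = {"cloud://obj/42": (wrapped_key, ciphertext_file)}

# --- User: search by keyword hash, unwrap the key, decrypt the file. ---
ptr = chain_index[hashlib.sha256(b"report").hexdigest()]
wk, cf = cloud_store[ptr]
recovered_key = user_key.decrypt(wk, oaep)
print(Fernet(recovered_key).decrypt(cf))  # b'confidential report'
```

The Point Of The Layering Is That The Bulk Data Is Encrypted Only Once With A Cheap Symmetric Cipher, While The Expensive Public-key (here RSA, In The Paper ABE) Operation Touches Only The Short Key.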

SHARED DYNAMIC CLOUD DATA WITH GROUP USER REVOCATION- Secure Efficient Revocable Large Universe Multi-Authority Attribute-Based Encryption For Cloud-Aided IoT

With The Help Of Cloud Computing, The Ubiquitous And Diversified Internet Of Things (IoT) Has Greatly Improved Human Society. Revocable Multi-authority Attribute-based Encryption (MA-ABE) Is Considered A Promising Technique To Solve The Security Challenges Of Data Access Control In The Dynamic IoT, Since It Can Achieve Dynamic Access Control Over Encrypted Data. However, On The One Hand, Existing Revocable Large Universe MA-ABE Suffers From The Collusion Attack Launched By Revoked Users And Non-revoked Users. On The Other Hand, The Collusion-resistant Revocable MA-ABE Schemes Do Not Support A Large Attribute (or User) Universe, I.e., A Flexible Number Of Attributes (or Users). In This Article, We Propose An Efficient Revocable Large Universe MA-ABE Based On Prime-order Bilinear Groups. The Proposed Scheme Supports User-attribute Revocation, I.e., The Revoked User Only Loses One Or More Attributes And Can Still Access The Data So Long As Her/his Remaining Attributes Satisfy The Access Policy. It Is Proven Statically Secure In The Random Oracle Model Under The Q-DPBDHE2 Assumption. Moreover, It Is Secure Against The Collusion Attack Launched By Revoked Users And Non-revoked Users. Meanwhile, It Meets The Requirements Of Forward And Backward Security. Resource-limited Users Can Choose Outsourced Decryption To Save Resources. The Performance Analysis Results Indicate That It Is Suitable For Large-scale Cross-domain Collaboration In The Dynamic Cloud-aided IoT.

PROFIT MAXIMIZATION SCHEME- Personality- And Value-aware Scheduling Of User Requests In Cloud For Profit Maximization

The Main Goal Of A Cloud Provider Is To Make Profits By Providing Services To Users. Existing Profit Optimization Strategies Employ Homogeneous User Models In Which User Personality Is Ignored, Resulting In Lower Profits And, Notably, Lower User Satisfaction, Which In Turn Leads To Fewer Users And Further Reduced Profits. In This Paper, We Propose Efficient Personality-aware Request Scheduling Schemes To Maximize The Profit Of The Cloud Provider Under The Constraint Of User Satisfaction. Specifically, We First Model The Service Requests At The Granularity Of Individual Personality And Propose A Personalized User Satisfaction Prediction Model Based On Questionnaires. Subsequently, We Design A Personality-guided Integer Linear Programming (ILP)-based Request Scheduling Algorithm To Maximize The Profit Under The Constraint Of User Satisfaction, Followed By An Approximate But Lightweight Value Assessment And Cross Entropy (VACE)-based Profit Improvement Scheme. The VACE-based Scheme Is Especially Tailored For Applications With High Scheduling Resolution. Extensive Simulation Results Show That Our Satisfaction Prediction Model Can Achieve An Accuracy Of Up To 83%, And Our Profit Optimization Schemes Can Improve The Profit By At Least 3.96% Compared To The Benchmark Methods While Still Obtaining A Speedup Of At Least 1.68x.
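
As A Toy Illustration Of The Cross-entropy Idea Behind A VACE-style Scheme (not The Paper's Actual Algorithm), The Following Python Sketch Uses The Cross-entropy Method To Search For A Request-to-server Assignment That Maximizes A Made-up Profit Function; The Profit Model, Capacities, And All Parameters Are Assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_REQ, N_SRV = 12, 3
value = rng.uniform(1.0, 5.0, N_REQ)    # revenue per request (assumed)
cost = rng.uniform(0.5, 2.0, N_SRV)     # unit cost per server (assumed)
capacity = np.array([5, 5, 5])          # per-server request capacity

def profit(assign: np.ndarray) -> float:
    # Revenue minus cost, with a penalty for exceeding capacity.
    load = np.bincount(assign, minlength=N_SRV)
    penalty = 10.0 * np.maximum(load - capacity, 0).sum()
    return value.sum() - cost[assign].sum() - penalty

# Cross-entropy method over per-request categorical distributions.
probs = np.full((N_REQ, N_SRV), 1.0 / N_SRV)
for _ in range(40):
    samples = np.array([[rng.choice(N_SRV, p=probs[i]) for i in range(N_REQ)]
                        for _ in range(200)])
    scores = np.array([profit(s) for s in samples])
    elite = samples[np.argsort(scores)[-20:]]      # keep the top 10%
    for i in range(N_REQ):                         # refit the distribution
        counts = np.bincount(elite[:, i], minlength=N_SRV)
        probs[i] = 0.9 * counts / counts.sum() + 0.1 * probs[i]

best = probs.argmax(axis=1)
print("assignment:", best, "profit:", round(profit(best), 2))
```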

MULTIMEDIA CONTENT PROTECTION SYSTEM- Cloud-Based Multimedia Content Protection System

We Propose A New Design For Large-scale Multimedia Content Protection Systems. Our Design Leverages Cloud Infrastructures To Provide Cost Efficiency, Rapid Deployment, Scalability, And Elasticity To Accommodate Varying Workloads. The Proposed System Can Be Used To Protect Different Multimedia Content Types, Including Videos, Images, Audio Clips, Songs, And Music Clips. The System Can Be Deployed On Private And/or Public Clouds. Our System Has Two Novel Components: (i) A Method To Create Signatures Of Videos, And (ii) A Distributed Matching Engine For Multimedia Objects. The Signature Method Creates Robust And Representative Signatures That Capture The Depth Signals In Videos; These Signatures Are Computationally Efficient To Compute And Compare, And They Require Small Storage. The Distributed Matching Engine Achieves High Scalability And Is Designed To Support Different Multimedia Objects. We Implemented The Proposed System And Deployed It On Two Clouds: The Amazon Cloud And Our Private Cloud. Our Experiments With More Than 11,000 Videos And 1 Million Images Show The High Accuracy And Scalability Of The Proposed System. In Addition, We Compared Our System To The Protection System Used By YouTube; Our Results Show That The YouTube Protection System Fails To Detect Most Copies Of Videos, While Our System Detects More Than 98% Of Them.

PUBLISHING OF SET-VALUED DATA ON HYBRID CLOUD- A Privacy Preserving Method For Publishing Set Valued Data And Its Correlative Social Network

Set-valued Data And Social Networks Provide Opportunities To Mine Useful, Yet Potentially Security-sensitive, Information. While There Are Mechanisms To Anonymize Data And Protect Privacy Separately In Set-valued Data And In Social Networks, Existing Approaches In Data Privacy Do Not Address The Privacy Issue That Emerges When Publishing Set-valued Data And Its Correlative Social Network Simultaneously. In This Paper, We Propose A Privacy Attack Model Based On Linking The Set-valued Data With The Social Network Topology Information, And A Novel Technique To Defend Against Such An Attack To Protect Individual Privacy. To Improve Data Utility And The Practicality Of Our Scheme, We Use Local Generalization And Partial Suppression To Make The Set-valued Data Satisfy The Grouped ρ-uncertainty Model And To Reduce The Impact On The Community Structure Of The Social Network When Anonymizing It. Experiments On Real-life Data Sets Show That Our Method Outperforms The Existing Mechanisms In Data Privacy And, More Specifically, That It Provides Greater Data Utility While Having Less Impact On The Community Structure Of Social Networks.

DECENTRALIZED CLOUD FIREWALL FRAMEWORK - Hierarchical Multi-Agent Optimization For Resource Allocation In Cloud Computing

In Cloud Computing, An Important Concern Is To Allocate The Available Resources Of Service Nodes To The Requested Tasks On Demand And To Optimize The Objective Function, I.e., Maximizing Resource Utilization, Payoffs, And Available Bandwidth. This Article Proposes A Hierarchical Multi-agent Optimization (HMAO) Algorithm To Maximize Resource Utilization And Minimize The Bandwidth Cost For Cloud Computing. The Proposed HMAO Algorithm Is A Combination Of The Genetic Algorithm (GA) And The Multi-agent Optimization (MAO) Algorithm. To Maximize Resource Utilization, An Improved GA Is Implemented To Find A Set Of Service Nodes On Which To Deploy The Requested Tasks, And A Decentralized MAO Algorithm Is Presented To Minimize The Bandwidth Cost. We Study The Effect Of Key Parameters Of The HMAO Algorithm By The Taguchi Method And Evaluate The Performance Results. The Results Demonstrate That The HMAO Algorithm Is More Effective Than Two Baseline Algorithms, The Genetic Algorithm (GA) And The Fast Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II), In Solving The Large-scale Resource Allocation Optimization Problem. Furthermore, We Provide A Performance Comparison Of The HMAO Algorithm With Two Heuristics, The Greedy And Viterbi Algorithms, In Online Resource Allocation.

CRYPTOGRAPHIC ALGORITHM- Comparative Study Of Cryptographic Algorithms

We Know That With The Emergence Of The Internet, People All Around The World Are Using Its Services And Are Heavily Dependent On It. People Are Also Storing Huge Amounts Of Data Over The Cloud. It Is A Challenge For Researchers To Secure The Private And Critical Data Of Users, So That An Unauthorized Person Is Not Able To Access Or Manipulate It. Cryptography Is A Process Of Converting Useful User Information To A Form Which Is Insignificant To An Unauthorized Person, So That Only Authorized Persons Can Access And Understand It. For Ensuring Privacy There Are Multiple Cryptographic Algorithms, Selected As Per The Requirements Of The User Or The Security Specification Of The Organization. This Paper Discusses The Comparison Of Various Cryptographic Encryption Algorithms With Respect To Their Key Features And Then Discusses Their Performance Cost Based On Some Selected Key Criteria. The Algorithms Chosen For This Purpose Are DES, 3DES, IDEA, CAST128, AES, Blowfish, RSA, ABE, And ECC.
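
A Simple Timing Harness Illustrates How Such A Performance Comparison Can Be Set Up. The Sketch Below Benchmarks Two AEAD Ciphers Shipped With The Python `cryptography` Package (AES-256-GCM And ChaCha20-Poly1305) Rather Than The Full DES/3DES/Blowfish List From The Paper, Since Several Of Those Legacy Ciphers Are Deprecated In Modern Libraries; The Harness Itself Generalizes To Any Cipher.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

PAYLOAD = os.urandom(1024 * 1024)  # 1 MiB of random data
ROUNDS = 20

def bench(name: str, encrypt) -> None:
    # Time repeated encryptions and report throughput in MB/s.
    start = time.perf_counter()
    for _ in range(ROUNDS):
        encrypt(PAYLOAD)
    elapsed = time.perf_counter() - start
    print(f"{name:20s} {ROUNDS / elapsed * len(PAYLOAD) / 1e6:8.1f} MB/s")

aes = AESGCM(AESGCM.generate_key(bit_length=256))
chacha = ChaCha20Poly1305(ChaCha20Poly1305.generate_key())

bench("AES-256-GCM", lambda m: aes.encrypt(os.urandom(12), m, None))
bench("ChaCha20-Poly1305", lambda m: chacha.encrypt(os.urandom(12), m, None))
```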

DATA INTEGRITY AUDITING WITHOUT PRIVATE KEY - Data Integrity Auditing Without Private Key Storage For Secure Cloud Storage

Using Cloud Storage Services, Users Can Store Their Data In The Cloud To Avoid The Expenditure Of Local Data Storage And Maintenance. To Ensure The Integrity Of The Data Stored In The Cloud, Many Data Integrity Auditing Schemes Have Been Proposed. In Most, If Not All, Of The Existing Schemes, A User Needs To Employ His Private Key To Generate The Data Authenticators For Realizing The Data Integrity Auditing. Thus, The User Has To Possess A Hardware Token (e.g., USB Token, Smart Card) To Store His Private Key And Memorize A Password To Activate This Private Key. If This Hardware Token Is Lost Or This Password Is Forgotten, Most Of The Current Data Integrity Auditing Schemes Would Be Unable To Work. In Order To Overcome This Problem, We Propose A New Paradigm Called Data Integrity Auditing Without Private Key Storage And Design Such A Scheme. In This Scheme, We Use Biometric Data (e.g., Iris Scan, Fingerprint) As The User’s Fuzzy Private Key To Avoid Using The Hardware Token. Meanwhile, The Scheme Can Still Effectively Complete The Data Integrity Auditing. We Utilize A Linear Sketch With Coding And Error Correction Processes To Confirm The Identity Of The User. In Addition, We Design A New Signature Scheme Which Not Only Supports Blockless Verifiability, But Also Is Compatible With The Linear Sketch. The Security Proof And The Performance Analysis Show That Our Proposed Scheme Achieves Desirable Security And Efficiency.

PUBLIC-KEY ENCRYPTION WITH KEYWORD SEARCH- Dual-Server Public-Key Encryption With Keyword Search For Secure Cloud Storage

Searchable Encryption Is Of Increasing Interest For Protecting Data Privacy In Secure Searchable Cloud Storage. In This Paper, We Investigate The Security Of A Well-known Cryptographic Primitive, Namely Public Key Encryption With Keyword Search (PEKS), Which Is Very Useful In Many Applications Of Cloud Storage. Unfortunately, It Has Been Shown That The Traditional PEKS Framework Suffers From An Inherent Insecurity Called The Inside Keyword Guessing Attack (KGA), Launched By A Malicious Server. To Address This Security Vulnerability, We Propose A New PEKS Framework Named Dual-server PEKS (DS-PEKS). As Another Main Contribution, We Define A New Variant Of The Smooth Projective Hash Functions (SPHFs), Referred To As Linear And Homomorphic SPHF (LH-SPHF). We Then Show A Generic Construction Of Secure DS-PEKS From LH-SPHF. To Illustrate The Feasibility Of Our New Framework, We Provide An Efficient Instantiation Of The General Framework From A Decisional Diffie-Hellman-based LH-SPHF And Show That It Can Achieve Strong Security Against The Inside KGA.

DEDUPLICATING DATA- Secure Auditing And Deduplicating Data In Cloud

As Cloud Computing Technology Has Developed During The Last Decade, Outsourcing Data To Cloud Services For Storage Has Become An Attractive Trend, Which Benefits Users By Sparing Them The Effort Of Heavy Data Maintenance And Management. Nevertheless, Since Outsourced Cloud Storage Is Not Fully Trustworthy, It Raises Security Concerns On How To Realize Data Deduplication In The Cloud While Achieving Integrity Auditing. In This Work, We Study The Problem Of Integrity Auditing And Secure Deduplication On Cloud Data. Specifically, Aiming At Achieving Both Data Integrity And Deduplication In The Cloud, We Propose Two Secure Systems, Namely SecCloud And SecCloud+. SecCloud Introduces An Auditing Entity That Maintains A MapReduce Cloud, Which Helps Clients Generate Data Tags Before Uploading As Well As Audit The Integrity Of Data Stored In The Cloud. Compared With Previous Work, The Computation By The User In SecCloud Is Greatly Reduced During The File Uploading And Auditing Phases. SecCloud+ Is Motivated By The Fact That Customers Always Want To Encrypt Their Data Before Uploading, And It Enables Integrity Auditing And Secure Deduplication On Encrypted Data.

AUDITING FOR OUTSOURCED DATABASE- Verifiable Auditing For Outsourced Database In Cloud Computing

The Notion Of Database Outsourcing Enables The Data Owner To Delegate The Database Management To A Cloud Service Provider (CSP) That Provides Various Database Services To Different Users. Recently, Plenty Of Research Work Has Been Done On The Primitive Of Outsourced Database. However, It Seems That No Existing Solutions Can Perfectly Support The Properties Of Both Correctness And Completeness For The Query Results, Especially In The Case When The Dishonest CSP Intentionally Returns An Empty Set For The Query Request Of The User. In This Paper, We Propose A New Verifiable Auditing Scheme For Outsourced Database, Which Can Simultaneously Achieve The Correctness And Completeness Of Search Results Even If The Dishonest CSP Purposely Returns An Empty Set. Furthermore, We Can Prove That Our Construction Can Achieve The Desired Security Properties Even In The Encrypted Outsourced Database. Besides, The Proposed Scheme Can Be Extended To Support The Dynamic Database Setting By Incorporating The Notion Of Verifiable Database With Updates.

ATTRIBUTE-BASED ENCRYPTION- Audit-Free Cloud Storage Via Deniable Attribute-based Encryption

Cloud Storage Services Have Become Increasingly Popular. Because Of The Importance Of Privacy, Many Cloud Storage Encryption Schemes Have Been Proposed To Protect Data From Those Who Do Not Have Access. All Such Schemes Assumed That Cloud Storage Providers Are Safe And Cannot Be Hacked; However, In Practice, Some Authorities (i.e., Coercers) May Force Cloud Storage Providers To Reveal User Secrets Or Confidential Data On The Cloud, Thus Altogether Circumventing Storage Encryption Schemes. In This Paper, We Present Our Design For A New Cloud Storage Encryption Scheme That Enables Cloud Storage Providers To Create Convincing Fake User Secrets To Protect User Privacy. Since Coercers Cannot Tell If Obtained Secrets Are True Or Not, The Cloud Storage Providers Ensure That User Privacy Is Still Securely Protected.

PROOF OF RETRIEVABILITY IN CLOUD COMPUTING- Enabling Proof Of Retrievability In Cloud Computing With Resource-Constrained Devices

Cloud Computing Moves The Application Software And Databases To The Centralized Large Data Centers, Where The Management Of The Data And Services May Not Be Fully Trustworthy. In This Work, We Study The Problem Of Ensuring The Integrity Of Data Storage In Cloud Computing. To Reduce The Computational Cost At User Side During The Integrity Verification Of Their Data, The Notion Of Public Verifiability Has Been Proposed. However, The Challenge Is That The Computational Burden Is Too Huge For The Users With Resource-constrained Devices To Compute The Public Authentication Tags Of File Blocks. To Tackle The Challenge, We Propose OPoR, A New Cloud Storage Scheme Involving A Cloud Storage Server And A Cloud Audit Server, Where The Latter Is Assumed To Be Semi-honest. In Particular, We Consider The Task Of Allowing The Cloud Audit Server, On Behalf Of The Cloud Users, To Pre-process The Data Before Uploading To The Cloud Storage Server And Later Verifying The Data Integrity. OPoR Outsources And Offloads The Heavy Computation Of The Tag Generation To The Cloud Audit Server And Eliminates The Involvement Of User In The Auditing And In The Pre-processing Phases. Furthermore, We Strengthen The Proof Of Retrievability (PoR) Model To Support Dynamic Data Operations, As Well As Ensure Security Against Reset Attacks Launched By The Cloud Storage Server In The Upload Phase.

DYNAMIC DATA POSSESSION IN CLOUD COMPUTING- Provable Multicopy Dynamic Data Possession In Cloud Computing Systems

More And More Organizations Are Opting To Outsource Data To Remote Cloud Service Providers (CSPs). Customers Can Rent The CSP's Storage Infrastructure To Store And Retrieve An Almost Unlimited Amount Of Data By Paying Fees Metered In Gigabytes Per Month. For An Increased Level Of Scalability, Availability, And Durability, Some Customers May Want Their Data To Be Replicated On Multiple Servers Across Multiple Data Centers. The More Copies The CSP Is Asked To Store, The Higher The Fees The Customers Are Charged. Therefore, Customers Need A Strong Guarantee That The CSP Is Storing All The Data Copies Agreed Upon In The Service Contract And That All These Copies Are Consistent With The Most Recent Modifications Issued By The Customers. In This Paper, We Propose A Map-based Provable Multicopy Dynamic Data Possession (MB-PMDDP) Scheme That Has The Following Features: 1) It Provides Evidence To The Customers That The CSP Is Not Cheating By Storing Fewer Copies; 2) It Supports Outsourcing Of Dynamic Data, I.e., Block-level Operations Such As Block Modification, Insertion, Deletion, And Append; And 3) It Allows Authorized Users To Seamlessly Access The File Copies Stored By The CSP. We Give A Comparative Analysis Of The Proposed MB-PMDDP Scheme With A Reference Model Obtained By Extending Existing Provable Possession Schemes For Dynamic Single-copy Data. The Theoretical Analysis Is Validated Through Experimental Results On A Commercial Cloud Platform. In Addition, We Show Security Against Colluding Servers And Discuss How To Identify Corrupted Copies By Slightly Modifying The Proposed Scheme.
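
The Challenge-response Core Of Provable Data Possession Can Be Sketched With Standard-library Python Only: The Owner Keeps Index-bound HMAC Tags, Challenges The CSP With Random Block Indices, And Verifies The Returned Blocks Against The Tags. This Is A Deliberately Simplified Single-copy Illustration, Not The Paper's Map-based MB-PMDDP Construction.

```python
import hmac
import hashlib
import secrets

BLOCK = 256

def split(data: bytes) -> list:
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

# --- Owner: compute per-block HMAC tags, keep tags, outsource file. ---
key = secrets.token_bytes(32)
data = secrets.token_bytes(4096)
tags = [hmac.new(key, i.to_bytes(4, "big") + b, hashlib.sha256).digest()
        for i, b in enumerate(split(data))]          # index-bound tags

# --- CSP side: returns the challenged blocks of whatever it stores. ---
def csp_respond(stored: bytes, indices: list) -> list:
    chunks = split(stored)
    return [chunks[i] for i in indices]

# --- Owner: sample random indices, check returned blocks against tags. ---
def audit(stored: bytes, samples: int = 4) -> bool:
    indices = [secrets.randbelow(len(tags)) for _ in range(samples)]
    proof = csp_respond(stored, indices)
    return all(
        hmac.compare_digest(
            tags[i],
            hmac.new(key, i.to_bytes(4, "big") + blk, hashlib.sha256).digest())
        for i, blk in zip(indices, proof))

print(audit(data))                        # True: intact copy
tampered = data[:10] + b"\x00" + data[11:]
print(audit(tampered))                    # False whenever block 0 is sampled
```

Because Only A Random Sample Of Blocks Is Checked, A Single Corrupted Block Is Caught Only With Some Probability Per Audit; Repeated Audits Drive The Detection Probability Toward One.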

SECURITY ANALYSIS ON CLOUD- Security Analysis On One-to-Many Order Preserving Encryption Based Cloud Data Search

For Ranked Search In Encrypted Cloud Data, Order Preserving Encryption (OPE) Is An Efficient Tool To Encrypt The Relevance Scores Of The Inverted Index. When Using Deterministic OPE, The Ciphertexts Reveal The Distribution Of The Relevance Scores. Therefore, Wang Et Al. Proposed A Probabilistic OPE, Called One-to-many OPE, For Applications Of Searchable Encryption, Which Can Flatten The Distribution Of The Plaintexts. In This Paper, We Propose A Differential Attack On One-to-many OPE That Exploits The Differences Of The Ordered Ciphertexts. The Experimental Results Show That The Cloud Server Can Get A Good Estimate Of The Distribution Of Relevance Scores Through The Differential Attack. Furthermore, When It Has Some Background Information On The Outsourced Documents, The Cloud Server Can Accurately Infer The Encrypted Keywords Using The Estimated Distributions.

RE-ENCRYPTION- Reliable Re-Encryption In Unreliable Clouds

A Key Approach To Secure Cloud Computing Is For The Data Owner To Store Encrypted Data In The Cloud, And Issue Decryption Keys To Authorized Users. Then, When A User Is Revoked, The Data Owner Will Issue Re-encryption Commands To The Cloud To Re-encrypt The Data, To Prevent The Revoked User From Decrypting The Data, And To Generate New Decryption Keys To Valid Users, So That They Can Continue To Access The Data. However, Since A Cloud Computing Environment Is Comprised Of Many Cloud Servers, Such Commands May Not Be Received And Executed By All Of The Cloud Servers Due To Unreliable Network Communications. In This Paper, We Solve This Problem By Proposing A Time-based Re-encryption Scheme, Which Enables The Cloud Servers To Automatically Re-encrypt Data Based On Their Internal Clocks. Our Solution Is Built On Top Of A New Encryption Scheme, Attribute-based Encryption, To Allow Fine-grain Access Control, And Does Not Require Perfect Clock Synchronization For Correctness.

PERSONAL HEALTH DATA- Lifelong Personal Health Data And Application Software Via Virtual Machines In The Cloud

Personal Health Records (PHRs) Should Remain The Lifelong Property Of Patients, Who Should Be Able To Show Them Conveniently And Securely To Selected Caregivers And Institutions. In This Paper, We Present MyPHRMachines, A Cloud-based PHR System Taking A Radically New Architectural Solution To Health Record Portability. In MyPHRMachines, Health-related Data And The Application Software To View And/or Analyze It Are Separately Deployed In The PHR System. After Uploading Their Medical Data To MyPHRMachines, Patients Can Access Them Again From Remote Virtual Machines That Contain The Right Software To Visualize And Analyze Them Without Any Need For Conversion. Patients Can Share Their Remote Virtual Machine Session With Selected Caregivers, Who Will Need Only A Web Browser To Access The Pre-loaded Fragments Of Their Lifelong PHR. We Discuss A Prototype Of MyPHRMachines Applied To Two Use Cases, I.e., Radiology Image Sharing And Personalized Medicine.

DIFFERENTIAL QUERY SERVICES- Towards Differential Query Services In Cost-Efficient Clouds

Cloud Computing, As An Emerging Technology Trend, Is Expected To Reshape The Advances In Information Technology. The Efficient Information Retrieval For Ranked Queries (EIRQ) Scheme Recovers Ranked Files On User Demand. EIRQ Works Based On An Aggregation And Distribution Layer (ADL), Which Acts As A Mediator Between The Cloud And End-users, And It Reduces The Communication Cost And Overhead. A Mask Matrix Is Used To Filter Out, Before Returning Results To The ADL, The Matched Data The User Really Wants. A User Can Retrieve Files On Demand By Choosing Queries Of Different Ranks; This Feature Is Useful When There Are A Large Number Of Matched Files But The User Only Needs A Small Subset Of Them. Under Different Parameter Settings, Extensive Evaluations Have Been Conducted On Both Analytical Models And A Real Cloud Environment To Examine The Effectiveness Of Our Schemes. To Avoid Even Small-scale Interruptions In Cloud Computing, Two Essential Issues Must Be Addressed: Privacy And Efficiency. A Private Keyword-based File Retrieval Scheme Was Earlier Proposed By Ostrovsky.

DATA SHARING IN PUBLIC CLOUDS- An Efficient Certificateless Encryption For Secure Data Sharing In Public Clouds

We Propose A Mediated Certificateless Encryption Scheme Without Pairing Operations For Securely Sharing Sensitive Information In Public Clouds. Mediated Certificateless Public Key Encryption (mCL-PKE) Solves The Key Escrow Problem In Identity-based Encryption And The Certificate Revocation Problem In Public Key Cryptography. However, Existing mCL-PKE Schemes Are Either Inefficient Because Of The Use Of Expensive Pairing Operations Or Vulnerable To Partial Decryption Attacks. In Order To Address The Performance And Security Issues, In This Paper We First Propose An mCL-PKE Scheme Without Pairing Operations. We Apply Our mCL-PKE Scheme To Construct A Practical Solution To The Problem Of Sharing Sensitive Information In Public Clouds. The Cloud Is Employed As A Secure Storage As Well As A Key Generation Center. In Our System, The Data Owner Encrypts The Sensitive Data Using The Cloud-generated Users' Public Keys Based On Its Access Control Policies And Uploads The Encrypted Data To The Cloud. Upon Successful Authorization, The Cloud Partially Decrypts The Encrypted Data For The Users, Who Subsequently Fully Decrypt It Using Their Private Keys. The Confidentiality Of The Content And The Keys Is Preserved With Respect To The Cloud, Because The Cloud Cannot Fully Decrypt The Information. We Also Propose An Extension To The Above Approach To Improve The Efficiency Of Encryption At The Data Owner. We Implement Our mCL-PKE Scheme And The Overall Cloud-based System, And Evaluate Its Security And Performance. Our Results Show That Our Schemes Are Efficient And Practical.

TRUST MANAGEMENT FOR CLOUD SERVICES - CloudArmor: Supporting Reputation-based Trust Management For Cloud Services

Trust Management Is One Of The Most Challenging Issues For The Adoption And Growth Of Cloud Computing. The Highly Dynamic, Distributed, And Non-transparent Nature Of Cloud Services Introduces Several Challenging Issues Such As Privacy, Security, And Availability. Preserving Consumers' Privacy Is Not An Easy Task Due To The Sensitive Information Involved In The Interactions Between Consumers And The Trust Management Service. Protecting Cloud Services Against Their Malicious Users (e.g., Such Users Might Give Misleading Feedback To Disadvantage A Particular Cloud Service) Is A Difficult Problem. Guaranteeing The Availability Of The Trust Management Service Is Another Significant Challenge Because Of The Dynamic Nature Of Cloud Environments. In This Article, We Describe The Design And Implementation Of CloudArmor, A Reputation-based Trust Management Framework That Provides A Set Of Functionalities To Deliver Trust As A Service (TaaS), Which Includes I) A Novel Protocol To Prove The Credibility Of Trust Feedbacks And Preserve Users' Privacy, Ii) An Adaptive And Robust Credibility Model For Measuring The Credibility Of Trust Feedbacks To Protect Cloud Services From Malicious Users And To Compare The Trustworthiness Of Cloud Services, And Iii) An Availability Model To Manage The Availability Of The Decentralized Implementation Of The Trust Management Service. The Feasibility And Benefits Of Our Approach Have Been Validated By A Prototype And Experimental Studies Using A Collection Of Real-world Trust Feedbacks On Cloud Services.

OPTIMAL PERFORMANCE AND SECURITY- DROPS Division And Replication Of Data In Cloud For Optimal Performance And Security

Outsourcing Data To A Third-party Administrative Control, As Is Done In Cloud Computing, Gives Rise To Security Concerns. Data Compromise May Occur Due To Attacks By Other Users And Nodes Within The Cloud. Therefore, High Security Measures Are Required To Protect Data Within The Cloud. However, The Employed Security Strategy Must Also Take Into Account The Optimization Of The Data Retrieval Time. In This Paper, We Propose Division And Replication Of Data In The Cloud For Optimal Performance And Security (DROPS), Which Collectively Approaches The Security And Performance Issues. In The DROPS Methodology, We Divide A File Into Fragments And Replicate The Fragmented Data Over The Cloud Nodes. Each Of The Nodes Stores Only A Single Fragment Of A Particular Data File, Which Ensures That Even In Case Of A Successful Attack, No Meaningful Information Is Revealed To The Attacker. Moreover, The Nodes Storing The Fragments Are Separated By A Certain Distance By Means Of Graph T-coloring To Prevent An Attacker From Guessing The Locations Of The Fragments. Furthermore, The DROPS Methodology Does Not Rely On Traditional Cryptographic Techniques For Data Security, Thereby Relieving The System Of Computationally Expensive Methodologies. We Show That The Probability Of Locating And Compromising All Of The Nodes Storing The Fragments Of A Single File Is Extremely Low. We Also Compare The Performance Of The DROPS Methodology With 10 Other Schemes; A Higher Level Of Security With Only Slight Performance Overhead Was Observed.
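
A Greedy Sketch Of The Fragment-placement Idea: Split A File Into Fragments And Assign Each Fragment To A Node So That No Two Chosen Nodes Are Within A Forbidden Hop Distance Of Each Other, In The Spirit Of Graph T-coloring. The Toy Topology, Distance Threshold, And Greedy Strategy Are Illustrative Assumptions, Not The Paper's Exact Algorithm.

```python
from collections import deque

# Toy cloud topology: node -> neighbors (assumed for illustration).
GRAPH = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D", "F"], "F": ["E"],
}
MIN_DIST = 2  # forbidden distances {0, 1}: no fragments on adjacent nodes

def hops(src: str, dst: str) -> int:
    # Breadth-first shortest path length in hops.
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nxt in GRAPH[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(GRAPH)  # unreachable: treat as far away

def place(fragments: list) -> dict:
    # Greedily pick nodes pairwise at least MIN_DIST hops apart.
    placement = {}
    for frag in fragments:
        for node in GRAPH:
            if node not in placement and all(
                    hops(node, used) >= MIN_DIST for used in placement):
                placement[node] = frag
                break
        else:
            raise RuntimeError("not enough sufficiently separated nodes")
    return placement

print(place([b"frag-0", b"frag-1", b"frag-2"]))  # mutually non-adjacent nodes
```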

SERVICE ORIENTED MOBILE SOCIAL NETWORK- Enabling Trustworthy Service Evaluation In Service-Oriented Mobile Social Networks

In This Paper, We Propose A Trustworthy Service Evaluation (TSE) System To Enable Users To Share Service Reviews In Service-oriented Mobile Social Networks (S-MSNs). Each Service Provider Independently Maintains A TSE For Itself, Which Collects And Stores Users' Reviews About Its Services Without Requiring Any Third Trusted Authority. The Service Reviews Can Then Be Made Available To Interested Users For Making Wise Service Selection Decisions. We Identify Three Unique Service Review Attacks, I.e., Linkability, Rejection, And Modification Attacks, And Develop Sophisticated Security Mechanisms For The TSE To Deal With Them. Specifically, The Basic TSE (bTSE) Enables Users To Distributedly And Cooperatively Submit Their Reviews In An Integrated Chain Form By Using Hierarchical And Aggregate Signature Techniques. It Restricts Service Providers From Rejecting, Modifying, Or Deleting The Reviews; Thus, The Integrity And Authenticity Of Reviews Are Improved. Further, We Extend The bTSE To A Sybil-resisted TSE (SrTSE) To Enable The Detection Of Two Typical Sybil Attacks. In The SrTSE, If A User Generates Multiple Reviews Toward A Vendor In A Predefined Time Slot With Different Pseudonyms, The Real Identity Of That User Will Be Revealed. Through Security Analysis And Numerical Results, We Show That The bTSE And The SrTSE Effectively Resist The Service Review Attacks And That The SrTSE Additionally Detects The Sybil Attacks In An Efficient Manner. Through Performance Evaluation, We Show That The bTSE Achieves Better Performance In Terms Of Submission Rate And Delay Than A Service Review System That Does Not Adopt User Cooperation.

RECOMMEND FOR USER INTEREST AND SOCIAL CIRCLE - Personalized Recommendation Combining User Interest And Social Circle

With The Advent And Popularity Of Social Networks, More And More Users Like To Share Their Experiences, Such As Ratings, Reviews, And Blogs. New Factors Of Social Networks, Like Interpersonal Influence And Interest Based On Circles Of Friends, Bring Opportunities And Challenges For Recommender Systems (RS) To Solve The Cold Start And Sparsity Problems Of Datasets. Some Of These Social Factors Have Been Used In RS, But They Have Not Been Fully Considered. In This Paper, Three Social Factors, Personal Interest, Interpersonal Interest Similarity, And Interpersonal Influence, Are Fused Into A Unified Personalized Recommendation Model Based On Probabilistic Matrix Factorization. The Factor Of Personal Interest Lets The RS Recommend Items That Match Users' Individualities, Especially For Experienced Users. Moreover, For Cold Start Users, The Interpersonal Interest Similarity And Interpersonal Influence Can Enhance The Intrinsic Link Among Features In The Latent Space. We Conduct A Series Of Experiments On Three Rating Datasets: Yelp, MovieLens, And Douban Movie. Experimental Results Show That The Proposed Approach Outperforms Existing RS Approaches.
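
A Bare-bones Probabilistic Matrix Factorization Sketch In NumPy Shows The Core Machinery Such A Model Builds On: Latent User And Item Factors Fitted By Stochastic Gradient Descent With L2 Regularization. The Social Terms (personal Interest, Interpersonal Similarity, Interpersonal Influence) Would Enter As Additional Regularizers On The User Factors And Are Omitted Here; The Toy Ratings And Hyper-parameters Are Assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy rating triples: (user, item, rating on a 1-5 scale).
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 4), (2, 2, 2)]
n_users, n_items, k = 3, 3, 4
U = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
V = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
lr, reg = 0.05, 0.02

for epoch in range(200):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]                   # prediction error
        U[u] += lr * (err * V[i] - reg * U[u])  # SGD step with L2 penalty
        V[i] += lr * (err * U[u] - reg * V[i])

for u, i, r in ratings:
    print(f"user {u} item {i}: true {r}, predicted {U[u] @ V[i]:.2f}")
print("unseen (user 0, item 2):", round(float(U[0] @ V[2]), 2))
```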

DATA SHARING IN CLOUDS- SeDaSC Secure Data Sharing In Clouds

Cloud Storage Is An Application Of Clouds That Liberates Organizations From Establishing In-house Data Storage Systems. However, Cloud Storage Gives Rise To Security Concerns. In Case Of Group-shared Data, The Data Face Both Cloud-specific And Conventional Insider Threats. Secure Data Sharing Among A Group That Counters Insider Threats Of Legitimate Yet Malicious Users Is An Important Research Issue. In This Paper, We Propose The Secure Data Sharing In Clouds (SeDaSC) Methodology That Provides: 1) Data Confidentiality And Integrity; 2) Access Control; 3) Data Sharing (forwarding) Without Using Compute-intensive Reencryption; 4) Insider Threat Security; And 5) Forward And Backward Access Control. The SeDaSC Methodology Encrypts A File With A Single Encryption Key. Two Different Key Shares For Each Of The Users Are Generated, With The User Only Getting One Share. The Possession Of A Single Share Of A Key Allows The SeDaSC Methodology To Counter The Insider Threats. The Other Key Share Is Stored By A Trusted Third Party, Which Is Called The Cryptographic Server. The SeDaSC Methodology Is Applicable To Conventional And Mobile Cloud Computing Environments. We Implement A Working Prototype Of The SeDaSC Methodology And Evaluate Its Performance Based On The Time Consumed During Various Operations. We Formally Verify The Working Of SeDaSC By Using High-level Petri Nets, The Satisfiability Modulo Theories Library, And A Z3 Solver. The Results Proved To Be Encouraging And Show That SeDaSC Has The Potential To Be Effectively Used For Secure Data Sharing In The Cloud.
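
The Two-share Key Idea Can Be Illustrated With A Standard-library XOR Split: Each Share Alone Is Uniform Noise, And Only The User's Share Combined With The Cryptographic Server's Share Reconstructs The File-encryption Key. A Minimal Sketch Of The Splitting Step Only, Assuming Two-of-two XOR Secret Sharing; SeDaSC's Full Protocol Involves More.

```python
import secrets

def split_key(key: bytes) -> tuple:
    # Two-of-two XOR secret sharing: share1 is random,
    # share2 = key XOR share1, so each share alone is uniform noise.
    share1 = secrets.token_bytes(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

key = secrets.token_bytes(32)                # single file-encryption key
user_share, server_share = split_key(key)    # user keeps one share,
                                             # cryptographic server the other
assert combine(user_share, server_share) == key
print("key recovered only when both shares are combined")
```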

TRUST-BUT-VERIFY- Verifying Result Correctness Of Outsourced Frequent Itemset Mining In The Data-mining-as-a-service Paradigm

Cloud Computing Is Popularizing The Computing Paradigm In Which Data Is Outsourced To A Third-party Service Provider (server) For Data Mining. Outsourcing, However, Raises A Serious Security Issue: How Can The Client Of Weak Computational Power Verify That The Server Returned Correct Mining Result? In This Paper, We Focus On The Specific Task Of Frequent Itemset Mining. We Consider The Server That Is Potentially Untrusted And Tries To Escape From Verification By Using Its Prior Knowledge Of The Outsourced Data. We Propose Efficient Probabilistic And Deterministic Verification Approaches To Check Whether The Server Has Returned Correct And Complete Frequent Itemsets. Our Probabilistic Approach Can Catch Incorrect Results With High Probability, While Our Deterministic Approach Measures The Result Correctness With 100 Percent Certainty. We Also Design Efficient Verification Methods For Both Cases That The Data And The Mining Setup Are Updated. We Demonstrate The Effectiveness And Efficiency Of Our Methods Using An Extensive Set Of Empirical Results On Real Datasets.

DATA SECURITY IN CLOUD COMPUTING- Data Security In Cloud Computing Using Blowfish Algorithm

Cloud Computing Has Great Potential For Providing Robust Computational Power To Society At Reduced Cost. It Enables Customers With Limited Computational Resources To Outsource Their Large Computation Workloads To The Cloud And Economically Enjoy Massive Computational Power, Bandwidth, Storage, And Even Appropriate Software In A Pay-per-use Manner. Storing Data In A Third Party's Cloud System, However, Causes Serious Concern Over Data Confidentiality, Which This Work Addresses By Encrypting Outsourced Data With The Blowfish Algorithm.

SERVICE MANAGEMENT IN CLOUD- A Flexible Architecture For Service Management In The Cloud

Cloud Computing Is A Style Of Computing Where Different Capabilities Are Provided As A Service To Customers Using Internet Technologies. The Most Commonly Offered Services Are Infrastructure (IaaS), Software (SaaS), And Platform (PaaS). This Work Integrates Service Management Into The Cloud Computing Concept And Shows How Management Can Be Provided As A Service In The Cloud. Nowadays, Services Need To Adapt Their Functionalities Across Heterogeneous Environments With Different Technological And Administrative Domains. The Implied Complexity Of This Situation Can Be Simplified By A Service Management Architecture In The Cloud. This Paper Focuses On This Architecture, Taking Into Account Specific Service Management Functionalities, Like Incident Management Or KPI/SLA Management, And Provides A Complete Solution. The Proposed Architecture Is Based On A Distributed Set Of Agents Using Semantic-based Techniques: A Shared Knowledge Plane, Instantiated In The Cloud, Has Been Introduced To Ensure Communication Between Agents.

CLOUD PROVIDERS- A Novel Economic Sharing Model In A Federation Of Selfish Cloud Providers

This Paper Presents A Novel Economic Model To Regulate Capacity Sharing In A Federation Of Hybrid Cloud Providers (CPs). The Proposed Work Models The Interactions Among The CPs As A Repeated Game Among Selfish Players That Aim At Maximizing Their Profit By Selling Their Unused Capacity In The Spot Market But Are Uncertain Of Future Workload Fluctuations. The Proposed Work First Establishes That The Uncertainty In Future Revenue Can Act As A Participation Incentive For Sharing In The Repeated Game. We Then Demonstrate How An Efficient Sharing Strategy Can Be Obtained By Solving A Simple Dynamic Programming Problem. The Obtained Strategy Is A Simple Update Rule That Depends Only On The Current Workloads And A Single Variable Summarizing Past Interactions. In Contrast To Existing Approaches, The Model Incorporates Historical And Expected Future Revenue As Part Of The Virtual Machine (VM) Sharing Decision. Moreover, These Decisions Are Enforced Neither By A Centralized Broker Nor By Predefined Agreements. Rather, The Proposed Model Employs A Simple Grim Trigger Strategy Where A CP Is Threatened By The Elimination Of Future VM Hosting By Other CPs. Simulation Results Demonstrate The Performance Of The Proposed Model In Terms Of The Increased Profit And The Reduction In The Variance Of Spot Market VM Availability And Prices.

A SOCIAL COMPUTE CLOUD- Allocating And Sharing Infrastructure Resources Via Social Networks

Social Network Platforms Have Rapidly Changed The Way That People Communicate And Interact. They Have Enabled The Establishment Of, And Participation In, Digital Communities As Well As The Representation, Documentation And Exploration Of Social Relationships. We Believe That As `apps' Become More Sophisticated, It Will Become Easier For Users To Share Their Own Services, Resources And Data Via Social Networks. To Substantiate This, We Present A Social Compute Cloud Where The Provisioning Of Cloud Infrastructure Occurs Through “friend” Relationships. In A Social Compute Cloud, Resource Owners Offer Virtualized Containers On Their Personal Computer(s) Or Smart Device(s) To Their Social Network. However, As Users May Have Complex Preference Structures Concerning With Whom They Do Or Do Not Wish To Share Their Resources, We Investigate, Via Simulation, How Resources Can Be Effectively Allocated Within A Social Community Offering Resources On A Best Effort Basis. In The Assessment Of Social Resource Allocation, We Consider Welfare, Allocation Fairness, And Algorithmic Runtime. The Key Findings Of This Work Illustrate How Social Networks Can Be Leveraged In The Construction Of Cloud Computing Infrastructures And How Resources Can Be Allocated In The Presence Of User Sharing Preferences.

INVESTIGATE DATA CENTER PERFORMANCE- A Stochastic Model To Investigate Data Center Performance And QoS In IaaS Cloud Computing Systems

Cloud Data Center Management Is A Key Problem Due To The Numerous And Heterogeneous Strategies That Can Be Applied, Ranging From The VM Placement To The Federation With Other Clouds. Performance Evaluation Of Cloud Computing Infrastructures Is Required To Predict And Quantify The Cost-benefit Of A Strategy Portfolio And The Corresponding Quality Of Service (QoS) Experienced By Users. Such Analyses Are Not Feasible By Simulation Or On-the-field Experimentation, Due To The Great Number Of Parameters That Have To Be Investigated. In This Paper, We Present An Analytical Model, Based On Stochastic Reward Nets (SRNs), That Is Both Scalable To Model Systems Composed Of Thousands Of Resources And Flexible To Represent Different Policies And Cloud-specific Strategies. Several Performance Metrics Are Defined And Evaluated To Analyze The Behavior Of A Cloud Data Center: Utilization, Availability, Waiting Time, And Responsiveness. A Resiliency Analysis Is Also Provided To Take Into Account Load Bursts. Finally, A General Approach Is Presented That, Starting From The Concept Of System Capacity, Can Help System Managers To Opportunely Set The Data Center Parameters Under Different Working Conditions.

PRIVACY PRESERVING AUTHENTICATION - Shared Authority Based Privacy-preserving Authentication Protocol In Cloud Computing

Cloud Computing Is An Emerging Data-interactive Paradigm In Which Users' Data Is Remotely Stored In An Online Cloud Server. Cloud Services Provide Great Conveniences For Users To Enjoy On-demand Cloud Applications Without Considering Local Infrastructure Limitations. During Data Access, Different Users May Be In A Collaborative Relationship, And Thus Data Sharing Becomes Significant To Achieve Productive Benefits. The Existing Security Solutions Mainly Focus On Authentication To Ensure That A User's Private Data Cannot Be Illegally Accessed, But They Neglect A Subtle Privacy Issue That Arises When A User Challenges The Cloud Server To Request Other Users' Data: The Access Request Itself May Reveal The User's Privacy, No Matter Whether Or Not It Obtains The Data Access Permissions. In This Paper, We Propose A Shared Authority Based Privacy-preserving Authentication Protocol (SAPA) To Address The Above Privacy Issue For Cloud Storage. In The SAPA, 1) Shared Access Authority Is Achieved By An Anonymous Access Request Matching Mechanism With Security And Privacy Considerations (e.g., Authentication, Data Anonymity, User Privacy, And Forward Security); 2) Attribute-based Access Control Is Adopted So That A User Can Only Access Its Own Data Fields; 3) Proxy Re-encryption Is Applied To Provide Data Sharing Among Multiple Users. Meanwhile, A Universal Composability (UC) Model Is Established To Prove That The SAPA Theoretically Has Design Correctness. The Proposed Protocol Is Thus Attractive For Multi-user Collaborative Cloud Applications.

SECURE AUTHORIZED DEDUPLICATION - A Hybrid Cloud Approach For Secure Authorized Deduplication

Data Deduplication Is One Of The Important Data Compression Techniques For Eliminating Duplicate Copies Of Repeating Data, And It Has Been Widely Used In Cloud Storage To Reduce The Amount Of Storage Space And Save Bandwidth. To Protect The Confidentiality Of Sensitive Data While Supporting Deduplication, The Convergent Encryption Technique Has Been Proposed To Encrypt The Data Before Outsourcing. To Better Protect Data Security, This Paper Makes The First Attempt To Formally Address The Problem Of Authorized Data Deduplication. Different From Traditional Deduplication Systems, The Differential Privileges Of Users Are Further Considered In The Duplicate Check Besides The Data Itself. We Also Present Several New Deduplication Constructions Supporting Authorized Duplicate Check In A Hybrid Cloud Architecture. Security Analysis Demonstrates That Our Scheme Is Secure In Terms Of The Definitions Specified In The Proposed Security Model. As A Proof Of Concept, We Implement A Prototype Of Our Proposed Authorized Duplicate Check Scheme And Conduct Testbed Experiments Using Our Prototype. We Show That Our Proposed Authorized Duplicate Check Scheme Incurs Minimal Overhead Compared To Normal Operations.
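
Convergent Encryption Itself Is Easy To Sketch: The Key Is Derived From The Plaintext's Hash, So Identical Files Yield Identical Ciphertexts And The Cloud Can Deduplicate Without Seeing The Plaintext. The Python Sketch Below Uses AES-GCM From The `cryptography` Package With A Nonce Also Derived From The Content (deterministic By Design Here, Since Key And Message Are Bound Together); The Paper's Authorized Duplicate-check Layer With Differential Privileges Is Not Modeled.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(data: bytes) -> tuple:
    key = hashlib.sha256(data).digest()           # key derived from content
    nonce = hashlib.sha256(key).digest()[:12]     # deterministic nonce
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    tag = hashlib.sha256(ciphertext).hexdigest()  # dedup tag
    return key, tag, ciphertext

store = {}                                        # tag -> ciphertext (cloud)

for owner, doc in [("alice", b"quarterly report"),
                   ("bob", b"quarterly report"),  # duplicate content
                   ("carol", b"meeting notes")]:
    key, tag, ct = convergent_encrypt(doc)
    if tag in store:
        print(f"{owner}: duplicate detected, stored nothing new")
    else:
        store[tag] = ct
        print(f"{owner}: uploaded {len(ct)} bytes")

print("unique ciphertexts kept:", len(store))     # 2, not 3
```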

AUDITING FOR SHARED DATA IN CLOUD - Privacy-Preserving Public Auditing For Shared Data In The Cloud

With Cloud Data Services, It Is Commonplace For Data To Be Not Only Stored In The Cloud, But Also Shared Across Multiple Users. Unfortunately, The Integrity Of Cloud Data Is Subject To Skepticism Due To The Existence Of Hardware/software Failures And Human Errors. Several Mechanisms Have Been Designed To Allow Both Data Owners And Public Verifiers To Efficiently Audit Cloud Data Integrity Without Retrieving The Entire Data From The Cloud Server. However, Public Auditing On The Integrity Of Shared Data With These Existing Mechanisms Will Inevitably Reveal Confidential Information—identity Privacy—to Public Verifiers. In This Paper, We Propose A Novel Privacy-preserving Mechanism That Supports Public Auditing On Shared Data Stored In The Cloud. In Particular, We Exploit Ring Signatures To Compute Verification Metadata Needed To Audit The Correctness Of Shared Data. With Our Mechanism, The Identity Of The Signer On Each Block In Shared Data Is Kept Private From Public Verifiers, Who Are Able To Efficiently Verify Shared Data Integrity Without Retrieving The Entire File. In Addition, Our Mechanism Is Able To Perform Multiple Auditing Tasks Simultaneously Instead Of Verifying Them One By One. Our Experimental Results Demonstrate The Effectiveness And Efficiency Of Our Mechanism When Auditing Shared Data Integrity.

PERFORMANCE AND COST EVALUATION - Performance And Cost Evaluation Of An Adaptive Encryption Architecture For Cloud Databases

The Cloud Database As A Service Is A Novel Paradigm That Can Support Several Internet-based Applications, But Its Adoption Requires The Solution Of Information Confidentiality Problems. We Propose A Novel Architecture For Adaptive Encryption Of Public Cloud Databases That Offers An Interesting Alternative To The Tradeoff Between The Required Data Confidentiality Level And The Flexibility Of The Cloud Database Structures At Design Time. We Demonstrate The Feasibility And Performance Of The Proposed Solution Through A Software Prototype. Moreover, We Propose An Original Cost Model That Is Oriented To The Evaluation Of Cloud Database Services In Plain And Encrypted Instances And That Takes Into Account The Variability Of Cloud Prices And Tenant Workloads During A Medium-term Period.

SOCIAL VIDEO SHARING IN CLOUD - AMES-Cloud: A Framework Of Adaptive Mobile Video Streaming And Efficient Social Video Sharing In The Clouds

While Demands On Video Traffic Over Mobile Networks Have Been Soaring, The Wireless Link Capacity Cannot Keep Up With The Traffic Demand. The Gap Between The Traffic Demand And The Link Capacity, Along With Time-varying Link Conditions, Results In Poor Service Quality Of Video Streaming Over Mobile Networks, Such As Long Buffering Time And Intermittent Disruptions. Leveraging Cloud Computing Technology, We Propose A New Mobile Video Streaming Framework, Dubbed AMES-Cloud, Which Has Two Main Parts: Adaptive Mobile Video Streaming (AMoV) And Efficient Social Video Sharing (ESoV). AMoV And ESoV Construct A Private Agent To Provide Video Streaming Services Efficiently For Each Mobile User. For A Given User, AMoV Lets Her Private Agent Adaptively Adjust Her Streaming Flow With A Scalable Video Coding Technique Based On The Feedback Of Link Quality. Likewise, ESoV Monitors The Social Network Interactions Among Mobile Users, And Their Private Agents Try To Prefetch Video Content In Advance. We Implement A Prototype Of The AMES-Cloud Framework To Demonstrate Its Performance. It Is Shown That The Private Agents In The Clouds Can Effectively Provide Adaptive Streaming And Perform Video Sharing (i.e., Prefetching) Based On The Social Network Analysis.
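
The Adaptive Part Of AMoV Ultimately Comes Down To Picking A Streaming Bitrate From Link-quality Feedback. A Minimal Python Sketch, Assuming An Exponentially Weighted Moving Average Of Measured Throughput And A Fixed Bitrate Ladder With A Safety Margin (all Illustrative Choices, Not The Paper's Controller):

```python
# Bitrate ladder in kbit/s (assumed); pick the highest safe rung.
LADDER = [235, 375, 560, 750, 1050, 1750, 2350]
MARGIN = 0.8          # use at most 80% of estimated throughput
ALPHA = 0.3           # EWMA smoothing factor

def choose_bitrate(estimate_kbps: float) -> int:
    safe = estimate_kbps * MARGIN
    eligible = [r for r in LADDER if r <= safe]
    return eligible[-1] if eligible else LADDER[0]

estimate = 0.0
# Simulated per-segment throughput feedback from the private agent (kbit/s).
for sample in [3000, 2800, 1200, 900, 2500, 3100]:
    estimate = sample if estimate == 0 else ALPHA * sample + (1 - ALPHA) * estimate
    print(f"measured {sample:5d} kbps -> estimate {estimate:7.1f} "
          f"-> stream at {choose_bitrate(estimate)} kbps")
```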

DATA SHARING FOR DYNAMIC GROUPS IN CLOUD - MONA: Multi-Owner Data Sharing For Dynamic Groups In The Cloud

With The Character Of Low Maintenance, Cloud Computing Provides An Economical And Efficient Solution For Sharing Group Resources Among Cloud Users. Unfortunately, Sharing Data In A Multi-owner Manner While Preserving Data And Identity Privacy From An Untrusted Cloud Is Still A Challenging Issue, Due To The Frequent Change Of Membership. In This Paper, We Propose A Secure Multi-owner Data Sharing Scheme, Named Mona, For Dynamic Groups In The Cloud. By Leveraging Group Signature And Dynamic Broadcast Encryption Techniques, Any Cloud User Can Anonymously Share Data With Others. Meanwhile, The Storage Overhead And Encryption Computation Cost Of Our Scheme Are Independent Of The Number Of Revoked Users. In Addition, We Analyze The Security Of Our Scheme With Rigorous Proofs And Demonstrate Its Efficiency In Experiments.

DATA SECURITY IN CLOUD COMPUTING - Ensuring Data Security In Cloud Computing

With The Advent Of Cloud Computing, Data Owners Are Motivated To Outsource Their Complex Data Management Systems From Local Sites To Commercial Public Cloud For Great Flexibility And Economic Savings. But For Protecting Data Privacy, Sensitive Data Has To Be Encrypted Before Outsourcing, Which Obsoletes Traditional Data Utilization Based On Plaintext Keyword Search. Thus, Enabling An Encrypted Cloud Data Search Service Is Of Paramount Importance. Considering The Large Number Of Data Users And Documents In Cloud, It Is Crucial For The Search Service To Allow Multi-keyword Query And Provide Result Similarity Ranking To Meet The Effective Data Retrieval Need. Related Works On Searchable Encryption Focus On Single Keyword Search Or Boolean Keyword Search, And Rarely Differentiate The Search Results. In This Paper, For The First Time, We Define And Solve The Challenging Problem Of Privacy-preserving Multi-keyword Ranked Search Over Encrypted Cloud Data (MRSE), And Establish A Set Of Strict Privacy Requirements For Such A Secure Cloud Data Utilization System To Become A Reality. Among Various Multi-keyword Semantics, We Choose The Efficient Principle Of “coordinate Matching”, I.e., As Many Matches As Possible, To Capture The Similarity Between Search Query And Data Documents, And Further Use “inner Product Similarity” To Quantitatively Formalize Such Principle For Similarity Measurement. We First Propose A Basic MRSE Scheme Using Secure Inner Product Computation, And Then Significantly Improve It To Meet Different Privacy Requirements In Two Levels Of Threat Models. Thorough Analysis Investigating Privacy And Efficiency Guarantees Of Proposed Schemes Is Given, And Experiments On The Real-world Dataset Further Show Proposed Schemes Indeed Introduce Low Overhead On Computation And Communication.
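
The “coordinate Matching” And “inner Product Similarity” Principles Are Straightforward To Sketch In Plaintext Form (the Paper's Contribution Is Performing This Computation Securely Over Encrypted Vectors): Documents And Queries Become Binary Keyword Vectors Over A Dictionary, And Ranking Is By Dot Product. The Tiny Vocabulary And Documents Below Are Assumptions.

```python
import numpy as np

VOCAB = ["cloud", "encryption", "search", "ranking", "privacy"]

def to_vector(keywords: set) -> np.ndarray:
    # Binary keyword-presence vector over the dictionary.
    return np.array([1 if w in keywords else 0 for w in VOCAB])

docs = {
    "d1": {"cloud", "encryption", "privacy"},
    "d2": {"cloud", "search", "ranking"},
    "d3": {"encryption", "search", "privacy"},
}
query = {"encryption", "privacy"}

q = to_vector(query)
# Inner product similarity == number of matched coordinates here.
scores = {name: int(to_vector(kws) @ q) for name, kws in docs.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, "matches", score, "query keywords")
```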

MULTI CLOUD ARCHITECTURE - Security And Privacy Enhancing Multi-Cloud Architecture

Security Challenges Are Still Among The Biggest Obstacles When Considering The Adoption Of Cloud Services. This Triggered A Lot Of Research Activities, Resulting In A Quantity Of Proposals Targeting The Various Cloud Security Threats. Alongside With These Security Issues, The Cloud Paradigm Comes With A New Set Of Unique Features, Which Open The Path Toward Novel Security Approaches, Techniques, And Architectures. This Paper Provides A Survey On The Achievable Security Merits By Making Use Of Multiple Distinct Clouds Simultaneously. Various Distinct Architectures Are Introduced And Discussed According To Their Security And Privacy Capabilities And Prospects.

SERVICE BASED APPLICATION IN THE CLOUD- A Decentralized Self-adaptation Mechanism For Service-based Applications In The Cloud

Cloud Computing, With Its Promise Of (almost) Unlimited Computation, Storage, And Bandwidth, Is Increasingly Becoming The Infrastructure Of Choice For Many Organizations. As Cloud Offerings Mature, Service-based Applications Need To Dynamically Recompose Themselves To Self-adapt To Changing QoS Requirements. In This Paper, We Present A Decentralized Mechanism For Such Self-adaptation, Using Market-based Heuristics. We Use A Continuous Double-auction To Allow Applications To Decide Which Services To Choose, Among The Many On Offer. We View An Application As A Multi-agent System And The Cloud As A Marketplace Where Many Such Applications Self-adapt. We Show Through A Simulation Study That Our Mechanism Is Effective For The Individual Application As Well As From The Collective Perspective Of All Applications Adapting At The Same Time.

SECURE DATA FORWARDING - A Secure Erasure Code Based Cloud Storage System With Secure Data Forwarding

A Cloud Storage System, Consisting Of A Collection Of Storage Servers, Provides Long-term Storage Services Over The Internet. Storing Data In A Third Party's Cloud System Causes Serious Concern Over Data Confidentiality. General Encryption Schemes Protect Data Confidentiality, But Also Limit The Functionality Of The Storage System Because A Few Operations Are Supported Over Encrypted Data. Constructing A Secure Storage System That Supports Multiple Functions Is Challenging When The Storage System Is Distributed And Has No Central Authority. We Propose A Threshold Proxy Re-encryption Scheme And Integrate It With A Decentralized Erasure Code Such That A Secure Distributed Storage System Is Formulated. The Distributed Storage System Not Only Supports Secure And Robust Data Storage And Retrieval, But Also Lets A User Forward His Data In The Storage Servers To Another User Without Retrieving The Data Back. The Main Technical Contribution Is That The Proxy Re-encryption Scheme Supports Encoding Operations Over Encrypted Messages As Well As Forwarding Operations Over Encoded And Encrypted Messages. Our Method Fully Integrates Encrypting, Encoding, And Forwarding. We Analyze And Suggest Suitable Parameters For The Number Of Copies Of A Message Dispatched To Storage Servers And The Number Of Storage Servers Queried By A Key Server. These Parameters Allow More Flexible Adjustment Between The Number Of Storage Servers And Robustness.

HIERARCHICAL ATTRIBUTE - HASBE-A Hierarchical Attribute Based Solution For Flexible And Scalable Access Control In Cloud Computing

Cloud Computing Has Emerged As One Of The Most Influential Paradigms In The IT Industry In Recent Years. Since This New Computing Technology Requires Users To Entrust Their Valuable Data To Cloud Providers, There Have Been Increasing Security And Privacy Concerns On Outsourced Data. Several Schemes Employing Attribute-based Encryption (ABE) Have Been Proposed For Access Control Of Outsourced Data In Cloud Computing; However, Most Of Them Suffer From Inflexibility In Implementing Complex Access Control Policies. In Order To Realize Scalable, Flexible, And Fine-grained Access Control Of Outsourced Data In Cloud Computing, In This Paper, We Propose Hierarchical Attribute-set-based Encryption (HASBE) By Extending Ciphertext-policy Attribute-set-based Encryption (ASBE) With A Hierarchical Structure Of Users. The Proposed Scheme Not Only Achieves Scalability Due To Its Hierarchical Structure, But Also Inherits Flexibility And Fine-grained Access Control In Supporting Compound Attributes Of ASBE. In Addition, HASBE Employs Multiple Value Assignments For Access Expiration Time To Deal With User Revocation More Efficiently Than Existing Schemes. We Formally Prove The Security Of HASBE Based On Security Of The Ciphertext-policy Attribute-based Encryption Scheme By Bethencourt And Analyze Its Performance And Computational Complexity. We Implement Our Scheme And Show That It Is Both Efficient And Flexible In Dealing With Access Control For Outsourced Data In Cloud Computing With Comprehensive Experiments.

RANKING PREDICTION - QoS Ranking Prediction For Cloud Services

Cloud Computing Is Becoming Popular. Building High-quality Cloud Applications Is A Critical Research Problem. QoS Rankings Provide Valuable Information For Making Optimal Cloud Service Selection From A Set Of Functionally Equivalent Service Candidates. To Obtain QoS Values, Real-world Invocations On The Service Candidates Are Usually Required. To Avoid The Time-consuming And Expensive Real-world Service Invocations, This Paper Proposes A QoS Ranking Prediction Framework For Cloud Services By Taking Advantage Of The Past Service Usage Experiences Of Other Consumers. Our Proposed Framework Requires No Additional Invocations Of Cloud Services When Making QoS Ranking Prediction. Two Personalized QoS Ranking Prediction Approaches Are Proposed To Predict The QoS Rankings Directly. Comprehensive Experiments Are Conducted Employing Real-world QoS Data, Including 300 Distributed Users And 500 Real-world Web Services All Over The World. The Experimental Results Show That Our Approaches Outperform Other Competing Approaches.
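
A Toy Version Of Personalized QoS Ranking Prediction In Pure Python: Measure How Similarly Two Users Order The Services They Both Invoked (pairwise Rank Agreement, Kendall-style), Then Predict The QoS Of An Uninvoked Service For The Active User From Rank-similar Users' Observations. This Weighted-neighbor Simplification Is An Assumption, Not The Paper's Exact Ranking Algorithms.

```python
from itertools import combinations

# Observed response times (ms): user -> {service: RTT}; lower is better.
observed = {
    "active": {"s1": 120, "s2": 300},
    "u1":     {"s1": 110, "s2": 280, "s3": 90},
    "u2":     {"s1": 400, "s2": 100, "s3": 500},
}

def rank_agreement(a: dict, b: dict) -> float:
    # Fraction of commonly observed service pairs ordered the same way.
    common = sorted(set(a) & set(b))
    pairs = list(combinations(common, 2))
    if not pairs:
        return 0.0
    agree = sum((a[x] - a[y]) * (b[x] - b[y]) > 0 for x, y in pairs)
    return agree / len(pairs)

target = "s3"  # service the active user never invoked
weights = {u: rank_agreement(observed["active"], qos)
           for u, qos in observed.items() if u != "active" and target in qos}

# Weighted average of similar users' observations as the predicted QoS.
total = sum(weights.values())
pred = sum(w * observed[u][target] for u, w in weights.items()) / total
print(f"similarities: {weights}, predicted RTT for {target}: {pred:.0f} ms")
```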

CQA POST VOTING PREDICTION - QAAN Question Answering Attention Network For Community Question Classification

Community Question Answering (CQA) Provides Platforms For Users With Various Backgrounds To Obtain Information And Share Knowledge. In Recent Years, With The Rapid Development Of Such Online Platforms, An Enormous Amount Of Archive Data Has Accumulated, And It Becomes More And More Difficult For Expert Users To Identify Desirable Questions. To Reduce The Proportion Of Unanswered Questions In CQA And To Help Expert Users Find The Questions They Are Interested In, Question Classification Has Become An Important Task Of CQA; It Aims To Assign A Newly Posted Question To A Specific Preset Category. In This Paper, We Propose A Novel Question Answering Attention Network (QAAN) For Investigating The Role Of A Question's Paired Answer In Classification. Specifically, QAAN Studies The Correlation Between A Question And Its Paired Answer, Taking The Question As The Primary Part Of The Representation While Aggregating The Answer Information Based On Its Similarity And Disparity With The Question. Our Experiment Is Implemented On The Yahoo! Answers Dataset. The Results Show That QAAN Outperforms All The Baseline Models.
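For Readers Unfamiliar With The Attention Mechanism Underlying Such Models, The Toy numpy Sketch Below Shows Scaled Dot-product Attention Of A Question Vector Over Answer Token Vectors; It Is A Generic Illustration, Not The QAAN Architecture, And All Dimensions And Embeddings Are Made Up.

# A toy numpy sketch of the question-answer attention idea: the question
# vector attends over answer token vectors, and the attended answer
# summary is concatenated to the question representation.
import numpy as np

rng = np.random.default_rng(0)
d = 8
question = rng.normal(size=d)            # pooled question representation
answer_tokens = rng.normal(size=(5, d))  # 5 answer token embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Scaled dot-product attention of the question over the answer tokens.
scores = answer_tokens @ question / np.sqrt(d)
weights = softmax(scores)
answer_summary = weights @ answer_tokens

# The question stays the primary signal; attended answer info is appended.
features = np.concatenate([question, answer_summary])
print(weights.round(3), features.shape)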

REPRESENTATIVE TRAVEL ROUTE RECOMMENDATION- Personalized Tourism Route Recommendation System Based On Dynamic Clustering Of User Groups

Tourism Path Dynamic Planning Is An Asynchronous Group Model Planning Problem. It Is Required To Find Group Patterns With Similar Trajectory Behavior Under The Constraint Of Unequal Time Intervals. Traditional Trajectory Group Pattern Mining Algorithms Often Deal With GPS Data Sampled At Fixed Time Intervals, So They Cannot Be Directly Used In Coterie Pattern Mining. At The Same Time, Traditional Group Pattern Mining Lacks Semantic Information, Which Reduces The Integrity And Accuracy Of Personalized Travel Route Recommendation. Therefore, This Paper Proposes A Semantic-based, Distance-sensitive Recommendation Strategy. In Order To Efficiently Process Large-scale Social Network Trajectory Data, This Paper Uses The MapReduce Programming Model With Optimized Clustering To Mine Coterie Group Patterns. The Experimental Results Show That, Under The MapReduce Programming Model, Coterie Group Pattern Mining With Optimized Clustering And Semantic Information Is Superior To Traditional Group Pattern Mining In Personalized Travel Route Recommendation Quality, And Can Effectively Process Large-scale Social Network Trajectory Data.

CREDIT CARD FRAUD DETECTION - Fraud Detection In Credit Card Data Using Unsupervised Machine Learning Based Scheme

The Development Of Communication Technologies And E-commerce Has Made The Credit Card The Most Common Technique Of Payment For Both Online And Regular Purchases. So, Strong Security Is Expected In This System To Prevent Fraudulent Transactions. Fraudulent Transactions In Credit Card Data Are Increasing Each Year. In This Direction, Researchers Are Trying Novel Techniques To Detect And Prevent Such Frauds. However, There Is Always A Need For Techniques That Precisely And Efficiently Detect These Frauds. This Paper Proposes A Scheme For Detecting Frauds In Credit Card Data Which Uses A Neural Network (NN) Based Unsupervised Learning Technique. The Proposed Method Outperforms The Existing Approaches Of Auto Encoder (AE), Local Outlier Factor (LOF), Isolation Forest (IF), And K-Means Clustering. The Proposed NN Based Fraud Detection Method Performs With 99.87% Accuracy, Whereas The Existing Methods AE, IF, LOF, And K-Means Give 97%, 98%, 98%, And 99.75% Accuracy, Respectively.
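As A Hedged Sketch Of The Unsupervised Baselines Named Above, The Snippet Below Runs scikit-learn's IsolationForest And LocalOutlierFactor On Synthetic Transaction-like Data; The Paper's Own Neural Network Model Is Not Reproduced, And The Data, Contamination Rate, And Resulting Accuracies Are Purely Illustrative.

# Two of the unsupervised baselines mentioned above, applied to
# synthetic transaction-like data with injected anomalies.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
normal = rng.normal(0, 1, size=(950, 4))   # legitimate transactions
fraud = rng.normal(4, 1, size=(50, 4))     # anomalous transactions
X = np.vstack([normal, fraud])
y = np.array([1] * 950 + [-1] * 50)        # ground truth: -1 = fraud

iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)

for name, pred in [("IsolationForest", iso.predict(X)),
                   ("LocalOutlierFactor", lof.fit_predict(X))]:
    print(f"{name}: accuracy = {(pred == y).mean():.3f}")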

SECURE MINING OF ASSOCIATION RULES - Scalable Privacy-Preserving Distributed Extremely Randomized Trees For Structured Data With Multiple Colluding Parties

Today, In Many Real-world Applications Of Machine Learning Algorithms, The Data Is Stored On Multiple Sources Instead Of At One Central Repository. In Many Such Scenarios, The Raw Data Cannot Be Transferred To A Center For Analysis Due To Privacy Concerns And Legal Obligations (e.g., For Medical Data) Or Communication And Computation Overhead (For Instance, For Large-scale Data). Therefore, New Machine Learning Approaches Are Proposed For Learning From Distributed Data In Such Settings. In This Paper, We Extend The Distributed Extremely Randomized Trees (ERT) Approach With Respect To Privacy And Scalability. First, We Extend Distributed ERT To Be Resilient To The Number Of Colluding Parties In A Scalable Fashion. Then, We Extend Distributed ERT To Improve Its Scalability Without Any Major Loss In Classification Performance. We Refer To Our Proposed Approach As K-PPD-ERT, Or Privacy-Preserving Distributed Extremely Randomized Trees With K Colluding Parties.
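Extremely Randomized Trees Are Available Centrally In scikit-learn As ExtraTreesClassifier; The Sketch Below Shows Only This Underlying Learner On A Synthetic Dataset, Not The Paper's Privacy-preserving Distributed Protocol Or Its Collusion Resilience.

# The centralized ERT learner that the distributed protocol builds on.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Each party could train such a forest on its local partition; the paper
# aggregates the partial results with collusion-resilient secure computation.
model = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", round(model.score(Xte, yte), 3))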

TAXI DRIVERS ROUTE CHOICE BEHAVIOR USING THE TRACE RECORDS- A Mixed Path Size Logit-Based Taxi Customer-Search Model Considering Spatio-Temporal Factors In Route Choice

This Paper Introduces A Model To Analyze The Route Choice Behavior Of Taxi Drivers When Finding The Next Passenger In An Urban Road Network. Considering Path Overlapping Between Selected Routes In The Process Of Customer-searching, A Mixed Path Size Logit (MPSL) Model Is Proposed To Analyze Route Choice Behaviors By Considering Spatio-temporal Features Of A Route, Including Customer Generation Rate, Path Travel Time, Cumulative Intersection Delay, Path Distance, And Path Size. In Particular, Customer Generation Rate Is Defined As Attraction Strength Based On Historical Pick-up Records On The Route, While Intersection Travel Delay And Path Travel Time Are Estimated From Large-scale Taxi Global Positioning System (GPS) Trajectories. In The Experiment, GPS Data Were Collected From About 36,000 Taxi Vehicles In Beijing At 30-s Intervals Over Six Months. In The Model Application, An Area Of Approximately 10 Square Kilometers In The Center Of Beijing Is Selected To Demonstrate The Effectiveness Of The Proposed Model. The Results Indicate That The MPSL Model Can Effectively Analyze Route Choice Behavior In The Customer-searching Process And Achieves Higher Accuracy Than The Traditional Multinomial Logit Model And The Basic PSL Model.

FILE TRANSFER USING CRYPTOGRAPHIC TECHNIQUE - Enhancing Secure Digital Communication Media Using Cryptographic Steganography Techniques

Data Hiding Is An Anti-computer-forensics Process That Makes Data Difficult To Access. Steganography, One Approach To Data Hiding, Merges Texts, Files, Or Other Multimedia Files Within Other Texts, Files, Or Multimedia Files To Reduce The Visibility Of An Attack. Cryptography Changes Readable Text Into Illegible Information. This Paper Presents A Secure Communication Medium For Transferring Text, Multimedia, Or Other Relevant Digital Files Between Sender And Receiver. To Secure The Communication Medium, Possible Threats And Vulnerabilities Must Be Reduced; The Transferred Medium Is Therefore The Main Consideration For A Firm Communication System. Data Hiding Techniques Are Used To Improve The Security Of The Communication Medium Using Salt Encryption. This Paper Proposes A Methodology For Developing Secure Communication Media Using A Combination Of Cryptography And Steganography Techniques, And Describes Experimental Results From Different Technical Analyses.
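A Minimal Sketch Of The Combination Described Above, Assuming The Third-party cryptography Package For Encryption: The Message Is First Encrypted With Fernet, Then Its Bits Are Hidden In The Least Significant Bits Of A Cover Byte Stream Standing In For Image Pixel Data. The Paper's Salt-based Encryption And Exact Embedding Scheme Are Not Reproduced.

# Encrypt-then-hide: Fernet encryption followed by LSB steganography.
from cryptography.fernet import Fernet

def embed(cover: bytearray, payload: bytes) -> bytearray:
    data = len(payload).to_bytes(4, "big") + payload   # length header
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    assert len(bits) <= len(cover), "cover too small"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite the LSB only
    return stego

def extract(stego: bytes) -> bytes:
    def read_bytes(n, bit_offset):
        out = bytearray()
        for b in range(n):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (stego[bit_offset + b * 8 + i] & 1)
            out.append(byte)
        return bytes(out)
    length = int.from_bytes(read_bytes(4, 0), "big")
    return read_bytes(length, 32)

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"meet at dawn")
cover = bytearray(range(256)) * 40          # stand-in for pixel bytes
stego = embed(cover, ciphertext)
assert Fernet(key).decrypt(extract(stego)) == b"meet at dawn"
print("recovered OK")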

PREDICT LENGTH OF STAY OF STROKE PATIENTS USING DATA MINING TECHNIQUES - SNOMED CT-Based Standardized E-Clinical Pathways For Enabling Big Data Analytics In Healthcare

Automation Of Healthcare Facilities Represents A Challenging Task Of Streamlining A Highly Information-intensive Sector. Modern Healthcare Processes Produce Large Amounts Of Data That Have Great Potential For Health Policymakers And Data Science Researchers. However, A Considerable Portion Of Such Data Is Not Captured In Electronic Format And Remains Hidden Inside Paperwork. A Major Source Of Missing Data In Healthcare Is Paper-based Clinical Pathways (CPs). CPs Are Healthcare Plans That Detail The Interventions For The Treatment Of Patients, And Thus Are The Primary Source For Healthcare Data. However, Most CPs Are Used As Paper-based Documents And Are Not Fully Automated. A Key Contribution Towards The Full Automation Of CPs Is Their Proper Computer Modeling And The Encoding Of Their Data With International Clinical Terminologies. We Present In This Research An Ontology-based CP Automation Model In Which CP Data Are Standardized With SNOMED CT, Thus Enabling Machine Learning Algorithms To Be Applied To CP-based Datasets. CPs Automated Under This Model Contribute Significantly To Reducing Data Missingness Problems, Enabling Detailed Statistical Analyses On CP Data, And Improving The Results Of Data Analytics Algorithms. Our Experimental Results On Predicting The Length Of Stay (LOS) Of Stroke Patients Using A Dataset Resulting From An E-clinical Pathway Demonstrate Improved Prediction Results Compared With LOS Prediction Using Traditional EHR-based Datasets. Fully Automated CPs Enrich Medical Datasets With More CP Data And Open New Opportunities For Machine Learning Algorithms To Show Their Full Potential In Improving Healthcare, Reducing Costs, And Increasing Patient Satisfaction.

PREDICT CHANGING STUDENTS ATTITUDE USING DATA MINING - Supporting Teachers To Monitor Students Learning Progress In An Educational Environment With Robotics Activities

Educational Robotics Has Proven Its Positive Impact On The Performances And Attitudes Of Students. However, The Educational Environments That Employ It Rarely Provide Teachers With Relevant Information That Can Be Used For Effective Monitoring Of The Student Learning Progress. To Overcome These Limitations, In This Paper We Present IDEE (Integrated Didactic Educational Environment), An Educational Environment For Physics That Uses The LEGO Mindstorms EV3 Educational Kit As Its Robotic Component. To Provide Support To Teachers, IDEE Includes A Dashboard That Provides Them With Information About The Students' Learning Process. This Analysis Is Done By Means Of An Additive Factor Model (AFM), A Well-known Technique In The Educational Data Mining Research Area. However, It Has Usually Been Employed To Carry Out Analysis Of Students' Performance Data Outside The System. This Can Be A Burden For The Teacher Who, In Most Cases, Is Not An Expert In Data Analysis. Our Goal In This Paper Is To Show How The Coefficients Of The AFM Provide Valuable Information To The Teacher Without Requiring Any Deep Expertise In Data Analysis. In Addition, We Show An Improved Version Of The AFM That Provides A Deeper Understanding Of The Students' Learning Process.

MALWARE DETECTION IN GOOGLE PLAY - Towards De-Anonymization Of Google Play Search Rank Fraud

Search Rank Fraud, The Fraudulent Promotion Of Products Hosted On Peer-review Sites, Is Driven By Expert Workers Recruited Online, Often From Crowdsourcing Sites. In This Paper We Introduce The Fraud De-anonymization Problem, Which Goes Beyond Fraud Detection To Unmask The Human Masterminds Responsible For Posting Search Rank Fraud In Peer-review Sites. We Collect And Study Data From Crowdsourced Search Rank Fraud Jobs, And Survey The Capabilities And Behaviors Of 58 Search Rank Fraud Workers Recruited From 6 Crowdsourcing Sites. We Collect A Gold Standard Dataset Of Google Play User Accounts Attributed To 23 Crowdsourced Workers And Analyze Their Fraudulent Behaviors In The Wild. We Propose Dolos, A Fraud De-anonymization System That Leverages Traits And Behaviors We Extract From Our Studies To Attribute Detected Fraud To Crowdsourcing Site Workers, And Thus To Real Identities And Bank Accounts. We Introduce MCDense, A Min-cut Dense Component Detection Algorithm, To Uncover Groups Of User Accounts Controlled By Different Workers, And Use Stylometry And Supervised Learning To Attribute Them To Crowdsourcing Site Profiles. Dolos Correctly Identified The Owners Of 95 Percent Of Fraud Worker-controlled Communities, And Uncovered Fraud Workers Who Promoted As Many As 97.5 Percent Of The Fraud Apps We Collected From Google Play. When Evaluated On 13,087 Apps (820,760 Reviews), Which We Monitored Over More Than 6 Months, Dolos Identified 1,056 Apps With Suspicious Reviewer Groups. We Report Orthogonal Evidence Of Their Fraud, Including Fraud Duplicates And Fraud Re-posts. Dolos Significantly Outperformed Adapted Dense Subgraph Detection And Loopy Belief Propagation Competitors On Two New Coverage Scores That Measure The Quality Of Detected Community Partitions.

LOCATION AWARE KEYWORD QUERY SUGGESTION - Integrating “Random Forest” With Indexing And Query Processing For Personalized Search

The Internet Has Become An Integral Part Of At Least 4.4 Billion Lives. An Average Person Looks At Their Device At Least 20 Times A Day. One Can Only Imagine The Number Of Queries A Search Engine Gets On A Daily Basis. With The Help Of All The Data Acquired Over The Years, The Internet Updates Us With All The Biggest Trends And Live Events Happening All Over The World. A Search Engine Is Able To Provide Query Suggestions Based On The Number Of Times A Keyword Has Been Searched For, Or On How The Current Query Relates To A Certain Trend. All These Trends Are Updated On Every Device, Internationally Or Locally. This Concept Is Generalized Across All Devices That Use Any Kind Of Search Engine In Any Application. Through This Paper We Propose Using Random Forest As A Predictive Model Integrated With The Indexing Process Of The Search Engine To Produce Query Suggestions That A User Would Actually Want To Search, Contrary To The Query Suggestions That Are Usually Displayed Based On Hyped Trends And Fashion.
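As A Hedged Sketch Of The Proposal, The Snippet Below Trains scikit-learn's RandomForestClassifier On Made-up Per-user Features For Candidate Suggestions And Ranks Candidates By Predicted Selection Probability; The Features, Data, And Candidate Queries Are All Illustrative Assumptions, Not The Paper's Setup.

# Rank candidate query suggestions with a Random Forest trained on
# per-user features instead of raw trend counts alone.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Features per (user, candidate query): [user_past_clicks, trend_score,
# keyword_overlap_with_history]; label = 1 if the user chose it.
X = rng.random((500, 3))
y = (0.6 * X[:, 0] + 0.1 * X[:, 1] + 0.3 * X[:, 2]
     + rng.normal(0, 0.1, 500) > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

candidates = ["python tutorial", "python snake", "python jobs"]
feats = np.array([[0.9, 0.2, 0.8],    # matches the user's history
                  [0.1, 0.9, 0.1],    # merely trending
                  [0.5, 0.4, 0.6]])
scores = model.predict_proba(feats)[:, 1]
for q, s in sorted(zip(candidates, scores), key=lambda t: -t[1]):
    print(f"{s:.2f}  {q}")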

USER TRUST AND ITEM RATINGS PREDICT - A Novel Implicit Trust Recommendation Approach For Rating Prediction

Rating Prediction, An Application Widely Used In Recommender Systems, Has Gradually Become A Valuable Way To Help Users Narrow Down Their Choices Quickly And Make Wise Decisions From A Vast Amount Of Information. However, Most Existing Collaborative Recommendation Models Suffer From Poor Accuracy Due To Data Sparsity And Cold-start Problems, Since Recommender Systems Contain Only A Small Amount Of Explicit Data. To Solve This Problem, A New Implicit Trust Recommendation Approach (ITRA) Is Proposed To Generate Item Rating Predictions By Mining And Utilizing Users' Implicit Information In Recommender Systems. Specifically, A User Trust Neighbor Set With Preferences And Tastes Similar To A Target User Is First Obtained By A Trust Expansion Strategy Via User Trust Diffusion Features In A Trust Network. Then, The Trust Ratings Mined From User Trust Neighbors Are Used To Compute Trust Similarity Among Users Based On A User Collaborative Filtering Model. Finally, Using The Above Filtered Trust Ratings And User Trust Similarity, The Prediction Results Are Generated By A Trust Weighting Method. In Addition, Empirical Experiments Are Conducted On Three Real-world Datasets, And The Results Demonstrate That Our Rating Prediction Model Has Obvious Advantages Over State-of-the-art Comparison Methods In Terms Of The Accuracy Of Recommendations.
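A Simplified Sketch In The Spirit Of ITRA (With An Assumed Toy Trust Network And Rating Table, Not The Paper's Exact Formulas): Trust Neighbours Are Expanded By One Diffusion Step Through The Trust Network, And A Target Rating Is Predicted As A Trust-weighted Deviation From User Means.

# Trust expansion plus trust-weighted rating prediction.
import numpy as np

trust = {  # directed trust edges, hypothetical
    "u": {"a": 0.9, "b": 0.6},
    "a": {"c": 0.8},
    "b": {"c": 0.5},
}
ratings = {"a": {"i1": 4, "i2": 2}, "b": {"i1": 5},
           "c": {"i1": 3, "i2": 4}, "u": {"i2": 3}}

def expanded_trust(user, decay=0.7):
    """One-step trust diffusion: trust(u, w) = max over v of t(u,v)*t(v,w)."""
    out = dict(trust.get(user, {}))
    for v, tv in trust.get(user, {}).items():
        for w, tw in trust.get(v, {}).items():
            if w != user:
                out[w] = max(out.get(w, 0.0), decay * tv * tw)
    return out

def predict(user, item):
    neighbours = expanded_trust(user)
    mean_u = np.mean(list(ratings[user].values()))
    num = den = 0.0
    for v, t in neighbours.items():
        if item in ratings.get(v, {}):
            mean_v = np.mean(list(ratings[v].values()))
            num += t * (ratings[v][item] - mean_v)
            den += t
    return mean_u if den == 0 else mean_u + num / den

print("predicted rating for (u, i1):", round(predict("u", "i1"), 2))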

PRIVACY POLICY INFERENCE OF USER-UPLOADED IMAGES - User Flagging For Posts At 3DTube.org, The First Social Platform For 3D-Exclusive Contents

Social Networks Have Been A Popular Way For A Community To Share Content, Information, And News. Despite Section 230 Of The Communications Decency Act Of 1996 Protecting Social Platforms From Legal Liability Regarding User Uploaded Contents Of Their Platforms In The USA, There Has Been A Recent Call For Some Jurisdiction Over Platform Management Practices. This Duty Of Potential Jurisdiction Would Be Especially Challenging For Social Networks That Are Rich In Multimedia Contents, Such As 3DTube.org, Since 3D Capabilities Have A History Of Attracting Adult Materials And Other Controversial Content. This Paper Presents The Design Of 3DTube.org To Address Two Major Issues: (1) The Need For A Social Media Platform Of 3D Contents And (2) The Policies And Designs For Mediation Of Said Contents. Content Mediation Can Be Seen As A Compromise Between Two Conflicting Goals: Platform Micromanaging Of Content, Which Is Resource-intensive, And User Notification Of Flagged Content And Material, Prior To Viewing. This Paper Details 3DTube.org's Solution To Such A Compromise.

SEMANTICALLY SECURE ENCRYPTED RELATIONAL DATA USING K-NEAREST NEIGHBOR CLASSIFICATION - A Distributed Storage And Computation K-Nearest Neighbor Algorithm Based Cloud-Edge Computing For Cyber-Physical-Social Systems

The K-nearest Neighbor (kNN) Algorithm Is A Classic Supervised Machine Learning Algorithm. It Is Widely Used In Cyber-physical-social Systems (CPSS) To Analyze And Mine Data. However, In Practical CPSS Applications, The Standard Linear kNN Algorithm Struggles To Efficiently Process Massive Data Sets. This Paper Proposes A Distributed Storage And Computation K-nearest Neighbor (D-kNN) Algorithm. The D-kNN Algorithm Has The Following Advantages: First, The Concept Of K-nearest Neighbor Boundaries Is Proposed, And The K-nearest Neighbor Search Within These Boundaries Can Effectively Reduce The Time Complexity Of kNN. Second, Based On The K-nearest Neighbor Boundary, Massive Data Sets Beyond The Main Storage Space Are Stored On Distributed Storage Nodes. Third, The Algorithm Performs K-nearest Neighbor Searches Efficiently By Performing Distributed Calculations At Each Storage Node. Finally, A Series Of Experiments Were Performed To Verify The Effectiveness Of The D-kNN Algorithm. The Experimental Results Show That The D-kNN Algorithm Based On Distributed Storage And Calculation Effectively Improves The Efficiency Of K-nearest Neighbor Search. The Algorithm Can Be Easily And Flexibly Deployed In A Cloud-edge Computing Environment To Process Massive Data Sets In CPSS.
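A Toy Sketch Of The Distributed kNN Idea Described Above: Each Storage Node Computes Its Local K Nearest Neighbours And The Coordinator Merges The Partial Results, So No Node Needs The Whole Data Set; The Partitioning And Data Are Assumed, And The Paper's K-nearest Neighbor Boundary Pruning Is Not Reproduced.

# Distributed kNN: merge per-node local top-k results into the global top-k.
import numpy as np

rng = np.random.default_rng(7)
nodes = [rng.random((1000, 3)) for _ in range(4)]  # 4 storage partitions
query, k = rng.random(3), 5

def local_knn(points, q, k):
    d = np.linalg.norm(points - q, axis=1)
    idx = np.argsort(d)[:k]
    return list(zip(d[idx], map(tuple, points[idx])))

# Each node returns k local candidates; merging the 4*k candidates
# yields the exact global k nearest neighbours.
candidates = [c for part in nodes for c in local_knn(part, query, k)]
for dist, point in sorted(candidates, key=lambda t: t[0])[:k]:
    print(round(dist, 4), np.round(point, 3))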

COMPLICATION RISK PROFILING IN DIABETES CARE- A Bayesian Multi-Task And Feature Relationship Learning Approach

Diabetes Mellitus, Commonly Known As Diabetes, Is A Chronic Disease That Often Results In Multiple Complications. Risk Prediction Of Diabetes Complications Is Critical For Healthcare Professionals To Design Personalized Treatment Plans For Patients In Diabetes Care For Improved Outcomes. In This Paper, Focusing On Type 2 Diabetes Mellitus (T2DM), We Study The Risk Of Developing Complications After The Initial T2DM Diagnosis From Longitudinal Patient Records. We Propose A Novel Multi-task Learning Approach To Simultaneously Model Multiple Complications Where Each Task Corresponds To The Risk Modeling Of One Complication. Specifically, The Proposed Method Strategically Captures The Relationships (1) Between The Risks Of Multiple T2DM Complications, (2) Between Different Risk Factors, And (3) Between The Risk Factor Selection Patterns, Which Assumes Similar Complications Have Similar Contributing Risk Factors. The Method Uses Coefficient Shrinkage To Identify An Informative Subset Of Risk Factors From High-dimensional Data, And Uses A Hierarchical Bayesian Framework To Allow Domain Knowledge To Be Incorporated As Priors. The Proposed Method Is Favorable For Healthcare Applications Because In Addition To Improved Prediction Performance, Relationships Among The Different Risks And Among Risk Factors Are Also Identified. Extensive Experimental Results On A Large Electronic Medical Claims Database Show That The Proposed Method Outperforms State-of-the-art Models By A Significant Margin. Furthermore, We Show That The Risk Associations Learned And The Risk Factors Identified Lead To Meaningful Clinical Insights.

SUPPLY AND DEMAND CHAIN INTEGRATION- Sustainable Supply And Demand Chain Integration Within Global Manufacturing Industries

Given Emerging Industrial Management Strategies That Consider The Three Pillars Of Sustainability, There Is A Vital Need To Determine How Sustainability Practices Differ Across Supply And Demand Distribution Systems In Global Manufacturing Environments Supporting Successful Global Trade And Logistics. This Research Paper Aims To Explore The Interactions And Advantages Of Sustainability Applications Within Both Supply And Demand Chain Management. The Research Framework Adopted Consists Of A Survey Questionnaire Method Conducted Within A Global Tyre Manufacturing Company. The Research Results And Analysis Justify The Need For The Application Of Ethical Codes, Supply Chain Transformation, And The Effective Association Of Industry Executives, Professional Bodies, And The Government. The Research Study Also Identifies That The Vital Incentive Factors For The Organisation Towards A Sustainable Supply Demand Chain (SSDC) Are Mostly The Financial Benefits Of Doing So, And Therefore A Positive Mind-set Shift Towards Greening Practices Is Required.

MULTI-KEYWORD RANKED SEARCH - Privacy Preserving Multi-Keyword Ranked Search Over Encrypted Cloud Data

With The Advent Of Cloud Computing, Data Owners Are Motivated To Outsource Their Complex Data Management Systems From Local Sites To The Commercial Public Cloud For Great Flexibility And Economic Savings. But For Protecting Data Privacy, Sensitive Data Have To Be Encrypted Before Outsourcing, Which Obsoletes Traditional Data Utilization Based On Plaintext Keyword Search. Thus, Enabling An Encrypted Cloud Data Search Service Is Of Paramount Importance. Considering The Large Number Of Data Users And Documents In The Cloud, It Is Necessary To Allow Multiple Keywords In The Search Request And Return Documents In The Order Of Their Relevance To These Keywords. Related Works On Searchable Encryption Focus On Single Keyword Search Or Boolean Keyword Search, And Rarely Sort The Search Results. In This Paper, For The First Time, We Define And Solve The Challenging Problem Of Privacy-preserving Multi-keyword Ranked Search Over Encrypted Data In Cloud Computing (MRSE). We Establish A Set Of Strict Privacy Requirements For Such A Secure Cloud Data Utilization System. Among Various Multi-keyword Semantics, We Choose The Efficient Similarity Measure Of "coordinate Matching," I.e., As Many Matches As Possible, To Capture The Relevance Of Data Documents To The Search Query. We Further Use "inner Product Similarity" To Quantitatively Evaluate Such Similarity Measure. We First Propose A Basic Idea For The MRSE Based On Secure Inner Product Computation, And Then Give Two Significantly Improved MRSE Schemes To Achieve Various Stringent Privacy Requirements In Two Different Threat Models. To Improve Search Experience Of The Data Search Service, We Further Extend These Two Schemes To Support More Search Semantics. Thorough Analysis Investigating Privacy And Efficiency Guarantees Of Proposed Schemes Is Given. Experiments On The Real-world Data Set Further Show Proposed Schemes Indeed Introduce Low Overhead On Computation And Communication.
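The "Coordinate Matching" Relevance Measure Reduces To An Inner Product Of Binary Keyword Vectors; The Plaintext Sketch Below Shows Only That Scoring Step (The Secure Inner-product Computation Over Encrypted Vectors Is Omitted, And The Vocabulary And Documents Are Made Up).

# Coordinate matching as an inner product of binary keyword vectors.
vocabulary = ["cloud", "privacy", "search", "ranking", "encryption"]

def to_vector(words):
    return [1 if term in words else 0 for term in vocabulary]

documents = {
    "doc1": {"cloud", "privacy", "encryption"},
    "doc2": {"search", "ranking"},
    "doc3": {"cloud", "search", "ranking", "privacy"},
}
query = to_vector({"cloud", "privacy", "ranking"})

# Inner product = number of matched query keywords per document.
scores = {name: sum(q * d for q, d in zip(query, to_vector(words)))
          for name, words in documents.items()}
for name, score in sorted(scores.items(), key=lambda t: -t[1]):
    print(name, score)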

SKYLINE PRODUCT - Finding Optimal Skyline Product Combination Under Price Promotion

Nowadays, With The Development Of E-commerce, A Growing Number Of Customers Choose To Go Shopping Online. To Find Attractive Products From Online Shopping Marketplaces, The Skyline Query Is A Useful Tool Which Offers More Interesting And Preferable Choices For Customers. The Skyline Query And Its Variants Have Been Extensively Investigated. However, To The Best Of Our Knowledge, They Have Not Taken Into Account The Requirements Of Customers In Certain Practical Application Scenarios. Recently, Online Shopping Marketplaces Usually Hold Some Price Promotion Campaigns To Attract Customers And Increase Their Purchase Intention. Considering The Requirements Of Customers In This Practical Application Scenario, We Are Concerned About Product Selection Under Price Promotion. We Formulate A Constrained Optimal Product Combination (COPC) Problem. It Aims To Find Out The Skyline Product Combinations Which Both Meet A Customer's Willingness To Pay And Bring The Maximum Discount Rate. The COPC Problem Is Significant To Offer Powerful Decision Support For Customers Under Price Promotion, Which Is Certified By A Customer Study. To Process The COPC Problem Effectively, We First Propose A Two List Exact (TLE) Algorithm. The COPC Problem Is Proven To Be NP-hard, And The TLE Algorithm Is Not Scalable Because It Needs To Process An Exponential Number Of Product Combinations. Additionally, We Design A Lower Bound Approximate (LBA) Algorithm That Has A Guarantee About The Accuracy Of The Results And An Incremental Greedy (IG) Algorithm That Has Good Performance. The Experiment Results Demonstrate The Efficiency And Effectiveness Of Our Proposed Algorithms.
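A Minimal Sketch Of The Basic Skyline Operator Underlying The COPC Problem, With An Assumed Toy Catalogue Where Lower Price And Higher Rating Are Both Preferred; The Paper's Combination Search And Price-promotion Constraints Are Not Reproduced.

# Skyline query: keep every product not dominated by another product.
products = {  # name: (price, rating) -- hypothetical catalogue
    "A": (35, 4.5), "B": (20, 4.0), "C": (50, 4.8),
    "D": (25, 3.0), "E": (20, 4.2),
}

def dominates(p, q):
    """p dominates q if p is no worse on both criteria and better on one."""
    return (p[0] <= q[0] and p[1] >= q[1]
            and (p[0] < q[0] or p[1] > q[1]))

skyline = [n for n, p in products.items()
           if not any(dominates(q, p)
                      for m, q in products.items() if m != n)]
print("skyline products:", skyline)   # -> ['A', 'C', 'E']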

SENTIMENT ANALYSIS- Age-Related Sentiment Analysis For Efficient Review Mining

Natural Language Processing (NLP) Has Been A Continuous Field Of Interest Since The 1950s. It Is Concerned With The Interaction Between Computers And Humans' Natural Languages. The History Of Natural Language Processing Started With Alan Turing's Article Titled “Computing Machinery And Intelligence”. How Natural Language Is Processed By Computers Is The Main Concern Of NLP. Speech Recognition, Text Analysis, And Text Translation Are A Few Areas Where Natural Language Processing Is Employed Along With Artificial Intelligence. NLP Includes Various Evaluation Tasks Such As Stemming, Grammar Induction, And Topic Segmentation. This Project Aims At Developing A Program For Age-related Sentiment Analysis. Sentiment Analysis Refers To The Use Of Natural Language Processing, Text Analysis, Computational Linguistics, And Biometrics To Systematically Identify, Extract, Quantify, And Study Affective States And Subjective Information. Methods To Approach Sentiment Analysis Are Classified Mainly Into Knowledge-based, Statistical, And Hybrid Approaches. Given A Text, Its Mood Will Be Analysed. The Main Constraint Applied Here Is Age: The Text Will Be Analysed In Relation To Age, Since The Opinion Or Mood Behind A Particular Text Varies For Every Age Group As Understanding Levels And Conceptual Knowledge Vary. Word Ambiguity Is Analysed, And Ambiguity Is Removed Based On Keyword Detection And Context Analysis. Age Is Taken Into Consideration While Analysing The Text; Hence, For The Same Text In The Same Context, The Analysis Varies.

GEOGRAPHICAL PROBABILISTIC FACTOR MODEL - A General Geographical Probabilistic Factor Model For Point Of Interest Recommendation

The Problem Of Point Of Interest (POI) Recommendation Is To Provide Personalized Recommendations Of Places, Such As Restaurants And Movie Theaters. The Increasing Prevalence Of Mobile Devices And Of Location Based Social Networks (LBSNs) Poses Significant New Opportunities As Well As Challenges, Which We Address. The Decision Process For A User To Choose A POI Is Complex And Can Be Influenced By Numerous Factors, Such As Personal Preferences, Geographical Considerations, And User Mobility Behaviors. This Is Further Complicated By The Connection Between LBSNs And Mobile Devices. While There Are Some Studies On POI Recommendations, They Lack An Integrated Analysis Of The Joint Effect Of Multiple Factors. Meanwhile, Although Latent Factor Models Have Been Proved Effective And Are Thus Widely Used For Recommendations, Adapting Them To POI Recommendations Requires Delicate Consideration Of The Unique Characteristics Of LBSNs. To This End, In This Paper, We Propose A General Geographical Probabilistic Factor Model (Geo-PFM) Framework Which Strategically Takes Various Factors Into Consideration. Specifically, This Framework Allows Us To Capture The Geographical Influences On A User's Check-in Behavior. Also, User Mobility Behaviors Can Be Effectively Leveraged In The Recommendation Model. Moreover, Based On Our Geo-PFM Framework, We Further Develop A Poisson Geo-PFM Which Provides A More Rigorous Probabilistic Generative Process For The Entire Model And Is Effective In Modeling The Skewed User Check-in Count Data As Implicit Feedback For Better POI Recommendations. Finally, Extensive Experimental Results On Three Real-world LBSN Datasets (Which Differ In Terms Of User Mobility, POI Geographical Distribution, Implicit Response Data Skewness, And User-POI Observation Sparsity) Show That The Proposed Recommendation Methods Outperform State-of-the-art Latent Factor Models By A Significant Margin.

SCALABLE GRAPH-BASED RANKING MODEL- EMR: A Scalable Graph-based Ranking Model For Content-based Image Retrieval

Graph-based Ranking Models Have Been Widely Applied In Information Retrieval Area. In This Paper, We Focus On A Well Known Graph-based Model - The Ranking On Data Manifold Model, Or Manifold Ranking. Particularly, It Has Been Successfully Applied To Content-based Image Retrieval, Because Of Its Outstanding Ability To Discover Underlying Geometrical Structure Of The Given Image Database. However, Manifold Ranking Is Computationally Very Expensive, Which Significantly Limits Its Applicability To Large Databases Especially For The Cases That The Queries Are Out Of The Database. We Propose A Novel Scalable Graph-based Ranking Model Called Efficient Manifold Ranking (EMR), Trying To Address The Shortcomings Of MR From Two Main Perspectives: Scalable Graph Construction And Efficient Ranking Computation. Specifically, We Build An Anchor Graph On The Database Instead Of A Traditional K-nearest Neighbor Graph, And Design A New Form Of Adjacency Matrix Utilized To Speed Up The Ranking. An Approximate Method Is Adopted For Efficient Out-of-sample Retrieval. Experimental Results On Some Large Scale Image Databases Demonstrate That EMR Is A Promising Method For Real World Retrieval Applications.

ROUTE-SAVER- Leveraging Route APIs For Accurate And Efficient Query Processing At Location Based Services

Location-based Services (LBS) Enable Mobile Users To Query Points-of-interest (e.g., Restaurants, Cafes) On Various Features (e.g., Price, Quality, Variety). In Addition, Users Require Accurate Query Results With Up-to-date Travel Times. Lacking The Monitoring Infrastructure For Road Traffic, The LBS May Obtain Live Travel Times Of Routes From Online Route APIs In Order To Offer Accurate Results. Our Goal Is To Reduce The Number Of Requests Issued By The LBS Significantly While Preserving Accurate Query Results. First, We Propose To Exploit Recent Routes Requested From Route APIs To Answer Queries Accurately. Then, We Design Effective Lower/upper Bounding Techniques And Ordering Techniques To Process Queries Efficiently. Also, We Study Parallel Route Requests To Further Reduce The Query Response Time. Our Experimental Evaluation Shows That Our Solution Is Three Times More Efficient Than A Competitor, And Yet Achieves High Result Accuracy (above 98 Percent).

TWEET SEGMENTATION - Tweet Segmentation And Its Application To Named Entity Recognition

Twitter Has Attracted Millions Of Users To Share And Disseminate Most Up-to-date Information, Resulting In Large Volumes Of Data Produced Everyday. However, Many Applications In Information Retrieval (IR) And Natural Language Processing (NLP) Suffer Severely From The Noisy And Short Nature Of Tweets. In This Paper, We Propose A Novel Framework For Tweet Segmentation In A Batch Mode, Called HybridSeg. By Splitting Tweets Into Meaningful Segments, The Semantic Or Context Information Is Well Preserved And Easily Extracted By The Downstream Applications. HybridSeg Finds The Optimal Segmentation Of A Tweet By Maximizing The Sum Of The Stickiness Scores Of Its Candidate Segments. The Stickiness Score Considers The Probability Of A Segment Being A Phrase In English (i.e., Global Context) And The Probability Of A Segment Being A Phrase Within The Batch Of Tweets (i.e., Local Context). For The Latter, We Propose And Evaluate Two Models To Derive Local Context By Considering The Linguistic Features And Term-dependency In A Batch Of Tweets, Respectively. HybridSeg Is Also Designed To Iteratively Learn From Confident Segments As Pseudo Feedback. Experiments On Two Tweet Data Sets Show That Tweet Segmentation Quality Is Significantly Improved By Learning Both Global And Local Contexts Compared With Using Global Context Alone. Through Analysis And Comparison, We Show That Local Linguistic Features Are More Reliable For Learning Local Context Compared With Term-dependency. As An Application, We Show That High Accuracy Is Achieved In Named Entity Recognition By Applying Segment-based Part-of-speech (POS) Tagging.
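A Compact Dynamic-programming Sketch Of The Segmentation Step Described Above: Choose The Split Of A Tweet That Maximizes The Summed Stickiness Of Its Segments. The Toy Stickiness Function Below Is An Assumption Standing In For HybridSeg's Global And Local Context Models.

# Optimal segmentation by maximizing summed segment stickiness.
PHRASES = {"new york": 2.5, "hotel deals": 2.0, "new": 0.5,
           "york": 0.5, "hotel": 0.8, "deals": 0.7}

def stickiness(segment):
    # Hypothetical score: known phrases score high, unknown ones low.
    return PHRASES.get(segment, 0.1 / len(segment.split()))

def segment(tweet, max_len=3):
    words = tweet.split()
    best = {0: (0.0, [])}            # prefix length -> (score, segments)
    for end in range(1, len(words) + 1):
        for start in range(max(0, end - max_len), end):
            seg = " ".join(words[start:end])
            score = best[start][0] + stickiness(seg)
            if end not in best or score > best[end][0]:
                best[end] = (score, best[start][1] + [seg])
    return best[len(words)][1]

print(segment("new york hotel deals"))  # -> ['new york', 'hotel deals']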

PRIVACY AND DATA CONFIDENTIALITY - A Fast Clustering-Based Database With Privacy And Data Confidentiality

In Order To Prevent The Disclosure Of Sensitive Information And Protect Users' Privacy, Generalization And Suppression Technology Is Often Used To Anonymize The Quasi-identifiers Of Data Before Sharing. Data Streams Are Inherently Infinite And Highly Dynamic, Which Makes Them Very Different From Static Datasets, So The Anonymization Of Data Streams Needs To Be Capable Of Solving More Complicated Problems. The Methods For Anonymizing Static Datasets Cannot Be Applied To Data Streams Directly. In This Paper, An Anonymization Approach For Data Streams Is Proposed Based On An Analysis Of Published Anonymization Methods For Data Streams. This Approach Scans The Data Only Once To Recognize And Reuse The Clusters That Satisfy The Anonymization Requirements, Thereby Speeding Up The Anonymization Process. Experimental Results On A Real Dataset Show That The Proposed Method Can Reduce The Information Loss Caused By Generalization And Suppression While Satisfying The Anonymization Requirements, And Has Low Time And Space Complexity.

INSTANT MESSAGE USING DATA MINING AND ONTOLOGY- Framework For Surveillance Of Instant Messages In Instant Messengers And Social Networking Sites Using Data Mining And Ontology

Innumerable Terror And Suspicious Messages Are Sent Through Instant Messengers (IM) And Social Networking Sites (SNS) That Go Untraced, Leading To Hindrance For Network Communications And Cyber Security. We Propose A Framework That Discovers And Predicts Such Messages Sent Using IM Or SNS Like Facebook, Twitter, LinkedIn, And Others. Further, These Instant Messages Are Put Under Surveillance To Identify The Type Of Suspected Cyber Threat Activity By The Culprit, Along With Their Personal Details. The Framework Is Developed Using An Ontology-based Information Extraction Technique (OBIE) And Association Rule Mining (ARM), A Data Mining Technique, With A Set Of Pre-defined Logical Knowledge-based Rules For The Decision-making Process That Are Learned From Domain Experts And Past Learning Experiences On Suspicious Datasets Like The GTD (Global Terrorism Database). The Experimental Results Obtained Will Aid Prompt Decisions For Eradicating Cyber Crimes.

WEB SERVICES RECOMMENDATION - Diversifying Web Service Recommendation Results Via Exploring Service Usage History

The Last Decade Has Witnessed A Tremendous Growth Of Web Services As A Major Technology For Sharing Data, Computing Resources, And Programs On The Web. With The Increasing Adoption And Presence Of Web Services, The Design Of Novel Approaches For Effective Web Service Recommendation To Satisfy Users' Potential Requirements Has Become Of Paramount Importance. Existing Web Service Recommendation Approaches Mainly Focus On Predicting Missing QoS Values Of Web Service Candidates Which Are Interesting To A User Using A Collaborative Filtering Approach, A Content-based Approach, Or Their Hybrid. These Recommendation Approaches Assume That Recommended Web Services Are Independent Of Each Other, Which Sometimes May Not Be True. As A Result, Many Similar Or Redundant Web Services May Exist In A Recommendation List. In This Paper, We Propose A Novel Web Service Recommendation Approach Incorporating A User's Potential QoS Preferences And The Diversity Feature Of User Interests On Web Services. A User's Interests And QoS Preferences On Web Services Are First Mined By Exploring The Web Service Usage History. Then We Compute Scores Of Web Service Candidates By Measuring Their Relevance With Historical And Potential User Interests, And Their QoS Utility. We Also Construct A Web Service Graph Based On The Functional Similarity Between Web Services. Finally, We Present An Innovative Diversity-aware Web Service Ranking Algorithm To Rank The Web Service Candidates Based On Their Scores And The Diversity Degrees Derived From The Web Service Graph. Extensive Experiments Are Conducted Based On A Real-world Web Service Dataset, Indicating That Our Proposed Web Service Recommendation Approach Significantly Improves The Quality Of The Recommendation Results Compared With Existing Methods.
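As A Hedged Illustration Of Diversity-aware Re-ranking, The Sketch Below Uses A Maximal-marginal-relevance-style Greedy Trade-off Between A Service's Relevance Score And Its Similarity To Already Selected Services; The Paper Instead Derives Diversity Degrees From A Web Service Graph, And All Scores Here Are Made Up.

# Greedy diversity-aware re-ranking: relevance minus similarity penalty.
def rerank(scores, sim, k=3, lam=0.7):
    selected, rest = [], set(scores)
    while rest and len(selected) < k:
        def mmr(s):
            penalty = max((sim.get(frozenset((s, t)), 0.0)
                           for t in selected), default=0.0)
            return lam * scores[s] - (1 - lam) * penalty
        best = max(rest, key=mmr)
        selected.append(best)
        rest.remove(best)
    return selected

scores = {"weatherA": 0.9, "weatherB": 0.85, "geocode": 0.7, "mail": 0.5}
sim = {frozenset(("weatherA", "weatherB")): 0.95,
       frozenset(("weatherA", "geocode")): 0.3}
print(rerank(scores, sim))  # the near-duplicate weatherB is demoted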

DATA RETRIEVAL PROCESS- Generating Boolean Matrix For Data Retrieval Process

A Data Retrieval (DR) Or Information Retrieval (IR) Process Begins When A User Enters A Query Into The System. Queries Are Formal Statements Of Information Needs, For Example Search Strings In Web Search Engines. In IR A Query Does Not Uniquely Identify A Single Object In The Collection. Instead, Several Objects May Match The Query, Perhaps With Different Degrees Of Relevancy. An Object Is An Entity Which Keeps Or Stores Information In A Database. User Queries Are Matched To Objects Stored In The Database. Depending On The Application, The Data Objects May Be, For Example, Text Documents, Images, Or Videos. The Documents Themselves Are Not Kept Or Stored Directly In The IR System, But Are Instead Represented In The System By Document Surrogates. Most IR Systems Compute A Numeric Score On How Well Each Object In The Database Matches The Query, And Rank The Objects According To This Value. The Top-ranking Objects Are Then Shown To The User. The Process May Then Be Iterated If The User Wishes To Refine The Query. In This Paper We Explain IR Methods And Assess Them From Two Viewpoints, And Finally Propose A Simple Method For Ranking Terms And Documents In IR, Implement The Method, And Check The Results.
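A Small Sketch Of The Boolean Term-document Matrix Idea: Rows Are Terms, Columns Are Documents, And A Conjunctive Query Is An Intersection Of Term Rows. The Documents And Query Are Assumed For Illustration.

# Boolean term-document matrix with a conjunctive (AND) query.
docs = {
    0: "information retrieval systems rank documents",
    1: "boolean matrix for data retrieval",
    2: "image and video objects in a database",
}
terms = sorted({w for text in docs.values() for w in text.split()})
matrix = {t: [1 if t in docs[d].split() else 0 for d in docs] for t in terms}

def boolean_and(*query_terms):
    rows = [matrix.get(t, [0] * len(docs)) for t in query_terms]
    return [d for d in docs if all(row[d] for row in rows)]

print(boolean_and("retrieval"))            # -> [0, 1]
print(boolean_and("retrieval", "matrix"))  # -> [1]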

UNCERTAIN OBJECT- Query Aware Determinization Of Uncertain Objects

This Paper Considers The Problem Of Determinizing Probabilistic Data To Enable Such Data To Be Stored In Legacy Systems That Accept Only Deterministic Input. Probabilistic Data May Be Generated By Automated Data Analysis/enrichment Techniques Such As Entity Resolution, Information Extraction, And Speech Processing. The Legacy System May Correspond To Pre-existing Web Applications Such As Flickr, Picasa, Etc. The Goal Is To Generate A Deterministic Representation Of Probabilistic Data That Optimizes The Quality Of The End-application Built On Deterministic Data. We Explore Such A Determinization Problem In The Context Of Two Different Data Processing Tasks-triggers And Selection Queries. We Show That Approaches Such As Thresholding Or Top-1 Selection Traditionally Used For Determinization Lead To Suboptimal Performance For Such Applications. Instead, We Develop A Query-aware Strategy And Show Its Advantages Over Existing Solutions Through A Comprehensive Empirical Evaluation Over Real And Synthetic Datasets.

EFFECTIVE AND EFFICIENT CLUSTERING METHOD - Effective And Efficient Clustering Methods For Correlated Probabilistic Graphs

Recently, Probabilistic Graphs Have Attracted Significant Interests Of The Data Mining Community. It Is Observed That Correlations May Exist Among Adjacent Edges In Various Probabilistic Graphs. As One Of The Basic Mining Techniques, Graph Clustering Is Widely Used In Exploratory Data Analysis, Such As Data Compression, Information Retrieval, Image Segmentation, Etc. Graph Clustering Aims To Divide Data Into Clusters According To Their Similarities, And A Number Of Algorithms Have Been Proposed For Clustering Graphs, Such As The PKwikCluster Algorithm, Spectral Clustering, K-path Clustering, Etc. However, Little Research Has Been Performed To Develop Efficient Clustering Algorithms For Probabilistic Graphs. Particularly, It Becomes More Challenging To Efficiently Cluster Probabilistic Graphs When Correlations Are Considered. In This Paper, We Define The Problem Of Clustering Correlated Probabilistic Graphs. To Solve The Challenging Problem, We Propose Two Algorithms, Namely The PEEDR And The CPGS Clustering Algorithm. For Each Of The Proposed Algorithms, We Develop Several Pruning Techniques To Further Improve Their Efficiency. We Evaluate The Effectiveness And Efficiency Of Our Algorithms And Pruning Methods Through Comprehensive Experiments.

DOCUMENT ANNOTATION USING CONTENT AND QUERYING VALUE- Facilitating Document Annotation Using Content And Querying Value

A Large Number Of Organizations Today Generate And Share Textual Descriptions Of Their Products, Services, And Actions. Such Collections Of Textual Data Contain Significant Amount Of Structured Information, Which Remains Buried In The Unstructured Text. While Information Extraction Algorithms Facilitate The Extraction Of Structured Relations, They Are Often Expensive And Inaccurate, Especially When Operating On Top Of Text That Does Not Contain Any Instances Of The Targeted Structured Information. We Present A Novel Alternative Approach That Facilitates The Generation Of The Structured Metadata By Identifying Documents That Are Likely To Contain Information Of Interest And This Information Is Going To Be Subsequently Useful For Querying The Database. Our Approach Relies On The Idea That Humans Are More Likely To Add The Necessary Metadata During Creation Time, If Prompted By The Interface; Or That It Is Much Easier For Humans (and/or Algorithms) To Identify The Metadata When Such Information Actually Exists In The Document, Instead Of Naively Prompting Users To Fill In Forms With Information That Is Not Available In The Document. As A Major Contribution Of This Paper, We Present Algorithms That Identify Structured Attributes That Are Likely To Appear Within The Document, By Jointly Utilizing The Content Of The Text And The Query Workload. Our Experimental Evaluation Shows That Our Approach Generates Superior Results Compared To Approaches That Rely Only On The Textual Content Or Only On The Query Workload, To Identify Attributes Of Interest.

PRIVACY PROTECTION - Supporting Privacy Protection In Personalized Web Search

Personalized Web Search (PWS) Has Demonstrated Its Effectiveness In Improving The Quality Of Various Search Services On The Internet. However, Evidence Shows That Users' Reluctance To Disclose Their Private Information During Search Has Become A Major Barrier For The Wide Proliferation Of PWS. We Study Privacy Protection In PWS Applications That Model User Preferences As Hierarchical User Profiles. We Propose A PWS Framework Called UPS That Can Adaptively Generalize Profiles By Queries While Respecting User-specified Privacy Requirements. Our Runtime Generalization Aims At Striking A Balance Between Two Predictive Metrics That Evaluate The Utility Of Personalization And The Privacy Risk Of Exposing The Generalized Profile. We Present Two Greedy Algorithms, Namely GreedyDP And GreedyIL, For Runtime Generalization. We Also Provide An Online Prediction Mechanism For Deciding Whether Personalizing A Query Is Beneficial. Extensive Experiments Demonstrate The Effectiveness Of Our Framework. The Experimental Results Also Reveal That GreedyIL Significantly Outperforms GreedyDP In Terms Of Efficiency.

TRUSTEDDB- A Trusted Hardware-Based Database With Privacy And Data Confidentiality

Traditionally, As Soon As Confidentiality Becomes A Concern, Data Are Encrypted Before Outsourcing To A Service Provider. Any Software-based Cryptographic Constructs Then Deployed, For Server-side Query Processing On The Encrypted Data, Inherently Limit Query Expressiveness. Here, We Introduce TrustedDB, An Outsourced Database Prototype That Allows Clients To Execute SQL Queries With Privacy And Under Regulatory Compliance Constraints By Leveraging Server-hosted, Tamper-proof Trusted Hardware In Critical Query Processing Stages, Thereby Removing Any Limitations On The Type Of Supported Queries. Despite The Cost Overhead And Performance Limitations Of Trusted Hardware, We Show That The Costs Per Query Are Orders Of Magnitude Lower Than Any (existing Or) Potential Future Software-only Mechanisms. TrustedDB Is Built And Runs On Actual Hardware, And Its Performance And Costs Are Evaluated Here.

FAST CLUSTERING- A Fast Clustering-Based Feature Subset Selection Algorithm For High-Dimensional Data

Feature Selection Involves Identifying A Subset Of The Most Useful Features That Produces Results Comparable To The Original Entire Set Of Features. A Feature Selection Algorithm May Be Evaluated From Both The Efficiency And Effectiveness Points Of View. While Efficiency Concerns The Time Required To Find A Subset Of Features, Effectiveness Is Related To The Quality Of The Subset Of Features. Based On These Criteria, A Fast Clustering-based Feature Selection Algorithm (FAST) Is Proposed And Experimentally Evaluated In This Paper. The FAST Algorithm Works In Two Steps. In The First Step, Features Are Divided Into Clusters By Using Graph-theoretic Clustering Methods. In The Second Step, The Most Representative Feature That Is Strongly Related To Target Classes Is Selected From Each Cluster To Form A Subset Of Features. Since Features In Different Clusters Are Relatively Independent, The Clustering-based Strategy Of FAST Has A High Probability Of Producing A Subset Of Useful And Independent Features. To Ensure The Efficiency Of FAST, We Adopt The Efficient Minimum-spanning Tree (MST) Clustering Method. The Efficiency And Effectiveness Of The FAST Algorithm Are Evaluated Through An Empirical Study. Extensive Experiments Are Carried Out To Compare FAST And Several Representative Feature Selection Algorithms, Namely, FCBF, ReliefF, CFS, Consist, And FOCUS-SF, With Respect To Four Types Of Well-known Classifiers, Namely, The Probability-based Naive Bayes, The Tree-based C4.5, The Instance-based IB1, And The Rule-based RIPPER, Before And After Feature Selection. The Results, On 35 Publicly Available Real-world High-dimensional Image, Microarray, And Text Datasets, Demonstrate That FAST Not Only Produces Smaller Subsets Of Features But Also Improves The Performance Of The Four Types Of Classifiers.

DISTRIBUTED PROCESSING - Distributed Processing Of Probabilistic Top-k Queries In Wireless Sensor Networks

In This Paper, We Introduce The Notion Of Sufficient Set And Necessary Set For Distributed Processing Of Probabilistic Top-k Queries In Cluster-based Wireless Sensor Networks. These Two Concepts Have Very Nice Properties That Can Facilitate Localized Data Pruning In Clusters. Accordingly, We Develop A Suite Of Algorithms, Namely, Sufficient Set-based (SSB), Necessary Set-based (NSB), And Boundary-based (BB), For Intercluster Query Processing With Bounded Rounds Of Communications. Moreover, In Responding To Dynamic Changes Of Data Distribution In The Network, We Develop An Adaptive Algorithm That Dynamically Switches Among The Three Proposed Algorithms To Minimize The Transmission Cost. We Show The Applicability Of Sufficient Set And Necessary Set To Wireless Sensor Networks With Both Two-tier Hierarchical And Tree-structured Network Topologies. Experimental Results Show That The Proposed Algorithms Reduce Data Transmissions Significantly And Incur Only Small Constant Rounds Of Data Communications. The Experimental Results Also Demonstrate The Superiority Of The Adaptive Algorithm, Which Achieves A Near-optimal Performance Under Various Conditions.

XML RETRIEVAL - Using Personalization To Improve XML Retrieval

As The Amount Of Information Increases Every Day And Users Normally Formulate Short And Ambiguous Queries, Personalized Search Techniques Are Becoming Almost A Must. Using The Information About The User Stored In A User Profile, These Techniques Retrieve Results That Are Closer To The User's Preferences. On The Other Hand, Information Is Being Stored More And More In A Semi-structured Way, And XML Has Emerged As A Standard For Representing And Exchanging This Type Of Data. XML Search Allows Higher Retrieval Effectiveness, Due To Its Ability To Retrieve And Show The User Specific Parts Of Documents Instead Of Full Documents. In This Paper We Propose Several Personalization Techniques In The Context Of XML Retrieval. We Try To Combine The Different Approaches Where Personalization May Be Applied: Query Reformulation, Re-ranking Of Results, And Retrieval Model Modification. The Experimental Results Obtained From A User Study Using A Parliamentary Document Collection Support The Validity Of Our Approach.

ASSOCIATION RULE AND THE APRIORI ALGORITHM- A Data Mining Project - Discovering Association Rules Using The Apriori Algorithm

Data Mining Has A Lot Of E-Commerce Applications. The Key Problem Is How To Find Useful Hidden Patterns For Better Business Applications In The Retail Sector. To Solve These Problems, The Apriori Algorithm Is One Of The Most Popular Data Mining Approaches For Finding Frequent Item Sets From A Transaction Dataset And Deriving Association Rules. Rules Are The Knowledge Discovered From The Database. Finding Frequent Item Sets (Item Sets With Frequency Larger Than Or Equal To A User-specified Minimum Support) Is Not Trivial Because Of Its Combinatorial Explosion. Once Frequent Item Sets Are Obtained, It Is Straightforward To Generate Association Rules With Confidence Larger Than Or Equal To A User-specified Minimum Confidence. The Paper Illustrates The Apriori Algorithm On A Simulated Database And Finds The Association Rules At Different Confidence Values.
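A Compact Pure-Python Sketch Of Apriori On A Simulated Transaction Set: Level-wise Generation Of Frequent Item Sets Followed By Confidence-filtered Rules. The Transactions And Thresholds Are Illustrative Assumptions.

# Apriori: level-wise frequent itemset mining, then rule generation.
from itertools import combinations

transactions = [{"bread", "milk"}, {"bread", "diapers", "beer"},
                {"milk", "diapers", "beer"}, {"bread", "milk", "diapers"},
                {"bread", "milk", "beer"}]
min_support, min_conf = 0.4, 0.7

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

# Level-wise search: frequent k-itemsets are joined into (k+1)-candidates;
# by the Apriori property, supersets of infrequent sets are never reached.
items = {frozenset([i]) for t in transactions for i in t}
frequent, level = {}, {s for s in items if support(s) >= min_support}
while level:
    frequent.update({s: support(s) for s in level})
    size = len(next(iter(level))) + 1
    level = {a | b for a in level for b in level
             if len(a | b) == size and support(a | b) >= min_support}

# Generate rules lhs -> rhs whose confidence clears the threshold.
for itemset, sup in frequent.items():
    for r in range(1, len(itemset)):
        for lhs in map(frozenset, combinations(itemset, r)):
            conf = sup / support(lhs)
            if conf >= min_conf:
                print(set(lhs), "->", set(itemset - lhs),
                      f"support={sup:.2f} confidence={conf:.2f}")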

TEXT CLASSIFICATION AND CLUSTERING- Similarity Measure For Text Classification And Clustering

Measuring The Similarity Between Documents Is An Important Operation In The Text Processing Field. In This Paper, A New Similarity Measure Is Proposed. To Compute The Similarity Between Two Documents With Respect To A Feature, The Proposed Measure Takes The Following Three Cases Into Account: A) The Feature Appears In Both Documents, B) The Feature Appears In Only One Document, And C) The Feature Appears In None Of The Documents. For The First Case, The Similarity Increases As The Difference Between The Two Involved Feature Values Decreases. Furthermore, The Contribution Of The Difference Is Normally Scaled. For The Second Case, A Fixed Value Is Contributed To The Similarity. For The Last Case, The Feature Has No Contribution To The Similarity. The Proposed Measure Is Extended To Gauge The Similarity Between Two Sets Of Documents. The Effectiveness Of Our Measure Is Evaluated On Several Real-world Data Sets For Text Classification And Clustering Problems. The Results Show That The Performance Obtained By The Proposed Measure Is Better Than That Achieved By Other Measures.
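A Hedged, Feature-wise Sketch Of The Three-case Similarity Described Above, Averaged Over Features Where At Least One Document Has The Feature; The Closeness Function And The Fixed Constant For Case (b) Are Illustrative Choices, Not The Paper's Exact Calibration.

# Three-case document similarity over term-frequency vectors.
import math

FIXED = 0.2   # contribution when a feature appears in only one document

def doc_similarity(a, b):
    num = den = 0.0
    for x, y in zip(a, b):
        if x > 0 and y > 0:               # case (a): in both documents
            num += math.exp(-abs(x - y))  # larger when values are close
            den += 1
        elif x > 0 or y > 0:              # case (b): in exactly one
            num += FIXED
            den += 1
        # case (c): in neither document -- contributes nothing at all
    return num / den if den else 0.0

d1 = [2, 0, 1, 0, 3]   # toy term-frequency vectors
d2 = [1, 0, 1, 2, 0]
print(round(doc_similarity(d1, d2), 3))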

ASSOCIATION RULE MINING - An Efficient Multi-Party Communication Scheme With Association Rule Mining

We Propose A Protocol For Secure Mining Of Association Rules In Horizontally Distributed Databases. Our Protocol, Like An Earlier Protocol It Improves Upon, Is Based On The Fast Distributed Mining (FDM) Algorithm, Which Is An Unsecured Distributed Version Of The Apriori Algorithm. The Main Ingredients In Our Protocol Are Two Novel Secure Multi-party Algorithms: One That Computes The Union Of Private Subsets That Each Of The Interacting Players Holds, And Another That Tests The Inclusion Of An Element Held By One Player In A Subset Held By Another. Our Protocol Offers Enhanced Privacy With Respect To That Earlier Protocol. In Addition, It Is Simpler And Significantly More Efficient In Terms Of Communication Rounds, Communication Cost, And Computational Cost.

DUPLICATE DETECTION- Progressive Duplicate Detection

Duplicate Detection Is The Process Of Identifying Multiple Representations Of Same Real World Entities. Today, Duplicate Detection Methods Need To Process Ever Larger Datasets In Ever Shorter Time: Maintaining The Quality Of A Dataset Becomes Increasingly Difficult. We Present Two Novel, Progressive Duplicate Detection Algorithms That Significantly Increase The Efficiency Of Finding Duplicates If The Execution Time Is Limited: They Maximize The Gain Of The Overall Process Within The Time Available By Reporting Most Results Much Earlier Than Traditional Approaches. Comprehensive Experiments Show That Our Progressive Algorithms Can Double The Efficiency Over Time Of Traditional Duplicate Detection And Significantly Improve Upon Related Work.

RRW- A Robust And Reversible Watermarking Technique For Relational Data

Advancement In Information Technology Is Playing An Increasing Role In The Use Of Information Systems Comprising Relational Databases. These Databases Are Used Effectively In Collaborative Environments For Information Extraction; Consequently, They Are Vulnerable To Security Threats Concerning Ownership Rights And Data Tampering. Watermarking Is Advocated To Enforce Ownership Rights Over Shared Relational Data And To Provide A Means For Tackling Data Tampering. When Ownership Rights Are Enforced Using Watermarking, The Underlying Data Undergoes Certain Modifications; As A Result, Data Quality Gets Compromised. Reversible Watermarking Is Employed To Ensure Data Quality Along With Data Recovery. However, Such Techniques Are Usually Not Robust Against Malicious Attacks And Do Not Provide Any Mechanism To Selectively Watermark A Particular Attribute By Taking Into Account Its Role In Knowledge Discovery. Therefore, A Reversible Watermarking Is Required That Ensures: (i) Watermark Encoding And Decoding By Accounting For The Role Of All The Features In Knowledge Discovery; And (ii) Original Data Recovery In The Presence Of Active Malicious Attacks. In This Paper, A Robust And Semi-blind Reversible Watermarking (RRW) Technique For Numerical Relational Data Has Been Proposed That Addresses The Above Objectives. Experimental Studies Prove The Effectiveness Of RRW Against Malicious Attacks And Show That The Proposed Technique Outperforms Existing Ones.

PRIVACY PRESERVING SOCIAL MEDIA DATA PUBLISHING- Privacy Preserving Social Media Data Publishing For Personalized Rank Based Recommendation

Personalized Recommendation Is Crucial To Help Users Find Pertinent Information. It Often Relies On A Large Collection Of User Data, In Particular Users' Online Activity (e.g., Tagging/rating/checking-in) On Social Media, To Mine User Preference. However, Releasing Such User Activity Data Makes Users Vulnerable To Inference Attacks, As Private Data (e.g., Gender) Can Often Be Inferred From The Users' Activity Data. In This Paper, We Propose PrivRank, A Customizable And Continuous Privacy-preserving Social Media Data Publishing Framework Protecting Users Against Inference Attacks While Enabling Personalized Ranking-based Recommendations. Its Key Idea Is To Continuously Obfuscate User Activity Data Such That The Privacy Leakage Of User-specified Private Data Is Minimized Under A Given Data Distortion Budget, Which Bounds The Ranking Loss Incurred From The Data Obfuscation Process In Order To Preserve The Utility Of The Data For Enabling Recommendations. An Empirical Evaluation On Both Synthetic And Real-world Datasets Shows That Our Framework Can Efficiently Provide Effective And Continuous Protection Of User-specified Private Data, While Still Preserving The Utility Of The Obfuscated Data For Personalized Ranking-based Recommendation. Compared To State-of-the-art Approaches, PrivRank Achieves Both Better Privacy Protection And Higher Utility In All The Ranking-based Recommendation Use Cases We Tested.

ACTIVE LEARNING FOR RANKING - Active Learning For Ranking Through Expected Loss Optimization

Learning To Rank Arises In Many Data Mining Applications, Ranging From Web Search Engines And Online Advertising To Recommendation Systems. In Learning To Rank, The Performance Of A Ranking Model Is Strongly Affected By The Number Of Labeled Examples In The Training Set; On The Other Hand, Obtaining Labeled Examples For Training Data Is Very Expensive And Time-consuming. This Presents A Great Need For Active Learning Approaches That Select The Most Informative Examples For Ranking Learning; However, In The Literature There Is Still Very Limited Work Addressing Active Learning For Ranking. In This Paper, We Propose A General Active Learning Framework, Expected Loss Optimization (ELO), For Ranking. The ELO Framework Is Applicable To A Wide Range Of Ranking Functions. Under This Framework, We Derive A Novel Algorithm, Expected Discounted Cumulative Gain (DCG) Loss Optimization (ELO-DCG), To Select The Most Informative Examples. Then, We Investigate Both Query And Document Level Active Learning For Ranking And Propose A Two-stage ELO-DCG Algorithm Which Incorporates Both Query And Document Selection Into Active Learning. Furthermore, We Show That The Algorithm Can Flexibly Deal With The Skewed Grade Distribution Problem Through Modification Of The Loss Function. Extensive Experiments On Real-world Web Search Datasets Have Demonstrated The Great Potential And Effectiveness Of The Proposed Framework And Algorithms.
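A Small Sketch Of The Quantities Behind ELO-DCG: The DCG Of A Ranking, And A Monte-Carlo Estimate Of Expected DCG Loss When Relevance Grades Are Uncertain, Which Favors Labeling The More Uncertain Query; The Per-document Grade Distributions Here Are An Assumed Stand-in For The Paper's Model-based Uncertainty.

# DCG and a Monte-Carlo expected-DCG-loss score for query selection.
import math, random

def dcg(relevances):
    return sum((2 ** r - 1) / math.log2(i + 2)
               for i, r in enumerate(relevances))

def expected_dcg_loss(grade_distributions, samples=2000):
    """Expected DCG gap between the ideal ordering and the current one,
    with unknown grades drawn from per-document distributions."""
    loss = 0.0
    for _ in range(samples):
        grades = [random.choices(range(len(p)), weights=p)[0]
                  for p in grade_distributions]
        loss += dcg(sorted(grades, reverse=True)) - dcg(grades)
    return loss / samples

# Two candidate queries; each document has a distribution over grades 0-2.
confident = [[0.0, 0.1, 0.9], [0.1, 0.8, 0.1], [0.9, 0.1, 0.0]]
uncertain = [[0.3, 0.4, 0.3], [0.4, 0.2, 0.4], [0.3, 0.3, 0.4]]
for name, q in [("confident", confident), ("uncertain", uncertain)]:
    print(name, round(expected_dcg_loss(q), 3))
# The query with the higher expected loss is the more informative to label.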

REPRESENTATIVE PATTERN SETS- A Flexible Approach To Finding Representative Pattern Sets

Frequent Pattern Mining Often Produces An Enormous Number Of Frequent Patterns, Which Imposes A Great Challenge On Visualizing, Understanding And Further Analyzing The Generated Patterns. This Calls For Finding A Small Number Of Representative Patterns To Best Approximate All Other Patterns. In This Paper, We Develop An Algorithm Called MinRPset To Find A Minimum Representative Pattern Set With An Error Guarantee. MinRPset Produces The Smallest Solution That We Can Possibly Have In Practice Under The Given Problem Setting, And It Takes A Reasonable Amount Of Time To Finish When The Number Of Frequent Closed Patterns Is Below One Million. However, MinRPset Becomes Very Space-consuming And Time-consuming On Some Dense Datasets When The Number Of Frequent Closed Patterns Is Large. To Solve This Problem, We Propose Another Algorithm Called FlexRPset, Which Provides One Extra Parameter K To Allow Users To Make A Trade-off Between Result Size And Efficiency. We Adopt An Incremental Approach To Let Users Make This Trade-off Conveniently. Our Experiment Results Show That MinRPset And FlexRPset Produce Fewer Representative Patterns Than RPlocal, An Efficient Existing Algorithm Developed For Solving The Same Problem.
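Representative-pattern Selection Is Essentially A Set-cover Problem: A Pattern P Can Represent A Pattern Q When Q Is A Sub-pattern Of P And Their Supports Differ By At Most An Error Tolerance ε. The Greedy Sketch Below Illustrates That Formulation; The Coverage Test, The Toy Data, And The Greedy Strategy Are Simplifying Assumptions, Since MinRPset Computes A Provably Minimum Cover.

```python
def covers(p, sup_p, q, sup_q, eps):
    """Pattern p represents q if q is a sub-pattern with close support."""
    return q <= p and (1 - sup_p / sup_q) <= eps

def greedy_representatives(patterns, eps):
    """patterns: dict mapping frozenset pattern -> support count."""
    uncovered = set(patterns)
    reps = []
    while uncovered:
        # Pick the pattern that represents the most still-uncovered patterns.
        best = max(
            patterns,
            key=lambda p: sum(covers(p, patterns[p], q, patterns[q], eps)
                              for q in uncovered),
        )
        reps.append(best)
        uncovered = {q for q in uncovered
                     if not covers(best, patterns[best], q, patterns[q], eps)}
    return reps

closed_patterns = {
    frozenset("ab"): 90, frozenset("abc"): 85,
    frozenset("a"): 100, frozenset("cd"): 60, frozenset("acd"): 58,
}
print(greedy_representatives(closed_patterns, eps=0.1))
```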

IMAGE COMPRESSION ALGORITHM USING BINARY SPACE PARTITION SCHEME- Hierarchical Representation Of Plain Areas Of Post-Interpolation Residuals For Image Compression

We Propose An Algorithm For Encoding Quantized Post-interpolation Residuals Within The Framework Of Hierarchical Image Compression. This Coding Algorithm Is Based On A Hierarchical Representation Of The Plain Areas Of Quantized Post-interpolation Residuals To Improve The Coding Efficiency Of These Areas. The Proposed Algorithm Reorders The Post-interpolation Residuals To Increase The Size Of The Plain Areas. We Embed The Proposed Coding Algorithm Into A Hierarchical Image Compression Method That Interpolates Each Image Scale Level From More Coarsely Resampled Scale Levels Of The Same Image; The Errors Of This Interpolation (The Post-interpolation Residuals) Are Then Quantized And Encoded With The Proposed Algorithm. We Perform Computational Experiments To Study The Effectiveness Of The Proposed Algorithm On A Set Of Natural Images, And We Experimentally Confirm That Using The Proposed Coding Algorithm For Post-interpolation Residuals Increases The Efficiency Of The Hierarchical Image Compression Method.
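The Hierarchical Pipeline Can Be Sketched In A Few Lines: Downsample The Image, Interpolate The Finer Level From The Coarser One, Quantize The Interpolation Residual, And Hand The Mostly Plain (Near-zero) Residual Field To The Entropy Coder. The Nearest-neighbour Interpolation, The Quantization Step, And The Zero-block Flagging Below Are Illustrative Assumptions; The Paper's Coder Additionally Reorders Residuals To Enlarge The Plain Areas.

```python
import numpy as np

def hierarchical_residuals(image, q=8):
    """One level of hierarchical prediction: coarse level + quantized residual."""
    coarse = image[::2, ::2]                              # resampled scale level
    predicted = np.repeat(np.repeat(coarse, 2, 0), 2, 1)  # nearest-neighbour
    predicted = predicted[: image.shape[0], : image.shape[1]]
    residual = image.astype(np.int32) - predicted.astype(np.int32)
    quantized = np.round(residual / q).astype(np.int32)   # post-interp. residual
    return coarse, quantized

def plain_area_flags(quantized, block=4):
    """Flag all-zero blocks; only non-plain blocks need residual coding."""
    h, w = quantized.shape
    return [(i, j)
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)
            if not np.any(quantized[i:i + block, j:j + block])]

img = (np.arange(64).reshape(8, 8) % 17).astype(np.uint8)
coarse, res = hierarchical_residuals(img)
print(len(plain_area_flags(res)), "plain 4x4 blocks of", (res.shape[0] // 4) ** 2)
```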

EFFICIENT JOINT ENCRYPTION AND DATA HIDING- A Security Technique For Authentication And Security Of Medical Images In Health Information Systems

In This Paper, An Efficient Crypto-watermarking Algorithm Is Proposed To Secure Medical Images Transmitted In Tele-medicine Applications. The Proposed Algorithm Uses Standard Encryption Methods And Reversible Watermarking Techniques To Provide Security For The Transmitted Medical Images As Well As To Control Access Privileges At The Receiver Side. The Algorithm Jointly Embeds Two Watermarks In Two Domains Using Encryption And Reversible Watermarking To Avoid Any Interference Between The Watermarks. The Authenticity And Integrity Of Medical Images Can Be Verified In The Spatial Domain, The Encrypted Domain, Or In Both Domains. The Performance Of The Proposed Algorithm Is Evaluated Using Test Medical Images Of Different Modalities. The Algorithm Performs Well In Terms Of Both The Visual Quality Of The Watermarked Images And The Available Embedding Capacity.
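A Minimal Sketch Of The Joint Idea Is Shown Below: An Authentication Tag Computed Over The Image's Most Significant Bits Is Embedded Into The Least Significant Bits, And The Result Is Then Encrypted With A Stream Cipher, So Integrity Can Be Checked After Decryption Without The Two Layers Interfering. The HMAC Tag, The SHA-256 Counter-mode Keystream, And The Fixed Embedding Positions Are Illustrative Assumptions; The Paper's Algorithm Uses Standard Encryption Plus Reversible Watermarking And Also Supports Verification In The Encrypted Domain.

```python
import hashlib, hmac

def keystream(key, n):
    """SHA-256 in counter mode as a stand-in for a standard stream cipher."""
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def embed_tag(pixels, auth_key):
    """Embed an HMAC over the 7 MSB planes into the LSBs of the first pixels.
    Note: original LSBs are overwritten here; a reversible variant would
    also embed them (e.g., compressed) so the image is fully restorable."""
    msb_view = bytes(p & 0xFE for p in pixels)
    tag = hmac.new(auth_key, msb_view, hashlib.sha256).digest()  # 256 bits
    marked = bytearray(pixels)
    for i in range(256):
        bit = (tag[i // 8] >> (7 - i % 8)) & 1
        marked[i] = (marked[i] & 0xFE) | bit
    return bytes(marked)

def verify_tag(pixels, auth_key):
    msb_view = bytes(p & 0xFE for p in pixels)
    tag = hmac.new(auth_key, msb_view, hashlib.sha256).digest()
    extracted = [p & 1 for p in pixels[:256]]
    expected = [(tag[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return extracted == expected

enc_key, auth_key = b"k-encrypt", b"k-auth"
image = bytes(range(256)) * 4                 # toy 32x32 8-bit image
watermarked = embed_tag(image, auth_key)
cipher = bytes(a ^ b for a, b in zip(watermarked, keystream(enc_key, len(watermarked))))
plain = bytes(a ^ b for a, b in zip(cipher, keystream(enc_key, len(cipher))))
print("authentic:", verify_tag(plain, auth_key))
```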

EFFICIENT JOINT ENCRYPTION AND DATA HIDING- A Security Technique For Authentication And Security Of Medical Images In Health Information Systems

Medical Images Stored In Health Information Systems, The Cloud, Or Other Systems Are Of Key Importance. Privacy And Security Need To Be Guaranteed For Such Images Through Encryption And Authentication Processes. Because Of The Sensitivity Of The Data Conveyed In Medical Images, Encryption And Watermarking In This Domain Need To Be Reversible, So That The Plain Image Operated On During The Encryption And Watermarking Process Can Be Fully Recovered. In This Paper, We Propose A Fully Recoverable Encrypted And Watermarked Image Processing Technique For The Security Of Medical Images In Health Information Systems. The Approach Is Used To Authenticate And Secure Medical Images, And Our Results Show It To Be Very Effective And Reliable While Keeping The Images Fully Recoverable.
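The Full-recoverability Requirement Reduces To A Bit-exactness Check: After Decryption And Watermark Removal, The Output Must Equal The Original Pixel Data Bit For Bit. The Sketch Below Makes That Round Trip Concrete With A Keyed LSB-XOR Watermark (Self-inverse, Hence Trivially Reversible) And A Toy Stream Cipher; Both Primitives Are Illustrative Assumptions, Not The Paper's Construction.

```python
import hashlib

def lsb_xor_watermark(pixels, wm_key):
    """Reversible LSB watermark: XOR a keyed bit pattern into the LSB plane.
    Applying the same operation twice removes the mark exactly."""
    pattern = hashlib.sha256(wm_key).digest() * (len(pixels) // 32 + 1)
    return bytes(p ^ ((pattern[i // 8] >> (7 - i % 8)) & 1)
                 for i, p in enumerate(pixels))

def stream_xor(data, key):
    """Toy stream cipher (XOR keystream); encryption and decryption coincide."""
    ks = b"".join(hashlib.sha256(key + n.to_bytes(8, "big")).digest()
                  for n in range(len(data) // 32 + 1))
    return bytes(a ^ b for a, b in zip(data, ks))

original = bytes(range(256)) * 4            # toy 8-bit medical image
protected = stream_xor(lsb_xor_watermark(original, b"wm"), b"enc")
recovered = lsb_xor_watermark(stream_xor(protected, b"enc"), b"wm")
assert recovered == original                # bit-exact, fully recoverable
print("fully recoverable:", recovered == original)
```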

REAL-TIME AUTOMATIC LICENSE PLATE RECOGNITION SYSTEM- Real-time Automatic License Plate Recognition System Using YOLOv4

We Introduce A Real-time Automatic License Plate Recognition System That Is Computationally Lighter Because It Eliminates The ROI-setting Step, Without Degrading Recognition Performance. Conventional License Plate Recognition Systems Exhibit Two Main Problems. First, Clear License Plate Visibility Is Required. Second, Processing Actual Field Data Is Computationally Intensive And A Region Of Interest (ROI) Needs To Be Set Manually. To Overcome These Problems, We Perform Plate Localization Directly On The Entire Image And Take Low-quality License Plate Detection Into Account. We Aim To Recognize The License Plates Of Cars Moving At High Speed On The Road, As Well As Stationary Cars, Using The NVIDIA Jetson TX2 Module, An Embedded Computing Device.
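A Detection Loop Of This Kind Can Be Sketched With OpenCV's DNN Module, Which Loads Darknet YOLOv4 Weights Directly; The File Paths, The 416x416 Input Size, And The Thresholds Below Are Assumptions, And The OCR Step On The Cropped Plate Is Only Indicated. On A Jetson-class Device The CUDA Backend Lines Would Offload Inference To The GPU.

```python
import cv2
import numpy as np

# Assumed local files: a YOLOv4 model trained for license plates.
net = cv2.dnn.readNetFromDarknet("yolov4-plate.cfg", "yolov4-plate.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)   # GPU path on Jetson TX2
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

def detect_plates(frame, conf_thr=0.5, nms_thr=0.4):
    """Localize plates on the whole frame, with no manual ROI step."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outs = net.forward(net.getUnconnectedOutLayersNames())
    boxes, confs = [], []
    for out in outs:
        for det in out:                      # [cx, cy, bw, bh, obj, cls...]
            score = float(det[4] * det[5:].max())
            if score > conf_thr:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)])
                confs.append(score)
    keep = cv2.dnn.NMSBoxes(boxes, confs, conf_thr, nms_thr)
    return [boxes[i] for i in np.array(keep).flatten()] if len(keep) else []

cap = cv2.VideoCapture(0)                    # camera stream (device index 0)
ok, frame = cap.read()
if ok:
    for x, y, bw, bh in detect_plates(frame):
        plate = frame[max(y, 0):y + bh, max(x, 0):x + bw]
        # the plate crop would go to a character recognizer / OCR here
cap.release()
```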

REVERSIBLE DATA HIDING IN ENCRYPTED IMAGES- Reversible Data Hiding In Encrypted Images Based On Reversible Integer Transformation And Quadtree-Based Partition

This Paper Presents An Improved Secure Reversible Data Hiding Scheme For Encrypted Images Based On Integer Transformation, Which Does Not Need A Data-hiding Key To Protect The Embedded Secret Data. We First Segment The Original Image Into Blocks Of Various Sizes Using A Quadtree-based Image Partition. For Each Block, We Reserve m Least Significant Bits (LSBs) Of Each Pixel As Embedding Room Based On The Reversible Integer Transformation. In Order To Improve The Security Of The Image Encryption, We Pad The m LSBs Of Each Pixel Using The Corresponding (8-m) Most Significant Bits (MSBs) After The Transformation, Which Protects The Security Of The Encryption Key. Then, We Encrypt The Transformed Image With A Standard Stream Cipher. After The Image Encryption, The Data Hider Embeds The Secret Data In The m LSBs Of The Encrypted Image Through An Exclusive-OR (XOR) Operation. On The Receiving Side, The Receiver Can Extract The Secret Data After The Image Decryption And Recover The Original Image Without Loss Of Quality.
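The Embedding Chain Can Be Sketched End To End For A Single Pixel Stream: The m LSBs Are Vacated And Padded With Bits Derived From The (8-m) MSBs, The Result Is Stream-encrypted, The Data Hider XORs Secret Bits Into The m LSBs Of The Ciphertext, And The Receiver Recomputes The Pad From The Decrypted MSBs To Pull The Secret Back Out. The Pad Rule, m=2, And The SHA-256 Keystream Are Assumptions, And The Reversible Integer Transformation That Preserves The Original LSBs Inside Each Quadtree Block Is Omitted For Brevity.

```python
import hashlib

M = 2                                # LSBs vacated per pixel
MASK = (1 << M) - 1

def keystream(key, n):
    ks = b"".join(hashlib.sha256(key + i.to_bytes(8, "big")).digest()
                  for i in range(n // 32 + 1))
    return ks[:n]

def pad_from_msbs(msb):
    """Pad rule: derive the vacated LSBs from the (8-m) MSBs (assumption)."""
    return msb & MASK

def prepare_and_encrypt(pixels, enc_key):
    # Vacate the LSBs, pad them from the MSBs, then stream-encrypt.
    padded = bytes(((p >> M) << M) | pad_from_msbs(p >> M) for p in pixels)
    return bytes(a ^ b for a, b in zip(padded, keystream(enc_key, len(padded))))

def embed(cipher, secret_bits):
    # Data hider XORs m-bit symbols into the LSBs; no extra key is needed.
    marked = bytearray(cipher)
    for i, b in enumerate(secret_bits):
        marked[i] ^= (b & MASK)
    return bytes(marked)

def decrypt_and_extract(marked, enc_key):
    plain = bytes(a ^ b for a, b in zip(marked, keystream(enc_key, len(marked))))
    # MSBs are untouched by the LSB embedding, so the pad is recomputable.
    secret = [(p & MASK) ^ pad_from_msbs(p >> M) for p in plain]
    return plain, secret

pixels = bytes(range(200))
secret = [3, 1, 0, 2, 3]             # m-bit symbols to hide
cipher = prepare_and_encrypt(pixels, b"enc-key")
marked = embed(cipher, secret)
_, out = decrypt_and_extract(marked, b"enc-key")
assert out[:len(secret)] == secret
print("extracted:", out[:len(secret)])
```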