PROJECT TITLE:
Empirical Analysis and Validation of Security Alerts Filtering Techniques - 2017
System administrators address security incidents through a variety of monitors, such as intrusion detection systems, event logs, and security information and event management (SIEM) systems. Monitors generate large volumes of alerts that overwhelm the operations team and make forensics time-consuming. Filtering is a well-established technique to reduce the volume of alerts. Despite the number of filtering proposals, few studies have addressed the validation of filtering results on real production datasets. This paper analyzes a range of state-of-the-art filtering techniques used on security datasets. We use 14 months of alerts generated in a SaaS Cloud. Our analysis aims to measure and compare the reduction in alert volume obtained by the filters. The analysis highlights the pros and cons of each filter and provides insights into the practical implications of filtering as affected by the characteristics of a dataset. We complement the analysis with a method to validate the output of a filter in the absence of ground truth, i.e., knowledge of the incidents that occurred in the system at the time the alerts were generated. The analysis addresses blacklist, conceptual clustering and bytes techniques, and our filtering proposal based on term weighting.
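To illustrate the general idea behind term-weighting-based alert filtering, the sketch below scores each alert message by the TF-IDF weight of its terms and keeps only the most informative fraction. This is a minimal, hypothetical illustration of the technique's principle, not the paper's actual algorithm, dataset, or thresholds; the function name, the tokenization, and the `keep_ratio` parameter are all assumptions made for the example.

```python
import math
from collections import Counter

def term_weight_filter(alerts, keep_ratio=0.5):
    """Keep the most informative fraction of alerts.

    Each alert message is scored by the average TF-IDF weight of its
    terms: frequently repeated boilerplate alerts score low, while
    rare, distinctive alerts score high. Illustrative sketch only.
    """
    docs = [alert.lower().split() for alert in alerts]
    n = len(docs)
    # Document frequency: in how many alerts each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        # Average TF-IDF over the terms of this alert message.
        score = sum((tf[t] / len(doc)) * math.log(n / df[t]) for t in tf)
        scores.append(score)
    # Score threshold that retains roughly `keep_ratio` of the alerts.
    cutoff = sorted(scores, reverse=True)[max(0, int(n * keep_ratio) - 1)]
    return [a for a, s in zip(alerts, scores) if s >= cutoff]

alerts = [
    "login failed for user admin",
    "login failed for user admin",
    "login failed for user root",
    "kernel panic detected on node7",
]
kept = term_weight_filter(alerts, keep_ratio=0.5)
```

With this toy input, the duplicated "login failed" alerts share low-weight terms and are filtered out, while the rare "kernel panic" alert is retained; a real deployment would tune the retention threshold against analyst feedback rather than a fixed ratio.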