Big data and security intelligence are two of the hottest topics in security. We are collecting more and more information, not just from our infrastructure but increasingly directly from our applications, and this vast amount of data is getting harder and harder to understand. Terms like MapReduce, Hadoop, Spark, Elasticsearch, and data science come up in many discussions. But what are these technologies and techniques, and what do they have to do with security analytics and intelligence? We will see that none of these technologies alone is sufficient in our quest to defend our networks and information. Data visualization is the only approach that scales to ever-changing threat landscapes and infrastructure configurations. Using big data visualization techniques, you can uncover hidden patterns in data, identify emerging vulnerabilities and attacks, and respond decisively with countermeasures that are far more likely to succeed than conventional methods. This activity is increasingly referred to as hunting. Attendees will learn about log analysis, big data, information visualization, and data sources for IT security, and will learn how to generate visual representations of IT data. The training is filled with hands-on exercises using the DAVIX 2014 live CD.
The workshop has received a number of updates. Here are some highlights:
Log Management and SIEM
Security Visualization Use-Cases
Tools to gather data:
We are also using a number of visualization tools to analyze example data in the labs:
Under the log management section, we will discuss:
The section on big data covers the following:
Raffael Marty is one of the world's most recognized authorities on security data analytics and visualization. Raffy is the founder and CEO of pixlcloud, a next-generation visual analytics platform. With a track record at companies including IBM Research and ArcSight, he is thoroughly familiar with established practices and emerging trends in big data analytics. He has served as Chief Security Strategist at Splunk and was a co-founder of Loggly, a cloud-based log management solution. Author of Applied Security Visualization and a frequent speaker at academic and industry events, Raffy is a leading thinker and advocate of visualization for unlocking data insights. For more than 14 years, Raffy has worked in the security and log management space, helping Fortune 500 companies defend themselves against sophisticated adversaries and training organizations around the world in the art of data visualization for security. Zen meditation has become an important part of Raffy's life, sometimes leading to insights not in data but in life.
This graph visualization shows the propagation of malware through a computer network. Twelve machines in the network were deliberately infected to see how the traffic spread to other machines. Over 7,800 machines were included in the dataset.
The whole network in a single chart. Yellow links indicate benign traffic; red links indicate traffic containing at least one infected packet. Nodes are sized by traffic volume.
Data taken from the MyDoom-A.tar.gz dataset, available here.
Image generated with KeyLines.
Using a dataset from http://www.uvic.ca/engineering/ece/isot/datasets/index.php, this graph shows botnet traffic among 5,000 computers at the University of San Diego. Different colors indicate different protocols. Nodes represent computers, sized by degree; edges represent packets, weighted by packet size. Image generated using KeyLines.
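As a rough sketch of how such a node-link graph can be assembled from raw flow records, the snippet below aggregates traffic volume per host (for node sizing) and per connection (for edge weighting), then emits Graphviz DOT text. The flow records and scaling factors are made up for illustration; a tool like KeyLines or Graphviz would do the actual rendering.

```python
from collections import Counter

# Hypothetical flow records: (source, destination, protocol, bytes).
# These values are illustrative only, not taken from the datasets above.
flows = [
    ("10.0.0.1", "10.0.0.2", "tcp", 1500),
    ("10.0.0.1", "10.0.0.3", "udp", 300),
    ("10.0.0.2", "10.0.0.3", "tcp", 800),
    ("10.0.0.1", "10.0.0.2", "tcp", 700),
]

node_volume = Counter()   # total bytes seen per host -> node size
edge_weight = Counter()   # total bytes per (src, dst) pair -> edge width

for src, dst, proto, size in flows:
    node_volume[src] += size
    node_volume[dst] += size
    edge_weight[(src, dst)] += size

# Emit a Graphviz DOT description; the divisors are arbitrary scale factors.
lines = ["digraph traffic {"]
for node, vol in node_volume.items():
    lines.append(f'  "{node}" [width={vol / 1000:.2f}];')
for (src, dst), weight in edge_weight.items():
    lines.append(f'  "{src}" -> "{dst}" [penwidth={weight / 500:.1f}];')
lines.append("}")
dot = "\n".join(lines)
print(dot)
```

The same aggregation works whether nodes are sized by traffic volume (as in the MyDoom chart) or by degree (as in the botnet chart); only the counted quantity changes.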
Visual analytics, and especially the exploration of data, requires a scalable and flexible data backend. It is not uncommon that gigabytes, maybe even terabytes, of data need to be queried for a specific analytics task. Furthermore, the more context that is available around log data, the more expressive the data becomes and the deeper the insights that can be discovered in it. How can we gather all that context and combine it with both network-based and host-based data? What are the data access requirements? How can we run data mining algorithms, such as clustering, across all of the data? What kind of data store do we need for that? Do we need a search engine as a backend, or a columnar data store?
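To make the context question concrete, here is a minimal sketch of enriching network events with host-based context by joining on IP address. The asset table, field names, and values are all hypothetical; in practice this lookup would run inside whatever data backend is chosen.

```python
# Hypothetical asset inventory keyed by IP address (made up for illustration).
assets = {
    "10.0.0.5": {"owner": "web-team", "role": "webserver", "criticality": "high"},
    "10.0.0.9": {"owner": "it-ops", "role": "workstation", "criticality": "low"},
}

# Hypothetical network events to be enriched.
events = [
    {"src": "10.0.0.9", "dst": "10.0.0.5", "port": 443},
    {"src": "10.0.0.9", "dst": "8.8.8.8", "port": 53},
]

def enrich(event, assets):
    """Attach asset context to both endpoints of a network event."""
    out = dict(event)
    out["src_ctx"] = assets.get(event["src"], {"role": "unknown"})
    out["dst_ctx"] = assets.get(event["dst"], {"role": "unknown"})
    return out

enriched = [enrich(e, assets) for e in events]
```

An enriched event now carries owner, role, and criticality alongside the raw network fields, which is exactly the kind of added expressiveness that makes deeper insights possible, and it is what drives the storage question: this join has to happen at query time, across all of the data.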
I recently wrote a paper on the topic of the security data lake, a concept for a data backend that enables a variety of processing and access use-cases. A short introduction to the topic is available as well.
Maybe at a later point in time, I will address the topic of data science techniques and workflows for making all that big data actionable. How do you take a terabyte of data and find actual insights? Just dropping the data into a network graph visualization is not going to help; you need a bit more to make that happen. But again, more on that later.
If you want to learn more about how to visualize and analyze terabytes of data, attend the Visual Analytics Workshop at BlackHat 2015 in Las Vegas.
Again, the paper is available for download here.
This is a screenshot from a tool called Mondrian showing network traffic, DNS traffic in particular. The bar charts show the breakdown of sources, destinations, and ports. The parallel coordinates display shows all three variables at the same time. The red parts highlight an interesting visual pattern. What is it?
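The breakdowns behind those bar charts are simple frequency counts over the traffic records. The sketch below computes them from hypothetical DNS log records (the IPs and ports are made up for illustration), and also pulls out records on a nonstandard port, the sort of outlier a red highlight in a linked view would draw the eye to.

```python
from collections import Counter

# Hypothetical DNS log records: (source IP, destination IP, destination port).
# These values are illustrative only, not from the actual screenshot.
records = [
    ("10.1.1.4", "10.0.0.53", 53),
    ("10.1.1.4", "10.0.0.53", 53),
    ("10.1.1.7", "10.0.0.53", 53),
    ("10.1.1.7", "192.0.2.1", 5353),
]

# One frequency count per variable -> one bar chart each.
sources = Counter(r[0] for r in records)
destinations = Counter(r[1] for r in records)
ports = Counter(r[2] for r in records)

# Records where supposed DNS traffic uses a port other than 53 stand out.
unusual = [r for r in records if r[2] != 53]
```

A parallel coordinates plot then draws each record as one line across the three axes, so an outlier like the nonstandard-port record shows up as a visually divergent path.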