Security Domain Knowledge and Visualization

I am quite frustrated with a lot of the research papers and tools that get published. In many cases you can just tell that the authors and developers have good intentions but, unfortunately, little or no domain knowledge.
One example was a recent paper I read about a visualization tool. The authors were talking about occlusion and how filtering can help address that problem. Absolutely. I could not agree more. However, the context was security alarms. They proposed that one of the most effective ways to deal with occlusion is to filter on alarm priority and show only a certain alarm level. Well, why would I need a visualization tool for that? I can use grep. And if you are going to visualize only the highest-priority alerts (or any single priority level, for that matter), you are losing context. It might have been enormously important to see those level 10 alerts in context with all the level 1 alerts. That's why you use visualization in the first place: to see relationships. The relationships among level 10 alerts alone are limited, and most likely there won't be many of them!
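
To illustrate the point: if all you need is "show only the level 10 alarms," a trivial filter does the job and no visualization is involved. Here is a minimal sketch in Python, assuming a hypothetical alarm log where each line carries a field like priority=10:

```python
# Keep only alarms of one priority level -- the moral equivalent of a grep.
# Assumes a hypothetical line format such as:
#   2007-03-12 10:14:02 priority=10 src=10.0.0.5 dst=10.0.0.9 msg=...
import re

def filter_by_priority(lines, level=10):
    pattern = re.compile(r"priority=(\d+)")
    for line in lines:
        match = pattern.search(line)
        if match and int(match.group(1)) == level:
            yield line

with open("alarms.log") as f:
    for alarm in filter_by_priority(f, level=10):
        print(alarm, end="")
```

This is no more than grep 'priority=10' alarms.log, which is exactly the point: a tool whose main answer to occlusion is priority filtering adds nothing over the command line, and it drops the low-priority context on top of it.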
The second point I want to get across about visualization papers (or security research papers in general) is the use of the wrong data to verify and justify a tool's usefulness. Simulated data feeds, artificially generated user behavior, and the like are a really bad way of testing, or at least of justifying, why a tool is well suited for finding important and relevant events. And if you then compute metrics such as recall and precision on top of that data, you are just in the wrong profession. Get the tool onto a real network where people are trying to solve real problems!
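
For reference, the metrics in question are trivial to compute; the problem is that on simulated data the "relevant" set is defined by the same assumptions that generated the events, so good scores are close to guaranteed. A minimal sketch of the two metrics, with hypothetical event IDs:

```python
def precision_recall(detected, relevant):
    """Precision and recall over two sets of event IDs.

    precision = |detected & relevant| / |detected|
    recall    = |detected & relevant| / |relevant|
    """
    true_positives = len(detected & relevant)
    precision = true_positives / len(detected) if detected else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: three alerts raised, three events actually relevant.
detected = {"evt1", "evt2", "evt3"}   # what the tool flagged
relevant = {"evt2", "evt3", "evt4"}   # the ground truth
print(precision_recall(detected, relevant))  # (0.666..., 0.666...)
```

Nothing in those two numbers says anything about whether the tool helps an analyst on a real network.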

Utterly right!

No tool should filter data on its own. But the user should have the ability to do it if they need a more detailed view of a particular event. First of all, the relationships between the nodes are important; that's right. Presenting aggregated and filtered data means that an algorithm, a predefined filter or pattern, has already dropped information! When it comes to security issues, you are obliged to pay attention to every kind of information, even if a pattern says it is irrelevant. On this point, you're absolutely right.
But what about enabling the user to filter data based on that complete initial view?
Newbies and inexperienced users won't be able to interpret an overview correctly. They see big circles and fat lines. Wow, great.
So let's talk about the professional users. If they spot something interesting, they eventually have to grep the source data to get a closer look. What if that grep returns 12,000 lines? Why shouldn't we enable them to do that grep visually, by presenting a filtered view right at the spot in question (see the sketch below)? You're just masking the data that doesn't concern you at the moment, to get a clearer look.
In my opinion, that kind of filtering should be available.
You're absolutely right to condemn filters or pattern matching that limit the scope by default.
Filtering, in general, is a way to select relevant data.
What counts as "relevant" should be decided by the user, not by predefined rules.
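
To make the "grep visually" idea concrete: the tool renders the complete, unfiltered view first and narrows it only when the user picks a spot to inspect. A minimal sketch, assuming hypothetical event records with source and destination fields:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    src: str
    dst: str
    priority: int

def drill_down(events, node):
    """User-initiated filter: keep only events touching the selected node.

    The full event list stays untouched; this narrows the *view* only,
    the visual equivalent of grepping the source data for one host.
    """
    return [e for e in events if node in (e.src, e.dst)]

# Hypothetical data; the overview renders all of it first.
events = [
    Event("10.0.0.5", "10.0.0.9", 10),
    Event("10.0.0.5", "10.0.0.7", 1),
    Event("10.0.0.8", "10.0.0.9", 1),
]

# Filtering happens only after the user selects a node: their choice,
# not a predefined rule's.
focused_view = drill_down(events, "10.0.0.5")
print(focused_view)
```

The predefined filtering the post criticizes happens before the user ever sees the data; this happens after, and on the user's terms.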

yup. completely agree.
Likewise for all the solutions that only display aggregate data or single-source data without providing a way to represent both with some sense of source. (I don't care what the graph looks like across all my sources; I want to see what it looks like from each router in relation to the others.)
There are tons of lousy solutions that are clearly built by people who have never had to actually do analysis and are just looking for an entertaining project. Drives me batty.