Exploring the Origins of Threat Hunting


This post was originally published here by Sqrrl Team.

Threat hunting is one of the fastest-growing information security practices today. But what really defines threat hunting and how did the practice start?

Recently, Sqrrl partnered with Richard Bejtlich of TaoSecurity to bring together a panel discussion with members of the original General Electric CIRT incident handling team. These responders helped pioneer some of the most common tools and techniques used in hunting today. In this roundtable conversation, they covered the origins of threat hunting, from the network "hunter-killer" teams used by the Air Force up to the present day.

You can listen to the full panel discussion here.

Air Force and Early Adopter Perspectives

Richard Bejtlich (RB): A few months ago I did a post on my blog, TaoSecurity, talking about the origins of the term threat hunting. It is a very popular term. It's been used in the industry for at least the last five years, and those of you who've read my blog know that I have used it in the past, but Sqrrl asked me, "Where did it come from?" When I started thinking about that, I thought it would be interesting to bring together the people I interacted with on a daily basis when I worked at General Electric, who did some of the early threat hunting. We're going to talk about the present, how people are using threat hunting to better protect their organizations today, and then the future, what sorts of developments you could expect in this field. We have some topics that I'll be throwing out along the way, but with that, I'd like to start by talking a little bit about the past.

Can you talk a little bit about your experience with threat hunting in the past?

Bamm Visscher (BV): Oddly enough, when most people think of incident detection you start with real-time alerts, but in my day we actually started out with something we'd call "batch" now. We didn't have the technology to be alerted to something in real time, respond, and look up all the information. Instead we had tools that collected data over a period of time, usually 24 hours. The analysts and I would look at things that were marked and review the past 24 hours of data for abnormalities.

It wasn't until later on that we really started seeing real-time alerts. We'd open an incident and respond, we'd do all those fun things, and then start asking the hard questions: why is it taking 24 hours to find this out, or 18 hours, or even eight hours? Why didn't we know this yesterday? We started really going down the road of getting the technology that would allow us to do real-time alerting, but as you can imagine, back in the day we didn't really have all the horsepower and the other things we needed to look at the stuff in real time.

Where did you first start to hear the term threat hunting?

BV: Really, it was the Air Force. There's no moment I can point to when we suddenly started using the term hunting. To tell you the truth, it just seems like a natural transition to define what more we were doing. It really came down to getting more and more visibility. Necessity drove what we did, and so we started out with batch analysis. We found out we wanted to do real-time analysis, and then later, as we had more time and resources, we said, "Oh, wow. We just got visibility into a new location or a different type of data. We don't really know what to do with this stuff, but we're going to go around and start hunting, focusing time and energy on only looking at those data sets, looking for data or anomalies, activities, and so on that we hadn't seen before." That's really what brought that along. We would take, a lot of times, our top-tier talent, the guys that had the most experience, and say, "Let's spend a day, a week, a month, or whatever going over this stuff," versus the other teams that were focused on the batch analysis or real-time analysis.

RB: Right. I think it might be useful for the audience to consider as well that one of the driving factors behind this type of analysis was knowing that things were occurring, that there were events, intrusions, campaigns, et cetera, happening that were not being caught using the methods of the day. That drove a need for innovation, a need to say, "What else can we do to find this activity?" Because we take a pessimistic but realistic point of view that says things are happening. Things will always be happening. At certain points they pass below the waterline, and at other points they're like the iceberg breaking above the water, and that's when you get a detection using your traditional methods. If you're not always out there assuming the adversary is somewhere in some network that you're concerned about, you're not doing the best you could as far as your detection mission.

How did you use early network sources for what later became called threat hunting?

David Bianco (DB): The first thing that we did was to build some network-based indicators of things that we had found through our own research. We went out and said, "We've found these indicators from the adversaries," and fed those back into the detection cycle, kind of like a rudimentary intel cycle where we were pretty much producing our own intel, doing our own analysis, and pushing it into detection, in most cases all from one person doing it in the same session.

As you know, that doesn't sound a lot like threat hunting in the terms we often talk about it in, with big data analytics and things like that. My definition of threat hunting is actually a little more expansive than that and definitely includes pretty much anything where you are the human driver trying to make decisions about what you want to be able to find in your network. That definitely includes things like the kind of rudimentary threat intelligence cycle we were setting up. I think that's where I got started with the threat hunting piece, coming from the threat intelligence and the detection engineering of signature writing, trying to come up with the minimal set of signatures or indicators that would detect the maximal number of adversary activities.
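
To make that rudimentary intel-to-detection loop concrete, here is a minimal sketch of the idea: indicators produced during an investigation are swept against collected connection logs. The file name, log columns, and indicator values are hypothetical illustrations, not the tooling the GE team actually used.

```python
# Sketch of a rudimentary intel cycle: self-produced network indicators are
# fed straight back into detection by sweeping a batch of connection logs.
# All file names, columns, and indicator values below are hypothetical.
import csv

# Indicators the analyst produced from their own research (made-up values).
BAD_IPS = {"203.0.113.45", "198.51.100.7"}
BAD_DOMAINS = {"update-check.example.net"}

def sweep_connection_log(path):
    """Yield log rows whose destination matches a known-bad indicator."""
    with open(path, newline="") as fh:
        # Expects columns: src_ip, dst_ip, dst_host
        for row in csv.DictReader(fh):
            if row["dst_ip"] in BAD_IPS or row.get("dst_host", "") in BAD_DOMAINS:
                yield row

if __name__ == "__main__":
    for hit in sweep_connection_log("connections_last_24h.csv"):
        print("possible match:", hit["src_ip"], "->", hit["dst_ip"], hit.get("dst_host", ""))
```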

Can you talk about the role of signatures or structured detection as it relates to threat hunting?

RB: I think there might be a sense in the industry, a very simplistic view, that signatures are bad and that if you're not free-flowing all the time then you're missing something. I imagine that's not how [David] sees things.

DB: No. I definitely don't. I understand why people are confused by that, because for the longest time automated detection was just based on signatures. In most cases things like your SIM rules or your IDS rules or your antivirus updates tend to be relatively static. Over the past few years we started to tell people that static automated detection is not enough, which got blown out of proportion into "signatures are dead," which is not true.

The interesting thing about that is that people who say signatures are dead are probably still using signature-based detection somewhere in their environment, even if they just have antivirus on the host or something like that. The other interesting thing is that when you end up doing your big data analysis with machine learning and all the buzzwords you can throw at it, you then have to figure out how to turn that into automated detection. I may not always want to spend 36 CPU hours on a detection in my cluster.

If I analyze what I've actually found, many times I can boil that down into a signature. It turns out that you might use machine learning, clustering, visualization, and other tools to help you understand the problem. But when you understand the problem, you find out, "Hey, this is actually not too difficult to find." Now that I know what I'm looking for, if this value is in this range, this value is in this range, and this third value is also in this range, maybe that's malicious activity.

When you break that down into a detection platform for automation, in most cases it comes out to be a signature. The role of signatures is actually kind of complex, and they have a bad rap they don't really deserve. A static set of signatures by itself, yes, is not going to do the job for you. But when you consider the set of signatures you have to be a dynamic set, one influenced by what you are looking for and what you're finding in the data, and you're constantly updating those signatures as part of your threat hunting operation, they're very valuable.
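
As an illustration of boiling a hunting finding down to a signature, here is a toy rule of the "these values fall in these ranges" form David describes. The field names and thresholds are invented for the example.

```python
# Toy "signature" distilled from a hunting finding: a few fields falling into
# known ranges. Field names and thresholds are made up for illustration.
SIGNATURE = {
    "bytes_out": (500_000, None),   # at least ~500 KB sent in one session
    "session_seconds": (0, 5),      # very short-lived connection
    "dst_port": (443, 443),         # over HTTPS
}

def matches_signature(event, signature=SIGNATURE):
    """Return True if every field in the event falls inside its configured range."""
    for field, (low, high) in signature.items():
        value = event.get(field)
        if value is None:
            return False
        if low is not None and value < low:
            return False
        if high is not None and value > high:
            return False
    return True

# Example flow record that would trip the rule.
print(matches_signature({"bytes_out": 750_000, "session_seconds": 2, "dst_port": 443}))
```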

Early Technologies and Approaches

At GE, how did you go out to systems and get information from them and then use that to find something that was suspicious or malicious?

Ken Bradley (KB): I think it was the byproduct of looking for specific artifacts and indicators relevant to a threat actor at the time during an incident. There wasn't any tool available that we could plug in, so we wrote scripts to collect really simple data, like lists of running processes from machines. If you pull a list of running processes off of one machine, you're not necessarily going to be able to look at that and determine what's bad. But if you can pull a list of running processes off of all 100,000 of the machines you've got running on your network, put that data together, and start looking through it, a lot of times things will bubble up to the top, a pattern if you will.

The most successful data that we collected off of the machines at that stage was Autoruns details. That's a tool that's been around for a long time and still continues to be probably one of the more efficient host detection tools you can run. We started collecting the Autoruns output, which is just a list of all the persistence mechanisms stored on a Windows system, and this was predominantly a Windows environment; we weren't doing that much with UNIX systems. With the Autoruns output you're getting a list of all the different persistence areas, an MD5 hash and a SHA hash, a number of different hashes that'll identify the specific executable or whatever the object is that's getting called.

Similar to what I stated about a list of running processes on one machine, the Autoruns detail gives you a lot more you can work with, but when you come up with a way to collect it off of all of your systems, or at least the majority of your systems, and compound it together, you can really begin to find things, even if you don't know what you're looking for.

A lot of times with the specific malware, again, if you're working with one of the more "back in the day" APT groups, most of their malware didn't show up as one single MD5, so that wasn't an effective way to look for them. But you could often leverage things like PsExec or some of the native processes running on the Windows system, and that might get flagged that way.

We really didn't, again, have all these things like Elasticsearch and big data analytics. I don't even know if those were really terms we used back in that day. We just kept it simple and built a small database. I believe we had a Visual Basic client that we pushed out to the systems, which invoked the Microsoft HTTP library and pushed the data to a little PHP web server, and we would do the best we could to organize it and kind of stack it up, to use a current term. Yeah, that was it. Managing scale wasn't so much about adding boxes to it. It was just ... becoming accustomed to the fact that you're not going to be able to get all of the information from your systems and that you're going to have to be okay with 60 or 70% versus 100%. That's the way we dealt with scale back then.
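
The "stacking" approach Ken describes, collecting the same artifact from every host and reviewing the rarest values first, can be sketched in a few lines. The CSV input and column names below are hypothetical stand-ins for the custom Visual Basic client and PHP/database back end the team actually built.

```python
# Minimal stacking sketch: count identical persistence entries across all
# collected hosts and review the least common ones first. The input file and
# its columns (host, image_path, md5) are hypothetical.
import csv
from collections import Counter, defaultdict

def stack_entries(path):
    """Count identical persistence entries across all collected hosts."""
    counts = Counter()
    hosts = defaultdict(set)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            key = (row["image_path"].lower(), row["md5"])
            counts[key] += 1
            hosts[key].add(row["host"])
    return counts, hosts

if __name__ == "__main__":
    counts, hosts = stack_entries("autoruns_all_hosts.csv")
    # Entries seen on only a handful of machines are the ones worth a look.
    for key, n in sorted(counts.items(), key=lambda kv: kv[1])[:20]:
        print(n, key, sorted(hosts[key])[:5])
```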

How valuable was it to be able to make changes quickly?

KB: That is probably the biggest thing you really need. I think the most important part, when you're looking at widespread host-based collection across an enterprise, is that speed you mentioned: the ability to change and do it on your terms without having to go through a whole round of change control or get 20 approvals. That was key.

I think back to some very specific incidents we worked in that time frame, when we were usually almost in hand-to-hand combat with somebody on the other end of a keyboard, and it wasn't uncommon for us to witness them change some piece of malware they were using to maintain an active backdoor in the environment as we detected some of their changes. If we didn't have the capability to really quickly identify that, get it plugged into the system, perform another wide sweep, or change the signatures or analytic patterns we were using within a short period of time, I don't think we would've had some of the successes we had. I should say the remediation or containment efforts we were doing would not have worked much longer. The ability to adjust, whether it be your signatures or this kind of analytic chain, is very important.

RB: I agree with that, too. I'd much rather have a rapid development and collection cycle than complete coverage, and I think your percentages are spot on. Lots of people think in terms of success being 95% or above, but if you have a massive enterprise and you're getting anywhere near 60%, that's probably going to be considered a win.

What sorts of things did you do with the team to rip apart what was found and then turn that into sources of data for the hunt mission?

Tyler Hudak (TH): Like Ken said, when we would come across a lot of the malware that the adversaries were using, we'd have to go through and figure out what it was doing. If I remember right, VirusTotal had been around for a while but wasn't as advanced as it is today, and we weren't uploading any of our stuff to VirusTotal anyway because we were afraid that these were targeted attacks. We didn't want to reveal that we were attacked and let the adversary know, "Hey, we found your malware."

A lot of what we would do was take the malware that we found during our engagements or investigations and really just sit down and pull it apart, going through the normal reverse engineering that a lot of people do now: picking it apart, figuring out its capabilities, how it interacts with the system, any clues we could find to go out and search for things. Ken was talking about using Autoruns across the entire organization. I remember being able to determine that a specific backdoor left by an adversary would drop a persistence key in the registry with maybe a dozen different hard-coded values, and then we could go into the data that Ken had grabbed through his Autoruns collection, query it, and find all these other systems that we didn't know had been compromised at some point in time, or that had a sleeper agent, and then go in and look.
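
A sweep like the one Tyler describes, where hard-coded registry values recovered from reverse engineering are searched across the enterprise-wide Autoruns collection, might look something like this. The value names and input file are made up for illustration.

```python
# Sketch of sweeping a host collection for hard-coded persistence values
# recovered by reverse engineering a backdoor. Value names, file name, and
# columns (host, reg_value_name, image_path) are hypothetical.
import csv

# Hypothetical values recovered from pulling the sample apart.
KNOWN_PERSISTENCE_VALUES = {"WinUpdSvc", "MsDtcHelper", "NetSessAgent"}

def find_compromised_hosts(path):
    """Map each host to the suspicious persistence value names found on it."""
    hits = {}
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["reg_value_name"] in KNOWN_PERSISTENCE_VALUES:
                hits.setdefault(row["host"], []).append(row["reg_value_name"])
    return hits

if __name__ == "__main__":
    for host, values in find_compromised_hosts("autoruns_all_hosts.csv").items():
        print(host, "has suspicious persistence entries:", values)
```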

The manual reverse engineering we did only gets you so far. It's slow and it takes time, so I remember we created an automated malware analysis platform, and this was before things like Cuckoo had come out. We were just going based off of research that other people had done and rolling our own, but we had our own malware analysis platform where we could take a piece of malware, throw it in there, and very quickly, within a couple of minutes, get information such as what network connections it tried to create, what registry entries or files it modified, even down to what mutexes it created on the operating system. That helped us very quickly identify how the malware worked and how we could find it within our systems, even going as far as identifying the malware based off of the MO it operated with.
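
As a rough sketch of how the output of such a home-grown sandbox could feed back into hunting, the snippet below assumes a hypothetical JSON report listing the observed network connections, registry writes, file writes, and mutexes, and reduces it to indicators for the host and network sweeps discussed earlier. The report format and field names are assumptions, not the format the GE platform produced.

```python
# Reduce a (hypothetical) sandbox report to hunt indicators: contacted IPs,
# registry keys written, mutexes created, and files dropped at run time.
import json

def indicators_from_report(path):
    """Extract behavioral indicators from an assumed JSON sandbox report."""
    with open(path) as fh:
        report = json.load(fh)
    return {
        "ips": sorted({c["dst_ip"] for c in report.get("network_connections", [])}),
        "registry_keys": sorted({w["key"] for w in report.get("registry_writes", [])}),
        "mutexes": sorted(set(report.get("mutexes", []))),
        "dropped_files": sorted({f["path"] for f in report.get("file_writes", [])}),
    }

if __name__ == "__main__":
    iocs = indicators_from_report("sandbox_report_sample123.json")
    for kind, values in iocs.items():
        print(kind, "->", values)
```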

In our next post, weā€™ll talk through practical hunting techniques, as well as methods of getting to know your environment when hunting.

Photo: drivingsales.com
