
Monday, November 6, 2017

Kernel Debug Best Practices, or Why "fw ctl zdebug..." Should Not Be Used

Over the last several days I have seen a rapidly growing number of posts at CPUG and CP Community where the "fw ctl zdebug..." command was mentioned, used and advised.

Although some of you already know my position on the matter, I have decided to write a post about the growing habit of using zdebug instead of employing the full fw ctl debug mechanism.

Kernel debug in general


Check Point FW is essentially a Linux-based system with a kernel module inserted between the drivers and the OS IP stack. If you do not know what I am talking about, you may want to look into this post with an explanatory video on the matter.

Extracting information about kernel-based security decisions is rather tricky, so Check Point developed an elaborate tool to read information about the actions of various FW kernel modules.

In a nutshell, each kernel module has multiple debug flags that force the code to start printing out certain information. I have numerous posts in this blog explaining different flags, tips and tricks for kernel debug, and also providing links to CP kernel debug documents.

Debug buffer


It is important to understand that the FW kernel is always printing out some debug messages. For most kernel modules, the error and warning flags are active, and the output goes to /var/log/messages by default. This is not practical for debugging, so before starting a kernel debug, an engineer needs to set a buffer which will receive the debug output instead of the /var/log/messages file.

To do so, the following command is used: fw ctl debug -buf XXXXX, where XXXXX is the buffer size in KB. The maximum possible buffer today is 32 MB, but I advise my students to use 99999 to make sure they get the maximum possible buffer anyway.

The kernel can be very chatty, so having a bigger buffer ensures fewer kernel messages are lost.
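As a sketch, allocating the buffer on the gateway looks like this (run on the gateway itself; as noted above, oversized requests simply get you the largest buffer the version supports):

```shell
# Allocate the debug buffer before raising any flags.
# 99999 KB is above the 32 MB maximum, so you are guaranteed
# the biggest buffer this version can give you.
fw ctl debug -buf 99999
```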

Debug modules and flags


The FW kernel is a complex structure. It is built from multiple modules, and each module has its own flags. One can run a single debug session with multiple flags raised for several modules. To raise debug flags, one uses one or several commands of this type:

fw ctl debug -m (module name) (+|-) (list of flags)

Importantly, the + and - options allow you to raise and remove flags on the fly, even during an already running debug session. The list of modules and flags can be found via the first link in this post.
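For illustration, a session could raise flags on the fw module and then narrow the debug down without stopping it (treat the flag names here as examples; consult the module and flag reference linked above for your version):

```shell
# Raise the drop and conn flags on the fw module (example flags)
fw ctl debug -m fw + drop conn

# Later, remove one flag on the fly while the session keeps running,
# shrinking the amount of output to what you actually need
fw ctl debug -m fw - conn
```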

Printing info out of buffer


Raising flags is not enough; to get the information, you need to start reading the buffer out with this command:

fw ctl kdebug -f (with some options)

There will be A LOT of information, so never do this on the console. Use an SSH session or redirect the output to a file.
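A minimal sketch of reading the buffer into a file instead of the console (the -T option adds timestamps to each message; check your version's documentation for the full option set):

```shell
# Continuously read the debug buffer and redirect everything to a file,
# so the console is not flooded; -T timestamps each message
fw ctl kdebug -T -f > /var/log/kernel_debug.txt 2>&1

# Stop reading with Ctrl-C once you have reproduced the issue
```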

Stopping debug


Once you have collected the relevant info, you need to reset kernel debug to the default settings; otherwise your FW will continue printing out tons of unnecessary info. To do so, run

fw ctl debug 0

What is fw ctl zdebug then?

fw ctl zdebug is an internal R&D macro for cutting corners when developing and testing new features in a sterile environment. It is equivalent to the following sequence of commands:

fw ctl debug -buf 1024
fw ctl debug (your options)
fw ctl kdebug -f
-------(waiting for Ctrl-C)
fw ctl debug 0

Why is this a problem?


If you are still reading this post and have got to this line, you probably think zdebug is a godsend. It simplifies so many things; it is the only way to run debug in a production environment! Right?

Wrong. To make it plain, here is the list of problematic points with this way of doing things:

1. The buffer is way too small. Lots and lots of messages might simply be lost because the buffer does not have enough room to hold them before they are read.
2. It is not flexible enough. Running debug in production requires a lot of consideration and a certain amount of caution. After all, you are asking the FW kernel to do extra things, lots of them. The best practice is to start with a single flag or two and expand the area of research on the fly while trying to catch the issue. This is impossible to do with the fw ctl zdebug macro.
3. It is too simple to use. You could say, what a funny argument. Yet, let's think about it. To master kernel debug as described above, one has to understand the kernel structure, dependencies, flags and modules. You don't have to do any of that to run fw ctl zdebug drop, and many people do just that.
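Putting the pieces together, here is a sketch of a full debug session following the best practices described above: allocate a large buffer, start narrow, expand on the fly, and always reset at the end (module and flag names are examples only):

```shell
# 1. Allocate the biggest possible buffer (capped at 32 MB)
fw ctl debug -buf 99999

# 2. Start narrow: a single module, one or two flags (example flags)
fw ctl debug -m fw + drop

# 3. Start reading the buffer out, redirected to a file, in the background
fw ctl kdebug -T -f > /var/log/kernel_debug.txt 2>&1 &

# 4. Reproduce the issue; expand the area of research on the fly if needed
fw ctl debug -m fw + conn

# 5. When done, stop the background reader and reset debug to defaults
kill %1
fw ctl debug 0
```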

My personal position is that kernel debug is a sensitive and risky operation. It requires understanding of the technology and of the tool itself beforehand. Without such understanding, one could miss messages, complicate things and, in some very limited cases, crash the GW under debug. The latter I have not seen for quite some time, though.


-----------
Support CPET project and this blog with your donations to https://www.paypal.me/cpvideonuggets 


Sunday, July 23, 2017

CPET session 3 - it is on!

The next Check Point Expert Talks session will take place on Sunday 30th of July at 14:00 CET. You have chosen Kernel Debug Best Practices as the topic.

The session is limited to 100 participants. If you cannot join, a video recording will be available later on.

To put the session in your calendar, use invitation link.

Otherwise, use this link information to join.

-----------
CPET project relies on your support. 
Participate in the talks and help us with your donations to https://www.paypal.me/cpvideonuggets 
Follow us on Facebook and Twitter. 




Monday, July 10, 2017

CPET session 3 - choose the topic and time

Do not miss the opportunity to choose what will be discussed in the third CPET live session, and when.

This time I am proposing three different subjects:

1. Details of Policy Installation with Check Point
2. Kernel Debugging Best Practices  - Chosen
3. Open Questions and Answers discussion

Note: if option 3 is chosen, I will ask you to submit questions in advance so I can go through them. 10 minutes will be left for further discussion anyway.

The proposed times are:

1. Saturday, 29th of July, 18:00 CET
2. Sunday, 30th of July, 14:00 CET - Chosen


The poll is now closed. Session details and invitation are here.



-----------
CPET project relies on your support. Participate in the talks and help us with your donations to https://www.paypal.me/cpvideonuggets 
Follow us on Facebook and Twitter. 

Monday, May 15, 2017

Wcry lesson - we learn that we do not learn

Wannacry ransomware wreaked havoc around the globe, infecting and putting out of commission more than two hundred thousand computers. One could consider this a brutal and effective crash test for common security practices. A test that we have failed, miserably. Just look at the map of affected countries...



The situation could have been completely different if IT security had adhered to a small set of very basic security practices, such as:

Educate end users

One of the Wcry infection vectors is a phishing email. We all know that it is not wise to click on email links, right? Wrong, apparently. People are still doing it. Teaching users simple security awareness practices is vital to avoiding such incidents.

Scan incoming emails and downloads

One of the classic use cases for Threat Emulation is scanning and detonating file attachments and downloads. Every decent security vendor has an appropriate offering in this field.

Anti-phishing tools are also widely available, both on-site and cloud-based.

Patch your systems timely

The SMB vulnerability used by Wcry to propagate was patched by Microsoft in March 2017, two months before the event. Two months!

Use IPS for virtual patching

Okay, you say, we could patch all supported Windows machines, but what about XP, 8 and 2003? Even if there were no patches for unsupported Windows flavors, simple IPS virtual patching would do. How hard can it be, really?

Filter incoming traffic, segment your networks

To prevent the initial infection coming from the Internet through SMB, one only needed to filter out incoming SMB traffic. The same goes for preventing lateral movement of the worm in segmented networks. Simple FW rules denying such traffic would do.

Backups, backups, backups

In case of infection, there is always a plan B: restoring systems from backups. If you have any. If you keep them safe. Safe in this context means offline.



Simple and widely known security best practices could have saved the day. Yes, we have all seen recently that our networks are out there for anyone who wants to take them over. How sad is that?

-----------
To support this blog send your donations to https://www.paypal.me/cpvideonuggets