Wednesday, February 19, 2014

VSX R6x to R7x migration tips

As you all know, VSX R65 is out of support, and VSX R67 is about to be. Check Point recommends that customers move to R7x versions as quickly as possible. Here are some tips on the migration procedure:

1. Remember - there is no in-place upgrade. To "upgrade" VSX, you have to re-install the cluster members cleanly with the target version and run vsx_util upgrade/reconfigure on the management side.
2. Run lab trials of the vsx_util part of the operation. In every second VSX upgrade I have done, the management part of the migration fails or hangs. If this happens to you in production, you are doomed. Replicate your SMS or MDM in the lab and run vsx_util upgrade there first. Correct all the errors along the way, if necessary.
3. Use a multi-step upgrade on the management side if you skip a version. In my practice, running vsx_util upgrade R67 -> R75.40VS and then again R75.40VS -> R77 is much safer than going all the way up in one jump. When a version is skipped, vsx_util usually just hangs for half an hour and then fails.
4. Be extremely cautious if you are going to replace your old cluster with new hardware. If you replace open servers with Check Point appliances (or vice versa), you have to run vsx_util change_interfaces to rename all VSX interfaces before pushing the configuration to the new machine with vsx_util reconfigure. In many cases the script renames the interfaces successfully, but the old interfaces are not removed. If this happens, open the VSX cluster object in SmartDashboard and remove them manually before running vsx_util reconfigure.
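Tips 1, 3 and 4 above boil down to a short management-side command sequence. A minimal sketch, assuming the two-step R67 -> R75.40VS -> R77 path and a hardware swap; all of these commands run on the SMS/MDM and prompt interactively for the management IP, credentials and cluster object, so the exact prompts will vary by version - follow the upgrade guide for your releases:

```shell
# Step 1: upgrade the VSX cluster object one version at a time,
# never skipping a version (each run prompts for the target version)
vsx_util upgrade        # R67 -> R75.40VS
vsx_util upgrade        # R75.40VS -> R77

# Step 2 (only when moving to different hardware): rename the VSX
# interfaces to match the new appliances, then check the cluster
# object in SmartDashboard and manually delete any stale interfaces
# that the script left behind
vsx_util change_interfaces

# Step 3: push the configuration to the freshly installed members
vsx_util reconfigure
```

The point of the ordering is that reconfigure pushes whatever interface topology the management database holds, so any renaming and cleanup must be finished before that push.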

And the last tip: if you are reading this and it does not make any sense, read through the VSX administration manual and the upgrade guides first.

MDM and VSX are my favourite Check Point products, but they are also the most delicate and fragile ones. Any mistake in the process can cost you dearly.

If you are interested, I invite all readers to my classroom MDM and VSX training. We discuss the architecture of both products, best practices, and maintenance and upgrade procedures. For more information, send me a personal message here or use our Training Center enquiry form.


  1. nice post. Unfortunately I am not able to attend your training :( Hopefully in the future.

    Btw...could renaming the interfaces in the /etc/udev/rules.d/ be a work-around for the interface naming, as well?

    And a side note: rename the interfaces in Dashboard before taking an export and running reconfigure for the new appliances. From Dashboard it will try pushing the config, which is not possible before the reconfigure, right?!


  2. No, changing names locally is not supported and should not be done, like ever.

    As for dashboard, sometimes old interfaces are still listed there. They should be removed before proceeding with vsx_util reconfigure

  3. You can define some debug flags which stop the MDM from communicating with the VSX cluster, to rename/remove interfaces before or during the upgrade. Look at "How to Migrate an IPSO VSX Cluster to Gaia R75.40VS", pages 4 and 7, which are not directly related to IPSO.

    But I agree. Upgrade with caution. It tends to fail randomly from R6x. I have very bad experience with R75.40VS in general so I would avoid using this release as a version to jump from R6x to R77.10.

    1. Btw... the commands mentioned on page 7 can also be used to delete all VSX references from a CMA, providing the possibility to do a cma_migrate to another MDM. This should probably be done on a cloned installation of the production MDM, if relevant :)

    2. You mean these flags?

    3. Jonas, there is a way to migrate with VSX objects. But it is not supported, and there is a very BIG reason why :-)

  4. Valeri,

    About vsx_util - I certainly agree that large maintenance operations should be tested in a lab.
    Having said that, all vsx_util commands have an option to resume from the last point. If you know of specific cases where this doesn't work please say so.
    As for hanging in vsx_util reconfigure stage - in most cases this is because the module takes more than 20 minutes to complete the operation, which causes a timeout. However, the operation still continues on the module side, and if you wait until "fw vsx fetch_cpd" finishes on the module side (which admittedly takes more time than we would like it to take), and then resume the operation, it should be good.
    In the next R77 HFA we'll very likely (no guarantees...) have a new feature that will send progress reports from the modules, which will solve the timeout issue.

    As for changing interface names - starting from R77.40VS mgmt, if choosing to perform the change on the mgmt only (which is very likely if you are changing appliances), there's a question in vsx_util that asks you if you want to remove the old interface. What management version did you use?


    1. David, thanks for your input. It is not reconfigure that fails, it is "vsx_util upgrade" that hangs miserably for hours with no activity. Yes, it seems it can resume from the failing point, but most customers do not take any chances and revert to the initial state.

      Also, there is no R77.40VS version, you must mean R75 there. In R75.40VS, the change interface procedure works only partially. Although you are asked to remove the old interfaces, and you do that, they stay on the object.

      So one has to remove them manually from GUI before running reconfigure

    2. OK, so it seems I have to eat my hat. I'll tell you if it was tasty.

      Turns out that there's a bug, and when choosing to remove renamed interfaces, only some of the interfaces are being removed. If you perform the rename one-by-one, this can be avoided. We'll write an SK on this.

      As for vsx_util upgrade - what management version are you using?

  5. The described issues were observed on all available versions: R75.40VS, R76, R77. In some cases one could not upgrade from R67 to R77 directly because vsx_util was hanging, but upgrading in two steps, like R67 -> R75.40VS -> R77, was working.