For the last two weeks I was dealing with an unpleasant case where a VSX customer of mine could not provision some changes on a VSX cluster.
The issue seemed to be MGMT DB corruption. There were some "wrong" static routes propagated to the iVR and eVR from a Virtual System. The funny part was that these routes were not defined on the VS itself, or at least that is what we thought at the beginning.
We tried to adjust the DB manually, to remove warp links, etc. Nothing helped. As soon as we touched the VS in question, all the deleted routes were back on the VRs again.
It was absolutely weird, because we could not find any reference to the propagated networks in any of the MGMT databases, neither on the Main CMA nor on the Target one.
Finally, while playing with the system, we noticed that it pushes new static routes to the VRs whenever the VS NAT Addresses definition is touched.
Here is the deal. If a VS is connected to a VR and has some static NAT rules, you can make them work by defining explicit static routes on the VS and then propagating them to the adjacent VRs. On a physical system the analogous step would be to create static ARP entries on an adjacent router to point it to the FW.
On VSX there is a better way to achieve the same. If you go to the VS topology tab, there is a "NAT Addresses" button just below the static routes table. Press it and you can add all the static NAT IP addresses there. Once you are done, the system calculates the smallest IP network covering the defined addresses and then propagates a static route for that network to the VRs.
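To get a feel for what that calculation does, here is a minimal sketch in Python of finding the smallest single network that covers a list of IP addresses. This is only an illustration of the idea; the function name and the exact rounding behavior are my assumptions, not Check Point's actual implementation.

```python
import ipaddress

def covering_network(addresses):
    """Return the smallest single network containing every given address.

    Illustrative only: VSX performs a calculation of this kind for the
    NAT Addresses list, but its real implementation is Check Point's own.
    """
    nets = [ipaddress.ip_network(a) for a in addresses]
    net = nets[0]
    # Widen the prefix one bit at a time until all addresses fit.
    while not all(n.subnet_of(net) for n in nets):
        net = net.supernet()
    return net

# Two NAT IPs a few addresses apart collapse into one /29:
print(covering_network(["192.168.1.10", "192.168.1.12"]))  # 192.168.1.8/29
```

Note the side effect visible in the example: the covering network can include addresses you never listed, which is exactly how a route you "never defined" can appear on the VRs.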
That was our problem. Someone had added some IP addresses there but then decided to go with explicit static routes instead. The result was two conflicting static routes: one explicitly defined, the other generated from the NAT Addresses entries on the VS and propagated to the VRs. Provisioning did not catch the conflict, but VSX behavior once both routes were in place was... well... not ideal.
The Check Point PS engineer involved in the resolution has promised to add a new SK entry for the matter.