Disable DAST Resource Clean Up When K8s Debugging

With the 1.7.2 release of the Kubernetes-based Deepfactor portal, users can run multiple DAST (web or API) scans simultaneously. The maximum number of concurrent scans depends on the amount of memory and CPU available in your cluster. For each scan, Deepfactor creates a pod that requires a minimum of 8GB and a maximum of 16GB of memory.
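Before starting several scans at once, it can help to check how much headroom your nodes actually have. A quick check along the following lines uses standard kubectl commands (kubectl top requires the metrics-server add-on, and the exact output varies by cluster):

# Allocatable CPU and memory per node, as seen by the scheduler
kubectl describe nodes | grep -A 6 "Allocatable:"
# Current CPU and memory usage per node (requires metrics-server)
kubectl top nodes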

The scan diagnostic archive is created locally by the scan pod and stored on a new persistent volume. By default, once the scan completes, whether successfully or not, the ZAP resources, including the ZAP pod and the persistent volume, are cleaned up so that they do not consume resources on your K8s cluster indefinitely. The disadvantage of this default is that if the scan encountered an error, the diagnostic archive is no longer available for debugging. If your scan fails for some reason, it is advisable to turn off this automatic cleanup of the scan pod and re-run the scan so you can access the diagnostic logs and share them with us.
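Once cleanup is disabled (see the steps below) and the scan is re-run, the retained PVC, named zap-<scan-uuid> as described later in this article, holds the diagnostic archive. One way to copy it out of the cluster is to mount that PVC in a temporary pod; the sketch below is only an outline, and the pod name, image, and mount path are arbitrary choices you can adjust. Note that a ReadWriteOnce PVC can only be mounted after the original scan pod has released it.

# Create a temporary pod that mounts the retained scan PVC (replace <scan-uuid>)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: zap-diag-reader
  namespace: deepfactor
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: diag
      mountPath: /diag
  volumes:
  - name: diag
    persistentVolumeClaim:
      claimName: zap-<scan-uuid>
EOF

# List the archive, copy it locally, then remove the temporary pod
kubectl exec -n deepfactor zap-diag-reader -- ls /diag
kubectl cp deepfactor/zap-diag-reader:/diag ./zap-diagnostics
kubectl delete pod zap-diag-reader -n deepfactor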

How can I disable the cleanup of the scan pod and its resources?

Once the Deepfactor portal is installed, cleanup is enabled by default. Perform the following steps to disable cleanup of the scan pod and its resources:

  • Get the appsettings configmap in the deepfactor namespace.
$ kubectl get cm -n deepfactor
# You should see an output like the following.
# In your case, instead of `beta` you may see a different prefix, depending on the
# release name you provided at installation time. The important string to look for
# is `appsettings`, so in this example the configmap of interest is
# `beta-deepfactor-appsettings`
NAME                            DATA   AGE
beta-deepfactor-appsettings     1      10d
beta-deepfactor-cvesvc          1      10d
beta-deepfactor-dfstartup       1      10d
beta-deepfactor-nginx           3      10d
beta-deepfactor-postgres        3      10d
beta-ingress-nginx-controller   0      10d
...
...
  • Edit the appsettings configmap.
kubectl edit cm -n deepfactor <appsettings-cm-name>
# In this case, <appsettings-cm-name> = beta-deepfactor-appsettings

Once this command is executed, an editor will open with the configmap's YAML configuration. Change the values of WebAppScanNoCleanUp and WebAppScanNoTearDown to true.

"WebAppScanNoCleanUp": true,
"WebAppScanNoTearDown": true,

Please note that setting WebAppScanNoCleanUp to false and WebAppScanNoTearDown to true means that all the relevant K8s resources, i.e. the zap pod, zap service, and zap configmap, will be deleted once the scan completes, on success or error, except for the PVC.
Setting WebAppScanNoTearDown to false means that the PVC will also be deleted once the scan completes, on success or error. WebAppScanNoTearDown applies only when WebAppScanNoCleanUp is set to false; it has no effect when WebAppScanNoCleanUp is true.
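After editing, you can double-check which values the configmap currently holds. This is just a quick sanity check and assumes the settings appear as plain text in the configmap data:

# Print the current WebAppScan* settings from the appsettings configmap
kubectl get cm -n deepfactor <appsettings-cm-name> -o yaml | grep -i webappscan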

  • Save the changes according to your requirement.

  • Restart the dfwebscan pod in the deepfactor namespace by simply deleting it. Make sure you perform this step only when there are no scans running; otherwise, any running scans will remain stuck in the running state. The command to restart the dfwebscan pod is:

kubectl delete pod <pod-name> -n deepfactor
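The exact pod name includes your release name and a generated suffix; assuming the name contains the string dfwebscan, you can find it with a quick filter before deleting it:

# Find the dfwebscan pod name to use in the delete command above
kubectl get pods -n deepfactor | grep dfwebscan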

How can I clean up the scan pod and its resources manually after disabling auto cleanup?

If you started a DAST scan with the above settings set to true, the K8s resources related to the scan will not be released. These unreleased resources can prevent new zap pods from being scheduled once cluster resources are exhausted. The solution is to clean up the older zap scan pod and its K8s resources manually.

When you start a scan, the following resources are created in your K8s cluster:

  • Job
    The name of the Job will be zap-<scan-uuid>

  • Service
    The name of the service will be zap-<scan-uuid>

  • ConfigMap
    The name of the ConfigMap will be dfwebscan-<scan-uuid>

  • PVC
    The name of the PVC will be zap-<scan-uuid>
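Because all four resource names embed the scan UUID, a single listing command is often enough to see what a given scan left behind. This is just a convenience filter on the UUID string (replace <scan-uuid> with the UUID of the scan in question):

# List everything a particular scan created
kubectl get jobs,services,configmaps,pvc -n deepfactor | grep <scan-uuid>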

The resources associated with prior scans can be deleted using kubectl. Please be extra cautious while executing these commands, as deleting the wrong resource can cause undesired behavior or downtime. Be especially careful when deleting PVCs.

# Deleting an old zap job
kubectl get job -n deepfactor
# Select the zap job that is no longer relevant
kubectl delete job -n deepfactor <job-name>

# Deleting an old zap service
kubectl get svc -n deepfactor
# Select the zap service
kubectl delete svc -n deepfactor <svc-name>

# Deleting an old configmap
kubectl get cm -n deepfactor
# Select the configmap
kubectl delete cm -n deepfactor <configmap-name>

# Deleting an old PVC.
kubectl get pvc -n deepfactor
# Select the PVC
kubectl delete pvc -n deepfactor <pvc-name>
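If you are cleaning up after a single scan and know its UUID, the four deletions can also be done in one pass. This relies purely on the naming patterns listed above, so verify the resource names before running it:

# Delete all resources belonging to one scan (replace <scan-uuid>)
kubectl delete job zap-<scan-uuid> -n deepfactor
kubectl delete svc zap-<scan-uuid> -n deepfactor
kubectl delete cm dfwebscan-<scan-uuid> -n deepfactor
kubectl delete pvc zap-<scan-uuid> -n deepfactor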

 

I started my first DAST scan and the ZAP pod is pending. How can I fix it?

The pending state means that the ZAP pod could not be scheduled onto a node. This can happen for a variety of reasons; since ZAP pods have no other scheduling policies applied, the most common cause is that no node has the required CPU and memory available.
It is a good idea to describe the pod first and check what caused it to remain in the pending state. The commands to check are the following:

kubectl get pod -n deepfactor
# Select the zap pod that is pending and substitute its name for <zap-pod-name>.
# The pod name will be of the form zap-<some-random-string>
kubectl describe pod <zap-pod-name> -n deepfactor

The output of that command includes events explaining why the pod was not scheduled. If the output states that CPU/memory requirements were not met, please see the advice below.

You need to increase the memory and CPU capacity of your K8s nodes, or add nodes. Please refer to your Kubernetes provider's documentation on how to do this.
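Before resizing, it can help to compare what each node can offer with what is already requested on it. The standard command below prints the per-node "Allocated resources" summary; the exact layout varies slightly across Kubernetes versions:

# Per-node requests and limits versus allocatable capacity
kubectl describe nodes | grep -A 8 "Allocated resources"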
