Root Cause
With this amount of traffic, the service would normally scale up, which would remove the activator from the data path (i.e., the SKS would switch to Serve mode) and send traffic directly to the pods. But because max replicas is set to one, the service can't scale up and alleviate the pressure, so requests keep buffering in the activator until it runs out of memory.
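You can confirm this state with the commands below (the namespace and revision names are placeholders for your own): the revision's ServerlessService (SKS) stays in Proxy mode instead of flipping to Serve, and the revision carries the max-scale annotation that caps it at one replica.
$ kubectl get sks -n <namespace>   # MODE column shows Proxy instead of Serve
$ kubectl get revision <revision-name> -o yaml | grep max-scale   # autoscaling.knative.dev/max-scale: "1"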
Potential solutions:
There are a couple of things you can try in this situation:
- Increasing the memory limits on the net-istio-controller pod does fix the problem, but it isn't fully satisfying, since memory usage can be unpredictable and raising limits may not always be acceptable (a sketch follows this list).
- Set “target burst capacity” to 0 so that the activator is only on the data path when the app is scaled to zero (an example follows the link below).
- Look into scaling the data plane and/or tweaking the activator's capacity (also sketched below).
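For the first and third options, a minimal sketch assuming the stock knative-serving install (the deployment and HPA names come from the default manifests; the 1Gi and 3 values are illustrative, not recommendations):
# Raise the memory limit on the component being OOM-killed:
$ kubectl -n knative-serving set resources deployment net-istio-controller --limits=memory=1Gi
# Spread the buffering load by raising the floor of the activator's default HPA:
$ kubectl -n knative-serving patch hpa activator --type merge -p '{"spec":{"minReplicas":3}}'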
target burst capacity – https://knative.dev/docs/serving/load-balancing/target-burst-capacity/#setting-the-target-burst-capacity
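A minimal sketch of setting target burst capacity per revision (my-service is a placeholder; note this rolls out a new revision), plus the cluster-wide default via the config-autoscaler ConfigMap:
$ kubectl patch ksvc my-service --type merge -p '{"spec":{"template":{"metadata":{"annotations":{"autoscaling.knative.dev/target-burst-capacity":"0"}}}}}'
# Or set the cluster-wide default:
$ kubectl -n knative-serving patch configmap config-autoscaler --type merge -p '{"data":{"target-burst-capacity":"0"}}'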
Commands:
Confirm the activator pod was OOM-killed (the pod name will differ in your cluster):
$ kubectl describe pod -n knative-serving activator-774d4ff4b8-4l5vp | grep Reason
Reason: OOMKilled
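It also helps to check the activator's current requests and limits to see how much headroom it has:
$ kubectl -n knative-serving get deploy activator -o jsonpath='{.spec.template.spec.containers[0].resources}'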
Check the net-istio-controller pod:
$ kubectl get pods -n knative-serving
$ kubectl logs net-istio-controller-8d456687b-hq95g -n knative-serving
Count the Secrets in the cluster; net-istio-controller watches Secrets cluster-wide, so a very high count can drive up its memory usage:
$ kubectl get secrets -A | wc -l
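If metrics-server is installed, actual memory consumption shows which component is approaching its limit:
$ kubectl top pods -n knative-serving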
Reference
- Knative Serving issue #13583: https://github.com/knative/serving/issues/13583#issuecomment-1377339654