Istio Error: 0/1 nodes are available: 1 Insufficient cpu (net-istio-controller Pending in knative-serving)

Problem

$ kubectl get pods -n knative-serving
NAME                                     READY   STATUS    RESTARTS        AGE
activator-85ccbfb994-v6c8g               1/1     Running   2 (3m56s ago)   3h15m
autoscaler-cc8b7dbdb-9lg77               1/1     Running   2 (3m56s ago)   3h15m
controller-6f9fb85fbd-wd6lc              1/1     Running   2 (3m56s ago)   3h15m
domain-mapping-676b79f95b-8dqdz          1/1     Running   2 (3m56s ago)   3h15m
domainmapping-webhook-85bdfb7f6b-smx67   1/1     Running   3 (3m56s ago)   3h15m
net-istio-controller-5c767878f6-fsr6w    0/1     Pending   0               9m16s
net-istio-webhook-84f67c7f48-tqnq5       1/1     Running   1 (3m56s ago)   9m16s
webhook-664745d5cb-t26p5                 1/1     Running   3 (3m56s ago)   3h15m
$ kubectl describe pod net-istio-controller-5c767878f6-fsr6w -n knative-serving


Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  8m31s  default-scheduler  0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
  Warning  FailedScheduling  2m45s  default-scheduler  0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
PS C:\minikube>

Solution

Start minikube with 16384 MB of memory and 4 CPUs
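A minimal sketch of the corresponding commands (profile name and exact sizes are up to you; adjust to your hardware):

```shell
# Remove the old, under-sized cluster first so the new sizes take effect
minikube delete

# Recreate with 16384 MB of memory and 4 CPUs so the knative-serving
# pods (including net-istio-controller) have enough CPU to schedule
minikube start --memory=16384 --cpus=4
```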

Tanzu Error: current server "" not found in tanzu config

Problem:

rajesh@ubuntu:~$ tanzu cluster create raju
Error: current server "" not found in tanzu config
Usage:
  tanzu cluster create CLUSTER_NAME [flags]

Flags:
  -d, --dry-run       Does not create cluster, but show the deployment YAML instead
  -f, --file string   Configuration file from which to create a cluster
  -h, --help          help for create
      --tkr string    TanzuKubernetesRelease(TKr) to be used for creating the workload cluster. If TKr name prefix is provided, the latest compatible TKr matching the TKr name prefix would be used

Global Flags:
      --log-file string   Log file path
  -v, --verbose int32     Number for the log level verbosity(0-9)

Error: exit status 1

✖  exit status 1

Tanzu Error: docker is not installed or not reachable. Verify it's installed, running, and your user has permissions to interact with it

Error

rajesh@ubuntu:~/tce-linux-amd64-v0.12.1$ tanzu unmanaged-cluster create one-cluster

📁 Created cluster directory

🧲 Resolving and checking Tanzu Kubernetes release (TKr) compatibility file
   projects.registry.vmware.com/tce/compatibility
   Compatibility file exists at /home/rajesh/.config/tanzu/tkg/unmanaged/compatibility/projects.registry.vmware.com_tce_compatibility_v9

🔧 Resolving TKr
   projects.registry.vmware.com/tce/tkr:v1.22.7-2
   TKr exists at /home/rajesh/.config/tanzu/tkg/unmanaged/bom/projects.registry.vmware.com_tce_tkr_v1.22.7-2
   Rendered Config: /home/rajesh/.config/tanzu/tkg/unmanaged/one-cluster/config.yaml
   Bootstrap Logs: /home/rajesh/.config/tanzu/tkg/unmanaged/one-cluster/bootstrap.log

🔧 Processing Tanzu Kubernetes Release

🎨 Selected base image
   projects.registry.vmware.com/tce/kind:v1.22.7

📦 Selected core package repository
   projects.registry.vmware.com/tce/repo-12:0.12.0

📦 Selected additional package repositories
   projects.registry.vmware.com/tce/main:0.12.0

📦 Selected kapp-controller image bundle
   projects.registry.vmware.com/tce/kapp-controller-multi-pkg:v0.30.1

🚀 Creating cluster one-cluster
   Cluster creation using kind!
   ❤️  Checkout this awesome project at https://kind.sigs.k8s.io
failed to create cluster, Error: system checks detected issues, please resolve first: [docker is not installed or not reachable. Verify it's installed, running, and your user has permissions to interact with it. Error when attempting to run docker ps: command "docker ps" failed with error: exit status 1]
Error: exit status 7

✖  exit status 7

Solution

Tanzu should be installed and run as a normal user, not as root. Internally, the tanzu command runs docker commands, so that normal user must also be allowed to use Docker. To fix this, add the user to the Linux group called "docker" with the following command:

$ sudo gpasswd -a $USER docker

or

sudo gpasswd -a rajesh docker

Then close and reopen the SSH terminal for the group change to take effect.
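To confirm the change took effect in the new session, something like this should work without sudo:

```shell
# "docker" should now appear in the user's group list
id -nG "$USER"

# If the group is active in this shell, docker commands run without sudo,
# and the tanzu cluster create command should get past the docker check
docker ps
```

If you would rather not log out and back in, `newgrp docker` refreshes group membership in the current shell.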

Laravel Issues: oauth_access_tokens.ibd file size issues

The oauth_access_tokens table stores a new access token every time a user logs in, but old tokens are never deleted, so the table's .ibd file keeps growing:

$  find . -type f -size +100M
./ds_blog/wp_posts.ibd
./ds_blog/wp_page_visit_history.MYD
./lms2022/mdl_logstore_standard_log.ibd
./ds_trainer_ms/oauth_access_tokens.ibd
./ds_classes_ms/oauth_access_tokens.ibd
./ds_lms/mdl_logstore_standard_log.ibd
./ds_rating_ms/oauth_access_tokens.ibd

Does anyone have a better solution for this problem?
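Assuming these services use Laravel Passport (inferred from the oauth_access_tokens table name), one common approach is Passport's built-in purge command, which deletes revoked and expired tokens:

```shell
# Run from the Laravel application root:
# delete both revoked and expired tokens
php artisan passport:purge

# or purge only one category
php artisan passport:purge --revoked
php artisan passport:purge --expired
```

The Passport docs suggest scheduling this, e.g. `$schedule->command('passport:purge')->hourly();` in app/Console/Kernel.php. Note that deleting rows does not shrink an InnoDB .ibd file by itself; `OPTIMIZE TABLE oauth_access_tokens;` rebuilds the table and reclaims the disk space.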

VirtualBox Issue: Oracle VM VirtualBox 7.0.0 needs the Microsoft Visual C++ 2019 Redistributable package



Problem: the installer reports "Oracle VM VirtualBox 7.0.0 needs the Microsoft Visual C++ 2019 Redistributable package".

Solution:

Install the latest Microsoft Visual C++ Redistributable from https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170

Minikube Error: kn quickstart minikube – minikube create: piping output: exit status 65


Error

PS C:\minikube> kn quickstart minikube
Running Knative Quickstart using Minikube
Minikube version is: v1.28.0

☸ Creating Minikube cluster...

Using the standard minikube driver for your system
If you wish to use a different driver, please configure minikube using
    minikube config set driver <your-driver>

* [knative] minikube v1.28.0 on Microsoft Windows 10 Enterprise 10.0.19044 Build 19044
* Automatically selected the hyperv driver. Other choices: virtualbox, ssh
* Starting control plane node knative in cluster knative
* Creating hyperv VM (CPUs=3, Memory=3078MB, Disk=20000MB) ...
! StartHost failed, but will try again: creating host: create: precreate: Hyper-V PowerShell Module is not available
* Creating hyperv VM (CPUs=3, Memory=3078MB, Disk=20000MB) ...
* Failed to start hyperv VM. Running "minikube delete -p knative" may fix it: creating host: create: precreate: Hyper-V PowerShell Module is not available
! Startup with hyperv driver failed, trying with alternate driver virtualbox: Failed to start host: creating host: create: precreate: Hyper-V PowerShell Module is not available
! Failed to delete cluster knative, proceeding with retry anyway.
* Starting control plane node knative in cluster knative
* Creating virtualbox VM (CPUs=3, Memory=3078MB, Disk=20000MB) ...
! StartHost failed, but will try again: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory
* Creating virtualbox VM (CPUs=3, Memory=3078MB, Disk=20000MB) ...
* Failed to start virtualbox VM. Running "minikube delete -p knative" may fix it: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory

X Exiting due to PR_HYPERV_MODULE_NOT_INSTALLED: Failed to start host: creating host: create: precreate: Hyper-V PowerShell Module is not available
* Suggestion: Run: 'Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Tools-All -All'
* Documentation: https://www.altaro.com/hyper-v/install-hyper-v-powershell-module/
* Related issue: https://github.com/kubernetes/minikube/issues/9040

Error: creating cluster: existing cluster: new cluster: minikube create: piping output: exit status 65
Usage:
  kn-quickstart minikube [flags]

Flags:
  -h, --help                        help for minikube
      --install-eventing            install Eventing on quickstart cluster
      --install-serving             install Serving on quickstart cluster
  -k, --kubernetes-version string   kubernetes version to use (1.x.y)
  -n, --name string                 minikube cluster name to be used by kn-quickstart (default "knative")

creating cluster: existing cluster: new cluster: minikube create: piping output: exit status 65
Error: exit status 1
PS C:\minikube>

Solution

The kn quickstart command was not stable on this Windows 10 machine (Dell, Core i7); this needs more investigation.

Let's try the older, manual method to install Knative on minikube instead.
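As a sketch of that manual method, Knative Serving can be installed by applying its release YAMLs directly (version v1.8.0 is an assumption here; pick the release matching your Kubernetes version):

```shell
# Install the Knative Serving CRDs and core components
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.0/serving-core.yaml

# Install a networking layer, e.g. net-istio (requires Istio on the cluster)
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.8.0/net-istio.yaml

# Watch the pods come up
kubectl get pods -n knative-serving -w
```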

Minikube Error: VT-X/AMD-v is enabled but it still shows "This computer doesn't have VT-X/AMD-v enabled"

Problem


* [knative] minikube v1.28.0 on Microsoft Windows 10 Enterprise 10.0.19044 Build 19044
* Automatically selected the hyperv driver. Other choices: virtualbox, ssh
* Starting control plane node knative in cluster knative
* Creating hyperv VM (CPUs=3, Memory=3078MB, Disk=20000MB) ...
! StartHost failed, but will try again: creating host: create: precreate: Hyper-V PowerShell Module is not available
* Creating hyperv VM (CPUs=3, Memory=3078MB, Disk=20000MB) ...
* Failed to start hyperv VM. Running "minikube delete -p knative" may fix it: creating host: create: precreate: Hyper-V PowerShell Module is not available
! Startup with hyperv driver failed, trying with alternate driver virtualbox: Failed to start host: creating host: create: precreate: Hyper-V PowerShell Module is not available
! Failed to delete cluster knative, proceeding with retry anyway.
* Starting control plane node knative in cluster knative
* Creating virtualbox VM (CPUs=3, Memory=3078MB, Disk=20000MB) ...
! StartHost failed, but will try again: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory
* Creating virtualbox VM (CPUs=3, Memory=3078MB, Disk=20000MB) ...
* Failed to start virtualbox VM. Running "minikube delete -p knative" may fix it: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory

X Exiting due to PR_HYPERV_MODULE_NOT_INSTALLED: Failed to start host: creating host: create: precreate: Hyper-V PowerShell Module is not available
* Suggestion: Run: 'Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Tools-All -All'
* Documentation: https://www.altaro.com/hyper-v/install-hyper-v-powershell-module/
* Related issue: https://github.com/kubernetes/minikube/issues/9040

Error: creating cluster: existing cluster: new cluster: minikube create: piping output: exit status 65
Usage:
  kn-quickstart minikube [flags]

Solution

If you have already enabled the feature in the BIOS, make sure you did not enable the Windows Hyper-V feature as well. Otherwise VirtualBox will not run.

How to disable Hyper-V from the command line?

In an elevated Command Prompt, run:

To disable:

$ bcdedit /set hypervisorlaunchtype off

To enable:

$ bcdedit /set hypervisorlaunchtype auto

A restart is required for the change to take effect.

PowerShell command:

Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
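To verify which mode is currently configured before and after the change, the current boot entry can be inspected (run in an elevated Command Prompt; `hypervisorlaunchtype` should read `off` for VirtualBox to work):

```shell
bcdedit /enum {current} | findstr /i hypervisorlaunchtype
```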

Minikube Error: Hyper-V PowerShell Module is not available


PS C:\minikube> minikube start
* minikube v1.28.0 on Microsoft Windows 10 Enterprise 10.0.19044 Build 19044
* Automatically selected the hyperv driver. Other choices: virtualbox, ssh
* Downloading VM boot image ...
    > minikube-v1.28.0-amd64.iso....:  65 B / 65 B [---------] 100.00% ? p/s 0s
    > minikube-v1.28.0-amd64.iso:  274.45 MiB / 274.45 MiB  100.00% 5.83 MiB p/
* Starting control plane node minikube in cluster minikube
* Downloading Kubernetes v1.25.3 preload ...
    > preloaded-images-k8s-v18-v1...:  385.44 MiB / 385.44 MiB  100.00% 5.81 Mi
* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
! StartHost failed, but will try again: creating host: create: precreate: Hyper-V PowerShell Module is not available
* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
* Failed to start hyperv VM. Running "minikube delete" may fix it: creating host: create: precreate: Hyper-V PowerShell Module is not available
! Startup with hyperv driver failed, trying with alternate driver virtualbox: Failed to start host: creating host: create: precreate: Hyper-V PowerShell Module is not available
! Failed to delete cluster minikube, proceeding with retry anyway.
* Starting control plane node minikube in cluster minikube
* Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
! StartHost failed, but will try again: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory
* Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
* Failed to start virtualbox VM. Running "minikube delete" may fix it: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory

X Exiting due to PR_HYPERV_MODULE_NOT_INSTALLED: Failed to start host: creating host: create: precreate: Hyper-V PowerShell Module is not available
* Suggestion: Run: 'Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Tools-All -All'
* Documentation: https://www.altaro.com/hyper-v/install-hyper-v-powershell-module/
* Related issue: https://github.com/kubernetes/minikube/issues/9040

Solution

$ minikube start --driver=virtualbox --no-vtx-check

Knative Error: Liveness probe failed and Readiness probe failed


root@ubuntu:/home/rajesh# kubectl get pods -n knative-serving
NAME                                     READY   STATUS             RESTARTS        AGE
activator-77cddd575c-dgvtz               0/1     CrashLoopBackOff   4 (93s ago)     11m
autoscaler-8555bc9579-wj5ph              0/1     Running            6 (49s ago)     11m
controller-756bdcdfb7-qvmjc              0/1     CrashLoopBackOff   4 (50s ago)     11m
domain-mapping-6b7d89b8b9-8cwcx          0/1     CrashLoopBackOff   4 (75s ago)     10m
domainmapping-webhook-7d8bdf476c-jf6xf   0/1     CrashLoopBackOff   7 (2m18s ago)   10m
net-istio-controller-7c5968d955-28chq    0/1     CrashLoopBackOff   4 (25s ago)     10m
net-istio-webhook-858d578f5f-dpxsr       0/1     Error              4 (2m24s ago)   10m
webhook-77ccd77dcc-kjkkg                 0/1     CrashLoopBackOff   7 (2m44s ago)   10m
root@ubuntu:/home/rajesh# kubectl logs activator-77cddd575c-dgvtz -n knative-serving
2022/12/01 07:36:00 Registering 3 clients
2022/12/01 07:36:00 Registering 3 informer factories
2022/12/01 07:36:00 Registering 3 informers
root@ubuntu:/home/rajesh# kubectl describe pod activator-77cddd575c-dgvtz -n knative-serving
Events:
Type     Reason                  Age                    From               Message
----     ------                  ----                   ----               -------
Normal   Scheduled               12m                    default-scheduler  Successfully assigned knative-serving/activator-77cddd575c-dgvtz to worker-1
Warning  FailedCreatePodSandBox  12m                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8ec772008e6a3648f69d14eb3d5a1f6c15143ade36cd5d54619f8f68963270cf": error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": net/http: TLS handshake timeout
Warning  FailedCreatePodSandBox  11m                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "74258a75a518a38f21b8a8bc8cd1a9e2369c2be3123399699342aec7b8e1b240": error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": net/http: TLS handshake timeout
Normal   Pulling                 11m                    kubelet            Pulling image "gcr.io/knative-releases/knative.dev/serving/cmd/activator@sha256:d7f05e8bae04b1a55ab2f44735b974aa5bdcbd277f5f6f6fad6cc47864c3716f"
Normal   Pulled                  10m                    kubelet            Successfully pulled image "gcr.io/knative-releases/knative.dev/serving/cmd/activator@sha256:d7f05e8bae04b1a55ab2f44735b974aa5bdcbd277f5f6f6fad6cc47864c3716f" in 57.26505714s
Normal   Created                 10m                    kubelet            Created container activator
Normal   Started                 10m                    kubelet            Started container activator
Warning  Unhealthy               9m42s (x4 over 10m)    kubelet            Liveness probe failed: Get "http://192.168.1.7:8012/": dial tcp 192.168.1.7:8012: connect: connection refused
Warning  Unhealthy               2m27s (x100 over 10m)  kubelet            Readiness probe failed: Get "http://192.168.1.7:8012/": dial tcp 192.168.1.7:8012: connect: connection refused
root@ubuntu:/home/rajesh# kubectl logs controller-756bdcdfb7-qvmjc -n knative-serving
2022/12/01 07:54:30 Registering 5 clients
2022/12/01 07:54:30 Registering 5 informer factories
2022/12/01 07:54:30 Registering 14 informers
2022/12/01 07:54:30 Registering 9 controllers
2022/12/01 07:56:01 Error reading/parsing logging configuration: timed out waiting for the condition: Get "https://10.96.0.1:443/api/v1/namespaces/knative-serving/configmaps/config-logging": dial tcp 10.96.0.1:443: i/o timeout
kubectl describe pod controller-756bdcdfb7-qvmjc -n knative-serving
Events:
Type     Reason       Age                  From               Message
----     ------       ----                 ----               -------
Normal   Scheduled    33m                  default-scheduler  Successfully assigned knative-serving/controller-756bdcdfb7-qvmjc to worker-1
Warning  FailedMount  32m                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-9d27j" : failed to fetch token: Post "https://192.168.1.13:6443/api/v1/namespaces/knative-serving/serviceaccounts/controller/token": read tcp 192.168.1.15:38366->192.168.1.13:6443: use of closed network connection
Warning  FailedMount  32m                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-9d27j" : failed to fetch token: Post "https://192.168.1.13:6443/api/v1/namespaces/knative-serving/serviceaccounts/controller/token": read tcp 192.168.1.15:46438->192.168.1.13:6443: use of closed network connection
Warning  FailedMount  32m                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-9d27j" : failed to fetch token: Post "https://192.168.1.13:6443/api/v1/namespaces/knative-serving/serviceaccounts/controller/token": read tcp 192.168.1.15:59522->192.168.1.13:6443: use of closed network connection
Warning  FailedMount  32m                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-9d27j" : failed to fetch token: Post "https://192.168.1.13:6443/api/v1/namespaces/knative-serving/serviceaccounts/controller/token": read tcp 192.168.1.15:33944->192.168.1.13:6443: use of closed network connection
Normal   Pulling      32m                  kubelet            Pulling image "gcr.io/knative-releases/knative.dev/serving/cmd/controller@sha256:9102224b906b702c2875a0360dbec1f073db0809faada35ffd15d1593f67552b"
Normal   Pulled       31m                  kubelet            Successfully pulled image "gcr.io/knative-releases/knative.dev/serving/cmd/controller@sha256:9102224b906b702c2875a0360dbec1f073db0809faada35ffd15d1593f67552b" in 37.833153239s
Normal   Started      26m (x4 over 31m)    kubelet            Started container controller
Normal   Created      24m (x5 over 31m)    kubelet            Created container controller
Normal   Pulled       24m (x4 over 30m)    kubelet            Container image "gcr.io/knative-releases/knative.dev/serving/cmd/controller@sha256:9102224b906b702c2875a0360dbec1f073db0809faada35ffd15d1593f67552b" already present on machine
Warning  BackOff      101s (x75 over 28m)  kubelet  
root@ubuntu:/home/rajesh# kubectl logs domain-mapping-6b7d89b8b9-8cwcx -n knative-serving
2022/12/01 07:53:50 Registering 4 clients
2022/12/01 07:53:50 Registering 3 informer factories
2022/12/01 07:53:50 Registering 4 informers
2022/12/01 07:53:50 Registering 1 controllers
2022/12/01 07:55:21 Error reading/parsing logging configuration: timed out waiting for the condition: Get "https://10.96.0.1:443/api/v1/namespaces/knative-serving/configmaps/config-logging": dial tcp 10.96.0.1:443: i/o timeout
kubectl describe pod domain-mapping-6b7d89b8b9-8cwcx -n knative-serving
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  34m                   default-scheduler  Successfully assigned knative-serving/domain-mapping-6b7d89b8b9-8cwcx to worker-1
Normal   Pulling    34m                   kubelet            Pulling image "gcr.io/knative-releases/knative.dev/serving/cmd/domain-mapping@sha256:35633cb04a19f542a43b2dfd45609addec451d634372c1ee15c9ecb6a204bba4"
Normal   Pulled     33m                   kubelet            Successfully pulled image "gcr.io/knative-releases/knative.dev/serving/cmd/domain-mapping@sha256:35633cb04a19f542a43b2dfd45609addec451d634372c1ee15c9ecb6a204bba4" in 19.240964239s
Normal   Created    26m (x5 over 33m)     kubelet            Created container domain-mapping
Normal   Started    26m (x5 over 33m)     kubelet            Started container domain-mapping
Normal   Pulled     26m (x4 over 32m)     kubelet            Container image "gcr.io/knative-releases/knative.dev/serving/cmd/domain-mapping@sha256:35633cb04a19f542a43b2dfd45609addec451d634372c1ee15c9ecb6a204bba4" already present on machine
Warning  BackOff    3m54s (x74 over 30m)  kubelet            Back-off restarting failed container
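No solution is recorded for this one, but the logs consistently show pods failing to reach the cluster service IP 10.96.0.1 and Calico ClusterInformation errors, which points at the CNI/network layer rather than Knative itself. A first diagnostic pass could look like this (generic kubectl commands, nothing Knative-specific):

```shell
# Check that the CNI (Calico) pods themselves are healthy
kubectl get pods -n kube-system -o wide | grep -i calico

# Verify node status and that the kubernetes Service endpoint is populated
kubectl get nodes -o wide
kubectl get endpoints kubernetes -n default

# From worker-1 (where the failing pods were scheduled),
# test reachability of the API service IP directly
curl -k https://10.96.0.1:443/version --max-time 5
```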