This page lists commands and log locations that you can use to debug your Sisense deployment on Linux.

Note: In the commands below, replace the namespace "sisense" with "gigadat-eks".
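
Rather than editing every command, the namespace can be kept in a shell variable. This is a sketch, not part of the Sisense tooling; the variable name `NS` is an assumption:

```shell
# Sketch: keep the namespace in a variable so each command on this page
# can be reused without editing it ("gigadat-eks" per the note above).
NS=gigadat-eks
# Example (requires cluster access):
#   kubectl -n "$NS" get pods
echo "Using namespace: $NS"
```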

Restart all Kubernetes cluster pods:

kubectl delete pods -n sisense --all

Get kubectl tab completions:

source <(kubectl completion bash) 2>/dev/null

Get helm tab completions:

source <(helm completion bash) 2>/dev/null

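The two `source` commands above last only for the current session. A sketch for making them permanent, assuming bash (adjust the startup file for other shells):

```shell
# Sketch: append the completion commands to the bash startup file so they
# load in every new shell.
RC="$HOME/.bashrc"
cat >> "$RC" <<'EOF'
source <(kubectl completion bash) 2>/dev/null
source <(helm completion bash) 2>/dev/null
EOF
```
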
Get a list of pods:

kubectl -n sisense get pods

OR, for wide output that also shows the node each pod is scheduled on:

kubectl -n sisense get pods -o wide

Access management logs:

kubectl -n sisense logs $(kubectl -n sisense get pods -l app="management" -o custom-columns=":.metadata.name")

Tail the management log, following it and printing the last 10 lines:

kubectl -n sisense logs -f --tail=10 $(kubectl -n sisense get pods -l app="management" -o custom-columns=":.metadata.name")

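The `$(kubectl get pods -l app=... -o custom-columns=...)` substitution repeats throughout this page. A hypothetical helper (not part of Sisense; the name `pod_by_app` and its defaults are assumptions) can shorten it:

```shell
# Hypothetical helper: print the name of the first pod whose "app" label
# matches the given value. The namespace defaults to "sisense".
pod_by_app() {
  kubectl -n "${2:-sisense}" get pods -l app="$1" \
    -o custom-columns=":.metadata.name" --no-headers | head -n 1
}
# Usage (requires cluster access):
#   kubectl -n sisense logs "$(pod_by_app management)"
```
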
Get Kubernetes events:

kubectl -n sisense get events

Monitor Kubernetes events:

kubectl -n sisense get events --watch

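Events are not time-ordered by default. A sketch using kubectl's `--sort-by` flag to list them oldest-to-newest; the wrapper function name is an assumption:

```shell
# Sketch: list events sorted by creation timestamp, so the most recent
# activity appears last.
recent_events() {
  kubectl -n "${1:-sisense}" get events --sort-by=.metadata.creationTimestamp
}
# Usage (requires cluster access): recent_events sisense
```
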
Restart the Sisense build service:

Note: This restarts the build service only.

kubectl -n sisense delete pod $(kubectl -n sisense get pods -l app="build" -o custom-columns=":.metadata.name")

Restart all services:

kubectl -n sisense delete pods $(kubectl -n sisense get pods -o custom-columns=":.metadata.name")

Shut down a Sisense single-node deployment:

kubectl scale -n sisense deployment --all --replicas=0

Restore a Sisense single-node deployment:

kubectl scale -n sisense deployment --all --replicas=1

Note: This is not recommended in multi-node deployments, because some services run more than one replica. Sisense recommends that you use the installer to restore the system instead.

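Before scaling everything to zero on a multi-node system, it helps to record the current replica counts so they can be restored accurately. A sketch; the function name and the `replicas.txt` output file are assumptions:

```shell
# Sketch: save each deployment's current replica count before scaling to
# zero, so a multi-node system can be restored to the right counts later.
save_replicas() {
  kubectl -n "${1:-sisense}" get deployments --no-headers \
    -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas > replicas.txt
}
# Usage (requires cluster access): save_replicas sisense && cat replicas.txt
```
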
The location of the log directories, on the first installed node:

/var/log/sisense/<namespace>/

/var/log/sisense/sisense/combined.log -- logs of all services

For each service, there is a log file that you can retrieve, for example:

/var/log/sisense/sisense/query.log

/var/log/sisense/sisense/api-gateway.log

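The per-service logs above can be searched directly on the node. A sketch that scans the combined log for recent errors; the path assumes the default "sisense" namespace shown above:

```shell
# Sketch: show the last 20 error lines from the combined log, if readable.
LOG=/var/log/sisense/sisense/combined.log
if [ -r "$LOG" ]; then
  grep -i error "$LOG" | tail -n 20
fi
```
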
Get Sisense CLI:

kubectl -n sisense cp $(kubectl -n sisense get pods -l app="management" -o custom-columns=":.metadata.name"):/etc/sisense.sh .

source sisense.sh

login_sisense <server-ip>:30845 <sisense-admin>

Get a list of ElastiCubes from the CLI:

si elasticubes list

Build an ElastiCube from the Sisense CLI:

si elasticubes build -name ariel -type full

Get GlusterFS topology information:

kubectl exec -it $(kubectl get pods -l name="heketi" -o custom-columns=":.metadata.name") -- heketi-cli topology info

View disk usage for shared storage:

kubectl -n sisense exec -it $(kubectl -n sisense get pods -l app="management" -o custom-columns=":.metadata.name" ) -- bash -c "df -H /opt/sisense/storage"

See all device allocations in the GlusterFS:

for i in $(kubectl exec -it $(kubectl get pods -l name="heketi" -o custom-columns=":.metadata.name") -- heketi-cli node list | awk '{gsub(/Id:/,""); print $1}') ; do kubectl exec -it $(kubectl get pods -l name="heketi" -o custom-columns=":.metadata.name") -- heketi-cli node info $i ; done

Expand Sisense disk volume by 1GB:

VOL=$(kubectl get persistentvolumes $(kubectl get persistentvolumeclaims -n sisense storage -o custom-columns=":.spec.volumeName") -o custom-columns=":.spec.glusterfs.path" | grep vol | awk '{gsub(/vol_/,""); print $1}')

kubectl exec -it $(kubectl get pods -l name="heketi" -o custom-columns=":.metadata.name" ) -- bash -c "heketi-cli volume expand --volume=$VOL --expand-size=1"

Get all services:

kubectl get services -n sisense

Execute bash on a pod:

kubectl -n sisense exec -it $(kubectl -n sisense get pods -l app="management" -o custom-columns=":.metadata.name") -- bash

List all block devices in the system:

lsblk

Get Kubernetes dashboard URL:

kubectl cluster-info

Get an Admin user token:

kubectl describe secret -n kube-system admin-user-token

List helm releases:

helm list --all

Expose the Message Broker Management UI:

kubectl port-forward -n sisense pod/sisense-rabbitmq-ha-0 30999:15672 --address=0.0.0.0

Expose the Sisense application database internal communication port:

kubectl port-forward --address 0.0.0.0 -n sisense sisense-mongodb-replicaset-0 30846:27017

Kill all evicted pods:

kubectl get po --all-namespaces | grep Evicted | awk '{print $2, "--namespace", $1}' | xargs kubectl delete pod

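Before running the bulk delete above, it is worth previewing which pods it would hit. A sketch; the function name is an assumption, and the awk pattern matches only the STATUS column rather than grepping anywhere in the line:

```shell
# Sketch: preview the Evicted pods the delete pipeline above would remove;
# $4 is the STATUS column of "kubectl get po --all-namespaces".
list_evicted() {
  kubectl get po --all-namespaces --no-headers \
    | awk '$4 == "Evicted" {print $2, "--namespace", $1}'
}
# Usage (requires cluster access):
#   list_evicted                              # inspect first
#   list_evicted | xargs kubectl delete pod   # then delete
```
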
Get the Grafana service of the cluster:

kubectl -n monitoring get svc prom-operator-grafana

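To reach the Grafana UI from a workstation, it can be port-forwarded like the RabbitMQ and MongoDB examples above. A sketch; service port 80 is an assumption typical of this chart, so confirm it with the `get svc` command first:

```shell
# Sketch: forward the Grafana UI to http://localhost:3000 (assumes the
# service listens on port 80; verify with "kubectl -n monitoring get svc").
grafana_forward() {
  kubectl -n monitoring port-forward svc/prom-operator-grafana 3000:80
}
```
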
From <https://documentation.sisense.com/latest/linux/debuglinux.htm#gsc.tab=0>