How do you address high latency issues in an application?
How should I explain my answer if the interview committee asks me the following question:
“Imagine you are working in a Kubernetes-based environment where you manage a multi-layer application. One day you start receiving alerts that the application is experiencing high latency and frequent timeouts. Describe the steps you would take to diagnose and resolve the issue.”
In the context of DevOps, you could approach this interview question in the following way:
Identification of the problem
Checking pod status
Tell them that you would begin by checking the pod status.
kubectl get pods --all-namespaces -o wide
Examine pod logs
Next, you would review the pod logs for errors or warnings.
kubectl logs <pod-name> -n <namespace>
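If the logs alone do not explain the latency, you can also describe the pod to check recent events such as restarts, OOMKills, or failed probes (the pod and namespace names below are placeholders):
kubectl describe pod <pod-name> -n <namespace>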
Resource utilization and bottlenecks
Analysis of the metrics
You can use tools such as Prometheus and Grafana for visualizing metrics such as CPU, memory, and network usage.
kubectl top pods -n <namespace>
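If you need a finer breakdown, kubectl can also report usage per container and per node (the namespace below is a placeholder):
kubectl top pods -n <namespace> --containers
kubectl top nodes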
Network and connectivity
Checking service connectivity
You should verify that the service has up-to-date endpoints and exposes the correct IPs and ports.
kubectl get svc -o wide -n <namespace>
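You can also confirm that the service has healthy endpoints behind it; an empty endpoint list usually points to a selector mismatch or failing readiness probes (the service and namespace names below are placeholders):
kubectl get endpoints <service-name> -n <namespace>
kubectl describe svc <service-name> -n <namespace>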
Scaling and resource limits
Reviewing the Horizontal Pod Autoscaler (HPA)
You should ensure that the HPA is configured and is scaling as expected.
kubectl get hpa -n <namespace>
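If no HPA has been created yet, a minimal manifest could look like the sketch below; it targets the kubernetes-app Deployment used later in this answer, and the replica range and 70% CPU target are illustrative values you would tune for your workload.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kubernetes-app-hpa
spec:
  scaleTargetRef:            # the Deployment this HPA scales
    apiVersion: apps/v1
    kind: Deployment
    name: kubernetes-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU usage exceeds 70%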
Increase resource limits
You can edit the Deployment's resource settings if the limits are too low, for example:
resources:
  limits:
    cpu: "500m"
    memory: "256Mi"
  requests:
    cpu: "250m"
    memory: "128Mi"
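Alternatively, you can set the same values from the command line with kubectl (the deployment and namespace names below are placeholders):
kubectl set resources deployment <deployment-name> -n <namespace> --limits=cpu=500m,memory=256Mi --requests=cpu=250m,memory=128Mi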
Health checking (HealthController.java)
You can implement a health check endpoint to monitor the health of your application.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HealthController {

    @GetMapping("/health")
    public Health health() {
        return new Health("UP", "Application is running");
    }
}

class Health {
    private String status;
    private String message;

    public Health(String status, String message) {
        this.status = status;
        this.message = message;
    }

    public String getStatus() {
        return status;
    }

    public String getMessage() {
        return message;
    }
}
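To check the endpoint by hand, you can port-forward to the Deployment and call it directly; the commands below assume the kubernetes-app Deployment defined later in this answer and a placeholder namespace:
kubectl port-forward deployment/kubernetes-app 8080:8080 -n <namespace>
curl http://localhost:8080/health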
Logging (LoggingService.java)
You can implement a logging service for capturing critical application logs.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

@Service
public class LoggingService {

    private static final Logger logger = LoggerFactory.getLogger(LoggingService.class);

    public void logInfo(String message) {
        logger.info(message);
    }

    public void logError(String message, Throwable throwable) {
        logger.error(message, throwable);
    }
}
Metrics (MetricsService.java)
You can use Micrometer to collect and export metrics to Prometheus.
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.stereotype.Service;

@Service
public class MetricsService {

    private final MeterRegistry meterRegistry;
    private final Timer requestTimer;

    public MetricsService(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
        this.requestTimer = Timer.builder("http.requests")
                .description("HTTP request latency")
                .register(meterRegistry);
    }

    public void recordRequest(long startTime) {
        long duration = System.currentTimeMillis() - startTime;
        requestTimer.record(duration, java.util.concurrent.TimeUnit.MILLISECONDS);
    }
}
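As a usage sketch, a hypothetical controller could wire LoggingService and MetricsService together so that every request is logged and its latency recorded; the RequestController class and the /api/data endpoint below are illustrative and not part of the original answer.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RequestController {

    private final LoggingService loggingService;
    private final MetricsService metricsService;

    public RequestController(LoggingService loggingService, MetricsService metricsService) {
        this.loggingService = loggingService;
        this.metricsService = metricsService;
    }

    @GetMapping("/api/data")
    public String getData() {
        long startTime = System.currentTimeMillis();     // start of the request, used for latency
        loggingService.logInfo("Handling /api/data request");
        try {
            return "data";                               // placeholder for real business logic
        } finally {
            metricsService.recordRequest(startTime);     // records elapsed time in the http.requests timer
        }
    }
}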
Kubernetes Deployment and Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubernetes-app
  template:
    metadata:
      labels:
        app: kubernetes-app
    spec:
      containers:
        - name: kubernetes-app
          image: <your-registry>/kubernetes-app:latest
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-app-service
spec:
  selector:
    app: kubernetes-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
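Once the manifests are saved (for example as deployment.yaml, a placeholder file name), you can apply them and watch the rollout complete:
kubectl apply -f deployment.yaml
kubectl rollout status deployment/kubernetes-app -n <namespace>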