What is the common standard for Service Meshes?
Service Mesh Specification (SMS)
Service Mesh Technology (SMT)
Service Mesh Interface (SMI)
Service Mesh Function (SMF)
A widely referenced interoperability standard in the service mesh ecosystem is the Service Mesh Interface (SMI), so C is correct. SMI was created to provide a common set of APIs for basic service mesh capabilities—helping users avoid being locked into a single mesh implementation for core features. While service meshes differ in architecture and implementation (e.g., Istio, Linkerd, Consul), SMI aims to standardize how common behaviors are expressed.
In cloud native architecture, service meshes address cross-cutting concerns for service-to-service communication: traffic policies, observability, and security (mTLS, identity). Rather than baking these concerns into every application, a mesh typically introduces data-plane proxies and a control plane to manage policy and configuration. SMI sits above those implementations as a common API model.
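To illustrate the kind of API SMI standardizes, here is a minimal sketch of an SMI TrafficSplit resource that shifts traffic between two backend Services; the API version and exact field shapes vary between SMI releases and mesh implementations, and the service names are placeholders:

apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-rollout
spec:
  service: checkout            # the root Service that clients call
  backends:
  - service: checkout-v1       # stable version receives most traffic
    weight: 90
  - service: checkout-v2       # canary version receives the rest
    weight: 10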
The other options are not commonly used industry standards. You may see other efforts and emerging APIs, but among the listed choices, SMI is the recognized standard name that appears in cloud native discussions and tooling integrations.
Also note a practical nuance: even with SMI, not every mesh implements every SMI spec fully, and many users still adopt mesh-specific CRDs and APIs for advanced features. But for this question’s framing—“common standard”—Service Mesh Interface is the correct answer.
In Kubernetes, what is the primary purpose of using annotations?
To control the access permissions for users and service accounts.
To provide a way to attach metadata to objects.
To specify the deployment strategy for applications.
To define the specifications for resource limits and requests.
Annotations in Kubernetes are a flexible mechanism for attaching non-identifying metadata to Kubernetes objects. Their primary purpose is to store additional information that is not used for object selection or grouping, which makes Option B the correct answer.
Unlike labels, which are designed to be used for selection, filtering, and grouping of resources (for example, by Services or Deployments), annotations are intended purely for informational or auxiliary purposes. They allow users, tools, and controllers to store arbitrary key–value data on objects without affecting Kubernetes’ core behavior. This makes annotations ideal for storing data such as build information, deployment timestamps, commit hashes, configuration hints, or ownership details.
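A minimal sketch of a Deployment's metadata carrying that kind of auxiliary information (the annotation keys and values below are illustrative, not standard keys):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web                                     # labels: used for selection and grouping
  annotations:
    example.com/git-commit: "9f3c2d1"            # annotations: informational metadata only
    example.com/build-timestamp: "2024-05-01T10:00:00Z"
    example.com/owner: "payments-team"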
Annotations are commonly consumed by external tools and controllers rather than by the Kubernetes scheduler or control plane for decision-making. For example, ingress controllers, service meshes, monitoring agents, and CI/CD systems often read annotations to enable or customize specific behaviors. Because annotations are not used for querying or selection, Kubernetes places no strict size or structure requirements on their values beyond general object size limits.
Option A is incorrect because access permissions are managed using Role-Based Access Control (RBAC), which relies on roles, role bindings, and service accounts—not annotations. Option C is incorrect because deployment strategies (such as RollingUpdate or Recreate) are defined in the specification of workload resources like Deployments, not through annotations. Option D is also incorrect because resource limits and requests are specified explicitly in the Pod or container spec under the resources field.
In summary, annotations provide a powerful and extensible way to associate metadata with Kubernetes objects without influencing scheduling, selection, or identity. They support integration, observability, and operational tooling while keeping core Kubernetes behavior predictable and stable. This design intent is clearly documented in Kubernetes metadata concepts, making Option B the correct and verified answer.
What is the default value for authorization-mode in Kubernetes API server?
--authorization-mode=RBAC
--authorization-mode=AlwaysAllow
--authorization-mode=AlwaysDeny
--authorization-mode=ABAC
The Kubernetes API server supports multiple authorization modes that determine whether an authenticated request is allowed to perform an action (verb) on a resource. Historically, the API server’s default authorization mode was AlwaysAllow, meaning that once a request was authenticated, it would be authorized without further checks. That is why the correct answer here is B.
However, it’s crucial to distinguish “default flag value” from “recommended configuration.” In production clusters, running with AlwaysAllow is insecure because it effectively removes authorization controls—any authenticated user (or component credential) could do anything the API permits. Modern Kubernetes best practices strongly recommend enabling RBAC (Role-Based Access Control), often alongside Node and Webhook authorization, so that permissions are granted explicitly using Roles/ClusterRoles and RoleBindings/ClusterRoleBindings. Many managed Kubernetes distributions and kubeadm-based setups commonly enable RBAC by default as part of cluster bootstrap profiles, even if the API server’s historical default flag value is AlwaysAllow.
So, the exam-style interpretation of this question is about the API server flag default, not what most real clusters should run. With RBAC enabled, authorization becomes granular: you can control who can read Secrets, who can create Deployments, who can exec into Pods, and so on, scoped to namespaces or cluster-wide. ABAC (Attribute-Based Access Control) exists but is generally discouraged compared to RBAC because it relies on policy files and is less ergonomic and less commonly used. AlwaysDeny is useful for hard lockdown testing but not for normal clusters.
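For example, a namespaced Role and RoleBinding granting read-only access to Secrets might look like the following sketch (names and the subject are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: dev
rules:
- apiGroups: [""]            # "" refers to the core API group
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: dev
subjects:
- kind: User
  name: jane                 # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io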
In short: AlwaysAllow is the API server’s default mode (answer B), but RBAC is the secure, recommended choice you should expect to see enabled in almost any serious Kubernetes environment.
=========
When a Kubernetes Secret is created, how is the data stored by default in etcd?
As Base64-encoded strings that provide simple encoding but no actual encryption.
As plain text values that are directly stored without any obfuscation or additional encoding.
As compressed binary objects that are optimized for space but not secured against access.
As encrypted records automatically protected using the Kubernetes control plane master key.
By default, Kubernetes Secrets are stored in etcd as Base64-encoded values, which makes option A the correct answer. This is a common point of confusion because Base64 encoding is often mistaken for encryption, but in reality, it provides no security—only a reversible text encoding.
When a Secret is defined in a Kubernetes manifest or created via kubectl, its data fields are Base64-encoded before being persisted in etcd. This encoding ensures that binary data (such as certificates or keys) can be safely represented in JSON and YAML formats, which require text-based values. However, anyone with access to etcd or the Secret object via the Kubernetes API can easily decode these values.
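A minimal Secret manifest illustrates the point; the data values are only Base64-encoded, so anyone who can read the object can decode them (for example with base64 -d):

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=           # base64 of "admin" (encoded, not encrypted)
  password: cGFzc3dvcmQ=       # base64 of "password"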
Option B is incorrect because Secrets are not stored as raw plaintext; they are encoded using Base64 before storage. Option C is incorrect because Kubernetes does not compress Secret data by default. Option D is incorrect because Secrets are not encrypted at rest by default. Encryption at rest must be explicitly configured using an encryption provider configuration in the Kubernetes API server.
Because of this default behavior, Kubernetes strongly recommends additional security measures when handling Secrets. These include enabling encryption at rest for etcd, restricting access to Secrets using RBAC, using short-lived ServiceAccount tokens, and integrating with external secret management systems such as HashiCorp Vault or cloud provider key management services.
Understanding how Secrets are stored is critical for designing secure Kubernetes clusters. While Secrets provide a convenient abstraction for handling sensitive data, they rely on cluster-level security controls to ensure confidentiality. Without encryption at rest and proper access restrictions, Secret data remains vulnerable to unauthorized access.
Therefore, the correct and verified answer is Option A: Kubernetes stores Secrets as Base64-encoded strings in etcd by default, which offers encoding but not encryption.
What is the main purpose of the Open Container Initiative (OCI)?
Accelerating the adoption of containers and Kubernetes in the industry.
Creating open industry standards around container formats and runtimes.
Creating industry standards around container formats and runtimes for private purposes.
Improving the security of standards around container formats and runtimes.
B is correct: the OCI’s main purpose is to create open, vendor-neutral industry standards for container image formats and container runtimes. Standardization is critical in container orchestration because portability is a core promise: you should be able to build an image once and run it across different environments and runtimes without rewriting packaging or execution logic.
OCI defines (at a high level) two foundational specs:
Image specification: how container images are packaged (layers, metadata, manifests).
Runtime specification: how to run a container (filesystem setup, namespaces/cgroups behavior, lifecycle).
These standards enable interoperability across tooling. For example, higher-level runtimes (like containerd or CRI-O) rely on OCI-compliant components (often runc or equivalents) to execute containers consistently.
Why the other options are not the best answer:
A (accelerating adoption) might be an indirect outcome, but it’s not the OCI’s core charter.
C is contradictory (“industry standards” but “for private purposes”)—OCI is explicitly about open standards.
D (improving security) can be helped by standardization and best practices, but OCI is not primarily a security standards body; its central function is format and runtime interoperability.
In Kubernetes specifically, OCI is part of the “plumbing” that makes runtimes replaceable. Kubernetes talks to runtimes via CRI; runtimes execute containers via OCI. This layering helps Kubernetes remain runtime-agnostic while still benefiting from consistent container behavior everywhere.
Therefore, the correct choice is B: OCI creates open standards around container formats and runtimes.
=========
Which of the following is a recommended security habit in Kubernetes?
Run the containers as the user with group ID 0 (root) and any user ID.
Disallow privilege escalation from within a container as the default option.
Run the containers as the user with user ID 0 (root) and any group ID.
Allow privilege escalation from within a container as the default option.
The correct answer is B. A widely recommended Kubernetes security best practice is to disallow privilege escalation inside containers by default. In Kubernetes Pod/Container security context, this is represented by allowPrivilegeEscalation: false. This setting prevents a process from gaining more privileges than its parent process—commonly via setuid/setgid binaries or other privilege-escalation mechanisms. Disallowing privilege escalation reduces the blast radius of a compromised container and aligns with least-privilege principles.
Options A and C are explicitly unsafe because they encourage running as root (UID 0 and/or GID 0). Running containers as root increases risk: if an attacker breaks out of the application process or exploits kernel/runtime vulnerabilities, having root inside the container can make privilege escalation and lateral movement easier. Modern Kubernetes security guidance strongly favors running as non-root (runAsNonRoot: true, explicit runAsUser), dropping Linux capabilities, using read-only root filesystems, and applying restrictive seccomp/AppArmor/SELinux profiles where possible.
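A sketch of a container securityContext that applies these recommendations (the image name and user ID are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.2.3
    securityContext:
      allowPrivilegeEscalation: false    # the recommended default in this question
      runAsNonRoot: true
      runAsUser: 10001                   # arbitrary non-root UID
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]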
Option D is the opposite of best practice. Allowing privilege escalation by default increases the attack surface and violates the idea of secure defaults.
Operationally, this habit is often enforced via admission controls and policies (e.g., Pod Security Admission in “restricted” mode, or policy engines like OPA Gatekeeper/Kyverno). It’s also important for compliance: many security baselines require containers to run as non-root and to prevent privilege escalation.
So, the recommended security habit among the choices is clearly B: Disallow privilege escalation.
=========
Which control plane component is responsible for updating the node Ready condition if a node becomes unreachable?
The kube-proxy
The node controller
The kubectl
The kube-apiserver
The correct answer is B: the node controller. In Kubernetes, node health is monitored and reflected through Node conditions such as Ready. The Node Controller (a controller that runs as part of the control plane, within the controller-manager) is responsible for monitoring node heartbeats and updating node status when a node becomes unreachable or unhealthy.
Nodes periodically report status (including kubelet heartbeats) to the API server. The Node Controller watches these updates. If it detects that a node has stopped reporting within expected time windows, it marks the node condition Ready as Unknown (or otherwise updates conditions) to indicate the control plane can’t confirm node health. This status change then influences higher-level behaviors such as Pod eviction and rescheduling: after grace periods and eviction timeouts, Pods on an unhealthy node may be evicted so the workload can be recreated on healthy nodes (assuming a controller manages replicas).
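When heartbeats stop, the Node object's status reflects it in its conditions; a trimmed sketch of what that status section typically looks like (timestamps are illustrative):

status:
  conditions:
  - type: Ready
    status: "Unknown"
    reason: NodeStatusUnknown
    message: Kubelet stopped posting node status.
    lastHeartbeatTime: "2024-05-01T10:00:00Z"
    lastTransitionTime: "2024-05-01T10:01:00Z"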
Option A (kube-proxy) is a node component for Service traffic routing and does not manage node health conditions. Option C (kubectl) is a CLI client; it does not participate in control plane health monitoring. Option D (kube-apiserver) stores and serves Node status, but it doesn’t decide when a node is unreachable; it persists what controllers and kubelets report. The “decision logic” for updating the Ready condition in response to missing heartbeats is the Node Controller’s job.
So, the component that updates the Node Ready condition when a node becomes unreachable is the node controller, which is option B.
=========
What is an important consideration when choosing a base image for a container in a Kubernetes deployment?
It should be minimal and purpose-built for the application to reduce attack surface and improve performance.
It should always be the latest version to ensure access to the newest features.
It should be the largest available image to ensure all dependencies are included.
It can be any existing image from the public repository without consideration of its contents.
Choosing an appropriate base image is a critical decision in building containerized applications for Kubernetes, as it directly impacts security, performance, reliability, and operational efficiency. A key best practice is to select a minimal, purpose-built base image, making option A the correct answer.
Minimal base images—such as distroless images or slim variants of common distributions—contain only the essential components required to run the application. By excluding unnecessary packages, shells, and utilities, these images significantly reduce the attack surface. Fewer components mean fewer potential vulnerabilities, which is especially important in Kubernetes environments where containers are often deployed at scale and exposed to dynamic network traffic.
Smaller images also improve performance and efficiency. They reduce image size, leading to faster image pulls, quicker Pod startup times, and lower network and storage overhead. This is particularly beneficial in large clusters or during frequent deployments, scaling events, or rolling updates. Kubernetes’ design emphasizes fast, repeatable deployments, and lightweight images align well with these goals.
Option B is incorrect because always using the latest image version can introduce instability or unexpected breaking changes. Kubernetes best practices recommend using explicitly versioned and tested images to ensure predictable behavior and reproducibility. Option C is incorrect because large images increase the attack surface, slow down deployments, and often include unnecessary dependencies that are never used by the application. Option D is incorrect because blindly using public images without inspecting their contents or provenance introduces serious security and compliance risks.
Kubernetes documentation and cloud-native security guidance consistently emphasize the principle of least privilege and minimalism in container images. A well-chosen base image supports secure defaults, faster operations, and easier maintenance, all of which are essential for running reliable workloads in production Kubernetes environments.
Therefore, the correct and verified answer is Option A.
Imagine you're releasing open-source software for the first time. Which of the following is a valid semantic version?
1.0
2021-10-11
0.1.0-rc
v1beta1
Semantic Versioning (SemVer) follows the pattern MAJOR.MINOR.PATCH with optional pre-release identifiers (e.g., -rc, -alpha.1) and build metadata. Among the options, 0.1.0-rc matches SemVer rules, so C is correct.
0.1.0-rc breaks down as: MAJOR=0, MINOR=1, PATCH=0, and -rc indicates a pre-release (“release candidate”). Pre-release versions are valid SemVer and are explicitly allowed to denote versions that are not yet considered stable. For a first-time open-source release, 0.x.y is common because it signals the API may still change in backward-incompatible ways before reaching 1.0.0.
Why the other options are not correct SemVer as written:
1.0 is missing the PATCH segment; SemVer requires three numeric components (e.g., 1.0.0).
2021-10-11 is a date string, not MAJOR.MINOR.PATCH.
v1beta1 resembles Kubernetes API versioning conventions, not SemVer.
In cloud-native delivery and Kubernetes ecosystems, SemVer matters because it communicates compatibility. Incrementing MAJOR indicates breaking changes, MINOR indicates backward-compatible feature additions, and PATCH indicates backward-compatible bug fixes. Pre-release tags allow releasing candidates for testing without claiming full stability. This is especially useful for open-source consumers and automation systems that need consistent version comparison and upgrade planning.
So, the only valid semantic version in the choices is 0.1.0-rc, option C.
=========
Which of the following scenarios would benefit the most from a service mesh architecture?
A few applications with hundreds of Pod replicas running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in a single cluster, each one providing multiple services.
Tens of distributed applications running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in multiple clusters, each one providing multiple services.
A service mesh is most valuable when service-to-service communication becomes complex at large scale—many services, many teams, and often multiple clusters. That’s why D is the best fit: thousands of distributed applications across multiple clusters. In that scenario, the operational burden of securing, observing, and controlling east-west traffic grows dramatically. A service mesh (e.g., Istio, Linkerd) addresses this by introducing a dedicated networking layer (usually sidecar proxies such as Envoy) that standardizes capabilities across services without requiring each application to implement them consistently.
The common “mesh” value-adds are: mTLS for service identity and encryption, fine-grained traffic policy (retries, timeouts, circuit breaking), traffic shifting (canary, mirroring), and consistent telemetry (metrics, traces, access logs). Those features become increasingly beneficial as the number of services and cross-service calls rises, and as you add multi-cluster routing, failover, and policy management across environments. With thousands of applications, inconsistent libraries and configurations become a reliability and security risk; the mesh centralizes and standardizes these behaviors.
In smaller environments (A or C), you can often meet requirements with simpler approaches: Kubernetes Services, Ingress/Gateway, basic mTLS at the edge, and application-level libraries. A single large cluster (B) can still benefit from a mesh, but adding multiple clusters increases complexity: traffic management across clusters, identity trust domains, global observability correlation, and consistent policy enforcement. That’s where mesh architectures typically justify their additional overhead (extra proxies, control plane components, operational complexity).
So, the “most benefit” scenario is the largest, most distributed footprint—D.
=========
What is the correct hierarchy of Kubernetes components?
Containers → Pods → Cluster → Nodes
Nodes → Cluster → Containers → Pods
Cluster → Nodes → Pods → Containers
Pods → Cluster → Containers → Nodes
The correct answer is C: Cluster → Nodes → Pods → Containers. This expresses the fundamental structural relationship in Kubernetes. A cluster is the overall system (control plane + nodes) that runs your workloads. Inside the cluster, you have nodes (worker machines—VMs or bare metal) that provide CPU, memory, storage, and networking. The scheduler assigns workloads to nodes.
Workloads are executed as Pods, which are the smallest deployable units Kubernetes schedules. Pods represent one or more containers that share networking (one Pod IP and port space) and can share storage volumes. Within each Pod are containers, which are the actual application processes packaged with their filesystem and runtime dependencies.
The other options are incorrect because they break these containment relationships. Containers do not contain Pods; Pods contain containers. Nodes do not exist “inside” Pods; Pods run on nodes. And the cluster is the top-level boundary that contains nodes and orchestrates Pods.
This hierarchy matters for troubleshooting and design. If you’re thinking about capacity, you reason at the node and cluster level (node pools, autoscaling, quotas). If you’re thinking about application scaling, you reason at the Pod level (replicas, HPA, readiness probes). If you’re thinking about process-level concerns, you reason at the container level (images, security context, runtime user, resources). Kubernetes intentionally uses this layered model so that scheduling and orchestration operate on Pods, while the container runtime handles container execution details.
So the accurate hierarchy from largest to smallest unit is: Cluster → Nodes → Pods → Containers, which corresponds to C.
=========
What is the practice of bringing financial accountability to the variable spend model of cloud resources?
FaaS
DevOps
CloudCost
FinOps
The practice of bringing financial accountability to cloud spending—where costs are variable and usage-based—is called FinOps, so D is correct. FinOps (Financial Operations) is an operating model and culture that helps organizations manage cloud costs by connecting engineering, finance, and business teams. Because cloud resources can be provisioned quickly and billed dynamically, traditional budgeting approaches often fail to keep pace. FinOps addresses this by introducing shared visibility, governance, and optimization processes that enable teams to make cost-aware decisions while still moving fast.
In Kubernetes and cloud-native architectures, variable spend shows up in many ways: autoscaling node pools, over-provisioned resource requests, idle clusters, persistent volumes, load balancers, egress traffic, managed services, and observability tooling. FinOps practices encourage tagging/labeling for cost attribution, defining cost KPIs, enforcing budget guardrails, and continuously optimizing usage (right-sizing resources, scaling policies, turning off unused environments, and selecting cost-effective architectures).
Why the other options are incorrect: FaaS (Function as a Service) is a compute model (serverless), not a financial accountability practice. DevOps is a cultural and technical practice focused on collaboration and delivery speed, not specifically cloud cost accountability (though it can complement FinOps). CloudCost is not a widely recognized standard term in the way FinOps is.
In practice, FinOps for Kubernetes often involves improving resource efficiency: aligning requests/limits with real usage, using HPA/VPA appropriately, selecting instance types that match workload profiles, managing cluster autoscaler settings, and allocating shared platform costs to teams via labels/namespaces. It also includes forecasting and anomaly detection, because cloud-native spend can spike quickly due to misconfigurations (e.g., runaway autoscaling or excessive log ingestion).
So, the correct term for financial accountability in cloud variable spend is FinOps (D).
=========
Which of the following observability data streams would be most useful when desiring to plot resource consumption and predicted future resource exhaustion?
stdout
Traces
Logs
Metrics
The correct answer is D: Metrics. Metrics are numeric time-series measurements collected at regular intervals, making them ideal for plotting resource consumption over time and forecasting future exhaustion. In Kubernetes, this includes CPU usage, memory usage, disk I/O, network throughput, filesystem usage, Pod restarts, and node allocatable vs requested resources. Because metrics are structured and queryable (often with Prometheus), you can compute rates, aggregates, percentiles, and trends, and then apply forecasting methods to predict when a resource will run out.
Logs and traces have different purposes. Logs are event records (strings) that are great for debugging and auditing, but they are not naturally suited to continuous quantitative plotting unless you transform them into metrics (log-based metrics). Traces capture end-to-end request paths and latency breakdowns; they help you find slow spans and dependency bottlenecks, not forecast CPU/memory exhaustion. stdout is just a stream where logs might be written; by itself it’s not an observability data type used for capacity trending.
In Kubernetes observability stacks, metrics are typically scraped from components and workloads: kubelet/cAdvisor exports container metrics, node exporters expose host metrics, and applications expose business/system metrics. The metrics pipeline (Prometheus, OpenTelemetry metrics, managed monitoring) enables dashboards and alerting. For resource exhaustion, you often alert on “time to fill” (e.g., predicted disk fill in < N hours), high sustained utilization, or rapidly increasing error rates due to throttling.
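As one concrete Prometheus-based sketch of a "time to fill" alert, assuming the Prometheus Operator's PrometheusRule CRD and node_exporter filesystem metrics are available:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: capacity-forecast
spec:
  groups:
  - name: capacity
    rules:
    - alert: FilesystemPredictedFullIn4h
      # Linear prediction of available bytes 4 hours ahead; fires if it drops below zero.
      expr: predict_linear(node_filesystem_avail_bytes{fstype!="tmpfs"}[6h], 4 * 3600) < 0
      for: 30m
      labels:
        severity: warning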
Therefore, the most appropriate data stream for plotting consumption and predicting exhaustion is Metrics, option D.
=========
Which field in a Pod or Deployment manifest ensures that Pods are scheduled only on nodes with specific labels?
resources:
  disktype: ssd

labels:
  disktype: ssd

nodeSelector:
  disktype: ssd

annotations:
  disktype: ssd
In Kubernetes, Pod scheduling is handled by the Kubernetes scheduler, which is responsible for assigning Pods to suitable nodes based on a set of constraints and policies. One of the simplest and most commonly used mechanisms to control where Pods are scheduled is the nodeSelector field. The nodeSelector field allows you to constrain a Pod so that it is only eligible to run on nodes that have specific labels.
Node labels are key–value pairs attached to nodes by cluster administrators or automation tools. These labels typically describe node characteristics such as hardware type, disk type, geographic zone, or environment. For example, a node might be labeled with disktype=ssd to indicate that it has SSD-backed storage. When a Pod specification includes a nodeSelector with the same key–value pair, the scheduler will only consider nodes that match this label when placing the Pod.
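A minimal Pod manifest using that label as a scheduling constraint (the node must first be labeled, for example with kubectl label nodes <node-name> disktype=ssd):

apiVersion: v1
kind: Pod
metadata:
  name: ssd-app
spec:
  nodeSelector:
    disktype: ssd            # only nodes carrying this label are eligible
  containers:
  - name: app
    image: nginx:1.25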
Option A (resources) is incorrect because resource specifications are used to define CPU and memory requests and limits for containers, not to influence node selection based on labels. Option B (labels) is also incorrect because Pod labels are metadata used for identification, grouping, and selection by other Kubernetes objects such as Services and Deployments; they do not affect scheduling decisions. Option D (annotations) is incorrect because annotations are intended for storing non-identifying metadata and are not interpreted by the scheduler for placement decisions.
The nodeSelector field is evaluated during scheduling, and if no nodes match the specified labels, the Pod will remain in a Pending state. While nodeSelector is simple and effective, it is considered a basic scheduling mechanism. For more advanced scheduling requirements—such as expressing preferences, using set-based matching, or combining multiple conditions—Kubernetes also provides node affinity and anti-affinity. However, nodeSelector remains a foundational and widely used feature for enforcing strict node placement based on labels, making option C the correct and verified answer according to Kubernetes documentation.
A platform engineer wants to ensure that a new microservice is automatically deployed to every cluster registered in Argo CD. Which configuration best achieves this goal?
Set up a Kubernetes CronJob that redeploys the microservice to all registered clusters on a schedule.
Manually configure every registered cluster with the deployment YAML for installing the microservice.
Create an Argo CD ApplicationSet that uses a Git repository containing the microservice manifests.
Use a Helm chart to package the microservice and manage it with a single Application defined in Argo CD.
Argo CD is a declarative GitOps continuous delivery tool designed to manage Kubernetes applications across one or many clusters. When the requirement is to automatically deploy a microservice to every cluster registered in Argo CD, the most appropriate and scalable solution is to use an ApplicationSet.
The ApplicationSet controller extends Argo CD by enabling the dynamic generation of multiple Argo CD Applications from a single template. One of its most powerful features is the cluster generator, which automatically discovers all clusters registered with Argo CD and creates an Application for each of them. By combining this generator with a Git repository containing the microservice manifests, the platform engineer ensures that the microservice is consistently deployed to all existing clusters—and any new clusters added in the future—without manual intervention.
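A sketch of such an ApplicationSet using the cluster generator; the repository URL, path, and namespace are placeholders, and exact templating options depend on the Argo CD version:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-microservice
  namespace: argocd
spec:
  generators:
  - clusters: {}                        # one Application per registered cluster
  template:
    metadata:
      name: 'my-microservice-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/org/deploy-repo.git
        targetRevision: main
        path: manifests/my-microservice
      destination:
        server: '{{server}}'            # provided by the cluster generator
        namespace: my-microservice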
This approach aligns perfectly with GitOps principles. The desired state of the microservice is defined once in Git, and Argo CD continuously reconciles that state across all target clusters. Any updates to the microservice manifests are automatically rolled out everywhere in a controlled and auditable manner. This provides strong guarantees around consistency, scalability, and operational simplicity.
Option A is incorrect because a CronJob introduces imperative redeployment logic and does not integrate with Argo CD’s reconciliation model. Option B is not scalable or maintainable, as it requires manual configuration for each cluster and increases the risk of configuration drift. Option D, while useful for packaging applications, still results in a single Application object and does not natively handle multi-cluster fan-out by itself.
Therefore, the correct and verified answer is Option C: creating an Argo CD ApplicationSet backed by a Git repository, which is the recommended and documented solution for multi-cluster application delivery in Argo CD.
What is the primary purpose of a Horizontal Pod Autoscaler (HPA) in Kubernetes?
To automatically scale the number of Pod replicas based on resource utilization.
To track performance metrics and report health status for nodes and Pods.
To coordinate rolling updates of Pods when deploying new application versions.
To allocate and manage persistent volumes required by stateful applications.
The Horizontal Pod Autoscaler (HPA) is a core Kubernetes feature designed to automatically scale the number of Pod replicas in a workload based on observed metrics, making option A the correct answer. Its primary goal is to ensure that applications can handle varying levels of demand while maintaining performance and resource efficiency.
HPA works by continuously monitoring metrics such as CPU utilization, memory usage, or custom and external metrics provided through the Kubernetes metrics APIs. Based on target thresholds defined by the user, the HPA increases or decreases the number of replicas in a scalable resource like a Deployment, ReplicaSet, or StatefulSet. When demand increases, HPA adds more Pods to handle the load. When demand decreases, it scales down Pods to free resources and reduce costs.
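A typical autoscaling/v2 HorizontalPodAutoscaler targeting CPU utilization might look like this sketch (names and thresholds are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70         # aim for roughly 70% average CPU across Pods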
Option B is incorrect because tracking performance metrics and reporting health status is handled by components such as the metrics-server, monitoring systems, and observability tools—not by the HPA itself. Option C is incorrect because rolling updates are managed by Deployment strategies, not by the HPA. Option D is incorrect because persistent volume management is handled by Kubernetes storage resources and CSI drivers, not by autoscalers.
HPA operates at the Pod replica level, which is why it is called “horizontal” scaling—scaling out or in by changing the number of Pods, rather than adjusting resource limits of individual Pods (which would be vertical scaling). This makes HPA particularly effective for stateless applications that can scale horizontally to meet demand.
In practice, HPA is commonly used in production Kubernetes environments to maintain application responsiveness under load while optimizing cluster resource usage. It integrates seamlessly with Kubernetes’ declarative model and self-healing mechanisms.
Therefore, the correct and verified answer is Option A, as the Horizontal Pod Autoscaler’s primary function is to automatically scale Pod replicas based on resource utilization and defined metrics.
What is CloudEvents?
It is a specification for describing event data in common formats for Kubernetes network traffic management and cloud providers.
It is a specification for describing event data in common formats in all cloud providers including major cloud providers.
It is a specification for describing event data in common formats to provide interoperability across services, platforms and systems.
It is a Kubernetes specification for describing events data in common formats for iCloud services, iOS platforms and iMac.
CloudEvents is an open specification for describing event data in a common way to enable interoperability across services, platforms, and systems, so C is correct. In cloud-native architectures, many components communicate asynchronously via events (message brokers, event buses, webhooks). Without a standard envelope, each producer and consumer invents its own event structure, making integration brittle. CloudEvents addresses this by standardizing core metadata fields—like event id, source, type, specversion, and time—and defining how event payloads are carried.
This helps systems interoperate regardless of transport. CloudEvents can be serialized as JSON or other encodings and carried over HTTP, messaging systems, or other protocols. By using a shared spec, you can route, filter, validate, and transform events more consistently.
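A minimal sketch of the standardized envelope (shown here as YAML for readability; on the wire it is most often JSON over HTTP, and the values are illustrative):

specversion: "1.0"
type: com.example.order.created        # reverse-DNS event type chosen by the producer
source: /orders/service
id: a1b2c3d4-0001
time: "2024-05-01T12:00:00Z"
datacontenttype: application/json
data:
  orderId: 1234
  amount: 99.95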
Option A is too narrow and incorrectly ties CloudEvents to Kubernetes traffic management; CloudEvents is broader than Kubernetes. Option B is closer but still framed incorrectly—CloudEvents is not merely “for all cloud providers,” it is an interoperability spec across services and platforms, including but not limited to cloud provider event systems. Option D is clearly incorrect.
In Kubernetes ecosystems, CloudEvents is relevant to event-driven systems and serverless platforms (e.g., Knative Eventing and other eventing frameworks) because it provides a consistent event contract across producers and consumers. That consistency reduces coupling, supports better tooling (schema validation, tracing correlation), and makes event-driven architectures easier to operate at scale.
So, the correct definition is C: a specification for common event formats to enable interoperability across systems.
=========
What is the minimum number of etcd members that are required for a highly available Kubernetes cluster?
Two etcd members.
Five etcd members.
Six etcd members.
Three etcd members.
D (three etcd members) is correct. etcd is a distributed key-value store that uses the Raft consensus algorithm. High availability in consensus systems depends on maintaining a quorum (majority) of members to continue serving writes reliably. With 3 members, the cluster can tolerate 1 failure and still have 2/3 available—enough for quorum.
Two members is a common trap: with 2, a single failure leaves 1/2, which is not a majority, so the cluster cannot safely make progress. That means 2-member etcd is not HA; it is fragile and can be taken down by one node loss, network partition, or maintenance event. Five members can tolerate 2 failures and is a valid HA configuration, but it is not the minimum. Six is even-sized and generally discouraged for consensus because it doesn’t improve failure tolerance compared to five (quorum still requires 4), while increasing coordination overhead.
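The quorum arithmetic (quorum = floor(n/2) + 1) makes the trade-off explicit:

Members   Quorum   Failures tolerated
1         1        0
2         2        0
3         2        1
5         3        2
6         4        2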
In Kubernetes, etcd reliability directly affects the API server and the entire control plane because etcd stores cluster state: object specs, status, controller state, and more. If etcd loses quorum, the API server will be unable to persist or reliably read/write state, leading to cluster management outages. That’s why the minimum HA baseline is three etcd members, often across distinct failure domains (nodes/AZs), with strong disk performance and consistent low-latency networking.
So, the smallest etcd topology that provides true fault tolerance is 3 members, which corresponds to option D.
=========
Which are the core features provided by a service mesh?
Authentication and authorization
Distributing and replicating data
Security vulnerability scanning
Configuration management
A is the correct answer because a service mesh primarily focuses on securing and managing service-to-service communication, and a core part of that is authentication and authorization. In microservices architectures, internal (“east-west”) traffic can become a complex web of calls. A service mesh introduces a dedicated communication layer—commonly implemented with sidecar proxies or node proxies plus a control plane—to apply consistent security and traffic policies across services.
Authentication in a mesh typically means service identity: each workload gets an identity (often via certificates), enabling mutual TLS (mTLS) so services can verify each other and encrypt traffic in transit. Authorization then builds on identity to enforce “who can talk to whom” via policies (for example: service A can call service B only on certain paths or methods). These capabilities are central because they reduce the need for every development team to implement and maintain custom security libraries correctly.
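As one mesh-specific illustration (Istio APIs; other meshes express the same ideas differently), enforcing mTLS and then restricting who may call a workload might look like:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: backend
spec:
  mtls:
    mode: STRICT                        # authentication: require mutual TLS
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-only
  namespace: backend
spec:
  selector:
    matchLabels:
      app: backend
  rules:
  - from:
    - source:
        # authorization: only this service identity may call the backend
        principals: ["cluster.local/ns/frontend/sa/frontend"]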
Why the other answers are incorrect:
B (data distribution/replication) is a storage/database concern, not a mesh function.
C (vulnerability scanning) is typically part of CI/CD and supply-chain security tooling, not service-to-service runtime traffic management.
D (configuration management) is broader (GitOps, IaC, Helm/Kustomize); a mesh does have configuration, but “configuration management” is not the defining core feature tested here.
Service meshes also commonly provide traffic management (timeouts, retries, circuit breaking, canary routing) and telemetry (metrics/traces), but among the listed options, authentication and authorization best matches “core features.” It captures the mesh’s role in standardizing secure communications in a distributed system.
So, the verified correct answer is A.
=========
Which of the following workload requires a headless Service while deploying into the namespace?
StatefulSet
CronJob
Deployment
DaemonSet
A StatefulSet commonly requires a headless Service, so A is the correct answer. In Kubernetes, StatefulSets are designed for workloads that need stable identities, stable network names, and often stable storage per replica. To support that stable identity model, Kubernetes typically uses a headless Service (spec.clusterIP: None) to provide DNS records that map directly to each Pod, rather than load-balancing behind a single virtual ClusterIP.
With a headless Service, DNS queries return individual endpoint records (the Pod IPs) so that each StatefulSet Pod can be addressed predictably, such as pod-0.service-name.namespace.svc.cluster.local. This is critical for clustered databases, quorum systems, and leader/follower setups where members must discover and address specific peers. The StatefulSet controller then ensures ordered creation/deletion and preserves identity (pod-0, pod-1, etc.), while the headless Service provides discovery for those stable hostnames.
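A condensed sketch of the pairing; the headless Service (clusterIP: None) provides per-Pod DNS, and the StatefulSet references it via serviceName:

apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None              # headless: DNS returns individual Pod records
  selector:
    app: db
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # governs the Pods' stable DNS identity
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16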
CronJobs run periodic Jobs and don’t require stable DNS identity for multiple replicas. Deployments manage stateless replicas and normally use a standard Service that load-balances across Pods. DaemonSets run one Pod per node, and while they can be exposed by Services, they do not intrinsically require headless discovery.
So while you can use a headless Service for other designs, StatefulSet is the workload type most associated with “requires a headless Service” due to how stable identities and per-Pod addressing work in Kubernetes.
What helps an organization to deliver software more securely at a higher velocity?
Kubernetes
apt-get
Docker Images
CI/CD Pipeline
A CI/CD pipeline is a core practice/tooling approach that enables organizations to deliver software faster and more securely, so D is correct. CI (Continuous Integration) automates building and testing code changes frequently, reducing integration risk and catching defects early. CD (Continuous Delivery/Deployment) automates releasing validated builds into environments using consistent, repeatable steps—reducing manual errors and enabling rapid iteration.
Security improves because automation enables standardized checks on every change: static analysis, dependency scanning, container image scanning, policy validation, and signing/verification steps can be integrated into the pipeline. Instead of relying on ad-hoc human processes, security controls become repeatable gates. In Kubernetes environments, pipelines commonly build container images, run tests, publish artifacts to registries, and then deploy via manifests, Helm, or GitOps controllers—keeping deployments consistent and auditable.
Option A (Kubernetes) is a platform that helps run and manage workloads, but by itself it doesn’t guarantee secure high-velocity delivery. It provides primitives (rollouts, declarative config, RBAC), yet the delivery workflow still needs automation. Option B (apt-get) is a package manager for Debian-based systems and is not a delivery pipeline. Option C (Docker Images) are artifacts; they improve portability and repeatability, but they don’t provide the end-to-end automation of building, testing, promoting, and deploying across environments.
In cloud-native application delivery, the pipeline is the “engine” that turns code changes into safe production releases. Combined with Kubernetes’ declarative deployment model (Deployments, rolling updates, health probes), a CI/CD pipeline supports frequent releases with controlled rollouts, fast rollback, and strong auditability. That is exactly what the question is targeting. Therefore, the verified answer is D.
=========
The cloud native architecture centered around microservices provides a strong system that ensures ______________.
fallback
resiliency
failover
high reachability
The best answer is B (resiliency). A microservices-centered cloud-native architecture is designed to build systems that continue to operate effectively under change and failure. “Resiliency” is the umbrella concept: the ability to tolerate faults, recover from disruptions, and maintain acceptable service levels through redundancy, isolation, and automated recovery.
Microservices help resiliency by reducing blast radius. Instead of one monolith where a single defect can take down the entire application, microservices separate concerns into independently deployable components. Combined with Kubernetes, you get resiliency mechanisms such as replication (multiple Pod replicas), self-healing (restart and reschedule on failure), rolling updates, health probes, and service discovery/load balancing. These enable the platform to detect and replace failing instances automatically, and to keep traffic flowing to healthy backends.
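A brief sketch of how those platform primitives look in a manifest (replicas for redundancy, probes for detection and traffic gating; the image and endpoint are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3                            # redundancy: multiple Pod replicas
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: registry.example.com/api:1.2.3
        readinessProbe:                  # gate traffic until the Pod is healthy
          httpGet:
            path: /healthz
            port: 8080
        livenessProbe:                   # restart the container if it stops responding
          httpGet:
            path: /healthz
            port: 8080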
Options C (failover) and A (fallback) are resiliency techniques but are narrower terms. Failover usually refers to switching to a standby component when a primary fails; fallback often refers to degraded behavior (cached responses, reduced features). Both can exist in microservice systems, but the broader architectural guarantee microservices aim to support is resiliency overall. Option D (“high reachability”) is not the standard term used in cloud-native design and doesn’t capture the intent as precisely as resiliency.
In practice, achieving resiliency also requires good observability and disciplined delivery: monitoring/alerts, tracing across service boundaries, circuit breakers/timeouts/retries, and progressive delivery patterns. Kubernetes provides platform primitives, but resilient microservices also need careful API design and failure-mode thinking.
So the intended and verified completion is resiliency, option B.
=========
Which of these is a valid container restart policy?
On login
On update
On start
On failure
The correct answer is D: On failure. In Kubernetes, restart behavior is controlled by the Pod-level field spec.restartPolicy, with valid values Always, OnFailure, and Never. The option presented here (“On failure”) maps to Kubernetes’ OnFailure policy. This setting determines what the kubelet should do when containers exit:
Always: restart containers whenever they exit (typical for long-running services)
OnFailure: restart containers only if they exit with a non-zero status (common for batch workloads)
Never: do not restart containers (a failed container is left in the terminated state)
So “On failure” is a valid restart policy concept and the only one in the list that matches Kubernetes semantics.
The other options are not Kubernetes restart policies. “On login,” “On update,” and “On start” are not recognized values and don’t align with how Kubernetes models container lifecycle. Kubernetes is declarative and event-driven: it reacts to container exit codes and controller intent, not user “logins.”
Operationally, choosing the right restart policy is important. Jobs typically use restartPolicy: OnFailure or Never because the goal is completion, not continuous uptime, while Pods managed by Deployments only allow Always, since the workload should keep serving traffic and a crashed container should be restarted. Controllers also interact with restarts: a Deployment's ReplicaSet replaces Pods that are deleted or evicted (readiness failures affect Service endpoints and rollout progress rather than triggering restarts), while a Job counts completions and failures based on Pod termination behavior.
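A minimal Job sketch using OnFailure (image and command are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: data-migration
spec:
  backoffLimit: 4                    # give up after 4 failed retries
  template:
    spec:
      restartPolicy: OnFailure       # restart only on non-zero exit codes
      containers:
      - name: migrate
        image: registry.example.com/migrate:1.0.0
        command: ["/bin/sh", "-c", "./run-migration.sh"]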
Therefore, among the options, the only valid (Kubernetes-aligned) restart policy is D.
=========
What element allows Kubernetes to run Pods across the fleet of nodes?
The node server.
The etcd static pods.
The API server.
The kubelet.
The correct answer is D (the kubelet) because the kubelet is the node agent responsible for actually running Pods on each node. Kubernetes can orchestrate workloads across many nodes because every worker node (and control-plane node that runs workloads) runs a kubelet that continuously watches the API server for PodSpecs assigned to that node and then ensures the containers described by those PodSpecs are started and kept running. In other words, the kube-scheduler decides where a Pod should run (sets spec.nodeName), but the kubelet is what makes the Pod run on that chosen node.
The kubelet integrates with the container runtime (via CRI) to pull images, create sandboxes, start containers, and manage their lifecycle. It also reports node and Pod status back to the control plane, executes liveness/readiness/startup probes, mounts volumes, and performs local housekeeping that keeps the node aligned with the declared desired state. This node-level reconciliation loop is a key Kubernetes pattern: the control plane declares intent, and the kubelet enforces it on the node.
Option C (API server) is critical but does not run Pods; it is the control plane’s front door for storing and serving cluster state. Option A (“node server”) is not a Kubernetes component. Option B (etcd static pods) is a misunderstanding: etcd is the datastore for Kubernetes state and may run as static Pods in some installations, but it is not the mechanism that runs user workloads across nodes.
So, Kubernetes runs Pods “across the fleet” because each node has a kubelet that can realize scheduled PodSpecs locally and keep them healthy over time.
=========
How do you perform a command in a running container of a Pod?
kubectl exec
docker exec
kubectl run
kubectl attach
In Kubernetes, the standard way to execute a command inside a running container is kubectl exec, which is why A is correct. kubectl exec calls the Kubernetes API (API server), which then coordinates with the kubelet on the target node to run the requested command inside the container using the container runtime’s exec mechanism. The -- separator is important: it tells kubectl that everything after -- is the command to run in the container rather than flags for kubectl itself.
This is fundamentally different from docker exec. In Kubernetes, you don’t normally target containers through Docker/CRI tools directly because Kubernetes abstracts the runtime behind CRI. Also, “Docker” might not even be installed on nodes in modern clusters (containerd/CRI-O are common). So option B is not the Kubernetes-native approach and often won’t work.
kubectl run (option C) is for creating a new Pod (or generating workload resources), not for executing a command in an existing container. kubectl attach (option D) attaches your terminal to a running container’s process streams (stdin/stdout/stderr), which is useful for interactive sessions, but it does not execute an arbitrary new command like exec does.
In real usage, you often specify the container when a Pod has multiple containers, for example: kubectl exec -it <pod-name> -c <container-name> -- <command>.
=========
Which option represents best practices when building container images?
Use multi-stage builds, use the latest tag for image version, and only install necessary packages.
Use multi-stage builds, pin the base image version to a specific digest, and install extra packages just in case.
Use multi-stage builds, pin the base image version to a specific digest, and only install necessary packages.
Avoid multi-stage builds, use the latest tag for image version, and install extra packages just in case.
Building secure, efficient, and reproducible container images is a core principle of cloud native application delivery. Kubernetes documentation and container security best practices emphasize minimizing image size, reducing attack surface, and ensuring deterministic builds. Option C fully aligns with these principles, making it the correct answer.
Multi-stage builds allow developers to separate the build environment from the runtime environment. Dependencies such as compilers, build tools, and temporary artifacts are used only in intermediate stages and excluded from the final image. This significantly reduces image size and limits the presence of unnecessary tools that could be exploited at runtime.
Pinning the base image to a specific digest ensures immutability and reproducibility. Tags such as latest can change over time, potentially introducing breaking changes or vulnerabilities without notice. By using a digest, teams guarantee that the same base image is used every time the image is built, which is essential for predictable behavior, security auditing, and reliable rollbacks.
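In a workload manifest, pinning by digest looks like the following sketch; the registry path is a placeholder and <digest> stands for the real sha256 value of the image you built and scanned:

spec:
  containers:
  - name: app
    # Immutable reference: the digest uniquely identifies the image content.
    image: registry.example.com/team/app@sha256:<digest>
    imagePullPolicy: IfNotPresent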
Installing only necessary packages further reduces the attack surface. Every additional package increases the risk of vulnerabilities and expands the maintenance burden. Minimal images are faster to pull, quicker to start, and easier to scan for vulnerabilities. Kubernetes security guidance consistently recommends keeping container images as small and purpose-built as possible.
Option A is incorrect because using the latest tag undermines build determinism and traceability. Option B is incorrect because installing extra packages “just in case” contradicts the principle of minimalism and increases security risk. Option D is incorrect because avoiding multi-stage builds and installing unnecessary packages leads to larger, less secure images and is explicitly discouraged in cloud native best practices.
According to Kubernetes and CNCF security guidance, combining multi-stage builds, immutable image references, and minimal dependencies results in more secure, reliable, and maintainable container images. Therefore, option C represents the best and fully verified approach when building container images.
Which of the following capabilities are you allowed to add to a container using the Restricted policy?
CHOWN
SYS_CHROOT
SETUID
NET_BIND_SERVICE
Under the Kubernetes Pod Security Standards (PSS), the Restricted profile is the most locked-down baseline intended to reduce container privilege and host attack surface. In that profile, adding Linux capabilities is generally prohibited except for very limited cases. Among the listed capabilities, NET_BIND_SERVICE is the one commonly permitted in restricted-like policies, so D is correct.
NET_BIND_SERVICE allows a process to bind to “privileged” ports below 1024 (like 80/443) without running as root. This aligns with restricted security guidance: applications should run as non-root, but still sometimes need to listen on standard ports. Allowing NET_BIND_SERVICE enables that pattern without granting broad privileges.
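A Restricted-compliant container securityContext therefore typically drops everything and adds back only NET_BIND_SERVICE when needed, roughly like this sketch:

securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  seccompProfile:
    type: RuntimeDefault
  capabilities:
    drop: ["ALL"]                      # Restricted requires dropping all capabilities
    add: ["NET_BIND_SERVICE"]          # the only capability the profile permits adding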
The other capabilities listed are more sensitive and typically not allowed in a restricted profile: CHOWN can be used to change file ownership, SETUID relates to privilege changes and can be abused, and SYS_CHROOT is a broader system-level capability associated with filesystem root changes. In hardened Kubernetes environments, these are normally disallowed because they increase the risk of privilege escalation or container breakout paths, especially if combined with other misconfigurations.
A practical note: exact enforcement depends on the cluster’s admission configuration (e.g., the built-in Pod Security Admission controller) and any additional policy engines (OPA/Gatekeeper). But the security intent of “Restricted” is consistent: run as non-root, disallow privilege escalation, restrict capabilities, and lock down host access. NET_BIND_SERVICE is a well-known exception used to support common application networking needs while staying non-root.
So, the verified correct choice for an allowed capability in Restricted among these options is D: NET_BIND_SERVICE.
=========
Which GitOps engine can be used to orchestrate parallel jobs on Kubernetes?
Jenkins X
Flagger
Flux
Argo Workflows
Argo Workflows (D) is the correct answer because it is a Kubernetes-native workflow engine designed to define and run multi-step workflows—often with parallelization—directly on Kubernetes. Argo Workflows models workflows as DAGs (directed acyclic graphs) or step-based sequences, where each step is typically a Pod. Because each step is expressed as Kubernetes resources (custom resources), Argo can schedule many tasks concurrently, control fan-out/fan-in patterns, and manage dependencies between steps (e.g., “run these 10 jobs in parallel, then aggregate results”).
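A small sketch of a Workflow where two steps run in parallel because they sit in the same step group (image and names are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: parallel-jobs-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: job-a              # same inner list: runs in parallel with job-b
        template: run-task
      - name: job-b
        template: run-task
  - name: run-task
    container:
      image: alpine:3.19
      command: ["echo", "processing"]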
The question calls it a “GitOps engine,” but the capability being tested is “orchestrate parallel jobs.” Argo Workflows fits because it is purpose-built for running complex job orchestration, including parallel tasks, retries, timeouts, artifacts passing, and conditional execution. In practice, many teams store workflow manifests in Git and apply GitOps practices around them, but the distinguishing feature here is the workflow orchestration engine itself.
Why the other options are not best:
Flux (C) is a GitOps controller that reconciles cluster state from Git; it doesn’t orchestrate parallel job graphs as its core function.
Flagger (B) is a progressive delivery operator (canary/blue-green) often paired with GitOps and service meshes/Ingress; it’s not a general workflow orchestrator for parallel batch jobs.
Jenkins X (A) is CI/CD-focused (pipelines), not primarily a Kubernetes-native workflow engine for parallel job DAGs in the way Argo Workflows is.
So, the Kubernetes-native tool specifically used to orchestrate parallel jobs and workflows is Argo Workflows (D).
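To illustrate the fan-out/fan-in idea, here is a minimal sketch of an Argo Workflows DAG (the image and task names are placeholders, not a production definition):

  apiVersion: argoproj.io/v1alpha1
  kind: Workflow
  metadata:
    generateName: parallel-batch-
  spec:
    entrypoint: main
    templates:
      - name: main
        dag:
          tasks:
            - name: job-a          # job-a and job-b have no dependencies, so they run in parallel
              template: worker
            - name: job-b
              template: worker
            - name: aggregate      # fan-in: runs only after both parallel tasks complete
              dependencies: [job-a, job-b]
              template: worker
      - name: worker
        container:
          image: alpine:3.20
          command: [sh, -c, "echo processing"]

Each task becomes its own Pod, which is what lets Kubernetes schedule the parallel steps across the cluster.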
=========
What is the name of the lightweight Kubernetes distribution built for IoT and edge computing?
OpenShift
k3s
RKE
k1s
Edge and IoT environments often have constraints that differ from traditional datacenters: limited CPU/RAM, intermittent connectivity, smaller footprints, and a desire for simpler operations. k3s is a well-known lightweight Kubernetes distribution designed specifically to run in these environments, making B the correct answer.
What makes k3s “lightweight” is that it packages Kubernetes components in a simplified way and reduces operational overhead. It typically uses a single binary distribution and can run with an embedded datastore option for smaller installations (while also supporting external datastores for HA use cases). It streamlines dependencies and is aimed at faster installation and reduced resource consumption, which is ideal for edge nodes, IoT gateways, small servers, labs, and development environments.
By contrast, OpenShift is a Kubernetes distribution focused on enterprise platform capabilities, with additional security defaults, integrated developer tooling, and a larger operational footprint—excellent for many enterprises but not “built for IoT and edge” as the defining characteristic. RKE (Rancher Kubernetes Engine) is a Kubernetes installer/engine used to deploy Kubernetes, but it’s not specifically the lightweight edge-focused distribution in the way k3s is. “k1s” is not a standard, widely recognized Kubernetes distribution name in this context.
From a cloud native architecture perspective, edge Kubernetes distributions extend the same declarative and API-driven model to places where you want consistent operations across cloud, datacenter, and edge. You can apply GitOps patterns, standard manifests, and Kubernetes-native controllers across heterogeneous footprints. k3s provides that familiar Kubernetes experience while optimizing for constrained environments, which is why it has become a common choice for edge/IoT Kubernetes deployments.
=========
Which of the following is a responsibility of the governance board of an open source project?
Decide about the marketing strategy of the project.
Review the pull requests in the main branch.
Outline the project's “terms of engagement”.
Define the license to be used in the project.
A governance board in an open source project typically defines how the community operates—its decision-making rules, roles, conflict resolution, and contribution expectations—so C (“Outline the project's terms of engagement”) is correct. In large cloud-native projects (Kubernetes being a prime example), clear governance is essential to coordinate many contributors, companies, and stakeholders. Governance establishes the “rules of the road” that keep collaboration productive and fair.
“Terms of engagement” commonly includes: how maintainers are selected, how proposals are reviewed (e.g., enhancement processes), how meetings and SIGs operate, what constitutes consensus, how voting works when consensus fails, and what code-of-conduct expectations apply. It also defines escalation and dispute resolution paths so technical disagreements don’t become community-breaking conflicts. In other words, governance is about ensuring the project has durable, transparent processes that outlive any individual contributor and support vendor-neutral decision making.
Option B (reviewing pull requests) is usually the responsibility of maintainers and SIG owners, not a governance board. The governance body may define the structure that empowers maintainers, but it generally does not do day-to-day code review. Option A (marketing strategy) is often handled by foundations, steering committees, or separate outreach groups, not governance boards as their primary responsibility. Option D (defining the license) is usually decided early and may be influenced by a foundation or legal process; while governance can shape legal/policy direction, the core governance responsibility is broader community operating rules rather than selecting a license.
In cloud-native ecosystems, strong governance supports sustainability: it encourages contributions, protects neutrality, and provides predictable processes for evolution. Therefore, the best verified answer is C.
=========
What are the two essential operations that the kube-scheduler normally performs?
Pod eviction or starting
Resource monitoring and reporting
Filtering and scoring nodes
Starting and terminating containers
The kube-scheduler is a core control plane component in Kubernetes responsible for assigning newly created Pods to appropriate nodes. Its primary responsibility is decision-making, not execution. To make an informed scheduling decision, the kube-scheduler performs two essential operations: filtering and scoring nodes.
The scheduling process begins when a Pod is created without a node assignment. The scheduler first evaluates all available nodes and applies a set of filtering rules. During this phase, nodes that do not meet the Pod’s requirements are eliminated. Filtering criteria include resource availability (CPU and memory requests), node selectors, node affinity rules, taints and tolerations, volume constraints, and other policy-based conditions. Any node that fails one or more of these checks is excluded from consideration.
Once filtering is complete, the scheduler moves on to the scoring phase. In this step, each remaining eligible node is assigned a score based on a collection of scoring plugins. These plugins evaluate factors such as resource utilization balance, affinity preferences, topology spread constraints, and custom scheduling policies. The purpose of scoring is to rank nodes according to how well they satisfy the Pod’s placement preferences. The node with the highest total score is selected as the best candidate.
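As a sketch of how a Pod spec feeds both phases (the disktype label and image are hypothetical), note how the required rule participates in filtering while the preferred rule only influences scoring:

  apiVersion: v1
  kind: Pod
  metadata:
    name: placement-demo
  spec:
    containers:
      - name: app
        image: registry.example.com/app:1.0   # hypothetical image
        resources:
          requests:
            cpu: 500m        # filtering: nodes without enough allocatable CPU are excluded
            memory: 256Mi
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:   # hard rule, applied during filtering
          nodeSelectorTerms:
            - matchExpressions:
                - key: disktype
                  operator: In
                  values: ["ssd"]
        preferredDuringSchedulingIgnoredDuringExecution:  # soft preference, applied during scoring
          - weight: 50
            preference:
              matchExpressions:
                - key: topology.kubernetes.io/zone
                  operator: In
                  values: ["zone-a"]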
Option A is incorrect because Pod eviction is handled by other components such as the kubelet and controllers, and starting Pods is the responsibility of the kubelet. Option B is incorrect because resource monitoring and reporting are performed by components like metrics-server, not the scheduler. Option D is also incorrect because starting and terminating containers is entirely handled by the kubelet and the container runtime.
By separating filtering (eligibility) from scoring (preference), the kube-scheduler provides a flexible, extensible, and policy-driven scheduling mechanism. This design allows Kubernetes to support diverse workloads and advanced placement strategies while maintaining predictable scheduling behavior.
Therefore, the correct and verified answer is Option C: Filtering and scoring nodes, as documented in Kubernetes scheduling architecture.
Which command lists the running containers in the current Kubernetes namespace?
kubectl get pods
kubectl ls
kubectl ps
kubectl show pods
The correct answer is A: kubectl get pods. Kubernetes does not manage “containers” as standalone top-level objects; the primary schedulable unit is the Pod, and containers run inside Pods. Therefore, the practical way to list what’s running in a namespace is to list the Pods in that namespace. kubectl get pods shows Pods and their readiness, status, restarts, and age—giving you the canonical view of running workloads.
If you need the container-level details (images, container names), you typically use additional commands and output formatting:
kubectl describe pod <pod-name> to see per-container details, images, and events
kubectl get pods -o jsonpath=... or -o wide to surface more fields
kubectl get pods -o=json to inspect .spec.containers and .status.containerStatuses
But among the provided options, kubectl get pods is the only real kubectl command that lists the running workload objects in the current namespace.
The other options are not valid kubectl subcommands: kubectl ls, kubectl ps, and kubectl show pods are not standard Kubernetes CLI operations. Kubernetes intentionally centers around the API resource model, so listing resources is done with kubectl get <resource-type>.
So while the question says “running containers,” the Kubernetes-correct interpretation is “containers in running Pods,” and the appropriate listing command in the namespace is kubectl get pods, option A.
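For example, a hedged sketch of surfacing container-level detail from the Pods in the current namespace:

  kubectl get pods
  kubectl get pods -o wide
  kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'

The jsonpath expression prints each Pod name followed by the images of its containers, which is as close as kubectl gets to "listing containers."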
=========
What is a DaemonSet?
It’s a type of workload that ensures a specific set of nodes run a copy of a Pod.
It’s a type of workload responsible for maintaining a stable set of replica Pods running in any node.
It’s a type of workload that needs to be run periodically on a given schedule.
It’s a type of workload that provides guarantees about ordering, uniqueness, and identity of a set of Pods.
A DaemonSet ensures that a copy of a Pod runs on each node (or a selected subset of nodes), which matches option A and makes it correct. DaemonSets are ideal for node-level agents that should exist everywhere, such as log shippers, monitoring agents, CNI components, storage daemons, and security scanners.
DaemonSets differ from Deployments/ReplicaSets because their goal is not “N replicas anywhere,” but “one replica per node” (subject to node selection). When nodes are added to the cluster, the DaemonSet controller automatically schedules the DaemonSet Pod onto the new nodes. When nodes are removed, the Pods associated with those nodes are cleaned up. You can restrict placement using node selectors, affinity rules, or tolerations so that only certain nodes run the DaemonSet (for example, only Linux nodes, only GPU nodes, or only nodes with a dedicated label).
Option B sounds like a ReplicaSet/Deployment behavior (stable set of replicas), not a DaemonSet. Option C describes CronJobs (scheduled, recurring run-to-completion workloads). Option D describes StatefulSets, which provide stable identity, ordering, and uniqueness guarantees for stateful replicas.
Operationally, DaemonSets matter because they often run critical cluster services. During maintenance and upgrades, DaemonSet update strategy determines how those node agents roll out across the fleet. Since DaemonSets can tolerate taints (like master/control-plane node taints), they can also be used to ensure essential agents run across all nodes, including special pools. Thus, the correct definition is A.
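A minimal DaemonSet sketch (the agent image is hypothetical) showing the one-Pod-per-node pattern and a toleration so the agent also lands on control-plane nodes:

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: node-agent
    namespace: monitoring
  spec:
    selector:
      matchLabels:
        app: node-agent
    template:
      metadata:
        labels:
          app: node-agent
      spec:
        tolerations:
          - key: node-role.kubernetes.io/control-plane   # also schedule onto tainted control-plane nodes
            operator: Exists
            effect: NoSchedule
        containers:
          - name: agent
            image: registry.example.com/node-agent:1.0   # hypothetical node-level agent
            resources:
              requests:
                cpu: 50m
                memory: 64Mi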
=========
Why is Cloud-Native Architecture important?
Cloud Native Architecture revolves around containers, microservices and pipelines.
Cloud Native Architecture removes constraints to rapid innovation.
Cloud Native Architecture is modern for application deployment and pipelines.
Cloud Native Architecture is a bleeding edge technology and service.
Cloud-native architecture is important because it enables organizations to build and run software in a way that supports rapid innovation while maintaining reliability, scalability, and efficient operations. Option B best captures this: cloud native removes constraints to rapid innovation, so B is correct.
In traditional environments, innovation is slowed by heavyweight release processes, tightly coupled systems, manual operations, and limited elasticity. Cloud-native approaches—containers, declarative APIs, automation, and microservices-friendly patterns—reduce those constraints. Kubernetes exemplifies this by offering a consistent deployment model, self-healing, automated rollouts, scaling primitives, and a large ecosystem of delivery and observability tools. This makes it easier to ship changes more frequently and safely: teams can iterate quickly, roll back confidently, and standardize operations across environments.
Option A is partly descriptive (containers/microservices/pipelines are common in cloud native), but it doesn’t explain why it matters; it lists ingredients rather than the benefit. Option C is vague (“modern”) and again doesn’t capture the core value proposition. Option D is incorrect because cloud native is not primarily about being “bleeding edge”—it’s about proven practices that improve time-to-market and operational stability.
A good way to interpret “removes constraints” is: cloud native shifts the bottleneck away from infrastructure friction. With automation (IaC/GitOps), standardized runtime packaging (containers), and platform capabilities (Kubernetes controllers), teams spend less time on repetitive manual work and more time delivering features. Combined with observability and policy automation, this results in faster delivery with better reliability—exactly the reason cloud-native architecture is emphasized across the Kubernetes ecosystem.
=========
Which component of the node is responsible to run workloads?
The kubelet.
The kube-proxy.
The kube-apiserver.
The container runtime.
The verified correct answer is D (the container runtime). On a Kubernetes node, the container runtime (such as containerd or CRI-O) is the component that actually executes containers—it creates container processes, manages their lifecycle, pulls images, and interacts with the underlying OS primitives (namespaces, cgroups) through an OCI runtime like runc. In that direct sense, the runtime is what “runs workloads.”
It’s important to distinguish responsibilities. The kubelet (A) is the node agent that orchestrates what should run on the node: it watches the API server for Pods assigned to the node and then asks the runtime to start/stop containers accordingly. Kubelet is essential for node management, but it does not itself execute containers; it delegates execution to the runtime via CRI. kube-proxy (B) handles Service traffic routing rules (or is replaced by other dataplanes) and does not run containers. kube-apiserver (C) is a control plane component that stores and serves cluster state; it is not a node workload runner.
So, in the execution chain: scheduler assigns Pod → kubelet sees Pod assigned → kubelet calls runtime via CRI → runtime launches containers. When troubleshooting “containers won’t start,” you often inspect kubelet logs and runtime logs because the runtime is the component that can fail image pulls, sandbox creation, or container start operations.
Therefore, the best answer to “which node component is responsible to run workloads” is the container runtime, option D.
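As a quick sketch of how you might confirm which runtime a node uses (assuming crictl is installed on the node):

  kubectl get nodes -o wide    # the CONTAINER-RUNTIME column shows the runtime and version, e.g. containerd://1.x
  crictl ps                    # run on the node itself: lists containers via the CRI socket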
=========
Which type of Service requires manual creation of Endpoints?
LoadBalancer
Services without selectors
NodePort
ClusterIP with selectors
A Kubernetes Service without selectors requires you to manage its backend endpoints manually, so B is correct. Normally, a Service uses a selector to match a set of Pods (by labels). Kubernetes then automatically maintains the backend list (historically Endpoints, now commonly EndpointSlice) by tracking which Pods match the selector and are Ready. This automation is one of the key reasons Services provide stable connectivity to dynamic Pods.
When you create a Service without a selector, Kubernetes has no way to know which Pods (or external IPs) should receive traffic. In that pattern, you explicitly create an Endpoints object (or EndpointSlices, depending on your approach and controller support) that maps the Service name to one or more IP:port tuples. This is commonly used to represent external services (e.g., a database running outside the cluster) while still providing a stable Kubernetes Service DNS name for in-cluster clients. Another use case is advanced migration scenarios where endpoints are controlled by custom controllers rather than label selection.
Why the other options are wrong: Service types like ClusterIP, NodePort, and LoadBalancer describe how a Service is exposed, but they do not inherently require manual endpoint management. A ClusterIP Service with selectors (D) is the standard case where endpoints are automatically created and updated. NodePort and LoadBalancer Services also typically use selectors and therefore inherit automatic endpoint management; the difference is in how traffic enters the cluster, not how backends are discovered.
Operationally, when using Services without selectors, you must ensure endpoint IPs remain correct, health is accounted for (often via external tooling), and you update endpoints when backends change. The key concept is: no selector → Kubernetes can’t auto-populate endpoints → you must provide them.
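A minimal sketch of the pattern (the IP is from the documentation range and stands in for an external database):

  apiVersion: v1
  kind: Service
  metadata:
    name: external-db
  spec:
    ports:
      - protocol: TCP
        port: 5432
        targetPort: 5432
  ---
  apiVersion: v1
  kind: Endpoints
  metadata:
    name: external-db      # must match the Service name
  subsets:
    - addresses:
        - ip: 192.0.2.10   # example external backend IP you maintain manually
      ports:
        - port: 5432

Because the Service has no selector, the Endpoints object above is what tells Kubernetes where to send traffic for external-db.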
=========
Which authorization-mode allows granular control over the operations that different entities can perform on different objects in a Kubernetes cluster?
Webhook Mode Authorization Control
Role Based Access Control
Node Authorization Access Control
Attribute Based Access Control
Role Based Access Control (RBAC) is the standard Kubernetes authorization mode that provides granular control over what users and service accounts can do to which resources, so B is correct. RBAC works by defining Roles (namespaced) and ClusterRoles (cluster-wide) that contain sets of rules. Each rule specifies API groups, resource types, resource names (optional), and allowed verbs such as get, list, watch, create, update, patch, and delete. You then attach these roles to identities using RoleBindings or ClusterRoleBindings.
This gives fine-grained, auditable access control. For example, you can allow a CI service account to create and patch Deployments only in a specific namespace, while restricting it from reading Secrets. You can allow developers to view Pods and logs but prevent them from changing cluster-wide networking resources. This is exactly the “granular control over operations on objects” described by the question.
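A rough sketch of that CI example (namespace, names, and service account are hypothetical):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: ci-deployer
    namespace: staging
  rules:
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["get", "list", "create", "patch"]   # no access to Secrets or other resources
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: ci-deployer-binding
    namespace: staging
  subjects:
    - kind: ServiceAccount
      name: ci-bot
      namespace: staging
  roleRef:
    kind: Role
    name: ci-deployer
    apiGroup: rbac.authorization.k8s.io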
Why other options are not the best answer: “Webhook mode” is an authorization mechanism where Kubernetes calls an external service to decide authorization. While it can be granular depending on the external system, Kubernetes’ common built-in answer for granular object-level control is RBAC. “Node authorization” is a specialized authorizer for kubelets/nodes to access resources they need; it’s not the general-purpose system for all cluster entities. ABAC (Attribute-Based Access Control) is an older mechanism and is not the primary recommended authorization model; it can be expressive but is less commonly used and not the default best-practice for Kubernetes authorization today.
In Kubernetes security practice, RBAC is typically paired with authentication (certs/OIDC), admission controls, and namespaces to build a defense-in-depth security posture. RBAC policy is also central to least privilege: granting only what is necessary for a workload or user role to function. This reduces blast radius if credentials are compromised.
Therefore, the verified answer is B: Role Based Access Control.
=========
Which of the following is a primary use case of Istio in a Kubernetes cluster?
To manage and control the versions of container runtimes used on nodes between services.
To provide secure built-in database management features for application workloads.
To provision and manage persistent storage volumes for stateful applications.
To provide service mesh capabilities such as traffic management, observability, and security between services.
Istio is a widely adopted service mesh for Kubernetes that focuses on managing service-to-service communication in distributed, microservices-based architectures. Its primary use case is to provide advanced traffic management, observability, and security capabilities between services, making option D the correct answer.
In a Kubernetes cluster, applications often consist of many independent services that communicate over the network. Managing this communication using application code alone becomes complex and error-prone as systems scale. Istio addresses this challenge by inserting a transparent data plane—typically based on Envoy proxies—alongside application workloads. These proxies intercept all inbound and outbound traffic, enabling consistent policy enforcement without requiring code changes.
Istio’s traffic management features include fine-grained routing, retries, timeouts, circuit breaking, fault injection, and canary or blue–green deployments. These capabilities allow operators to control how traffic flows between services, test new versions safely, and improve overall system resilience. For observability, Istio provides detailed telemetry such as metrics, logs, and distributed traces, giving deep insight into service performance and behavior. On the security front, Istio enables mutual TLS (mTLS) for service-to-service communication, strong identity, and access policies to secure traffic within the cluster.
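As an illustrative sketch only (the reviews-v1 and reviews-v2 Service names are hypothetical), a weighted canary split expressed as an Istio VirtualService might look like:

  apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    name: reviews
  spec:
    hosts:
      - reviews
    http:
      - route:
          - destination:
              host: reviews-v1   # 90% of traffic stays on the current version
            weight: 90
          - destination:
              host: reviews-v2   # 10% goes to the canary
            weight: 10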
Option A is incorrect because container runtime management is handled at the node and cluster level by Kubernetes and the underlying operating system, not by Istio. Option B is incorrect because Istio does not provide database management functionality. Option C is incorrect because persistent storage provisioning is handled by Kubernetes storage APIs and CSI drivers, not by service meshes.
By abstracting networking concerns away from application code, Istio helps teams operate complex microservices environments more safely and efficiently. Therefore, the correct and verified answer is Option D, which accurately reflects Istio’s core purpose and documented use cases in Kubernetes ecosystems.
How many hosts are required to set up a highly available Kubernetes cluster when using an external etcd topology?
Four hosts. Two for control plane nodes and two for etcd nodes.
Four hosts. One for a control plane node and three for etcd nodes.
Three hosts. The control plane nodes and etcd nodes share the same host.
Six hosts. Three for control plane nodes and three for etcd nodes.
In a highly available (HA) Kubernetes control plane using an external etcd topology, you typically run three control plane nodes and three separate etcd nodes, totaling six hosts, making D correct. HA design relies on quorum-based consensus: etcd uses Raft and requires a majority of members available to make progress. Running three etcd members is the common minimum for HA because it tolerates one member failure while maintaining quorum (2/3).
In the external etcd topology, etcd is decoupled from the control plane nodes. This separation improves fault isolation: if a control plane node fails or is replaced, etcd remains stable and independent; likewise, etcd maintenance can be handled separately. Kubernetes API servers (often multiple instances behind a load balancer) talk to the external etcd cluster for storage of cluster state.
Options A and B propose four hosts, but they break common HA/quorum best practices. Two etcd nodes do not form a robust quorum configuration (a two-member etcd cluster cannot tolerate a single failure without losing quorum). One control plane node is not HA for the API server/scheduler/controller-manager components. Option C describes a stacked etcd topology (control plane + etcd on same hosts), which can be HA with three hosts, but the question explicitly says external etcd, not stacked. In stacked topology, you often use three control plane nodes each running an etcd member. In external topology, you use three control plane + three etcd.
Operationally, external etcd topology is often used when you want dedicated resources, separate lifecycle management, or stronger isolation for the datastore. It can reduce blast radius but increases infrastructure footprint and operational complexity (TLS, backup/restore, networking). Still, for the canonical HA external-etcd pattern, the expected answer is six hosts: 3 control plane + 3 etcd.
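For reference, a hedged sketch of how the external datastore is typically wired into kubeadm (hostnames and certificate paths are placeholders; verify against the kubeadm documentation for your version):

  apiVersion: kubeadm.k8s.io/v1beta3
  kind: ClusterConfiguration
  controlPlaneEndpoint: "lb.example.com:6443"     # load balancer in front of the 3 API servers
  etcd:
    external:
      endpoints:
        - https://etcd-0.example.com:2379
        - https://etcd-1.example.com:2379
        - https://etcd-2.example.com:2379
      caFile: /etc/kubernetes/pki/etcd/ca.crt
      certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
      keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key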
=========
How do you deploy a workload to Kubernetes without additional tools?
Create a Bash script and run it on a worker node.
Create a Helm Chart and install it with helm.
Create a manifest and apply it with kubectl.
Create a Python script and run it with kubectl.
The standard way to deploy workloads to Kubernetes using only built-in tooling is to create Kubernetes manifests (YAML/JSON definitions of API objects) and apply them with kubectl, so C is correct. Kubernetes is a declarative system: you describe the desired state of resources (e.g., a Deployment, Service, ConfigMap, Ingress) in a manifest file, then submit that desired state to the API server. Controllers reconcile the actual cluster state to match what you declared.
A manifest typically includes mandatory fields like apiVersion, kind, and metadata, and then a spec describing desired behavior. For example, a Deployment manifest declares replicas and the Pod template (containers, images, ports, probes, resources). Applying the manifest with kubectl apply -f <file> submits that desired state to the API server, after which controllers create and continuously maintain the corresponding objects.
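A minimal sketch of such a manifest and the apply command (the image tag is only an example):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
          - name: web
            image: nginx:1.27   # example image tag
            ports:
              - containerPort: 80

  kubectl apply -f deployment.yaml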
Option B (Helm) is indeed a popular deployment tool, but Helm is explicitly an “additional tool” beyond kubectl and the Kubernetes API. The question asks “without additional tools,” so Helm is excluded by definition. Option A (running Bash scripts on worker nodes) bypasses Kubernetes’ desired-state control and is not how Kubernetes workload deployment is intended; it also breaks portability and operational safety. Option D is not a standard Kubernetes deployment mechanism; kubectl does not “run Python scripts” to deploy workloads (though scripts can automate kubectl, that’s still not the primary mechanism).
From a cloud native delivery standpoint, manifests support GitOps, reviewable changes, and repeatable deployments across environments. The Kubernetes-native approach is: declare resources in manifests and apply them to the cluster. Therefore, C is the verified correct answer.
A Pod is stuck in the CrashLoopBackOff state. Which is the correct way to troubleshoot this issue?
Use kubectl exec to open a shell inside the container and read /var/log/kubelet.log.
Use kubectl describe pod to check the Pod's events, then inspect the container logs with kubectl logs.
Use kubectl get nodes to verify node capacity and then kubectl apply -f to re-apply the Pod manifest.
Use kubectl top pod to check resource consumption and scale the workload.
The CrashLoopBackOff state in Kubernetes indicates that a container inside a Pod is repeatedly starting, crashing, and then being restarted by the kubelet with increasing backoff delays. This is typically caused by application-level issues such as misconfiguration, missing environment variables, failed startup commands, application crashes, or incorrect container images. Proper troubleshooting focuses on identifying why the container is failing shortly after startup.
The most effective and recommended approach is to first use kubectl describe pod <pod-name>. The Events section of that output records restart counts, failed probes, and container exit information, which usually indicates why the container keeps failing shortly after startup.
After reviewing the events, the next step is to inspect the container's logs using kubectl logs <pod-name>; adding the --previous flag shows the output of the last crashed container instance, which typically reveals the application error behind the crash loop.
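For illustration, a hedged set of commands (pod name and namespace are placeholders):

  kubectl describe pod <pod-name> -n <namespace>      # events: restart reasons, probe failures, exit codes
  kubectl logs <pod-name> -n <namespace> --previous   # output from the last crashed container instance
  kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'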
Option A is incorrect because kubectl exec usually fails when containers are repeatedly crashing, and /var/log/kubelet.log is a node-level log not accessible from inside the container. Option C is incorrect because reapplying the Pod manifest does not address the underlying crash cause. Option D focuses on resource usage and scaling, which does not resolve application startup failures.
Therefore, the correct and verified answer is Option B, which aligns with Kubernetes documentation and best practices for diagnosing CrashLoopBackOff conditions.
Which of these commands is used to retrieve the documentation and field definitions for a Kubernetes resource?
kubectl explain
kubectl api-resources
kubectl get --help
kubectl show
kubectl explain is the command that shows documentation and field definitions for Kubernetes resource schemas, so A is correct. Kubernetes resources have a structured schema: top-level fields like apiVersion, kind, and metadata, and resource-specific structures like spec and status. kubectl explain lets you explore these structures directly from your cluster’s API discovery information, including field types, descriptions, and nested fields.
For example, kubectl explain deployment describes the Deployment resource, and kubectl explain deployment.spec dives into the spec structure. You can continue deeper, such as kubectl explain deployment.spec.template.spec.containers to discover container fields. This is especially useful when writing or troubleshooting manifests, because it reduces guesswork and prevents invalid YAML fields that would be rejected by the API server. It also helps when APIs evolve: you can confirm which fields exist in your cluster’s current version and what they mean.
The other commands do different things. kubectl api-resources lists resource types and their shortnames, whether they are namespaced, and supported verbs—useful discovery, but not detailed field definitions. kubectl get --help shows CLI usage help for kubectl get, not the Kubernetes object schema. kubectl show is not a standard kubectl subcommand.
From a Kubernetes “declarative configuration” perspective, correct manifests are critical: controllers reconcile desired state from spec, and subtle field mistakes can change runtime behavior. kubectl explain is a built-in way to learn the schema and write manifests that align with the Kubernetes API’s expectations. That’s why it’s commonly recommended in Kubernetes documentation and troubleshooting workflows.
=========
Which of the following is a lightweight tool that manages traffic flows between services, enforces access policies, and aggregates telemetry data, all without requiring changes to application code?
NetworkPolicy
Linkerd
kube-proxy
Nginx
Linkerd is a lightweight service mesh that manages service-to-service traffic, security policies, and telemetry without requiring application code changes—so B is correct. A service mesh introduces a dedicated layer for east-west traffic (internal service calls) and typically provides features like mutual TLS (mTLS), retries/timeouts, traffic shaping, and consistent metrics/tracing signals. Linkerd is known for being simpler and resource-efficient relative to some alternatives, which aligns with the “lightweight tool” phrasing.
Why this matches the description: In a service mesh, workload traffic is intercepted by a proxy layer (often as a sidecar or node-level/ambient proxy) and managed centrally by mesh control components. This allows security and traffic policy to be applied uniformly without modifying each microservice. Telemetry is also generated consistently because the proxies observe traffic directly and emit metrics and traces about request rates, latency, and errors.
The other choices don’t fit. NetworkPolicy is a Kubernetes resource that controls allowed network flows (L3/L4) but does not provide L7 traffic management, retries, identity-based mTLS, or automatic telemetry aggregation. kube-proxy implements Service networking rules (ClusterIP/NodePort forwarding) but does not enforce access policies at the service identity level and is not a telemetry system. Nginx can be used as an ingress controller or reverse proxy, but it is not inherently a full service mesh spanning all service-to-service communication and policy/telemetry across the mesh by default.
In cloud native architecture, service meshes help address cross-cutting concerns—security, observability, and traffic management—without embedding that logic into every application. The question’s combination of “traffic flows,” “access policies,” and “aggregates telemetry” maps directly to a mesh, and the lightweight mesh option provided is Linkerd.
=========
Which tools enable Kubernetes HorizontalPodAutoscalers to use custom, application-generated metrics to trigger scaling events?
Prometheus and the prometheus-adapter.
Graylog and graylog-autoscaler metrics.
Graylog and the kubernetes-adapter.
Grafana and Prometheus.
To scale on custom, application-generated metrics, the Horizontal Pod Autoscaler (HPA) needs those metrics exposed through the Kubernetes custom metrics (or external metrics) API. A common and Kubernetes-documented approach is Prometheus + prometheus-adapter, making A correct. Prometheus scrapes application metrics (for example, request rate, queue depth, in-flight requests) from /metrics endpoints. The prometheus-adapter then translates selected Prometheus time series into the Kubernetes Custom Metrics API so the HPA controller can fetch them and make scaling decisions.
Why not the other options: Grafana is a visualization tool; it does not provide the metrics API translation layer required by HPA, so “Grafana and Prometheus” is incomplete. Graylog is primarily a log management system; it’s not the standard solution for feeding custom metrics into HPA via the Kubernetes metrics APIs. The “kubernetes-adapter” term in option C is not the standard named adapter used in the common Kubernetes ecosystem for Prometheus-backed custom metrics (the recognized component is prometheus-adapter).
This matters operationally because HPA is not limited to CPU/memory. CPU and memory use resource metrics (often from metrics-server), but modern autoscaling often needs application signals: message queue length, requests per second, latency, or business metrics. With Prometheus and prometheus-adapter, you can define HPA rules such as “scale to maintain queue depth under X” or “scale based on requests per second per pod.” This can produce better scaling behavior than CPU-based scaling alone, especially for I/O-bound services or workloads with uneven CPU profiles.
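A minimal sketch of such an HPA (the metric name is hypothetical and assumed to be exposed through prometheus-adapter as a per-Pod custom metric):

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: my-app
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: my-app
    minReplicas: 2
    maxReplicas: 20
    metrics:
      - type: Pods
        pods:
          metric:
            name: http_requests_per_second   # hypothetical metric served via the custom metrics API
          target:
            type: AverageValue
            averageValue: "100"              # scale to keep ~100 requests/second per Pod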
So the correct tooling combination in the provided choices is Prometheus and the prometheus-adapter, option A.
=========
Which of the following is the correct command to run an nginx deployment with 2 replicas?
kubectl run deploy nginx --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --replicas=2
kubectl create nginx deployment --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --count=2
The correct answer is B: kubectl create deploy nginx --image=nginx --replicas=2. This uses kubectl create deployment (shorthand create deploy) to generate a Deployment resource named nginx with the specified container image. The --replicas=2 flag sets the desired replica count, so Kubernetes will create two Pod replicas (via a ReplicaSet) and keep that number stable.
Option A is incorrect because kubectl run is primarily intended to run a Pod (and in older versions could generate other resources, but it’s not the recommended/consistent way to create a Deployment in modern kubectl usage). Option C is invalid syntax: kubectl subcommand order is incorrect; you don’t say kubectl create nginx deployment. Option D uses a non-existent --count flag for Deployment replicas.
From a Kubernetes fundamentals perspective, this question tests two ideas: (1) Deployments are the standard controller for running stateless workloads with a desired number of replicas, and (2) kubectl create deployment is a common imperative shortcut for generating that resource. After running the command, you can confirm with kubectl get deploy nginx, kubectl get rs, and kubectl get pods -l app=nginx (label may vary depending on kubectl version). You’ll see a ReplicaSet created and two Pods brought up.
In production, teams typically use declarative manifests (kubectl apply -f) or GitOps, but knowing the imperative command is useful for quick labs and validation. The key is that replicas are managed by the controller, not by manually starting containers—Kubernetes reconciles the state continuously.
Therefore, B is the verified correct command.
=========
What is ephemeral storage?
Storage space that need not persist across restarts.
Storage that may grow dynamically.
Storage used by multiple consumers (e.g., multiple Pods).
Storage that is always provisioned locally.
The correct answer is A: ephemeral storage is non-persistent storage whose data does not need to survive Pod restarts or rescheduling. In Kubernetes, ephemeral storage typically refers to storage tied to the Pod’s lifetime—such as the container writable layer, emptyDir volumes, and other temporary storage types. When a Pod is deleted or moved to a different node, that data is generally lost.
This is different from persistent storage, which is backed by PersistentVolumes and PersistentVolumeClaims and is designed to outlive individual Pod instances. Ephemeral storage is commonly used for caches, scratch space, temporary files, and intermediate build artifacts—data that can be recreated and is not the authoritative system of record.
Option B is incorrect because “may grow dynamically” describes an allocation behavior, not the defining characteristic of ephemeral storage. Option C is incorrect because multiple consumers is about access semantics (ReadWriteMany etc.) and shared volumes, not ephemerality. Option D is incorrect because ephemeral storage is not “always provisioned locally” in a strict sense; while many ephemeral forms are local to the node, the definition is about lifecycle and persistence guarantees, not necessarily physical locality.
Operationally, ephemeral storage is an important scheduling and reliability consideration. Pods can request/limit ephemeral storage similarly to CPU/memory, and nodes can evict Pods under disk pressure. Mismanaged ephemeral storage (logs written to the container filesystem, runaway temp files) can cause node disk exhaustion and cascading failures. Best practices include shipping logs off-node, using emptyDir intentionally with size limits where supported, and using persistent volumes for state that must survive restarts.
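As a sketch of those practices (image name is hypothetical), an emptyDir scratch volume with a size limit plus explicit ephemeral-storage requests and limits:

  apiVersion: v1
  kind: Pod
  metadata:
    name: scratch-example
  spec:
    containers:
      - name: app
        image: registry.example.com/app:1.0   # hypothetical image
        volumeMounts:
          - name: scratch
            mountPath: /tmp/scratch
        resources:
          requests:
            ephemeral-storage: 1Gi            # considered by the scheduler
          limits:
            ephemeral-storage: 2Gi            # exceeding this can lead to eviction
    volumes:
      - name: scratch
        emptyDir:
          sizeLimit: 1Gi                      # caps the scratch volume itself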
So, ephemeral storage is best defined as storage that does not need to persist across restarts/rescheduling, matching option A.
=========
To visualize data from Prometheus you can use expression browser or console templates. What is the other data visualization tool commonly used together with Prometheus?
Grafana
Graphite
Nirvana
GraphQL
The most common visualization tool used with Prometheus is Grafana, so A is correct. Prometheus includes a built-in expression browser that can graph query results, but Grafana provides a much richer dashboarding experience: reusable dashboards, variables, templating, annotations, alerting integrations, and multi-data-source support.
In Kubernetes observability stacks, Prometheus scrapes and stores time-series metrics (cluster and application metrics). Grafana queries Prometheus using PromQL and renders the results into dashboards for SREs and developers. This pairing is widespread because it cleanly separates concerns: Prometheus is the metrics store and query engine; Grafana is the UI and dashboard layer.
Option B (Graphite) is a separate metrics system with its own storage/query model; while Grafana can visualize Graphite too, the question asks what is commonly used together with Prometheus, which is Grafana. Option D (GraphQL) is an API query language, not a metrics visualization tool. Option C (“Nirvana”) is not a standard Prometheus visualization tool in common Kubernetes stacks.
In practice, this combo enables operational outcomes: dashboards for error rates and latency (often derived from histograms), capacity monitoring (node CPU/memory), workload behavior (Pod restarts, HPA scaling), and SLO reporting. Grafana dashboards often serve as the shared language during incidents: teams correlate alerts with time-series patterns and quickly identify when regressions began.
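For a sense of what those Grafana panels query, two illustrative PromQL expressions (the metric names follow common conventions but are assumptions about your instrumentation):

  # 95th percentile request latency from a histogram
  histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))
  # error ratio from a counter with a status label
  sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m]))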
Therefore, the verified correct tool commonly used with Prometheus for visualization is Grafana (A).
=========
What is Helm?
An open source dashboard for Kubernetes.
A package manager for Kubernetes applications.
A custom scheduler for Kubernetes.
An end-to-end testing project for Kubernetes applications.
Helm is best described as a package manager for Kubernetes applications, making B correct. Helm packages Kubernetes resource manifests (Deployments, Services, ConfigMaps, Ingress, RBAC, etc.) into a unit called a chart. A chart includes templates and default values, allowing teams to parameterize deployments for different environments (dev/stage/prod) without rewriting YAML.
From an application delivery perspective, Helm solves common problems: repeatable installation, upgrade management, versioning, and sharing of standardized application definitions. Instead of copying and editing raw YAML, users install a chart and supply a values.yaml file (or CLI overrides) to configure image tags, replica counts, ingress hosts, resource requests, and other settings. Helm then renders templates into concrete Kubernetes manifests and applies them to the cluster.
Helm also manages releases: it tracks what has been installed and supports upgrades and rollbacks. This aligns with cloud native delivery practices where deployments are automated, reproducible, and auditable. Helm is commonly integrated into CI/CD pipelines and GitOps workflows (sometimes with charts stored in Git or Helm repositories).
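A hedged sketch of that lifecycle from the CLI (the repository URL, chart, and release names are placeholders):

  helm repo add example https://charts.example.com              # hypothetical chart repository
  helm install my-release example/my-app -f values.yaml         # render templates with your values and install
  helm upgrade my-release example/my-app --set image.tag=1.2.3  # roll out a new configuration
  helm history my-release                                       # list release revisions
  helm rollback my-release 1                                    # revert to a previous revision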
The other options are incorrect: a dashboard is a UI like Kubernetes Dashboard; a scheduler is kube-scheduler (or custom scheduler implementations, but Helm is not that); end-to-end testing projects exist in the ecosystem, but Helm’s role is packaging and lifecycle management of Kubernetes app definitions.
So the verified, standard definition is: Helm = Kubernetes package manager.
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Flux and ArgoCD
GitOps Toolkit
Linkerd and Istio
Helm and Kustomize
Tools that continuously reconcile cluster state to match a Git repository’s desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
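As a rough sketch of that reconciliation configuration, an Argo CD Application pointing a cluster namespace at a Git path (repository URL and paths are hypothetical):

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: my-app
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://github.com/example/app-config.git   # hypothetical config repository
      targetRevision: main
      path: k8s/production
    destination:
      server: https://kubernetes.default.svc
      namespace: my-app
    syncPolicy:
      automated:
        prune: true      # remove resources that were deleted from Git
        selfHeal: true   # revert manual drift in the cluster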
Option B (“GitOps Toolkit”) is related—Flux uses a GitOps Toolkit internally—but the question asks for a “tool” that keeps clusters in sync; the recognized tools are Flux and Argo CD in this list. Option C lists service meshes (traffic/security/telemetry), not deployment synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
=========
What function does kube-proxy provide to a cluster?
Implementing the Ingress resource type for application traffic.
Forwarding data to the correct endpoints for Services.
Managing data egress from the cluster nodes to the network.
Managing access to the Kubernetes API.
kube-proxy is a node-level networking component that helps implement the Kubernetes Service abstraction. Services provide a stable virtual IP and DNS name that route traffic to a set of Pods (endpoints). kube-proxy watches the API for Service and EndpointSlice/Endpoints changes and then programs the node’s networking rules so that traffic sent to a Service is forwarded (load-balanced) to one of the correct backend Pod IPs. This is why B is correct.
Conceptually, kube-proxy turns the declarative Service configuration into concrete dataplane behavior. Depending on the mode, it may use iptables rules, IPVS, or integrate with eBPF-capable networking stacks (sometimes kube-proxy is replaced or bypassed by CNI implementations, but the classic kube-proxy role remains the canonical answer). In iptables mode, kube-proxy creates NAT rules that rewrite traffic from the Service virtual IP to one of the Pod endpoints. In IPVS mode, it programs kernel load-balancing tables for more scalable service routing. In all cases, the job is to connect “Service IP/port” to “Pod IP/port endpoints.”
Option A is incorrect because Ingress is a separate API resource and requires an Ingress Controller (like NGINX Ingress, HAProxy, Traefik, etc.) to implement HTTP routing, TLS termination, and host/path rules. kube-proxy is not an Ingress controller. Option C is incorrect because general node egress management is not kube-proxy’s responsibility; egress behavior typically depends on the CNI plugin, NAT configuration, and network policies. Option D is incorrect because API access control is handled by the API server’s authentication/authorization layers (RBAC, webhooks, etc.), not kube-proxy.
So kube-proxy’s essential function is: keep node networking rules in sync so that Service traffic reaches the right Pods. It is one of the key components that makes Services “just work” across nodes without clients needing to know individual Pod IPs.
=========
Let’s assume that an organization needs to process large amounts of data in bursts, on a cloud-based Kubernetes cluster. For instance: each Monday morning, they need to run a batch of 1000 compute jobs of 1 hour each, and these jobs must be completed by Monday night. What’s going to be the most cost-effective method?
Run a group of nodes with the exact required size to complete the batch on time, and use a combination of taints, tolerations, and nodeSelectors to reserve these nodes to the batch jobs.
Leverage the Kubernetes Cluster Autoscaler to automatically start and stop nodes as they’re needed.
Commit to a specific level of spending to get discounted prices (with e.g. “reserved instances” or similar mechanisms).
Use PriorityClasses so that the weekly batch job gets priority over other workloads running on the cluster, and can be completed on time.
Burst workloads are a classic elasticity problem: you need large capacity for a short window, then very little capacity the rest of the week. The most cost-effective approach in a cloud-based Kubernetes environment is to scale infrastructure dynamically, matching node count to current demand. That’s exactly what Cluster Autoscaler is designed for: it adds nodes when Pods cannot be scheduled due to insufficient resources and removes nodes when they become underutilized and can be drained safely. Therefore B is correct.
Option A can work operationally, but it commonly results in paying for a reserved “standing army” of nodes that sit idle most of the week—wasteful for bursty patterns unless the nodes are repurposed for other workloads. Taints/tolerations and nodeSelectors are placement tools; they don’t reduce cost by themselves and may increase waste if they isolate nodes. Option D (PriorityClasses) affects which Pods get scheduled first given available capacity, but it does not create capacity. If the cluster doesn’t have enough nodes, high priority Pods will still remain Pending. Option C (reserved instances or committed-use discounts) can reduce unit price, but it assumes relatively predictable baseline usage. For true bursts, you usually want a smaller baseline plus autoscaling, and optionally combine it with discounted capacity types if your cloud supports them.
In Kubernetes terms, the control loop is: batch Jobs create Pods → scheduler tries to place Pods → if many Pods are Pending due to insufficient CPU/memory, Cluster Autoscaler observes this and increases the node group size → new nodes join and kube-scheduler places Pods → after jobs finish and nodes become empty, Cluster Autoscaler drains and removes nodes. This matches cloud-native principles: elasticity, pay-for-what-you-use, and automation. It minimizes idle capacity while still meeting the completion deadline.
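To make the "1000 jobs" side concrete, a minimal Job sketch (the worker image is hypothetical); the Pending Pods it creates are what trigger Cluster Autoscaler to add nodes:

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: monday-batch
  spec:
    completions: 1000    # total one-hour jobs to finish
    parallelism: 100     # how many run at once; tune so the batch fits the Monday window
    template:
      spec:
        restartPolicy: Never
        containers:
          - name: worker
            image: registry.example.com/batch-worker:1.0   # hypothetical image
            resources:
              requests:
                cpu: "1"
                memory: 2Gi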
=========
What's the most adopted way of conflict resolution and decision-making for the open-source projects under the CNCF umbrella?
Financial Analysis
Discussion and Voting
Flipism Technique
Project Founder Say
B (Discussion and Voting) is correct. CNCF-hosted open-source projects generally operate with open governance practices that emphasize transparency, community participation, and documented decision-making. While each project can have its own governance model (maintainers, technical steering committees, SIGs, TOC interactions, etc.), a very common and widely adopted approach to resolving disagreements and making decisions is to first pursue discussion (often on GitHub issues/PRs, mailing lists, or community meetings) and then use voting/consensus mechanisms when needed.
This approach is important because open-source communities are made up of diverse contributors across companies and geographies. “Project Founder Say” (D) is not a sustainable or typical CNCF governance norm for mature projects; CNCF explicitly encourages neutral, community-led governance rather than single-person control. “Financial Analysis” (A) is not a conflict resolution mechanism for technical decisions, and “Flipism Technique” (C) is not a real governance practice.
In Kubernetes specifically, community decisions are often made within structured groups (e.g., SIGs) using discussion and consensus-building, sometimes followed by formal votes where governance requires it. The goal is to ensure decisions are fair, recorded, and aligned with the project’s mission and contributor expectations. This also reduces risk of vendor capture and builds trust: anyone can review the rationale in meeting notes, issues, or PR threads, and decisions can be revisited with new evidence.
Therefore, the most adopted conflict resolution and decision-making method across CNCF open-source projects is discussion and voting, making B the verified correct answer.
=========
What are the characteristics for building every cloud-native application?
Resiliency, Operability, Observability, Availability
Resiliency, Containerd, Observability, Agility
Kubernetes, Operability, Observability, Availability
Resiliency, Agility, Operability, Observability
Cloud-native applications are typically designed to thrive in dynamic, distributed environments where infrastructure is elastic and failures are expected. The best set of characteristics listed is Resiliency, Agility, Operability, Observability, making D correct.
Resiliency means the application and its supporting platform can tolerate failures and continue providing service. In Kubernetes terms, resiliency is supported through self-healing controllers, replica management, health probes, and safe rollout mechanisms, but the application must also be designed to handle transient failures, retries, and graceful degradation.
Agility reflects the ability to deliver changes quickly and safely. Cloud-native systems emphasize automation, CI/CD, declarative configuration, and small, frequent releases—often enabled by Kubernetes primitives like Deployments and rollout strategies. Agility is about reducing the friction to ship improvements while maintaining reliability.
Operability is how manageable the system is in production: clear configuration, predictable deployments, safe scaling, and automation-friendly operations. Kubernetes encourages operability through consistent APIs, controllers, and standardized patterns for configuration and lifecycle.
Observability means you can understand what’s happening inside the system using telemetry—metrics, logs, and traces—so you can troubleshoot issues, measure SLOs, and improve performance. Kubernetes provides many integration points for observability, but cloud-native apps must also emit meaningful signals.
Options B and C include items that are not “characteristics” (containerd is a runtime; Kubernetes is a platform). Option A includes “availability,” which is important, but the canonical cloud-native framing in this question emphasizes the four qualities in D as the foundational build characteristics.
=========
What is a sidecar container?
A Pod that runs next to another container within the same Pod.
A container that runs next to another Pod within the same namespace.
A container that runs next to another container within the same Pod.
A Pod that runs next to another Pod within the same namespace.
A sidecar container is an additional container that runs alongside the main application container within the same Pod, sharing network and storage context. That matches option C, so C is correct. The sidecar pattern is used to add supporting capabilities to an application without modifying the application code. Because both containers are in the same Pod, the sidecar can communicate with the main container over localhost and share volumes for files, sockets, or logs.
Common sidecar examples include: log forwarders that tail application logs and ship them to a logging system, proxies (service mesh sidecars like Envoy) that handle mTLS and routing policy, config reloaders that watch ConfigMaps and signal the main process, and local caching agents. Sidecars are especially powerful in cloud-native systems because they standardize cross-cutting concerns—security, observability, traffic policy—across many workloads.
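A minimal sketch of the log-shipping pattern (both images are hypothetical), where the two containers share a Pod-level emptyDir volume:

  apiVersion: v1
  kind: Pod
  metadata:
    name: app-with-sidecar
  spec:
    volumes:
      - name: logs
        emptyDir: {}
    containers:
      - name: app
        image: registry.example.com/app:1.0        # hypothetical main application writing logs
        volumeMounts:
          - name: logs
            mountPath: /var/log/app
      - name: log-shipper
        image: registry.example.com/shipper:1.0    # hypothetical sidecar forwarding those logs
        volumeMounts:
          - name: logs
            mountPath: /var/log/app
            readOnly: true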
Options A and D incorrectly describe “a Pod running next to …” which is not how sidecars work; sidecars are containers, not separate Pods. Running separate Pods “next to” each other in a namespace does not give the same shared network namespace and tightly coupled lifecycle. Option B is also incorrect for the same reason: a sidecar is not a separate Pod; it is a container in the same Pod.
Operationally, sidecars share the Pod lifecycle: they are scheduled together, scaled together, and generally terminated together. This is both a benefit (co-location guarantees) and a responsibility (resource requests/limits should include the sidecar’s needs, and failure modes should be understood). Kubernetes is increasingly formalizing sidecar behavior (e.g., sidecar containers with ordered startup semantics), but the core definition remains: a helper container in the same Pod.
=========
A Kubernetes Pod is returning a CrashLoopBackOff status. What is the most likely reason for this behavior?
There are insufficient resources allocated for the Pod.
The application inside the container crashed after starting.
The container’s image is missing or cannot be pulled.
The Pod is unable to communicate with the Kubernetes API server.
A CrashLoopBackOff status in Kubernetes indicates that a container within a Pod is repeatedly starting, crashing, and being restarted by Kubernetes. This behavior occurs when the container process exits shortly after starting and Kubernetes applies an increasing back-off delay between restart attempts to prevent excessive restarts.
Option B is the correct answer because CrashLoopBackOff most commonly occurs when the application inside the container crashes after it has started. Typical causes include application runtime errors, misconfigured environment variables, missing configuration files, invalid command or entrypoint definitions, failed dependencies, or unhandled exceptions during application startup. Kubernetes itself is functioning as expected by restarting the container according to the Pod’s restart policy.
Option A is incorrect because insufficient resources usually lead to different symptoms. For example, if a container exceeds its memory limit, it may be terminated with an OOMKilled status rather than repeatedly crashing immediately. While resource constraints can indirectly cause crashes, they are not the defining reason for a CrashLoopBackOff state.
Option C is incorrect because an image that cannot be pulled results in statuses such as ImagePullBackOff or ErrImagePull, not CrashLoopBackOff. In those cases, the container never successfully starts.
Option D is incorrect because Pods do not need to communicate directly with the Kubernetes API server for normal application execution. Issues with API server communication affect control plane components or scheduling, not container restart behavior.
From a troubleshooting perspective, Kubernetes documentation recommends inspecting container logs using kubectl logs and reviewing Pod events with kubectl describe pod to identify the root cause of the crash. Fixing the underlying application error typically resolves the CrashLoopBackOff condition.
In summary, CrashLoopBackOff is a protective mechanism that signals a repeatedly failing container process. The most likely and verified cause is that the application inside the container is crashing after startup, making option B the correct answer.
How does cert-manager integrate with Kubernetes resources to provide TLS certificates for an application?
It manages Certificate resources and Secrets that can be used by Ingress objects for TLS.
It replaces default Kubernetes API certificates with those from external authorities.
It updates kube-proxy configuration to ensure encrypted traffic between Services.
It injects TLS certificates directly into Pods when the workloads are deployed.
cert-manager is a widely adopted Kubernetes add-on that automates the management and lifecycle of TLS certificates in cloud native environments. Its primary function is to issue, renew, and manage certificates by integrating directly with Kubernetes-native resources, rather than modifying core cluster components or injecting certificates manually into workloads.
Option A correctly describes how cert-manager operates. cert-manager introduces Custom Resource Definitions (CRDs) such as Certificate, Issuer, and ClusterIssuer. These resources define how certificates should be requested and from which certificate authority they should be obtained, such as Let’s Encrypt or a private PKI. Once a certificate is successfully issued, cert-manager stores it in a Kubernetes Secret. These Secrets can then be referenced by Ingress resources, Gateway API resources, or directly by applications to enable TLS.
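As a minimal sketch of that flow (the resource names, DNS name, and issuer are illustrative, and the ClusterIssuer is assumed to be configured separately):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
  namespace: default
spec:
  secretName: example-tls              # cert-manager writes the issued certificate and key into this Secret
  dnsNames:
  - example.com
  issuerRef:
    name: letsencrypt-prod             # a ClusterIssuer defined elsewhere in the cluster
    kind: ClusterIssuer

An Ingress (or Gateway) can then reference example-tls in its TLS section to terminate HTTPS with the issued certificate.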
Option B is incorrect because cert-manager does not replace or interfere with Kubernetes API server certificates. The Kubernetes control plane manages its own internal certificates independently, and cert-manager is focused on application-level TLS, not control plane security.
Option C is incorrect because cert-manager does not interact with kube-proxy or manage service-to-service encryption. Traffic encryption between Services is typically handled by service meshes or application-level TLS configurations, not cert-manager.
Option D is incorrect because cert-manager does not inject certificates directly into Pods at deployment time. Instead, Pods consume certificates indirectly by mounting the Secrets created and maintained by cert-manager. This design aligns with Kubernetes best practices by keeping certificate management decoupled from application deployment logic.
According to Kubernetes and cert-manager documentation, cert-manager’s strength lies in its native integration with Kubernetes APIs and declarative workflows. By managing Certificate resources and automatically maintaining Secrets for use by Ingress or Gateway resources, cert-manager simplifies TLS management, reduces operational overhead, and improves security across cloud native application delivery pipelines. This makes option A the accurate and fully verified answer.
Which two elements are shared between containers in the same pod?
Network resources and liveness probes.
Storage and container image registry.
Storage and network resources.
Network resources and Dockerfiles.
The correct answer is C: Storage and network resources. In Kubernetes, a Pod is the smallest schedulable unit and acts like a “logical host” for its containers. Containers inside the same Pod share a number of namespaces and resources, most notably:
Network: all containers in a Pod share the same network namespace, which means they share a single Pod IP address and the same port space. They can talk to each other via localhost and coordinate tightly without exposing separate network endpoints.
Storage: containers in a Pod can share data through Pod volumes. Volumes (like emptyDir, ConfigMap/Secret volumes, or PVC-backed volumes) are defined at the Pod level and can be mounted into multiple containers within the Pod. This enables common patterns such as the main container writing logs to a shared volume that a sidecar then reads and ships, or an init/sidecar container producing configuration or certificates for the main container to consume.
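A minimal sketch of the shared-volume pattern, with hypothetical image names:

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}                       # Pod-level volume, visible to both containers
  containers:
  - name: app
    image: example.com/app:latest
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app          # the main container writes logs here
  - name: log-shipper
    image: example.com/log-shipper:latest
    volumeMounts:
    - name: shared-logs
      mountPath: /logs                 # the sidecar reads the same files and ships them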
Why other options are wrong: liveness probes (A) are defined per container (or per Pod template) but are not a “shared” resource between containers. A container image registry (B) is an external system and not a shared in-Pod element. Dockerfiles (D) are build-time artifacts, irrelevant at runtime, and not shared resources.
This question is a classic test of Pod fundamentals: multi-container Pods work precisely because they share networking and volumes. This is also why the sidecar pattern is feasible—sidecars can intercept traffic on localhost, export metrics, or ship logs while sharing the same lifecycle boundary and scheduling placement.
Therefore, the verified correct choice is C.
=========
Which storage operator in Kubernetes can help the system to self-scale, self-heal, etc?
Rook
Kubernetes
Helm
Container Storage Interface (CSI)
Rook is a Kubernetes storage operator that helps manage and automate storage systems in a Kubernetes-native way, so A is correct. The key phrase in the question is “storage operator … self-scale, self-heal.” Operators extend Kubernetes by using controllers to reconcile a desired state. Rook applies that model to storage, commonly by managing storage backends like Ceph (and other systems depending on configuration).
With an operator approach, you declare how you want storage to look (cluster size, pools, replication, placement, failure domains), and the operator works continuously to maintain that state. That includes operational behaviors that feel “self-healing” such as reacting to failed storage Pods, rebalancing, or restoring desired replication counts (the exact behavior depends on the backend and configuration). The important KCNA-level idea is that Rook uses Kubernetes controllers to automate day-2 operations for storage in a way consistent with Kubernetes’ reconciliation loops.
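As a rough, non-production sketch of that declarative model (the values are illustrative, and the real CephCluster spec has many more options):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                           # the operator keeps three monitors running, replacing failed ones
  storage:
    useAllNodes: true
    useAllDevices: true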
The other options do not match the question: “Kubernetes” is the orchestrator itself, not a storage operator. “Helm” is a package manager for Kubernetes apps—it can install storage software, but it is not an operator that continuously reconciles and self-manages. “CSI” (Container Storage Interface) is an interface specification that enables pluggable storage drivers; CSI drivers provision and attach volumes, but CSI itself is not a “storage operator” with the broader self-managing operator semantics described here.
So, for “storage operator that can help with self-* behaviors,” Rook is the correct choice.
=========
At which layer would distributed tracing be implemented in a cloud native deployment?
Network
Application
Database
Infrastructure
Distributed tracing is implemented primarily at the application layer, so B is correct. The reason is simple: tracing is about capturing the end-to-end path of a request as it traverses services, libraries, queues, and databases. That “request context” (trace ID, span ID, baggage) must be created, propagated, and enriched as code executes. While infrastructure components (proxies, gateways, service meshes) can generate or augment trace spans, the fundamental unit of tracing is still tied to application operations (an HTTP handler, a gRPC call, a database query, a cache lookup).
In Kubernetes-based microservices, distributed tracing typically uses standards like OpenTelemetry for instrumentation and context propagation. Application frameworks emit spans for key operations, attach attributes (route, status code, tenant, retry count), and propagate context via headers (e.g., W3C Trace Context). This is what lets you reconstruct “Service A → Service B → Service C” for one user request and identify the slow or failing hop.
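For reference, W3C Trace Context propagates this context in an HTTP header named traceparent, carrying a version, trace ID, parent span ID, and flags; the values below are purely illustrative:

traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01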
Why other layers are not the best answer:
Network focuses on packets/flows, but tracing is not a packet-capture problem; it’s a causal request-path problem across services.
Database spans are part of traces, but tracing is not “implemented in the database layer” overall; DB spans are one component.
Infrastructure provides the platform and can observe traffic, but without application context it can’t fully represent business operations (and many useful attributes live in app code).
So the correct layer for “where tracing is implemented” is the application layer—even when a mesh or proxy helps, it’s still describing application request execution across components.
=========
The IPv4/IPv6 dual stack in Kubernetes:
Translates an IPv4 request from a Service to an IPv6 Service.
Allows you to access the IPv4 address by using the IPv6 address.
Requires NetworkPolicies to prevent Services from mixing requests.
Allows you to create IPv4 and IPv6 dual stack Services.
The correct answer is D: Kubernetes dual-stack support allows you to create Services (and Pods, depending on configuration) that use both IPv4 and IPv6 addressing. Dual-stack means the cluster is configured to allocate and route traffic for both IP families. For Services, this can mean assigning both an IPv4 ClusterIP and an IPv6 ClusterIP so clients can connect using either family, depending on their network stack and DNS resolution.
Option A is incorrect because dual-stack is not about protocol translation (that would be NAT64/other gateway mechanisms, not the core Kubernetes dual-stack feature). Option B is also a form of translation/aliasing that isn’t what Kubernetes dual-stack implies; having both addresses available is different from “access IPv4 via IPv6.” Option C is incorrect: dual-stack does not inherently require NetworkPolicies to “prevent mixing requests.” NetworkPolicies are about traffic control, not IP family separation.
In Kubernetes, dual-stack requires support across components: the network plugin (CNI) must support IPv4/IPv6, the cluster must be configured with both Pod CIDRs and Service CIDRs, and DNS should return appropriate A and AAAA records for Service names. Once configured, you can specify preferences such as ipFamilyPolicy (e.g., PreferDualStack) and ipFamilies (IPv4, IPv6 order) for Services to influence allocation behavior.
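A minimal dual-stack Service sketch (the name, selector, and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-dualstack-svc
spec:
  ipFamilyPolicy: PreferDualStack      # request both families if the cluster supports them
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080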
Operationally, dual-stack is useful for environments transitioning to IPv6, supporting IPv6-only clients, or running in mixed networks. But it adds complexity: address planning, firewalling, and troubleshooting need to consider two IP families. Still, the definition in the question is straightforward: Kubernetes dual-stack enables dual-stack Services, which is option D.
=========
What is a key feature of a container network?
Proxying REST requests across a set of containers.
Allowing containers running on separate hosts to communicate.
Allowing containers on the same host to communicate.
Caching remote disk access.
A defining requirement of container networking in orchestrated environments is enabling workloads to communicate across hosts, not just within a single machine. That’s why B is correct: a key feature of a container network is allowing containers (Pods) running on separate hosts to communicate.
In Kubernetes, this idea becomes the Kubernetes network model: every Pod gets an IP address, and Pods should be able to communicate with other Pods across nodes without needing NAT (depending on implementation details). Achieving that across a cluster requires a networking layer (typically implemented by a CNI plugin) that can route traffic between nodes so that Pod-to-Pod communication works regardless of placement. This is crucial because schedulers dynamically place Pods; you cannot assume two communicating components will land on the same node.
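One quick way to observe this is to list Pods with wide output, which adds the IP and NODE columns so you can see Pods on different nodes reaching each other by Pod IP:

kubectl get pods -o wide               # the IP and NODE columns show each Pod's address and placement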
Option C is true in a trivial sense—containers on the same host can communicate—but that capability alone is not the key feature that makes orchestration viable at scale. Cross-host connectivity is the harder and more essential property. Option A describes application-layer behavior (like API gateways or reverse proxies) rather than the foundational networking capability. Option D describes storage optimization, unrelated to container networking.
From a cloud native architecture perspective, reliable cross-host networking enables microservices patterns, service discovery, and distributed systems behavior. Kubernetes Services, DNS, and NetworkPolicies all depend on the underlying ability for Pods across the cluster to send traffic to each other. If your container network cannot provide cross-node routing and reachability, the cluster behaves like isolated islands and breaks the fundamental promise of orchestration: “schedule anywhere, communicate consistently.”
=========
There is an application running in a logical chain: Gateway API → Service → EndpointSlice → Container.
What Kubernetes API object is missing from this sequence?
Proxy
Docker
Pod
Firewall
In Kubernetes, application traffic flows through a well-defined set of API objects and runtime components before reaching a running container. Understanding this logical chain is essential for grasping how Kubernetes networking works internally.
The given sequence is: Gateway API → Service → EndpointSlice → Container. While this looks close to correct, it is missing a critical Kubernetes abstraction: the Pod. Containers in Kubernetes do not run independently; they always run inside Pods. A Pod is the smallest deployable and schedulable unit in Kubernetes and serves as the execution environment for one or more containers that share networking and storage resources.
The correct logical chain should be:
Gateway API → Service → EndpointSlice → Pod → Container
The Gateway API defines how external or internal traffic enters the cluster. The Service provides a stable virtual IP and DNS name, abstracting a set of backend workloads. EndpointSlices then represent the actual network endpoints backing the Service, typically mapping to the IP addresses of Pods. Finally, traffic is delivered to containers running inside those Pods.
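You can inspect this chain yourself; assuming a Service named my-service backed by Pods labeled app=my-app (both names are placeholders):

kubectl get endpointslices -l kubernetes.io/service-name=my-service
kubectl get pods -l app=my-app -o wide     # compare the EndpointSlice addresses with the Pod IPs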
Option A (Proxy) is incorrect because while proxies such as kube-proxy or data plane proxies play a role in traffic forwarding, they are not Kubernetes API objects that represent application workloads in this logical chain. Option B (Docker) is incorrect because Docker is a container runtime, not a Kubernetes API object, and Kubernetes is runtime-agnostic. Option D (Firewall) is incorrect because firewalls are not core Kubernetes workload or networking API objects involved in service-to-container routing.
Option C (Pod) is the correct answer because Pods are the missing link between EndpointSlices and containers. EndpointSlices point to Pod IPs, and containers cannot exist outside of Pods. Kubernetes documentation clearly states that Pods are the fundamental unit of execution and networking, making them essential in any accurate representation of application traffic flow within a cluster.
How many different Kubernetes service types can you define?
2
3
4
5
Kubernetes defines four primary Service types, which is why C (4) is correct. The commonly recognized Service spec.type values are:
ClusterIP: The default type. Exposes the Service on an internal virtual IP reachable only within the cluster. This supports typical east-west traffic between workloads.
NodePort: Exposes the Service on a static port on each node. Traffic sent to that port on any node's IP is forwarded to the Service's backing Pods; NodePort builds on top of ClusterIP.
LoadBalancer: Integrates with a cloud provider (or load balancer implementation) to provision an external load balancer and route traffic to the Service. This is common in managed Kubernetes.
ExternalName: Maps the Service name to an external DNS name via a CNAME record, allowing in-cluster clients to use a consistent Service DNS name to reach an external dependency.
Some people also talk about “Headless Services,” but headless is not a separate type; it’s a behavior achieved by setting clusterIP: None. Headless Services still use the Service API object but change DNS and virtual-IP behavior to return endpoint IPs directly rather than a ClusterIP. That’s why the canonical count of “Service types” is four.
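For contrast, a minimal headless Service sketch (the name, selector, and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc
spec:
  clusterIP: None                      # headless: DNS returns the backing Pod IPs directly, no virtual IP
  selector:
    app: my-app
  ports:
  - port: 5432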
This question tests understanding of the Service abstraction: Service type controls how a stable service identity is exposed (internal VIP, node port, external LB, or DNS alias), while selectors/endpoints control where traffic goes (the backend Pods). Different environments will favor different types: ClusterIP for internal microservices, LoadBalancer for external exposure in cloud, NodePort for bare-metal or simple access, ExternalName for bridging to outside services.
Therefore, the verified answer is C (4).
=========
In Kubernetes, what is the primary responsibility of the kubelet running on each worker node?
To allocate persistent storage volumes and manage distributed data replication for Pods.
To manage cluster state information and handle all scheduling decisions for workloads.
To ensure that containers defined in Pod specifications are running and remain healthy on the node.
To provide internal DNS resolution and route service traffic between Pods and nodes.
The kubelet is a critical Kubernetes component that runs on every worker node and acts as the primary execution agent for Pods. Its core responsibility is to ensure that the containers defined in Pod specifications are running and remain healthy on the node, making option C the correct answer.
Once the Kubernetes scheduler assigns a Pod to a specific node, the kubelet on that node becomes responsible for carrying out the desired state described in the Pod specification. It continuously watches the API server for Pods assigned to its node and communicates with the container runtime (such as containerd or CRI-O) to start, stop, and restart containers as needed. The kubelet does not make scheduling decisions; it simply executes them.
Health management is another key responsibility of the kubelet. It runs liveness, readiness, and startup probes as defined in the Pod specification. If a container fails a liveness probe, the kubelet restarts it. If a readiness probe fails, the kubelet marks the Pod as not ready, preventing traffic from being routed to it. The kubelet also reports detailed Pod and node status information back to the API server, enabling controllers to take corrective actions when necessary.
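A minimal sketch of how probes are declared (the paths, port, and image are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: example.com/app:latest
    livenessProbe:                     # failure here makes the kubelet restart the container
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:                    # failure here marks the Pod NotReady, removing it from Service endpoints
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5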
Option A is incorrect because persistent volume provisioning and data replication are handled by storage systems, CSI drivers, and controllers—not by the kubelet. Option B is incorrect because cluster state management and scheduling are responsibilities of control plane components such as the API server, controller manager, and kube-scheduler. Option D is incorrect because DNS resolution and service traffic routing are handled by components like CoreDNS and kube-proxy.
In summary, the kubelet serves as the node-level guardian of Kubernetes workloads. By ensuring containers are running exactly as specified and continuously reporting their health and status, the kubelet forms the essential bridge between Kubernetes’ declarative control plane and the actual execution of applications on worker nodes.
What is the default deployment strategy in Kubernetes?
Rolling update
Blue/Green deployment
Canary deployment
Recreate deployment
For Kubernetes Deployments, the default update strategy is RollingUpdate, which corresponds to “Rolling update” in option A. Rolling updates replace old Pods with new Pods gradually, aiming to maintain availability during the rollout. Kubernetes does this by creating a new ReplicaSet for the updated Pod template and then scaling the new ReplicaSet up while scaling the old one down.
The pace and safety of a rolling update are controlled by parameters like maxUnavailable and maxSurge. maxUnavailable limits how many replicas can be unavailable during the update, protecting availability. maxSurge controls how many extra replicas can be created temporarily above the desired count, helping speed up rollouts while maintaining capacity. If readiness probes fail, Kubernetes will pause progression because new Pods aren’t becoming Ready, helping prevent a bad version from fully replacing a good one.
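A minimal Deployment sketch showing where these knobs live (the names, image, and replica counts are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate                # the default strategy for Deployments
    rollingUpdate:
      maxUnavailable: 1                # at most one replica below the desired count during the rollout
      maxSurge: 1                      # at most one extra replica above the desired count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.2.0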
Options B (Blue/Green) and C (Canary) are popular progressive delivery patterns, but they are not the default built-in Deployment strategy. They are typically implemented using additional tooling (service mesh routing, traffic splitting controllers, or specialized rollout controllers) or by operating multiple Deployments/Services. Option D (Recreate) is a valid strategy but not the default; it terminates all old Pods before creating new ones, causing downtime unless you have external buffering or multi-tier redundancy.
From an application delivery perspective, RollingUpdate aligns with Kubernetes’ declarative model: you update the desired Pod template and let the controller converge safely. kubectl rollout status is commonly used to monitor progress. Rollbacks are also supported because the Deployment tracks history. Therefore, the verified correct answer is A: Rolling update.
=========
Which command provides information about the field replicas within the spec resource of a deployment object?
kubectl get deployment.spec.replicas
kubectl explain deployment.spec.replicas
kubectl describe deployment.spec.replicas
kubectl explain deployment --spec.replicas
The correct command to get field-level schema information about spec.replicas in a Deployment is kubectl explain deployment.spec.replicas, so B is correct. kubectl explain is designed to retrieve documentation for resource fields directly from Kubernetes API discovery and OpenAPI schemas. When you use kubectl explain deployment.spec.replicas, kubectl shows what the field means, its type, and any relevant notes—exactly what “provides information about the field” implies.
This differs from kubectl get and kubectl describe. kubectl get retrieves actual objects or lists resources; it does not accept dot-paths like deployment.spec.replicas as a resource argument. You can use JSONPath or custom-columns output with kubectl get deployment to read the current value of the field, but that returns live data, not documentation about what the field means. Similarly, kubectl describe prints a human-readable summary of an object and its events; it does not accept a field path and does not show schema documentation, so option C is invalid.
Option D is not valid syntax: kubectl explain deployment --spec.replicas is not how kubectl explain accepts nested field references. The correct pattern is positional dot notation, kubectl explain <resource>.<field>.<subfield>, as in kubectl explain deployment.spec.replicas.
Understanding spec.replicas matters operationally: it defines the desired number of Pod replicas for a Deployment. The Deployment controller ensures that the corresponding ReplicaSet maintains that count, supporting self-healing if Pods fail. While autoscalers can adjust replicas automatically, the field remains the primary declarative knob. The question is specifically about finding information (schema docs) for that field, which is why kubectl explain deployment.spec.replicas is the verified correct answer.
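To see the contrast in practice (my-deploy is a placeholder Deployment name):

kubectl explain deployment.spec.replicas                          # field documentation from the API schema
kubectl get deployment my-deploy -o jsonpath='{.spec.replicas}'   # the current value of the field, not documentation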
=========
What is the default eviction timeout when the Ready condition of a node is Unknown or False?
Thirty seconds.
Thirty minutes.
One minute.
Five minutes.
The verified correct answer is D (Five minutes). In Kubernetes, node health is continuously monitored. When a node stops reporting status (heartbeats from the kubelet) or is otherwise considered unreachable, the Node controller updates the Node’s Ready condition to Unknown (or it can become False). From that point, Kubernetes has to balance two risks: acting too quickly might cause unnecessary disruption (e.g., transient network hiccups), but acting too slowly prolongs outage for workloads that were running on the failed node.
The “default eviction timeout” refers to the control plane behavior that determines how long Kubernetes waits before evicting Pods from a node that appears unhealthy/unreachable. After this timeout elapses, Kubernetes begins eviction of Pods so controllers (like Deployments) can recreate them on healthy nodes, restoring the desired replica count and availability.
This is tightly connected to high availability and self-healing: Kubernetes does not “move” Pods from a dead node; it replaces them. The eviction timeout gives the cluster time to confirm the node is truly unavailable, avoiding flapping in unstable networks. Once eviction begins, replacement Pods can be scheduled elsewhere (assuming capacity exists), which is the normal recovery path for stateless workloads.
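In current clusters this timeout surfaces as taint-based eviction: the control plane automatically adds tolerations like the following to Pods (shown here only as a sketch of what they look like; 300 seconds is the documented default):

tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300               # roughly five minutes before eviction from a NotReady node
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300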
It’s also worth noting that graceful operational handling can be influenced by PodDisruptionBudgets (for voluntary disruptions) and by workload design (replicas across nodes/zones). But the question is testing the default timer value, which is five minutes in this context.
Therefore, among the choices provided, the correct answer is D.
=========
If kubectl is failing to retrieve information from the cluster, where can you find Pod logs to troubleshoot?
/var/log/pods/
~/.kube/config
/var/log/k8s/
/etc/kubernetes/
The correct answer is A: /var/log/pods/. When kubectl logs can’t retrieve logs (for example, API connectivity issues, auth problems, or kubelet/API proxy issues), you can often troubleshoot directly on the node where the Pod ran. Kubernetes nodes typically store container logs on disk, and a common location is under /var/log/pods/, organized by namespace, Pod name/UID, and container. This directory contains symlinks or files that map to the underlying container runtime log location (often under /var/log/containers/ as well, depending on distro/runtime setup).
Option B (~/.kube/config) is your local kubeconfig file; it contains cluster endpoints and credentials, not Pod logs. Option D (/etc/kubernetes/) contains Kubernetes component configuration/manifests on some installations (especially control plane), not application logs. Option C (/var/log/k8s/) is not a standard Kubernetes log path.
Operationally, the node-level log locations depend on the container runtime and logging configuration, but the Kubernetes convention is that kubelet writes container logs to a known location and exposes them through the API so kubectl logs works. If the API path is broken, node access becomes your fallback. This is also why secure node access is sensitive: anyone with node root access can potentially read logs (and other data), which is part of the threat model.
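On the node itself, the layout typically looks like this (namespace, Pod name, UID, and container name are placeholders):

ls /var/log/pods/
# directories named <namespace>_<pod-name>_<pod-uid>, one per Pod
ls /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/
# 0.log, 1.log, ...  one log file per container restart/rotation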
So, the best answer for where to look on the node for Pod logs when kubectl can’t retrieve them is /var/log/pods/, option A.
=========