What is a Kubernetes service with no cluster IP address called?
Headless Service
Nodeless Service
IPLess Service
Specless Service
A Kubernetes Service normally provides a stable virtual IP (ClusterIP) and a DNS name that load-balances traffic across matching Pods. A headless Service is a special type of Service where Kubernetes does not allocate a ClusterIP. Instead, the Service’s DNS returns individual Pod IPs (or other endpoint records), allowing clients to connect directly to specific backends rather than through a single virtual IP. That is why the correct answer is A (Headless Service).
Headless Services are created by setting spec.clusterIP: None. When you do this, kube-proxy does not program load-balancing rules for a virtual IP because there isn’t one. Instead, service discovery is handled via DNS records that point to the actual endpoints. This behavior is especially important for stateful or identity-sensitive systems where clients must talk to a particular replica (for example, databases, leader/follower clusters, or StatefulSet members).
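For illustration, a minimal headless Service manifest might look like the sketch below (the name, label, and port are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: db-headless        # hypothetical name
    spec:
      clusterIP: None          # this is what makes the Service headless
      selector:
        app: db                # selects the backend Pods
      ports:
        - port: 5432

A DNS lookup of db-headless.<namespace>.svc.cluster.local then returns the individual Pod IPs instead of a single virtual IP.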
This is also why headless Services pair naturally with StatefulSets. StatefulSets provide stable network identities (pod-0, pod-1, etc.) and stable DNS names. The headless Service provides the DNS domain that resolves each Pod’s stable hostname to its IP, enabling peer discovery and consistent addressing even as Pods move between nodes.
The other options are distractors: “Nodeless,” “IPLess,” and “Specless” are not Kubernetes Service types. In the core API, the Service “types” are things like ClusterIP, NodePort, LoadBalancer, and ExternalName; “headless” is a behavioral mode achieved through the ClusterIP field.
In short: a headless Service removes the virtual IP abstraction and exposes endpoint-level discovery. It’s a deliberate design choice when load-balancing is not desired or when the application itself handles routing, membership, or sharding.
=========
How many different Kubernetes service types can you define?
2
3
4
5
Kubernetes defines four primary Service types, which is why C (4) is correct. The commonly recognized Service spec.type values are:
ClusterIP: The default type. Exposes the Service on an internal virtual IP reachable only within the cluster. This supports typical east-west traffic between workloads.
NodePort: Exposes the Service on a static port on each node. Traffic sent to <NodeIP>:<NodePort> on any node is forwarded to the Service's backend Pods, which makes it useful for simple external access or bare-metal environments.
LoadBalancer: Integrates with a cloud provider (or load balancer implementation) to provision an external load balancer and route traffic to the Service. This is common in managed Kubernetes.
ExternalName: Maps the Service name to an external DNS name via a CNAME record, allowing in-cluster clients to use a consistent Service DNS name to reach an external dependency.
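As a sketch, the type is simply a field on the Service spec; the selector and ports stay the same (names and ports below are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer    # or ClusterIP (default) / NodePort; ExternalName instead maps to an external DNS name
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 8080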
Some people also talk about “Headless Services,” but headless is not a separate type; it’s a behavior achieved by setting clusterIP: None. Headless Services still use the Service API object but change DNS and virtual-IP behavior to return endpoint IPs directly rather than a ClusterIP. That’s why the canonical count of “Service types” is four.
This question tests understanding of the Service abstraction: Service type controls how a stable service identity is exposed (internal VIP, node port, external LB, or DNS alias), while selectors/endpoints control where traffic goes (the backend Pods). Different environments will favor different types: ClusterIP for internal microservices, LoadBalancer for external exposure in cloud, NodePort for bare-metal or simple access, ExternalName for bridging to outside services.
Therefore, the verified answer is C (4).
=========
Which component of the node is responsible for running workloads?
The kubelet.
The kube-proxy.
The kube-apiserver.
The container runtime.
The verified correct answer is D (the container runtime). On a Kubernetes node, the container runtime (such as containerd or CRI-O) is the component that actually executes containers—it creates container processes, manages their lifecycle, pulls images, and interacts with the underlying OS primitives (namespaces, cgroups) through an OCI runtime like runc. In that direct sense, the runtime is what “runs workloads.”
It’s important to distinguish responsibilities. The kubelet (A) is the node agent that orchestrates what should run on the node: it watches the API server for Pods assigned to the node and then asks the runtime to start/stop containers accordingly. Kubelet is essential for node management, but it does not itself execute containers; it delegates execution to the runtime via CRI. kube-proxy (B) handles Service traffic routing rules (or is replaced by other dataplanes) and does not run containers. kube-apiserver (C) is a control plane component that stores and serves cluster state; it is not a node workload runner.
So, in the execution chain: scheduler assigns Pod → kubelet sees Pod assigned → kubelet calls runtime via CRI → runtime launches containers. When troubleshooting “containers won’t start,” you often inspect kubelet logs and runtime logs because the runtime is the component that can fail image pulls, sandbox creation, or container start operations.
Therefore, the best answer to "which node component is responsible for running workloads" is the container runtime, option D.
=========
Which of the following are tasks performed by a container orchestration tool?
Schedule, scale, and manage the health of containers.
Create images, scale, and manage the health of containers.
Debug applications, and manage the health of containers.
Store images, scale, and manage the health of containers.
A container orchestration tool (like Kubernetes) is responsible for scheduling, scaling, and health management of workloads, making A correct. Orchestration sits above individual containers and focuses on running applications reliably across a fleet of machines. Scheduling means deciding which node should run a workload based on resource requests, constraints, affinities, taints/tolerations, and current cluster state. Scaling means changing the number of running instances (replicas) to meet demand (manually or automatically through autoscalers). Health management includes monitoring whether containers and Pods are alive and ready, replacing failed instances, and maintaining the declared desired state.
Options B and D include “create images” and “store images,” which are not orchestration responsibilities. Image creation is a CI/build responsibility (Docker/BuildKit/build systems), and image storage is a container registry responsibility (Harbor, ECR, GCR, Docker Hub, etc.). Kubernetes consumes images from registries but does not build or store them. Option C includes “debug applications,” which is not a core orchestration function. While Kubernetes provides tools that help debugging (logs, exec, events), debugging is a human/operator activity rather than the orchestrator’s fundamental responsibility.
In Kubernetes specifically, these orchestration tasks are implemented through controllers and control loops: Deployments/ReplicaSets manage replica counts and rollouts, kube-scheduler assigns Pods to nodes, kubelet ensures containers run, and probes plus controller logic replace unhealthy replicas. This is exactly what makes Kubernetes valuable at scale: instead of manually starting/stopping containers on individual hosts, you declare your intent and let the orchestration system continually reconcile reality to match. That combination—placement + elasticity + self-healing—is the core of container orchestration, matching option A precisely.
=========
What Kubernetes control plane component exposes the programmatic interface used to create, manage and interact with the Kubernetes objects?
kube-controller-manager
kube-proxy
kube-apiserver
etcd
The kube-apiserver is the front door of the Kubernetes control plane and exposes the programmatic interface used to create, read, update, delete, and watch Kubernetes objects—so C is correct. Every interaction with cluster state ultimately goes through the Kubernetes API. Tools like kubectl, client libraries, GitOps controllers, operators, and core control plane components (scheduler and controllers) all communicate with the API server to submit desired state and to observe current state.
The API server is responsible for handling authentication (who are you?), authorization (what are you allowed to do?), and admission control (should this request be allowed and possibly mutated/validated?). After a request passes these gates, the API server persists the object’s desired state to etcd (the backing datastore) and returns a response. The API server also provides a watch mechanism so controllers can react to changes efficiently, enabling Kubernetes’ reconciliation model.
It’s important to distinguish this from the other options. etcd stores cluster data but does not expose the cluster’s primary user-facing API; it’s an internal datastore. kube-controller-manager runs control loops (controllers) that continuously reconcile resources (like Deployments, Nodes, Jobs) but it consumes the API rather than exposing it. kube-proxy is a node-level component implementing Service networking rules and is unrelated to the control-plane API endpoint.
Because Kubernetes is “API-driven,” the kube-apiserver is central: if it is unavailable, you cannot create workloads, update configurations, or even reliably observe cluster state. This is why high availability architectures prioritize multiple API server instances behind a load balancer, and why securing the API server (RBAC, TLS, audit) is a primary operational concern.
=========
In a cloud native world, what does the IaC abbreviation stand for?
Infrastructure and Code
Infrastructure as Code
Infrastructure above Code
Infrastructure across Code
IaC stands for Infrastructure as Code, which is option B. In cloud native environments, IaC is a core operational practice: infrastructure (networks, clusters, load balancers, IAM roles, storage classes, DNS records, and more) is defined using code-like, declarative configuration rather than manual, click-driven changes. This approach mirrors Kubernetes’ own declarative model—where you define desired state in manifests and controllers reconcile the cluster to match.
IaC improves reliability and velocity because it makes infrastructure repeatable, version-controlled, reviewable, and testable. Teams can store infrastructure definitions in Git, use pull requests for change review, and run automated checks to validate formatting, policies, and safety constraints. If an environment must be recreated (disaster recovery, test environments, regional expansion), IaC enables consistent reproduction with fewer human errors.
In Kubernetes-centric workflows, IaC often covers both the base platform and the workloads layered on top. For example, provisioning might include the Kubernetes control plane, node pools, networking, and identity integration, while Kubernetes manifests (or Helm/Kustomize) define Deployments, Services, RBAC, Ingress, and storage resources. GitOps extends this further by continuously reconciling cluster configuration from a Git source of truth.
The incorrect options (Infrastructure and Code / above / across) are not standard terms. The key idea is “infrastructure treated like software”: changes are made through code commits, go through CI checks, and are rolled out in controlled ways. This aligns with cloud native goals: faster iteration, safer operations, and easier auditing. In short, IaC is the operational backbone that makes Kubernetes and cloud platforms manageable at scale, enabling consistent environments and reducing configuration drift.
=========
What is an ephemeral container?
A specialized container that runs as root for infosec applications.
A specialized container that runs temporarily in an existing Pod.
A specialized container that extends and enhances the main container in a Pod.
A specialized container that runs before the app container in a Pod.
B is correct: an ephemeral container is a temporary container you can add to an existing Pod for troubleshooting and debugging without restarting the Pod. This capability is especially useful when a running container image is minimal (distroless) and lacks debugging tools like sh, curl, or ps. Instead of rebuilding the workload image or disrupting the Pod, you attach an ephemeral container that includes the tools you need, then inspect processes, networking, filesystem mounts, and runtime behavior.
Ephemeral containers are not part of the original Pod spec the same way normal containers are. They are added via a dedicated subresource and are generally not restarted automatically like regular containers. They are meant for interactive investigation, not for ongoing workload functionality.
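As an illustrative sketch (the Pod name, image, and target container name are placeholders), kubectl debug is the usual way to attach an ephemeral container:

    kubectl debug -it my-app-pod --image=busybox:1.36 --target=app

The --target flag shares the process namespace with the named app container so you can inspect its processes; even without it, the ephemeral container shares the Pod's network namespace.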
Why the other options are incorrect:
D describes init containers, which run before app containers start and are used for setup tasks.
C resembles the “sidecar” concept (a supporting container that runs alongside the main container), but sidecars are normal containers defined in the Pod spec, not ephemeral containers.
A is not a definition; ephemeral containers are not “root by design” (they can run with various security contexts depending on policy), and they aren’t limited to infosec use cases.
In Kubernetes operations, ephemeral containers complement kubectl exec and logs. If the target container is crash-looping or lacks a shell, exec may not help; adding an ephemeral container provides a safe and Kubernetes-native debugging path. So, the accurate definition is B.
=========
Which command provides information about the field replicas within the spec resource of a deployment object?
kubectl get deployment.spec.replicas
kubectl explain deployment.spec.replicas
kubectl describe deployment.spec.replicas
kubectl explain deployment --spec.replicas
The correct command to get field-level schema information about spec.replicas in a Deployment is kubectl explain deployment.spec.replicas, so B is correct. kubectl explain is designed to retrieve documentation for resource fields directly from Kubernetes API discovery and OpenAPI schemas. When you use kubectl explain deployment.spec.replicas, kubectl shows what the field means, its type, and any relevant notes—exactly what “provides information about the field” implies.
This differs from kubectl get and kubectl describe. kubectl get is for retrieving actual objects or listing resources; it does not accept dot-paths like deployment.spec.replicas as a normal resource argument. You can use JSONPath or custom-columns with kubectl get deployment to read the live value of spec.replicas, but that returns data from an object rather than documentation about the field. kubectl describe prints a human-readable summary of an object's state and events, not field schema docs.
Option D is not valid syntax: kubectl explain deployment --spec.replicas is not how kubectl explain accepts nested field references. The correct pattern is positional dot notation: kubectl explain <resource>.<field>.<subfield>.
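For example (the Deployment name is a placeholder):

    # field documentation from the API schema
    kubectl explain deployment.spec.replicas

    # the live value on an existing object, which is data rather than docs
    kubectl get deployment my-app -o jsonpath='{.spec.replicas}'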
Understanding spec.replicas matters operationally: it defines the desired number of Pod replicas for a Deployment. The Deployment controller ensures that the corresponding ReplicaSet maintains that count, supporting self-healing if Pods fail. While autoscalers can adjust replicas automatically, the field remains the primary declarative knob. The question is specifically about finding information (schema docs) for that field, which is why kubectl explain deployment.spec.replicas is the verified correct answer.
=========
Which of the following sentences is true about container runtimes in Kubernetes?
If you let iptables see bridged traffic, you don't need a container runtime.
If you enable IPv4 forwarding, you don't need a container runtime.
Container runtimes are deprecated, you must install CRI on each node.
You must install a container runtime on each node to run pods on it.
A Kubernetes node must have a container runtime to run Pods, so D is correct. Kubernetes schedules Pods to nodes, but the actual execution of containers is performed by a runtime such as containerd or CRI-O. The kubelet communicates with that runtime via the Container Runtime Interface (CRI) to pull images, create sandboxes, and start/stop containers. Without a runtime, the node cannot launch container processes, so Pods cannot transition into running state.
Options A and B confuse networking kernel settings with runtime requirements. iptables bridged traffic visibility and IPv4 forwarding can be relevant for node networking, but they do not replace the need for a container runtime. Networking and container execution are separate layers: you need networking for connectivity, and you need a runtime for running containers.
Option C is also incorrect and muddled. Container runtimes are not deprecated; rather, Kubernetes removed the built-in Docker shim integration from kubelet in favor of CRI-native runtimes. CRI is an interface, not “something you install instead of a runtime.” In practice you install a CRI-compatible runtime (containerd/CRI-O), which implements CRI endpoints that kubelet talks to.
Operationally, the runtime choice affects node behavior: image management, logging integration, performance characteristics, and compatibility. Kubernetes installation guides explicitly list installing a container runtime as a prerequisite for worker nodes. If a cluster has nodes without a properly configured runtime, workloads scheduled there will fail to start (often stuck in ContainerCreating/ImagePullBackOff/Runtime errors).
Therefore, the only fully correct statement is D: each node needs a container runtime to run Pods.
=========
Which statement about Ingress is correct?
Ingress provides a simple way to track network endpoints within a cluster.
Ingress is a Service type like NodePort and ClusterIP.
Ingress is a construct that allows you to specify how a Pod is allowed to communicate.
Ingress exposes routes from outside the cluster to Services in the cluster.
Ingress is the Kubernetes API resource for defining external HTTP/HTTPS routing into the cluster, so D is correct. An Ingress object specifies rules such as hostnames (e.g., app.example.com), URL paths (e.g., /api), and TLS configuration, mapping those routes to Kubernetes Services. This provides Layer 7 routing capabilities beyond what a basic Service offers.
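A minimal sketch of an Ingress manifest (host, path, and Service name are hypothetical):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
    spec:
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /api
                pathType: Prefix
                backend:
                  service:
                    name: api-svc      # existing Service that receives the traffic
                    port:
                      number: 80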
Ingress is not a Service type (so B is wrong). Service types (ClusterIP, NodePort, LoadBalancer, ExternalName) are part of the Service API and operate at Layer 4. Ingress is a separate API object that depends on an Ingress Controller to actually implement routing. The controller watches Ingress resources and configures a reverse proxy/load balancer (like NGINX, HAProxy, or a cloud load balancer integration) to enforce the desired routing. Without an Ingress Controller, creating an Ingress object alone will not route traffic.
Option A describes endpoint tracking (that’s closer to Endpoints/EndpointSlice). Option C describes NetworkPolicy, which controls allowed network flows between Pods/namespaces. Ingress is about exposing and routing incoming application traffic from outside the cluster to internal Services.
So the verified correct statement is D: Ingress exposes routes from outside the cluster to Services in the cluster.
=========
Which Kubernetes resource uses the immutable: true boolean field?
Deployment
Pod
ConfigMap
ReplicaSet
The immutable: true field is supported by ConfigMap (and also by Secrets, though Secret is not in the options), so C is correct. When a ConfigMap is marked immutable, its data can no longer be changed after creation. This is useful for protecting configuration from accidental modification and for improving cluster performance by reducing watch/update churn on frequently referenced configuration objects.
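A sketch of an immutable ConfigMap (name and keys are placeholders):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config-v1
    data:
      LOG_LEVEL: info
    immutable: true      # data can no longer be changed after creation

Once created, attempts to update the data are rejected; to change configuration you create a new object (for example app-config-v2) and point the workload at it.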
In Kubernetes, ConfigMaps store non-sensitive configuration as key-value pairs. They can be consumed by Pods as environment variables, command-line arguments, or mounted files in volumes. Without immutability, ConfigMap updates can trigger complex runtime behaviors: for example, file-mounted ConfigMap updates can eventually reflect in the volume (with some delay), but environment variables do not update automatically in running Pods. This can cause confusion and configuration drift between expected and actual behavior. Marking a ConfigMap immutable makes the configuration stable and encourages explicit rollout strategies (create a new ConfigMap with a new name and update the Pod template), which is generally more reliable for production delivery.
Why the other options are wrong: Deployments, Pods, and ReplicaSets do not use an immutable: true field as a standard top-level toggle in their API schema for the purpose described. These objects can be updated through the normal API mechanisms, and their updates are part of typical lifecycle operations (rolling updates, scaling, etc.). The immutability concept exists in Kubernetes, but the specific immutable boolean in this context is a recognized field for ConfigMap (and Secret) objects.
Operationally, immutable ConfigMaps help enforce safer practices: instead of editing live configuration in place, teams adopt versioned configuration artifacts and controlled rollouts via Deployments. This fits cloud-native principles of repeatability and reducing accidental production changes.
=========
A Kubernetes Pod is returning a CrashLoopBackOff status. What is the most likely reason for this behavior?
There are insufficient resources allocated for the Pod.
The application inside the container crashed after starting.
The container’s image is missing or cannot be pulled.
The Pod is unable to communicate with the Kubernetes API server.
A CrashLoopBackOff status in Kubernetes indicates that a container within a Pod is repeatedly starting, crashing, and being restarted by Kubernetes. This behavior occurs when the container process exits shortly after starting and Kubernetes applies an increasing back-off delay between restart attempts to prevent excessive restarts.
Option B is the correct answer because CrashLoopBackOff most commonly occurs when the application inside the container crashes after it has started. Typical causes include application runtime errors, misconfigured environment variables, missing configuration files, invalid command or entrypoint definitions, failed dependencies, or unhandled exceptions during application startup. Kubernetes itself is functioning as expected by restarting the container according to the Pod’s restart policy.
Option A is incorrect because insufficient resources usually lead to different symptoms. For example, if a container exceeds its memory limit, it may be terminated with an OOMKilled status rather than repeatedly crashing immediately. While resource constraints can indirectly cause crashes, they are not the defining reason for a CrashLoopBackOff state.
Option C is incorrect because an image that cannot be pulled results in statuses such as ImagePullBackOff or ErrImagePull, not CrashLoopBackOff. In those cases, the container never successfully starts.
Option D is incorrect because Pods do not need to communicate directly with the Kubernetes API server for normal application execution. Issues with API server communication affect control plane components or scheduling, not container restart behavior.
From a troubleshooting perspective, Kubernetes documentation recommends inspecting container logs using kubectl logs and reviewing Pod events with kubectl describe pod to identify the root cause of the crash. Fixing the underlying application error typically resolves the CrashLoopBackOff condition.
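For example (the Pod name is a placeholder):

    kubectl logs my-pod --previous     # logs from the last crashed container instance
    kubectl describe pod my-pod        # events, exit codes, and restart counts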
In summary, CrashLoopBackOff is a protective mechanism that signals a repeatedly failing container process. The most likely and verified cause is that the application inside the container is crashing after startup, making option B the correct answer.
=========
What is a Dockerfile?
A bash script that is used to automatically build a docker image.
A config file that defines which image registry a container should be pushed to.
A text file that contains all the commands a user could call on the command line to assemble an image.
An image layer created by a running container stored on the host.
A Dockerfile is a text file that contains a sequence of instructions used to build a container image, so C is correct. These instructions include choosing a base image (FROM), copying files (COPY/ADD), installing dependencies (RUN), setting environment variables (ENV), defining working directories (WORKDIR), exposing ports (EXPOSE), and specifying the default startup command (CMD/ENTRYPOINT). When you run docker build (or compatible tools like BuildKit), the builder executes these instructions to produce an image composed of immutable layers.
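A small illustrative Dockerfile (base image, files, and command are hypothetical):

    FROM python:3.12-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    EXPOSE 8080
    CMD ["python", "app.py"]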
In cloud-native application delivery, Dockerfiles (more generally, OCI image build definitions) are a key step in the supply chain. The resulting image artifact is what Kubernetes runs in Pods. Best practices include using minimal base images, pinning versions, avoiding embedding secrets, and using multi-stage builds to keep runtime images small. These practices improve security and performance, and make delivery pipelines more reliable.
Option A is incorrect because a Dockerfile is not a bash script, even though it can run shell commands through RUN. Option B is incorrect because registry destinations are handled by tooling and tagging/push commands (or CI pipeline configuration), not by the Dockerfile itself. Option D is incorrect because an image layer created by a running container is more closely related to container filesystem changes and commits; a Dockerfile is the build recipe, not a runtime-generated layer.
Although the question uses “Dockerfile,” the concept maps well to OCI-based container image creation generally: you define a reproducible build recipe that produces an immutable image artifact. That artifact is then versioned, scanned, signed, stored in a registry, and deployed to Kubernetes through manifests/Helm/GitOps. Therefore, C is the correct and verified definition.
=========
Which command will list the resource types that exist within a cluster?
kubectl api-resources
kubectl get namespaces
kubectl api-versions
curl https://kubectrl/namespaces
To list the resource types available in a Kubernetes cluster, you use kubectl api-resources, so A is correct. This command queries the API server’s discovery endpoints and prints a table of resources (kinds) that the cluster knows about, including their names, shortnames, API group/version, whether they are namespaced, and supported verbs. It’s extremely useful for learning what objects exist in a cluster—especially when CRDs are installed, because those custom resource types will also appear in the output.
Option C (kubectl api-versions) lists available API versions (group/version strings like v1, apps/v1, batch/v1) but does not directly list the resource kinds/types. It’s related discovery information but answers a different question. Option B (kubectl get namespaces) lists namespaces, not resource types. Option D is invalid (typo in URL and conceptually not the Kubernetes discovery mechanism).
Practically, kubectl api-resources is used during troubleshooting and exploration: you might use it to confirm whether a CRD is installed (e.g., certificates.cert-manager.io kinds), to check whether a resource is namespaced, or to find the correct kind name for kubectl get. It also helps understand what your cluster supports at the API layer (including aggregated APIs).
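For example:

    kubectl api-resources                     # every resource type the cluster knows about
    kubectl api-resources --namespaced=true   # only namespaced kinds
    kubectl api-resources --api-group=apps    # kinds in a specific API group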
So, the verified correct command to list resource types that exist in the cluster is A: kubectl api-resources.
=========
What is the Kubernetes abstraction that allows groups of Pods to be exposed inside a Kubernetes cluster?
Deployment
Daemon
Unit
Service
In Kubernetes, Pods are ephemeral by design. They can be created, destroyed, rescheduled, or replaced at any time, and each Pod receives its own IP address. Because of this dynamic nature, directly relying on Pod IPs for communication is unreliable. To solve this problem, Kubernetes provides the Service abstraction, which allows a stable way to expose and access a group of Pods inside (and sometimes outside) the cluster.
A Service defines a logical set of Pods using label selectors and provides a consistent virtual IP address and DNS name for accessing them. Even if individual Pods fail or are replaced, the Service remains stable, and traffic is automatically routed to healthy Pods that match the selector. This makes Services a fundamental building block for internal communication between applications within a Kubernetes cluster.
Deployments (Option A) are responsible for managing the lifecycle of Pods, including scaling, rolling updates, and self-healing. However, Deployments do not provide networking or exposure capabilities. They control how Pods run, not how they are accessed.
Option B, “Daemon,” is not a valid Kubernetes resource. The correct resource is a DaemonSet, which ensures that a copy of a Pod runs on each (or selected) node in the cluster. DaemonSets are used for node-level workloads like logging or monitoring agents, not for exposing Pods.
Option C, “Unit,” is not a Kubernetes concept at all and does not exist in Kubernetes architecture.
Services can be configured in different ways depending on access requirements, such as ClusterIP for internal access, NodePort or LoadBalancer for external access, and Headless Services for direct Pod discovery. Regardless of type, the core purpose of a Service is to expose a group of Pods in a stable and reliable way.
Therefore, the correct and verified answer is Option D: Service, which is the Kubernetes abstraction specifically designed to expose groups of Pods within a cluster.
=========
What is the role of a NetworkPolicy in Kubernetes?
The ability to encrypt and obscure all traffic.
The ability to classify the Pods as isolated and non-isolated.
The ability to prevent loopback or incoming host traffic.
The ability to log network security events.
A Kubernetes NetworkPolicy defines which traffic is allowed to and from Pods by selecting Pods and specifying ingress/egress rules. A key conceptual effect is that it can make Pods “isolated” (default deny except what is allowed) versus “non-isolated” (default allow). This aligns best with option B, so B is correct.
By default, Kubernetes networking is permissive: Pods can typically talk to any other Pod. When you apply a NetworkPolicy that selects a set of Pods, those selected Pods become “isolated” for the direction(s) covered by the policy (ingress and/or egress). That means only traffic explicitly allowed by the policy is permitted; everything else is denied (again, for the selected Pods and direction). This classification concept—isolated vs non-isolated—is a common way the Kubernetes documentation explains NetworkPolicy behavior.
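A minimal sketch (labels are hypothetical): this policy isolates Pods labeled app=db for ingress and allows traffic only from Pods labeled app=web.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-web-to-db
    spec:
      podSelector:
        matchLabels:
          app: db            # the Pods that become isolated
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: web   # the only Pods allowed to connect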
Option A is incorrect: NetworkPolicy does not encrypt or obscure traffic; encryption is typically handled by mTLS via a service mesh or application-layer TLS. Option C is not the primary role; loopback and host traffic handling depend on the network plugin and node configuration, and NetworkPolicy is not a "prevent loopback" mechanism. Option D is incorrect because NetworkPolicy is not a logging system; while some CNIs can produce logs about policy decisions, logging is not NetworkPolicy's role in the API.
One critical Kubernetes detail: NetworkPolicy enforcement is performed by the CNI/network plugin. If your CNI doesn’t implement NetworkPolicy, creating these objects won’t change runtime traffic. In CNIs that do support it, NetworkPolicy becomes a foundational security primitive for segmentation and least privilege: restricting database access to app Pods only, isolating namespaces, and reducing lateral movement risk.
So, in the language of the provided answers, NetworkPolicy’s role is best captured as the ability to classify Pods into isolated/non-isolated by applying traffic-allow rules—option B.
=========
The cloud native architecture centered around microservices provides a strong system that ensures ______________.
fallback
resiliency
failover
high reachability
The best answer is B (resiliency). A microservices-centered cloud-native architecture is designed to build systems that continue to operate effectively under change and failure. “Resiliency” is the umbrella concept: the ability to tolerate faults, recover from disruptions, and maintain acceptable service levels through redundancy, isolation, and automated recovery.
Microservices help resiliency by reducing blast radius. Instead of one monolith where a single defect can take down the entire application, microservices separate concerns into independently deployable components. Combined with Kubernetes, you get resiliency mechanisms such as replication (multiple Pod replicas), self-healing (restart and reschedule on failure), rolling updates, health probes, and service discovery/load balancing. These enable the platform to detect and replace failing instances automatically, and to keep traffic flowing to healthy backends.
Options C (failover) and A (fallback) are resiliency techniques but are narrower terms. Failover usually refers to switching to a standby component when a primary fails; fallback often refers to degraded behavior (cached responses, reduced features). Both can exist in microservice systems, but the broader architectural guarantee microservices aim to support is resiliency overall. Option D (“high reachability”) is not the standard term used in cloud-native design and doesn’t capture the intent as precisely as resiliency.
In practice, achieving resiliency also requires good observability and disciplined delivery: monitoring/alerts, tracing across service boundaries, circuit breakers/timeouts/retries, and progressive delivery patterns. Kubernetes provides platform primitives, but resilient microservices also need careful API design and failure-mode thinking.
So the intended and verified completion is resiliency, option B.
=========
In Kubernetes, which abstraction defines a logical set of Pods and a policy by which to access them?
Service Account
NetworkPolicy
Service
Custom Resource Definition
The correct answer is C: Service. A Kubernetes Service is an abstraction that provides stable access to a logical set of Pods. Pods are ephemeral: they can be rescheduled, recreated, and scaled, which changes their IP addresses over time. A Service solves this by providing a stable identity—typically a virtual IP (ClusterIP) and a DNS name—and a traffic-routing policy that directs requests to the current set of backend Pods.
Services commonly select Pods using labels via a selector (e.g., app=web). Kubernetes then maintains the backend endpoint list (Endpoints/EndpointSlices). The cluster networking layer routes traffic sent to the Service IP/port to one of the Pod endpoints, enabling load distribution across replicas. This is fundamental to microservices architectures: clients call the Service name, not individual Pods.
Why the other options are incorrect:
A ServiceAccount is an identity for Pods to authenticate to the Kubernetes API; it doesn’t define a set of Pods nor traffic access policy.
A NetworkPolicy defines allowed network flows (who can talk to whom) but does not provide stable addressing or load-balanced access to Pods. It is a security policy, not an exposure abstraction.
A CustomResourceDefinition extends the Kubernetes API with new resource types; it’s unrelated to service discovery and traffic routing for a set of Pods.
Understanding Services is core Kubernetes fundamentals: they decouple backend Pod churn from client connectivity. Services also integrate with different exposure patterns via type (ClusterIP, NodePort, LoadBalancer, ExternalName) and can be paired with Ingress/Gateway for HTTP routing. But the essential definition in the question—“logical set of Pods and a policy to access them”—is exactly the textbook description of a Service.
Therefore, the verified correct answer is C.
=========
Which statement about the Kubernetes network model is correct?
Pods can only communicate with Pods exposed via a Service.
Pods can communicate with all Pods without NAT.
The Pod IP is only visible inside a Pod.
The Service IP is used for the communication between Services.
Kubernetes’ networking model assumes that every Pod has its own IP address and that Pods can communicate with other Pods across nodes without requiring network address translation (NAT). That makes B correct. This is one of Kubernetes’ core design assumptions and is typically implemented via CNI plugins that provide flat, routable Pod networking (or equivalent behavior using encapsulation/routing).
This model matters because scheduling is dynamic. The scheduler can place Pods anywhere in the cluster, and applications should not need to know whether a peer is on the same node or a different node. With the Kubernetes network model, Pod-to-Pod communication works uniformly: a Pod can reach any other Pod IP directly, and nodes can reach Pods as well. Services and DNS add stable naming and load balancing, but direct Pod connectivity is part of the baseline model.
Option A is incorrect because Pods can communicate directly using Pod IPs even without Services (subject to NetworkPolicies and routing). Services are abstractions for stable access and load balancing; they are not the only way Pods can communicate. Option C is incorrect because Pod IPs are not limited to visibility “inside a Pod”; they are routable within the cluster network. Option D is misleading: Services are often used by Pods (clients) to reach a set of Pods (backends). “Service IP used for communication between Services” is not the fundamental model; Services are virtual IPs for reaching workloads, and “Service-to-Service communication” usually means one workload calling another via the target Service name.
A useful way to remember the official model: (1) all Pods can communicate with all other Pods (no NAT), (2) all nodes can communicate with all Pods (no NAT), (3) Pod IPs are unique cluster-wide. This enables consistent microservice connectivity and supports higher-level traffic management layers like Ingress and service meshes.
=========
What is the goal of load balancing?
Automatically measure request performance across instances of an application.
Automatically distribute requests across different versions of an application.
Automatically distribute instances of an application across the cluster.
Automatically distribute requests across instances of an application.
The core goal of load balancing is to distribute incoming requests across multiple instances of a service so that no single instance becomes overloaded and so that the overall service is more available and responsive. That matches option D, which is the correct answer.
In Kubernetes, load balancing commonly appears through the Service abstraction. A Service selects a set of Pods using labels and provides stable access via a virtual IP (ClusterIP) and DNS name. Traffic sent to the Service is then forwarded to one of the healthy backend Pods. This spreads load across replicas and provides resilience: if one Pod fails, it is removed from endpoints (or becomes NotReady) and traffic shifts to remaining replicas. The actual traffic distribution mechanism depends on the networking implementation (kube-proxy using iptables/IPVS or an eBPF dataplane), but the intent remains consistent: distribute requests across multiple backends.
Option A describes monitoring/observability, not load balancing. Option B describes progressive delivery patterns like canary or A/B routing; that can be implemented with advanced routing layers (Ingress controllers, service meshes), but it’s not the general definition of load balancing. Option C describes scheduling/placement of instances (Pods) across cluster nodes, which is the role of the scheduler and controllers, not load balancing.
In cloud environments, load balancing may also be implemented by external load balancers (cloud LBs) in front of the cluster, then forwarded to NodePorts or ingress endpoints, and again balanced internally to Pods. At each layer, the objective is the same: spread request traffic across multiple service instances to improve performance and availability.
=========
Which of the following options include resources cleaned by the Kubernetes garbage collection mechanism?
Stale or expired CertificateSigningRequests (CSRs) and old deployments.
Nodes deleted by a cloud controller manager and obsolete logs from the kubelet.
Unused container and container images, and obsolete logs from the kubelet.
Terminated pods, completed jobs, and objects without owner references.
Kubernetes garbage collection (GC) is about cleaning up API objects and related resources that are no longer needed, so the correct answer is D. Two big categories it targets are (1) objects that have finished their lifecycle (like terminated Pods and completed Jobs, depending on controllers and TTL policies), and (2) “dangling” objects that are no longer referenced properly—often described as objects without owner references (or where owners are gone), which can happen when a higher-level controller is deleted or when dependent resources are left behind.
A key Kubernetes concept here is OwnerReferences: many resources are created “owned” by a controller (e.g., a ReplicaSet owned by a Deployment, Pods owned by a ReplicaSet). When an owning object is deleted, Kubernetes’ garbage collector can remove dependent objects based on deletion propagation policies (foreground/background/orphan). This prevents resource leaks and keeps the cluster tidy and performant.
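As a sketch, a Pod created by a ReplicaSet carries metadata like the following (the name and UID are placeholders), which is what the garbage collector uses to find and delete dependents:

    metadata:
      ownerReferences:
        - apiVersion: apps/v1
          kind: ReplicaSet
          name: web-5d4b9c6f7d
          uid: <uid-of-the-replicaset>
          controller: true
          blockOwnerDeletion: true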
The other options are incorrect because they refer to cleanup tasks outside Kubernetes GC’s scope. Kubelet logs (B/C) are node-level files and log rotation is handled by node/runtime configuration, not the Kubernetes garbage collector. Unused container images (C) are managed by the container runtime’s image GC and kubelet disk pressure management, not the Kubernetes API GC. Nodes deleted by a cloud controller (B) aren’t “garbage collected” in the same sense; node lifecycle is handled by controllers and cloud integrations, but not as a generic GC cleanup category like ownerRef-based object deletion.
So, when the question asks specifically about “resources cleaned by Kubernetes garbage collection,” it’s pointing to Kubernetes object lifecycle cleanup: terminated Pods, completed Jobs, and orphaned objects—exactly what option D states.
=========
What is a Pod?
A networked application within Kubernetes.
A storage volume within Kubernetes.
A single container within Kubernetes.
A group of one or more containers within Kubernetes.
A Pod is the smallest deployable/schedulable unit in Kubernetes and consists of a group of one or more containers that are deployed together on the same node—so D is correct. The key idea is that Kubernetes schedules Pods, not individual containers. Containers in the same Pod share important runtime context: they share the same network namespace (one Pod IP and port space) and can share storage volumes defined at the Pod level. This is why a Pod is often described as a “logical host” for its containers.
Most Pods run a single container, but multi-container Pods are common for sidecar patterns. For example, an application container might run alongside a service mesh proxy sidecar, a log shipper, or a config reloader. Because these containers share localhost networking, they can communicate efficiently without exposing extra network endpoints. Because they can share volumes, one container can produce files that another consumes (for example, writing logs to a shared volume).
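An illustrative two-container Pod sharing a volume (names and images are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-log-shipper
    spec:
      volumes:
        - name: logs
          emptyDir: {}               # shared scratch space for both containers
      containers:
        - name: app
          image: nginx:1.27
          volumeMounts:
            - name: logs
              mountPath: /var/log/app
        - name: log-shipper
          image: busybox:1.36
          command: ["sh", "-c", "tail -F /var/log/app/access.log"]
          volumeMounts:
            - name: logs
              mountPath: /var/log/app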
Options A and B are incorrect because a Pod is not “an application” abstraction nor is it a storage volume. Pods can host applications, but they are the execution unit for containers rather than the application concept itself. Option C is incorrect because a Pod is not limited to a single container; “one or more containers” is fundamental to the Pod definition.
Operationally, understanding Pods is essential because many Kubernetes behaviors key off Pods: Services select Pods (typically by labels), autoscalers scale Pods (replica counts), probes determine Pod readiness/liveness, and scheduling constraints place Pods on nodes. When a Pod is replaced (for example during a Deployment rollout), a new Pod is created with a new UID and potentially a new IP—reinforcing why Services exist to provide stable access.
Therefore, the verified correct answer is D: a Pod is a group of one or more containers within Kubernetes.
=========
Which of the following workload requires a headless Service while deploying into the namespace?
StatefulSet
CronJob
Deployment
DaemonSet
A StatefulSet commonly requires a headless Service, so A is the correct answer. In Kubernetes, StatefulSets are designed for workloads that need stable identities, stable network names, and often stable storage per replica. To support that stable identity model, Kubernetes typically uses a headless Service (spec.clusterIP: None) to provide DNS records that map directly to each Pod, rather than load-balancing behind a single virtual ClusterIP.
With a headless Service, DNS queries return individual endpoint records (the Pod IPs) so that each StatefulSet Pod can be addressed predictably, such as pod-0.service-name.namespace.svc.cluster.local. This is critical for clustered databases, quorum systems, and leader/follower setups where members must discover and address specific peers. The StatefulSet controller then ensures ordered creation/deletion and preserves identity (pod-0, pod-1, etc.), while the headless Service provides discovery for those stable hostnames.
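A sketch of the linkage (names are placeholders); the StatefulSet points at the headless Service through serviceName:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: db
    spec:
      serviceName: db-headless   # must reference a headless Service (clusterIP: None)
      replicas: 3
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
            - name: postgres
              image: postgres:16

Each replica then gets a stable DNS name such as db-0.db-headless.<namespace>.svc.cluster.local.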
CronJobs run periodic Jobs and don’t require stable DNS identity for multiple replicas. Deployments manage stateless replicas and normally use a standard Service that load-balances across Pods. DaemonSets run one Pod per node, and while they can be exposed by Services, they do not intrinsically require headless discovery.
So while you can use a headless Service for other designs, StatefulSet is the workload type most associated with “requires a headless Service” due to how stable identities and per-Pod addressing work in Kubernetes.
=========
Which authorization mode allows granular control over the operations that different entities can perform on different objects in a Kubernetes cluster?
Webhook Mode Authorization Control
Role Based Access Control
Node Authorization Access Control
Attribute Based Access Control
Role Based Access Control (RBAC) is the standard Kubernetes authorization mode that provides granular control over what users and service accounts can do to which resources, so B is correct. RBAC works by defining Roles (namespaced) and ClusterRoles (cluster-wide) that contain sets of rules. Each rule specifies API groups, resource types, resource names (optional), and allowed verbs such as get, list, watch, create, update, patch, and delete. You then attach these roles to identities using RoleBindings or ClusterRoleBindings.
This gives fine-grained, auditable access control. For example, you can allow a CI service account to create and patch Deployments only in a specific namespace, while restricting it from reading Secrets. You can allow developers to view Pods and logs but prevent them from changing cluster-wide networking resources. This is exactly the “granular control over operations on objects” described by the question.
Why other options are not the best answer: “Webhook mode” is an authorization mechanism where Kubernetes calls an external service to decide authorization. While it can be granular depending on the external system, Kubernetes’ common built-in answer for granular object-level control is RBAC. “Node authorization” is a specialized authorizer for kubelets/nodes to access resources they need; it’s not the general-purpose system for all cluster entities. ABAC (Attribute-Based Access Control) is an older mechanism and is not the primary recommended authorization model; it can be expressive but is less commonly used and not the default best-practice for Kubernetes authorization today.
In Kubernetes security practice, RBAC is typically paired with authentication (certs/OIDC), admission controls, and namespaces to build a defense-in-depth security posture. RBAC policy is also central to least privilege: granting only what is necessary for a workload or user role to function. This reduces blast radius if credentials are compromised.
Therefore, the verified answer is B: Role Based Access Control.
=========
How are ReplicaSets and Deployments related?
Deployments manage ReplicaSets and provide declarative updates to Pods.
ReplicaSets manage stateful applications, Deployments manage stateless applications.
Deployments are runtime instances of ReplicaSets.
ReplicaSets are subsets of Jobs and CronJobs which use imperative Deployments.
In Kubernetes, a Deployment is a higher-level controller that manages ReplicaSets, and ReplicaSets in turn manage Pods. That is exactly what option A states, making it the correct answer.
A ReplicaSet’s job is straightforward: ensure that a specified number of Pod replicas matching a selector are running. It continuously reconciles actual state to desired state by creating new Pods when replicas are missing or removing Pods when there are too many. However, ReplicaSets alone do not provide the richer application rollout lifecycle features most teams need.
A Deployment adds those features by managing ReplicaSets across versions of your Pod template. When you update a Deployment (for example, change the container image tag), Kubernetes creates a new ReplicaSet with the new Pod template and then gradually scales the new ReplicaSet up and the old one down according to the Deployment strategy (RollingUpdate by default). Deployments also maintain rollout history, support rollback (kubectl rollout undo), and allow pause/resume of rollouts. This is why the common guidance is: you almost always create Deployments rather than ReplicaSets directly for stateless apps.
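For example, a typical rollout sequence (names and image tags are placeholders):

    kubectl set image deployment/web app=web:2.0   # creates a new ReplicaSet with the new template
    kubectl rollout status deployment/web          # watch the new ReplicaSet scale up
    kubectl rollout undo deployment/web            # roll back to the previous ReplicaSet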
Option B is incorrect because stateful workloads are typically handled by StatefulSets, not ReplicaSets. Deployments can run stateless apps, but ReplicaSets are also used under Deployments and are not “for stateful only.” Option C is reversed: ReplicaSets are not “instances” of Deployments; Deployments create/manage ReplicaSets. Option D is incorrect because Jobs/CronJobs are separate controllers for run-to-completion workloads and do not define ReplicaSets as subsets.
So the accurate relationship is: Deployment → manages ReplicaSets → which manage Pods, enabling declarative updates and controlled rollouts.
=========
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Flux and ArgoCD
GitOps Toolkit
Linkerd and Istio
Helm and Kustomize
Tools that continuously reconcile cluster state to match a Git repository’s desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
Option B ("GitOps Toolkit") is related, since Flux is built from the GitOps Toolkit components, but the question asks for a "tool" that keeps clusters in sync; the recognized tools in this list are Flux and Argo CD. Option C lists service meshes (traffic/security/telemetry), not deployment synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
=========
What is the resource type used to package sets of containers for scheduling in a cluster?
Pod
ContainerSet
ReplicaSet
Deployment
The Kubernetes resource used to package one or more containers into a schedulable unit is the Pod, so A is correct. Kubernetes schedules Pods onto nodes; it does not schedule individual containers. A Pod represents a single “instance” of an application component and includes one or more containers that share key runtime properties, including the same network namespace (same IP and port space) and the ability to share volumes.
Pods enable common patterns beyond “one container per Pod.” For example, a Pod may include a main application container plus a sidecar container for logging, proxying, or configuration reload. Because these containers share localhost networking and volume mounts, they can coordinate efficiently without requiring external service calls. Kubernetes manages the Pod lifecycle as a unit: the containers in a Pod are started according to container lifecycle rules and are co-located on the same node.
Option B (ContainerSet) is not a standard Kubernetes workload resource. Option C (ReplicaSet) manages a set of Pod replicas, ensuring a desired count is running, but it is not the packaging unit itself. Option D (Deployment) is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, again operating on Pods rather than being the container-packaging unit.
From the scheduling perspective, the PodSpec defines container images, commands, resources, volumes, security context, and placement constraints. The scheduler evaluates these constraints and assigns the Pod to a node. This “Pod as the atomic scheduling unit” is fundamental to Kubernetes architecture and explains why Kubernetes-native concepts (Services, selectors, readiness, autoscaling) all revolve around Pods.
=========
Imagine there is a requirement to run a database backup every day. Which Kubernetes resource could be used to achieve that?
kube-scheduler
CronJob
Task
Job
To run a workload on a repeating schedule (like “every day”), Kubernetes provides CronJob, making B correct. A CronJob creates Jobs according to a cron-formatted schedule, and then each Job creates one or more Pods that run to completion. This is the Kubernetes-native replacement for traditional cron scheduling, but implemented as a declarative resource managed by controllers in the cluster.
For a daily database backup, you’d define a CronJob with a schedule (e.g., "0 2 * * *" for 2:00 AM daily), and specify the Pod template that performs the backup (invokes backup scripts/tools, writes output to durable storage, uploads to object storage, etc.). Kubernetes will then create a Job at each scheduled time. CronJobs also support operational controls like concurrencyPolicy (Allow/Forbid/Replace) to decide what happens if a previous backup is still running, startingDeadlineSeconds to handle missed schedules, and history limits to retain recent successful/failed Job records for debugging.
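A sketch of such a CronJob (schedule, image, and command are placeholders):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: db-backup
    spec:
      schedule: "0 2 * * *"        # 02:00 every day
      concurrencyPolicy: Forbid    # skip a run if the previous backup is still going
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: backup
                  image: example/backup-tool:1.0   # hypothetical image
                  command: ["/bin/sh", "-c", "run-backup.sh"]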
Option D (Job) is close but not sufficient for “every day.” A Job runs a workload until completion once; you would need an external scheduler to create a Job every day. Option A (kube-scheduler) is a control plane component responsible for placing Pods onto nodes and does not schedule recurring tasks. Option C (“Task”) is not a standard Kubernetes workload resource.
This question is fundamentally about mapping a recurring operational requirement (backup cadence) to Kubernetes primitives. The correct design is: CronJob triggers Job creation on a schedule; Job runs Pods to completion. Therefore, the correct answer is B.
=========
In Kubernetes, what is the primary function of a RoleBinding?
To provide a user or group with permissions across all resources at the cluster level.
To assign the permissions of a Role to a user, group, or service account within a namespace.
To enforce namespace network rules by binding policies to Pods running in the namespace.
To create and define a new Role object that contains a specific set of permissions.
In Kubernetes, authorization is managed using Role-Based Access Control (RBAC), which defines what actions identities can perform on which resources. Within this model, a RoleBinding plays a crucial role by connecting permissions to identities, making option B the correct answer.
A Role defines a set of permissions—such as the ability to get, list, create, or delete specific resources—but by itself, a Role does not grant those permissions to anyone. A RoleBinding is required to bind that Role to a specific subject, such as a user, group, or service account. This binding is namespace-scoped, meaning it applies only within the namespace where the RoleBinding is created. As a result, RoleBindings enable fine-grained access control within individual namespaces, which is essential for multi-tenant and least-privilege environments.
When a RoleBinding is created, it references a Role (or a ClusterRole) and assigns its permissions to one or more subjects within that namespace. This allows administrators to reuse existing roles while precisely controlling who can perform certain actions and where. For example, a RoleBinding can grant a service account read-only access to ConfigMaps in a single namespace without affecting access elsewhere in the cluster.
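As an illustrative sketch (the namespace, names, and service account are placeholders), a Role plus the RoleBinding that grants it could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: dev                      # placeholder namespace
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-configmaps
  namespace: dev
subjects:
- kind: ServiceAccount
  name: app-sa                        # placeholder service account
  namespace: dev
roleRef:
  kind: Role
  name: configmap-reader
  apiGroup: rbac.authorization.k8s.io

The Role describes what is allowed; the RoleBinding attaches that permission set to the subject, and only within the dev namespace.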
Option A is incorrect because cluster-wide permissions are granted using a ClusterRoleBinding, not a RoleBinding. Option C is incorrect because network rules are enforced using NetworkPolicies, not RBAC objects. Option D is incorrect because Roles are defined independently and only describe permissions; they do not assign them to identities.
In summary, a RoleBinding’s primary purpose is to assign the permissions defined in a Role to users, groups, or service accounts within a specific namespace. This separation of permission definition (Role) and permission assignment (RoleBinding) is a fundamental principle of Kubernetes RBAC and is clearly documented in Kubernetes authorization architecture.
=========
Which item is a Kubernetes node component?
kube-scheduler
kubectl
kube-proxy
etcd
A Kubernetes node component is a component that runs on worker nodes to support Pods and node-level networking/operations. Among the options, kube-proxy is a node component, so C is correct.
kube-proxy runs on each node and implements parts of the Kubernetes Service networking model. It watches the API server for Service and endpoint updates and then programs node networking rules (iptables/IPVS, or equivalent) so traffic sent to a Service IP/port is forwarded to one of the backend Pod endpoints. This is essential for stable virtual IPs and load distribution across Pods.
Why the other options are not node components:
kube-scheduler is a control plane component; it assigns Pods to nodes but does not run on every node as part of node functionality.
kubectl is a client CLI tool used by humans/automation; it is not a cluster component.
etcd is the control plane datastore; it stores cluster state and is not a per-node workload component.
Operationally, kube-proxy can be replaced by some modern CNI/eBPF dataplanes, but in classic Kubernetes architecture it remains the canonical node-level component for Service rule programming. Understanding which components are node vs control plane is key for troubleshooting: node issues involve kubelet/runtime/kube-proxy/CNI; control plane issues involve API server/scheduler/controller-manager/etcd.
So, the verified node component in this list is kube-proxy (C).
=========
What methods can you use to scale a Deployment?
With kubectl edit deployment exclusively.
With kubectl scale-up deployment exclusively.
With kubectl scale deployment and kubectl edit deployment.
With kubectl scale deployment exclusively.
A Deployment’s replica count is controlled by spec.replicas. You can scale a Deployment by changing that field—either directly editing the object or using kubectl’s scaling helper. Therefore C is correct: you can scale using kubectl scale and also via kubectl edit.
kubectl scale deployment <deployment-name> --replicas=<count>
kubectl edit deployment <deployment-name>
Option B is invalid because kubectl scale-up deployment is not a standard kubectl command. Option A is incorrect because kubectl edit is not the only method; scaling is commonly done with kubectl scale. Option D is also incorrect because while kubectl scale is a primary method, kubectl edit is also a valid method to change replicas.
In production, you often scale with autoscalers (HPA/VPA), but the question is asking about kubectl methods. The key Kubernetes concept is that scaling is achieved by updating desired state (spec.replicas), and controllers reconcile Pods to match.
=========
What is the default deployment strategy in Kubernetes?
Rolling update
Blue/Green deployment
Canary deployment
Recreate deployment
For Kubernetes Deployments, the default update strategy is RollingUpdate, which corresponds to “Rolling update” in option A. Rolling updates replace old Pods with new Pods gradually, aiming to maintain availability during the rollout. Kubernetes does this by creating a new ReplicaSet for the updated Pod template and then scaling the new ReplicaSet up while scaling the old one down.
The pace and safety of a rolling update are controlled by parameters like maxUnavailable and maxSurge. maxUnavailable limits how many replicas can be unavailable during the update, protecting availability. maxSurge controls how many extra replicas can be created temporarily above the desired count, helping speed up rollouts while maintaining capacity. If readiness probes fail, Kubernetes will pause progression because new Pods aren’t becoming Ready, helping prevent a bad version from fully replacing a good one.
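A minimal sketch showing these fields on a Deployment (names, labels, and the image are placeholders; RollingUpdate is already the default and is spelled out here only for clarity):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate        # the default strategy, shown explicitly
    rollingUpdate:
      maxUnavailable: 1        # at most one replica below the desired count
      maxSurge: 1              # at most one extra replica during the rollout
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # placeholder image and tag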
Options B (Blue/Green) and C (Canary) are popular progressive delivery patterns, but they are not the default built-in Deployment strategy. They are typically implemented using additional tooling (service mesh routing, traffic splitting controllers, or specialized rollout controllers) or by operating multiple Deployments/Services. Option D (Recreate) is a valid strategy but not the default; it terminates all old Pods before creating new ones, causing downtime unless you have external buffering or multi-tier redundancy.
From an application delivery perspective, RollingUpdate aligns with Kubernetes’ declarative model: you update the desired Pod template and let the controller converge safely. kubectl rollout status is commonly used to monitor progress. Rollbacks are also supported because the Deployment tracks history. Therefore, the verified correct answer is A: Rolling update.
=========
How does dynamic storage provisioning work?
A user requests dynamically provisioned storage by including an existing StorageClass in their PersistentVolumeClaim.
An administrator creates a StorageClass and includes it in their Pod YAML definition file without creating a PersistentVolumeClaim.
A Pod requests dynamically provisioned storage by including a StorageClass and the Pod name in their PersistentVolumeClaim.
An administrator creates a PersistentVolume and includes the name of the PersistentVolume in their Pod YAML definition file.
Dynamic provisioning is the Kubernetes mechanism where storage is created on-demand when a user creates a PersistentVolumeClaim (PVC) that references a StorageClass, so A is correct. In this model, the user does not need to pre-create a PersistentVolume (PV). Instead, the StorageClass points to a provisioner (typically a CSI driver) that knows how to create a volume in the underlying storage system (cloud disk, SAN, NAS, etc.). When the PVC is created with a storageClassName referencing that StorageClass, the provisioner creates the backing volume on demand, Kubernetes records it as a PersistentVolume, and the PV is bound to the claim automatically.
This is why option B is incorrect: you do not put a StorageClass “in the Pod YAML” to request provisioning. Pods reference PVCs, not StorageClasses directly. Option C is incorrect because the PVC does not need the Pod name; binding is done via the PVC itself. Option D describes static provisioning: an admin pre-creates PVs and users claim them by creating PVCs that match the PV (capacity, access modes, selectors). Static provisioning can work, but it is not dynamic provisioning.
Under the hood, the StorageClass can define parameters like volume type, replication, encryption, and binding behavior (e.g., volumeBindingMode: WaitForFirstConsumer to delay provisioning until the Pod is scheduled, ensuring the volume is created in the correct zone). Reclaim policies (Delete/Retain) define what happens to the underlying volume after the PVC is deleted.
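An illustrative sketch (the provisioner, class name, and size are placeholders that depend on your environment):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com             # placeholder CSI driver; varies by environment
volumeBindingMode: WaitForFirstConsumer  # provision only once the consuming Pod is scheduled
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-ssd             # triggers dynamic provisioning
  resources:
    requests:
      storage: 20Gi

Creating the PVC is all the user does; the driver named by the StorageClass creates the backing volume and a PV is bound to the claim automatically.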
In cloud-native operations, dynamic provisioning is preferred because it improves developer self-service, reduces manual admin work, and makes scaling stateful workloads easier and faster. The essence is: PVC + StorageClass → automatic PV creation and binding.
=========
Why do administrators need a container orchestration tool?
To manage the lifecycle of an elevated number of containers.
To assess the security risks of the container images used in production.
To learn how to transform monolithic applications into microservices.
Container orchestration tools such as Kubernetes are the future.
The correct answer is A. Container orchestration exists because running containers at scale is hard: you need to schedule workloads onto machines, keep them healthy, scale them up and down, roll out updates safely, and recover from failures automatically. Administrators (and platform teams) use orchestration tools like Kubernetes to manage the lifecycle of many containers across many nodes—handling placement, restart, rescheduling, networking/service discovery, and desired-state reconciliation.
At small scale, you can run containers manually or with basic scripts. But at “elevated” scale (many services, many replicas, many nodes), manual management becomes unreliable and brittle. Orchestration provides primitives and controllers that continuously converge actual state toward desired state: if a container crashes, it is restarted; if a node dies, replacement Pods are scheduled; if traffic increases, replicas can be increased via autoscaling; if configuration changes, rolling updates can be coordinated with readiness checks.
Option B (security risk assessment) is important, but it’s not why orchestration tools exist. Image scanning and supply-chain security are typically handled by CI/CD tooling and registries, not by orchestration as the primary purpose. Option C is a separate architectural modernization effort; orchestration can support microservices, but it isn’t required “to learn transformation.” Option D is an opinion statement rather than a functional need.
So the core administrator need is lifecycle management at scale: ensuring workloads run reliably, predictably, and efficiently across a fleet. That is exactly what option A states.
=========
What are the two essential operations that the kube-scheduler normally performs?
Pod eviction or starting
Resource monitoring and reporting
Filtering and scoring nodes
Starting and terminating containers
The kube-scheduler is a core control plane component in Kubernetes responsible for assigning newly created Pods to appropriate nodes. Its primary responsibility is decision-making, not execution. To make an informed scheduling decision, the kube-scheduler performs two essential operations: filtering and scoring nodes.
The scheduling process begins when a Pod is created without a node assignment. The scheduler first evaluates all available nodes and applies a set of filtering rules. During this phase, nodes that do not meet the Pod’s requirements are eliminated. Filtering criteria include resource availability (CPU and memory requests), node selectors, node affinity rules, taints and tolerations, volume constraints, and other policy-based conditions. Any node that fails one or more of these checks is excluded from consideration.
Once filtering is complete, the scheduler moves on to the scoring phase. In this step, each remaining eligible node is assigned a score based on a collection of scoring plugins. These plugins evaluate factors such as resource utilization balance, affinity preferences, topology spread constraints, and custom scheduling policies. The purpose of scoring is to rank nodes according to how well they satisfy the Pod’s placement preferences. The node with the highest total score is selected as the best candidate.
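One way to see the two phases in practice is through a Pod's scheduling constraints: hard requirements participate in filtering, while preferences and weights participate in scoring. A hedged sketch (label keys, values, and resource numbers are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: scheduled-example
spec:
  containers:
  - name: app
    image: nginx                 # placeholder image
    resources:
      requests:
        cpu: "500m"              # filtering: nodes without this capacity are excluded
        memory: 256Mi
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    # filtering (hard rule)
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
      preferredDuringSchedulingIgnoredDuringExecution:   # scoring (soft preference)
      - weight: 50
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["zone-a"]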
Option A is incorrect because Pod eviction is handled by other components such as the kubelet and controllers, and starting Pods is the responsibility of the kubelet. Option B is incorrect because resource monitoring and reporting are performed by components like metrics-server, not the scheduler. Option D is also incorrect because starting and terminating containers is entirely handled by the kubelet and the container runtime.
By separating filtering (eligibility) from scoring (preference), the kube-scheduler provides a flexible, extensible, and policy-driven scheduling mechanism. This design allows Kubernetes to support diverse workloads and advanced placement strategies while maintaining predictable scheduling behavior.
Therefore, the correct and verified answer is Option C: Filtering and scoring nodes, as documented in Kubernetes scheduling architecture.
=========
In a cloud native environment, who is usually responsible for maintaining the workloads running across the different platforms?
The cloud provider.
The Site Reliability Engineering (SRE) team.
The team of developers.
The Support Engineering team (SE).
B (the Site Reliability Engineering team) is correct. In cloud-native organizations, SREs are commonly responsible for the reliability, availability, and operational health of workloads across platforms (multiple clusters, regions, clouds, and supporting services). While responsibilities vary by company, the classic SRE charter is to apply software engineering to operations: build automation, standardize runbooks, manage incident response, define SLOs/SLIs, and continuously improve system reliability.
Maintaining workloads “across different platforms” implies cross-cutting operational ownership: deployments need to behave consistently, rollouts must be safe, monitoring and alerting must be uniform, and incident practices must work across environments. SRE teams typically own or heavily influence the observability stack (metrics/logs/traces), operational readiness, capacity planning, and reliability guardrails (error budgets, progressive delivery, automated rollback triggers). They also collaborate closely with platform engineering and application teams, but SRE is often the group that ensures production workloads meet reliability targets.
Why other options are less correct:
The cloud provider (A) maintains the underlying cloud services, but not your application workloads’ correctness, SLOs, or operational processes.
Developers (C) do maintain application code and may own on-call in some models, but the question asks “usually” in cloud-native environments; SRE is the widely recognized function for workload reliability across platforms.
Support Engineering (D) typically focuses on customer support and troubleshooting from a user perspective, not maintaining platform workload reliability at scale.
So, the best and verified answer is B: SRE teams commonly maintain and ensure reliability of workloads across cloud-native platforms.
=========
A Pod named my-app must be created to run a simple nginx container. Which kubectl command should be used?
kubectl create nginx --name=my-app
kubectl run my-app --image=nginx
kubectl create my-app --image=nginx
kubectl run nginx --name=my-app
In Kubernetes, the simplest and most direct way to create a Pod that runs a single container is to use the kubectl run command with the appropriate image specification. The command kubectl run my-app --image=nginx explicitly instructs Kubernetes to create a Pod named my-app using the nginx container image, which makes option B the correct answer.
The kubectl run command is designed to quickly create and run a Pod from the command line. In current kubectl versions it creates a standalone Pod by default (older releases used generators that could produce Deployments or Jobs depending on flags such as --restart). This makes it ideal for simple use cases like testing, demonstrations, or learning scenarios where only a single container is required.
Option A is incorrect because kubectl create nginx --name=my-app is not valid syntax; the create subcommand requires a resource type (such as pod, deployment, or service) or a manifest file. Option C is also incorrect because kubectl create my-app --image=nginx omits the resource type and is therefore not a valid kubectl create command. Option D is incorrect because kubectl run takes the Pod name as a positional argument (kubectl run <name> --image=<image>) rather than through a --name flag, so that command would not create a Pod named my-app.
Using kubectl run with explicit naming and image flags is consistent with Kubernetes command-line conventions and is widely documented as the correct approach for creating simple Pods. The resulting Pod can be verified using commands such as kubectl get pods and kubectl describe pod my-app.
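A quick usage sketch (my-app is the name from the question; the verification commands are standard kubectl reads):

kubectl run my-app --image=nginx     # creates a standalone Pod named my-app
kubectl get pods                     # confirm the Pod reaches Running
kubectl describe pod my-app          # inspect events and container status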
In summary, Option B is the correct and verified answer because it uses valid kubectl syntax to create a Pod named my-app running the nginx container image in a straightforward and predictable way.
=========
What happens with a regular Pod running in Kubernetes when a node fails?
A new Pod with the same UID is scheduled to another node after a while.
A new, near-identical Pod but with different UID is scheduled to another node.
By default, a Pod can only be scheduled to the same node when the node fails.
A new Pod is scheduled on a different node only if it is configured explicitly.
B is correct: when a node fails, Kubernetes does not “move” the same Pod instance; instead, a new Pod object (new UID) is created to replace it—assuming the Pod is managed by a controller (Deployment/ReplicaSet, StatefulSet, etc.). A Pod is an API object with a unique identifier (UID) and is tightly associated with the node it’s scheduled to via spec.nodeName. If the node becomes unreachable, that original Pod cannot be restarted elsewhere because it was bound to that node.
Kubernetes’ high availability comes from controllers maintaining desired state. For example, a Deployment desires N replicas. If a node fails and the replicas on that node are lost, the controller will create replacement Pods, and the scheduler will place them onto healthy nodes. These replacement Pods will be “near-identical” in spec (same template), but they are still new instances with new UIDs and typically new IPs.
Why the other options are wrong:
A is incorrect because the UID does not remain the same—Kubernetes creates a new Pod object rather than reusing the old identity.
C is incorrect; pods are not restricted to the same node after failure. The whole point of orchestration is to reschedule elsewhere.
D is incorrect; rescheduling does not require special explicit configuration for typical controller-managed workloads. The controller behavior is standard. (If it’s a bare Pod without a controller, it will not be recreated automatically.)
This also ties to the difference between “regular Pod” vs controller-managed workloads: a standalone Pod is not self-healing by itself, while a Deployment/ReplicaSet provides that resilience. In typical production design, you run workloads under controllers specifically so node failure triggers replacement and restores replica count.
Therefore, the correct outcome is B.
=========
Which persona is normally responsible for defining, testing, and running an incident management process?
Site Reliability Engineers
Project Managers
Application Developers
Quality Engineers
The role most commonly responsible for defining, testing, and running an incident management process is Site Reliability Engineers (SREs), so A is correct. SRE is an operational engineering discipline focused on ensuring reliability, availability, and performance of services in production. Incident management is a core part of that mission: when outages or severe degradations occur, someone must coordinate response, restore service quickly, and then drive follow-up improvements to prevent recurrence.
In cloud native environments (including Kubernetes), incident response involves both technical and process elements. On the technical side, SREs ensure observability is in place—metrics, logs, traces, dashboards, and actionable alerts—so incidents can be detected and diagnosed quickly. They also validate operational readiness: runbooks, escalation paths, on-call rotations, and post-incident review practices. On the process side, SREs often establish severity classifications, response roles (incident commander, communications lead, subject matter experts), and “game day” exercises or simulated incidents to test preparedness.
Project managers may help coordinate schedules and communication for projects, but they are not typically the owners of operational incident response mechanics. Application developers are crucial participants during incidents, especially for debugging application-level failures, but they are not usually the primary maintainers of the incident management framework. Quality engineers focus on testing and quality assurance, and while they contribute to preventing defects, they are not usually the owners of real-time incident operations.
In Kubernetes specifically, incidents often span multiple layers: workload behavior, cluster resources, networking, storage, and platform dependencies. SREs are positioned to manage the cross-cutting operational view and to continuously improve reliability through error budgets, SLOs/SLIs, and iterative hardening. That’s why the correct persona is Site Reliability Engineers.
=========
What is the correct hierarchy of Kubernetes components?
Containers → Pods → Cluster → Nodes
Nodes → Cluster → Containers → Pods
Cluster → Nodes → Pods → Containers
Pods → Cluster → Containers → Nodes
The correct answer is C: Cluster → Nodes → Pods → Containers. This expresses the fundamental structural relationship in Kubernetes. A cluster is the overall system (control plane + nodes) that runs your workloads. Inside the cluster, you have nodes (worker machines—VMs or bare metal) that provide CPU, memory, storage, and networking. The scheduler assigns workloads to nodes.
Workloads are executed as Pods, which are the smallest deployable units Kubernetes schedules. Pods represent one or more containers that share networking (one Pod IP and port space) and can share storage volumes. Within each Pod are containers, which are the actual application processes packaged with their filesystem and runtime dependencies.
The other options are incorrect because they break these containment relationships. Containers do not contain Pods; Pods contain containers. Nodes do not exist “inside” Pods; Pods run on nodes. And the cluster is the top-level boundary that contains nodes and orchestrates Pods.
This hierarchy matters for troubleshooting and design. If you’re thinking about capacity, you reason at the node and cluster level (node pools, autoscaling, quotas). If you’re thinking about application scaling, you reason at the Pod level (replicas, HPA, readiness probes). If you’re thinking about process-level concerns, you reason at the container level (images, security context, runtime user, resources). Kubernetes intentionally uses this layered model so that scheduling and orchestration operate on Pods, while the container runtime handles container execution details.
So the accurate hierarchy from largest to smallest unit is: Cluster → Nodes → Pods → Containers, which corresponds to C.
=========
In a cloud native environment, how do containerization and virtualization differ in terms of resource management?
Containerization uses hypervisors to manage resources, while virtualization does not.
Containerization shares the host OS, while virtualization runs a full OS for each instance.
Containerization consumes more memory than virtualization by default.
Containerization allocates resources per container, virtualization does not isolate them.
The fundamental difference between containerization and virtualization in a cloud native environment lies in how they manage and isolate resources, particularly with respect to the operating system. The correct description is that containerization shares the host operating system, while virtualization runs a full operating system for each instance, making option B the correct answer.
In virtualization, each virtual machine (VM) includes its own complete guest operating system running on top of a hypervisor. The hypervisor virtualizes hardware resources—CPU, memory, storage, and networking—and allocates them to each VM. Because every VM runs a full OS, virtualization introduces significant overhead in terms of memory usage, disk space, and startup time. However, it provides strong isolation between workloads, which is useful for running different operating systems or untrusted workloads on the same physical hardware.
In contrast, containerization operates at the operating system level rather than the hardware level. Containers share the host OS kernel and isolate applications using kernel features such as namespaces and control groups (cgroups). This design makes containers much lighter weight than virtual machines. Containers start faster, consume fewer resources, and allow higher workload density on the same infrastructure. Resource limits and isolation are still enforced, but without duplicating the entire operating system for each application instance.
Option A is incorrect because hypervisors are a core component of virtualization, not containerization. Option C is incorrect because containers generally consume less memory than virtual machines due to the absence of a full guest OS. Option D is incorrect because virtualization does isolate resources very strongly, while containers rely on OS-level isolation rather than hardware-level isolation.
In cloud native architectures, containerization is preferred for microservices and scalable workloads because of its efficiency and portability. Virtualization is still valuable for stronger isolation and heterogeneous operating systems. Therefore, Option B accurately captures the key resource management distinction between the two models.
=========
In a serverless computing architecture:
Users of the cloud provider are charged based on the number of requests to a function.
Serverless functions are incompatible with containerized functions.
Users should make a reservation to the cloud provider based on an estimation of usage.
Containers serving requests are running in the background in idle status.
Serverless architectures typically bill based on actual consumption, often measured as number of requests and execution duration (and sometimes memory/CPU allocated), so A is correct. The defining trait is that you don’t provision or manage servers directly; the platform scales execution up and down automatically, including down to zero for many models, and charges you for what you use.
Option B is incorrect: many serverless platforms can run container-based workloads (and some are explicitly “serverless containers”). The idea is the operational abstraction and billing model, not incompatibility with containers. Option C is incorrect because “making a reservation based on estimation” describes reserved capacity purchasing, which is the opposite of the typical serverless pay-per-use model. Option D is misleading: serverless systems aim to avoid charging for idle compute; while platforms may keep some warm capacity for latency reasons, the customer-facing model is not “containers running idle in the background.”
In cloud-native architecture, serverless is often chosen for spiky, event-driven workloads where you want minimal ops overhead and cost efficiency at low utilization. It pairs naturally with eventing systems (queues, pub/sub) and can be integrated with Kubernetes ecosystems via event-driven autoscaling frameworks or managed serverless offerings.
So the correct statement is A: charging is commonly based on requests (and usage), which captures the cost and operational model that differentiates serverless from always-on infrastructure.
=========
Which component of the Kubernetes architecture is responsible for integration with the CRI container runtime?
kubeadm
kubelet
kube-apiserver
kubectl
The correct answer is B: kubelet. The Container Runtime Interface (CRI) defines how Kubernetes interacts with container runtimes in a consistent, pluggable way. The component that speaks CRI is the kubelet, the node agent responsible for running Pods on each node. When the kube-scheduler assigns a Pod to a node, the kubelet reads the PodSpec and makes the runtime calls needed to realize that desired state—pull images, create a Pod sandbox, start containers, stop containers, and retrieve status and logs. Those calls are made via CRI to a CRI-compliant runtime such as containerd or CRI-O.
Why not the others:
kubeadm bootstraps clusters (init/join/upgrade workflows) but does not run containers or speak CRI for workload execution.
kube-apiserver is the control plane API frontend; it stores and serves cluster state and does not directly integrate with runtimes.
kubectl is just a client tool that sends API requests; it is not involved in runtime integration on nodes.
This distinction matters operationally. If the runtime is misconfigured or CRI endpoints are unreachable, kubelet will report errors and Pods can get stuck in ContainerCreating, image pull failures, or runtime errors. Debugging often involves checking kubelet logs and runtime service health, because kubelet is the integration point bridging Kubernetes scheduling/state with actual container execution.
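When investigating that integration point, a few node-level commands are commonly useful, assuming a systemd-managed kubelet and the crictl utility installed on the node (both assumptions, since installs vary):

journalctl -u kubelet --since "1 hour ago"   # kubelet logs, including CRI/runtime errors
crictl ps                                    # containers as seen through the CRI endpoint
crictl pods                                  # Pod sandboxes known to the runtime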
So, the node-level component responsible for CRI integration is the kubelet—option B.
=========
To visualize data from Prometheus you can use expression browser or console templates. What is the other data visualization tool commonly used together with Prometheus?
Grafana
Graphite
Nirvana
GraphQL
The most common visualization tool used with Prometheus is Grafana, so A is correct. Prometheus includes a built-in expression browser that can graph query results, but Grafana provides a much richer dashboarding experience: reusable dashboards, variables, templating, annotations, alerting integrations, and multi-data-source support.
In Kubernetes observability stacks, Prometheus scrapes and stores time-series metrics (cluster and application metrics). Grafana queries Prometheus using PromQL and renders the results into dashboards for SREs and developers. This pairing is widespread because it cleanly separates concerns: Prometheus is the metrics store and query engine; Grafana is the UI and dashboard layer.
Option B (Graphite) is a separate metrics system with its own storage/query model; while Grafana can visualize Graphite too, the question asks what is commonly used together with Prometheus, which is Grafana. Option D (GraphQL) is an API query language, not a metrics visualization tool. Option C (“Nirvana”) is not a standard Prometheus visualization tool in common Kubernetes stacks.
In practice, this combo enables operational outcomes: dashboards for error rates and latency (often derived from histograms), capacity monitoring (node CPU/memory), workload behavior (Pod restarts, HPA scaling), and SLO reporting. Grafana dashboards often serve as the shared language during incidents: teams correlate alerts with time-series patterns and quickly identify when regressions began.
Therefore, the verified correct tool commonly used with Prometheus for visualization is Grafana (A).
=========
How can you monitor the progress for an updated Deployment/DaemonSets/StatefulSets?
kubectl rollout watch
kubectl rollout progress
kubectl rollout state
kubectl rollout status
To monitor rollout progress for Kubernetes workload updates (most commonly Deployments, and also StatefulSets and DaemonSets where applicable), the standard kubectl command is kubectl rollout status, which makes D correct.
Kubernetes manages updates declaratively through controllers. For a Deployment, an update typically creates a new ReplicaSet and gradually shifts replicas from the old to the new according to the strategy (e.g., RollingUpdate with maxUnavailable and maxSurge). For StatefulSets, updates may be ordered and respect stable identities, and for DaemonSets, an update replaces node-level Pods according to update strategy. In all cases, you often want a single command that tells you whether the controller has completed the update and whether the new replicas are available. kubectl rollout status queries the resource status and prints a progress view until completion or timeout.
The other commands listed are not the canonical kubectl subcommands. kubectl rollout watch, kubectl rollout progress, and kubectl rollout state are not standard rollout verbs in kubectl. The supported rollout verbs typically include status, history, undo, pause, and resume (depending on kubectl version and resource type).
Operationally, kubectl rollout status deployment/<name> is typically run right after applying an update; it reports progress as new replicas become available, blocks until the rollout succeeds or its progress deadline is exceeded, and returns a non-zero exit code on failure, which makes it a convenient gate in CI/CD pipelines.
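Typical usage, with my-app as a placeholder Deployment name:

kubectl rollout status deployment/my-app    # block until the rollout completes or fails
kubectl rollout history deployment/my-app   # list recorded revisions
kubectl rollout undo deployment/my-app      # roll back to the previous revision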
=========
How is application data maintained in containers?
Store data into data folders.
Store data in separate folders.
Store data into sidecar containers.
Store data into volumes.
Container filesystems are ephemeral: the writable layer is tied to the container lifecycle and can be lost when containers are recreated. Therefore, maintaining application data correctly means storing it in volumes, making D the correct answer. In Kubernetes, volumes provide durable or shareable storage that is mounted into containers at specific paths. Depending on the volume type, the data can persist across container restarts and even Pod rescheduling.
Kubernetes supports many volume patterns. For transient scratch data you might use emptyDir (ephemeral for the Pod’s lifetime). For durable state, you typically use PersistentVolumes consumed by PersistentVolumeClaims (PVCs), backed by storage systems via CSI drivers (cloud disks, SAN/NAS, distributed storage). This decouples the application container image from its state and enables rolling updates, rescheduling, and scaling without losing data.
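A small sketch of a Pod consuming a PVC (the claim name, image, and mount path are placeholders; the PVC is assumed to exist already):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
  - name: app
    image: nginx                              # placeholder image
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html        # placeholder mount path for persistent content
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc                      # assumes a PVC named data-pvc exists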
Options A and B (“folders”) are incomplete because folders inside the container filesystem do not guarantee persistence. A folder is only as durable as the underlying storage; without a mounted volume, it lives in the container’s writable layer and will disappear when the container is replaced. Option C is incorrect because “sidecar containers” are not a data durability mechanism; sidecars can help ship logs or sync data, but persistent data should still be stored on volumes (or external services like managed databases).
From an application delivery standpoint, the principle is: containers should be immutable and disposable, and state should be externalized. Volumes (and external managed services) make this possible. In Kubernetes, this is a foundational pattern enabling safe rollouts, self-healing, and portability: the platform can kill and recreate Pods freely because data is maintained independently via volumes.
Therefore, the verified correct choice is D: Store data into volumes.
=========
What is the main purpose of a DaemonSet?
A DaemonSet ensures that all (or certain) nodes run a copy of a Pod.
A DaemonSet ensures that the kubelet is constantly up and running.
A DaemonSet ensures that there are as many pods running as specified in the replicas field.
A DaemonSet ensures that a process (agent) runs on every node.
The correct answer is A. A DaemonSet is a workload controller whose job is to ensure that a specific Pod runs on all nodes (or on a selected subset of nodes) in the cluster. This is fundamentally different from Deployments/ReplicaSets, which aim to maintain a certain replica count regardless of node count. With a DaemonSet, the number of Pods is implicitly tied to the number of eligible nodes: add a node, and the DaemonSet automatically schedules a Pod there; remove a node, and its Pod goes away.
DaemonSets are commonly used for node-level services and background agents: log collectors, node monitoring agents, storage daemons, CNI components, or security agents—anything where you want a presence on each node to interact with node resources. This aligns with option D’s phrasing (“agent on every node”), but option A is the canonical definition and is slightly broader because it covers “all or certain nodes” (via node selectors/affinity/taints-tolerations) and the fact that the unit is a Pod.
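A hedged sketch of such a node-level agent (the image, paths, and toleration are placeholders; a real agent needs its own configuration):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      tolerations:
      - operator: Exists                 # example: also schedule onto tainted nodes
      containers:
      - name: agent
        image: fluent/fluent-bit:latest  # placeholder node-level agent image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                 # read the node's own log directory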
Why the other options are wrong: DaemonSets do not “keep kubelet running” (B); kubelet is a node service managed by the OS. DaemonSets do not use a replicas field to maintain a specific count (C); that’s Deployment/ReplicaSet behavior.
Operationally, DaemonSets matter for cluster operations because they provide consistent node coverage and automatically react to node pool scaling. They also require careful scheduling constraints so they land only where intended (e.g., only Linux nodes, only GPU nodes). But the main purpose remains: ensure a copy of a Pod runs on each relevant node—option A.
=========
What function does kube-proxy provide to a cluster?
Implementing the Ingress resource type for application traffic.
Forwarding data to the correct endpoints for Services.
Managing data egress from the cluster nodes to the network.
Managing access to the Kubernetes API.
kube-proxy is a node-level networking component that helps implement the Kubernetes Service abstraction. Services provide a stable virtual IP and DNS name that route traffic to a set of Pods (endpoints). kube-proxy watches the API for Service and EndpointSlice/Endpoints changes and then programs the node’s networking rules so that traffic sent to a Service is forwarded (load-balanced) to one of the correct backend Pod IPs. This is why B is correct.
Conceptually, kube-proxy turns the declarative Service configuration into concrete dataplane behavior. Depending on the mode, it may use iptables rules, IPVS, or integrate with eBPF-capable networking stacks (sometimes kube-proxy is replaced or bypassed by CNI implementations, but the classic kube-proxy role remains the canonical answer). In iptables mode, kube-proxy creates NAT rules that rewrite traffic from the Service virtual IP to one of the Pod endpoints. In IPVS mode, it programs kernel load-balancing tables for more scalable service routing. In all cases, the job is to connect “Service IP/port” to “Pod IP/port endpoints.”
Option A is incorrect because Ingress is a separate API resource and requires an Ingress Controller (like NGINX Ingress, HAProxy, Traefik, etc.) to implement HTTP routing, TLS termination, and host/path rules. kube-proxy is not an Ingress controller. Option C is incorrect because general node egress management is not kube-proxy’s responsibility; egress behavior typically depends on the CNI plugin, NAT configuration, and network policies. Option D is incorrect because API access control is handled by the API server’s authentication/authorization layers (RBAC, webhooks, etc.), not kube-proxy.
So kube-proxy’s essential function is: keep node networking rules in sync so that Service traffic reaches the right Pods. It is one of the key components that makes Services “just work” across nodes without clients needing to know individual Pod IPs.
=========
Which tool is used to streamline installing and managing Kubernetes applications?
apt
helm
service
brew
Helm is the Kubernetes package manager used to streamline installing and managing applications, so B is correct. Helm packages Kubernetes resources into charts, which contain templates, default values, and metadata. When you install a chart, Helm renders templates into concrete manifests and applies them to the cluster. Helm also tracks a “release,” enabling upgrades, rollbacks, and consistent lifecycle operations across environments.
This is why Helm is widely used for complex applications that require multiple Kubernetes objects (Deployments/StatefulSets, Services, Ingresses, ConfigMaps, RBAC, CRDs). Rather than manually maintaining many YAML files per environment, teams can parameterize configuration with values and reuse the same chart across dev/stage/prod with different overrides.
Option A (apt) and option D (brew) are OS package managers (Debian/Ubuntu and macOS/Linuxbrew respectively), not Kubernetes application managers. Option C (service) is a Linux service manager command pattern and not relevant here.
In cloud-native delivery pipelines, Helm often integrates with GitOps and CI/CD: the pipeline builds an image, updates chart values (image tag/digest), and deploys via Helm or via GitOps controllers that render/apply Helm charts. Helm also supports chart repositories and versioning, making it easier to standardize deployments and manage dependencies.
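A typical command flow, sketched with a public chart as a placeholder (release names, namespace, and values are examples):

helm repo add bitnami https://charts.bitnami.com/bitnami          # add a chart repository
helm install my-db bitnami/postgresql -n data --create-namespace  # install a release
helm upgrade my-db bitnami/postgresql --set image.tag=16          # upgrade with an example value override
helm rollback my-db 1                                             # return to revision 1
helm list -n data                                                 # show releases and their revisions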
So, the verified tool for streamlined Kubernetes app install/management is Helm (B).
=========
In which framework do the developers no longer have to deal with capacity, deployments, scaling and fault tolerance, and OS?
Docker Swarm
Kubernetes
Mesos
Serverless
Serverless is the model where developers most directly avoid managing server capacity, OS operations, and much of the deployment/scaling/fault-tolerance mechanics, which is why D is correct. In serverless computing (commonly Function-as-a-Service, FaaS, and managed serverless container platforms), the provider abstracts away the underlying servers. You typically deploy code (functions) or a container image, define triggers (HTTP events, queues, schedules), and the platform automatically provisions the required compute, scales it based on demand, and handles much of the availability and fault tolerance behind the scenes.
It’s important to compare this to Kubernetes: Kubernetes does automate scheduling, self-healing, rolling updates, and scaling, but it still requires you (or your platform team) to design and operate cluster capacity, node pools, upgrades, runtime configuration, networking, and baseline reliability controls. Even in managed Kubernetes services, you still choose node sizes, scale policies, and operational configuration. Kubernetes reduces toil, but it does not eliminate infrastructure concerns in the same way serverless does.
Docker Swarm and Mesos are orchestration platforms that schedule workloads, but they also require managing the underlying capacity and OS-level aspects. They are not “no longer have to deal with capacity and OS” frameworks.
From a cloud native viewpoint, serverless is about consuming compute as an on-demand utility. Kubernetes can be a foundation for a serverless experience (for example, with event-driven autoscaling or serverless frameworks), but the pure framework that removes the most operational burden from developers is serverless.
=========
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called:
Namespaces
Containers
Hypervisors
cgroups
Kubernetes provides “virtual clusters” within a single physical cluster primarily through Namespaces, so A is correct. Namespaces are a logical partitioning mechanism that scopes many Kubernetes resources (Pods, Services, Deployments, ConfigMaps, Secrets, etc.) into separate environments. This enables multiple teams, applications, or environments (dev/test/prod) to share a cluster while keeping their resource names and access controls separated.
Namespaces are often described as “soft multi-tenancy.” They don’t provide full isolation like separate clusters, but they do allow administrators to apply controls per namespace:
RBAC rules can grant different permissions per namespace (who can read Secrets, who can deploy workloads, etc.).
ResourceQuotas and LimitRanges can enforce fair usage and prevent one namespace from consuming all cluster resources.
NetworkPolicies can isolate traffic between namespaces (depending on the CNI).
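For example, a namespace plus a quota limiting it might be sketched as follows (the name and limits are placeholders):

apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"       # total CPU requests allowed in the namespace
    requests.memory: 20Gi
    pods: "50"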
Containers are runtime units inside Pods and are not “virtual clusters.” Hypervisors are virtualization components for VMs, not Kubernetes partitioning constructs. cgroups are Linux kernel primitives for resource control, not Kubernetes virtual cluster constructs.
While there are other “virtual cluster” approaches (like vcluster projects) that create stronger virtualized control planes, the built-in Kubernetes mechanism referenced by this question is namespaces. Therefore, the correct answer is A: Namespaces.
=========
Which of these is a valid container restart policy?
On login
On update
On start
On failure
The correct answer is D: On failure. In Kubernetes, restart behavior is controlled by the Pod-level field spec.restartPolicy, with valid values Always, OnFailure, and Never. The option presented here (“On failure”) maps to Kubernetes’ OnFailure policy. This setting determines what the kubelet should do when containers exit:
Always: restart containers whenever they exit (typical for long-running services)
OnFailure: restart containers only if they exit with a non-zero status (common for batch workloads)
Never: do not restart containers; leave them in their terminated state
So “On failure” is a valid restart policy concept and the only one in the list that matches Kubernetes semantics.
The other options are not Kubernetes restart policies. “On login,” “On update,” and “On start” are not recognized values and don’t align with how Kubernetes models container lifecycle. Kubernetes is declarative and event-driven: it reacts to container exit codes and controller intent, not user “logins.”
Operationally, choosing the right restart policy is important. For example, Jobs typically use restartPolicy: OnFailure or Never because the goal is completion, not continuous uptime. Deployments usually imply “Always” because the workload should keep serving traffic, and a crashed container should be restarted. Also note that controllers interact with restarts: a Deployment may recreate Pods if they fail readiness, while a Job counts completions and failures based on Pod termination behavior.
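A small sketch of a batch workload using OnFailure (the image and command are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 4                   # give up after four failed retries
  template:
    spec:
      restartPolicy: OnFailure      # restart the container only on a non-zero exit
      containers:
      - name: task
        image: busybox              # placeholder image
        command: ["sh", "-c", "echo processing && exit 0"]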
Therefore, among the options, the only valid (Kubernetes-aligned) restart policy is D.
=========
What is the main role of the Kubernetes DNS within a cluster?
Acts as a DNS server for virtual machines that are running outside the cluster.
Provides a DNS as a Service, allowing users to create zones and registries for domains that they own.
Allows Pods running in dual stack to convert IPv6 calls into IPv4 calls.
Provides consistent DNS names for Pods and Services for workloads that need to communicate with each other.
Kubernetes DNS (commonly implemented by CoreDNS) provides service discovery inside the cluster by assigning stable, consistent DNS names to Services and (optionally) Pods, which makes D correct. In a Kubernetes environment, Pods are ephemeral—IP addresses can change when Pods restart or move between nodes. DNS-based discovery allows applications to communicate using stable names rather than hardcoded IPs.
For Services, Kubernetes creates DNS records like service-name.namespace.svc.cluster.local, which resolve to the Service’s virtual IP (ClusterIP) or, for headless Services, to the set of Pod endpoints. This supports both load-balanced communication (standard Service) and per-Pod addressing (headless Service, commonly used with StatefulSets). Kubernetes DNS is therefore a core building block that enables microservices to locate each other reliably.
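A quick way to see these records from inside the cluster, assuming a Service named my-service in namespace my-namespace (both placeholders):

kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup my-service.my-namespace.svc.cluster.local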
Option A is not Kubernetes DNS’s purpose; it serves cluster workloads rather than external VMs. Option B describes a managed DNS hosting product (creating zones/registries), which is outside the scope of cluster DNS. Option C describes protocol translation, which is not the role of DNS. Dual-stack support relates to IP families and networking configuration, not DNS translating IPv6 to IPv4.
In day-to-day Kubernetes operations, DNS reliability impacts everything: if DNS is unhealthy, Pods may fail to resolve Services, causing cascading outages. That’s why CoreDNS is typically deployed as a highly available add-on in kube-system, and why DNS caching and scaling are important for large clusters.
So the correct statement is D: Kubernetes DNS provides consistent DNS names so workloads can communicate reliably.
=========
What is the order of 4C’s in Cloud Native Security, starting with the layer that a user has the most control over?
Cloud -> Container -> Cluster -> Code
Container -> Cluster -> Code -> Cloud
Cluster -> Container -> Code -> Cloud
Code -> Container -> Cluster -> Cloud
The Cloud Native Security “4C’s” model is commonly presented as Code, Container, Cluster, Cloud, ordered from the layer you control most directly to the one you control least—therefore D is correct. The idea is defense-in-depth across layers, recognizing that responsibilities are shared between developers, platform teams, and cloud providers.
Code is where users have the most direct control: application logic, dependencies, secure coding practices, secrets handling patterns, and testing. This includes validating inputs, avoiding vulnerabilities, and scanning dependencies. Next is the Container layer: building secure images, minimizing image size/attack surface, using non-root users, setting file permissions, and scanning images for known CVEs. Container security is about ensuring the artifact you run is trustworthy and hardened.
Then comes the Cluster layer: Kubernetes configuration and runtime controls, including RBAC, admission policies (OPA/Gatekeeper), Pod Security standards, network policies, runtime security, audit logging, and node hardening practices. Cluster controls determine what can run and how workloads interact. Finally, the Cloud layer includes the infrastructure and provider controls—IAM, VPC/networking, KMS, managed control plane protections, and physical security—which users influence through configuration but do not fully own.
The model’s value is prioritization: start with what you control most (code), then harden the container artifact, then enforce cluster policy and runtime protections, and finally ensure cloud controls are configured properly. This layered approach aligns well with Kubernetes security guidance and modern shared-responsibility models.
=========
Which of the following is a responsibility of the governance board of an open source project?
Decide about the marketing strategy of the project.
Review the pull requests in the main branch.
Outline the project's “terms of engagement”.
Define the license to be used in the project.
A governance board in an open source project typically defines how the community operates—its decision-making rules, roles, conflict resolution, and contribution expectations—so C (“Outline the project's terms of engagement”) is correct. In large cloud-native projects (Kubernetes being a prime example), clear governance is essential to coordinate many contributors, companies, and stakeholders. Governance establishes the “rules of the road” that keep collaboration productive and fair.
“Terms of engagement” commonly includes: how maintainers are selected, how proposals are reviewed (e.g., enhancement processes), how meetings and SIGs operate, what constitutes consensus, how voting works when consensus fails, and what code-of-conduct expectations apply. It also defines escalation and dispute resolution paths so technical disagreements don’t become community-breaking conflicts. In other words, governance is about ensuring the project has durable, transparent processes that outlive any individual contributor and support vendor-neutral decision making.
Option B (reviewing pull requests) is usually the responsibility of maintainers and SIG owners, not a governance board. The governance body may define the structure that empowers maintainers, but it generally does not do day-to-day code review. Option A (marketing strategy) is often handled by foundations, steering committees, or separate outreach groups, not governance boards as their primary responsibility. Option D (defining the license) is usually decided early and may be influenced by a foundation or legal process; while governance can shape legal/policy direction, the core governance responsibility is broader community operating rules rather than selecting a license.
In cloud-native ecosystems, strong governance supports sustainability: it encourages contributions, protects neutrality, and provides predictable processes for evolution. Therefore, the best verified answer is C.
=========
Imagine you're releasing open-source software for the first time. Which of the following is a valid semantic version?
1.0
2021-10-11
0.1.0-rc
v1beta1
Semantic Versioning (SemVer) follows the pattern MAJOR.MINOR.PATCH with optional pre-release identifiers (e.g., -rc, -alpha.1) and build metadata. Among the options, 0.1.0-rc matches SemVer rules, so C is correct.
0.1.0-rc breaks down as: MAJOR=0, MINOR=1, PATCH=0, and -rc indicates a pre-release (“release candidate”). Pre-release versions are valid SemVer and are explicitly allowed to denote versions that are not yet considered stable. For a first-time open-source release, 0.x.y is common because it signals the API may still change in backward-incompatible ways before reaching 1.0.0.
Why the other options are not correct SemVer as written:
1.0 is missing the PATCH segment; SemVer requires three numeric components (e.g., 1.0.0).
2021-10-11 is a date string, not MAJOR.MINOR.PATCH.
v1beta1 resembles Kubernetes API versioning conventions, not SemVer.
In cloud-native delivery and Kubernetes ecosystems, SemVer matters because it communicates compatibility. Incrementing MAJOR indicates breaking changes, MINOR indicates backward-compatible feature additions, and PATCH indicates backward-compatible bug fixes. Pre-release tags allow releasing candidates for testing without claiming full stability. This is especially useful for open-source consumers and automation systems that need consistent version comparison and upgrade planning.
So, the only valid semantic version in the choices is 0.1.0-rc, option C.
=========
If a Pod was waiting for container images to download on the scheduled node, what state would it be in?
Failed
Succeeded
Unknown
Pending
If a Pod is waiting for its container images to be pulled to the node, it remains in the Pending phase, so D is correct. Kubernetes Pod “phase” is a high-level summary of where the Pod is in its lifecycle. Pending means the Pod has been accepted by the cluster but one or more of its containers has not started yet. That can occur because the Pod is waiting to be scheduled, waiting on volume attachment/mount, or—very commonly—waiting for the container runtime to pull the image.
When image pulling is the blocker, kubectl describe pod <pod-name> typically shows events such as Pulling image and, if the pull fails, container waiting reasons like ErrImagePull or ImagePullBackOff, while the Pod phase itself remains Pending until every container has started.
Why the other phases don’t apply:
Succeeded is for run-to-completion Pods that have finished successfully (typical for Jobs).
Failed means the Pod terminated and at least one container terminated in failure (and won’t be restarted, depending on restartPolicy).
Unknown is used when the node can’t be contacted and the Pod’s state can’t be reliably determined (rare in healthy clusters).
A subtle but important Kubernetes detail: status “Waiting” reasons like ImagePullBackOff are container states inside .status.containerStatuses, while the Pod phase can still be Pending. So, “waiting for images to download” maps to Pod Pending, with container waiting reasons providing the deeper diagnosis.
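You can observe both levels directly; my-app below is a placeholder Pod name:

kubectl get pod my-app -o jsonpath='{.status.phase}'                        # prints Pending while images are still pulling
kubectl get pod my-app -o jsonpath='{.status.containerStatuses[0].state}'   # shows the container-level waiting reason
kubectl describe pod my-app                                                 # the Events section details pull progress and errors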
Therefore, the verified correct answer is D: Pending.
=========
In Kubernetes, what is the primary responsibility of the kubelet running on each worker node?
To allocate persistent storage volumes and manage distributed data replication for Pods.
To manage cluster state information and handle all scheduling decisions for workloads.
To ensure that containers defined in Pod specifications are running and remain healthy on the node.
To provide internal DNS resolution and route service traffic between Pods and nodes.
The kubelet is a critical Kubernetes component that runs on every worker node and acts as the primary execution agent for Pods. Its core responsibility is to ensure that the containers defined in Pod specifications are running and remain healthy on the node, making option C the correct answer.
Once the Kubernetes scheduler assigns a Pod to a specific node, the kubelet on that node becomes responsible for carrying out the desired state described in the Pod specification. It continuously watches the API server for Pods assigned to its node and communicates with the container runtime (such as containerd or CRI-O) to start, stop, and restart containers as needed. The kubelet does not make scheduling decisions; it simply executes them.
Health management is another key responsibility of the kubelet. It runs liveness, readiness, and startup probes as defined in the Pod specification. If a container fails a liveness probe, the kubelet restarts it. If a readiness probe fails, the kubelet marks the Pod as not ready, preventing traffic from being routed to it. The kubelet also reports detailed Pod and node status information back to the API server, enabling controllers to take corrective actions when necessary.
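A sketch of probes the kubelet would run (the endpoints, port, and timings are placeholders for whatever the application actually exposes):

apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: app
    image: nginx                     # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz               # placeholder health endpoint
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready                 # placeholder readiness endpoint
        port: 80
      periodSeconds: 5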
Option A is incorrect because persistent volume provisioning and data replication are handled by storage systems, CSI drivers, and controllers—not by the kubelet. Option B is incorrect because cluster state management and scheduling are responsibilities of control plane components such as the API server, controller manager, and kube-scheduler. Option D is incorrect because DNS resolution and service traffic routing are handled by components like CoreDNS and kube-proxy.
In summary, the kubelet serves as the node-level guardian of Kubernetes workloads. By ensuring containers are running exactly as specified and continuously reporting their health and status, the kubelet forms the essential bridge between Kubernetes’ declarative control plane and the actual execution of applications on worker nodes.
=========
Which tools enable Kubernetes HorizontalPodAutoscalers to use custom, application-generated metrics to trigger scaling events?
Prometheus and the prometheus-adapter.
Graylog and graylog-autoscaler metrics.
Graylog and the kubernetes-adapter.
Grafana and Prometheus.
To scale on custom, application-generated metrics, the Horizontal Pod Autoscaler (HPA) needs those metrics exposed through the Kubernetes custom metrics (or external metrics) API. A common and Kubernetes-documented approach is Prometheus + prometheus-adapter, making A correct. Prometheus scrapes application metrics (for example, request rate, queue depth, in-flight requests) from /metrics endpoints. The prometheus-adapter then translates selected Prometheus time series into the Kubernetes Custom Metrics API so the HPA controller can fetch them and make scaling decisions.
Why not the other options: Grafana is a visualization tool; it does not provide the metrics API translation layer the HPA requires, so “Grafana and Prometheus” is incomplete. Graylog is primarily a log management system and is not the standard way to feed custom metrics into the HPA via the Kubernetes metrics APIs. The “kubernetes-adapter” in option C is not a recognized component; the established adapter for Prometheus-backed custom metrics is prometheus-adapter.
This matters operationally because HPA is not limited to CPU/memory. CPU and memory use resource metrics (often from metrics-server), but modern autoscaling often needs application signals: message queue length, requests per second, latency, or business metrics. With Prometheus and prometheus-adapter, you can define HPA rules such as “scale to maintain queue depth under X” or “scale based on requests per second per pod.” This can produce better scaling behavior than CPU-based scaling alone, especially for I/O-bound services or workloads with uneven CPU profiles.
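As a sketch, assuming prometheus-adapter already exposes a per-Pod metric named http_requests_per_second through the custom metrics API (the metric name and targets here are hypothetical), an autoscaling/v2 HPA could consume it like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # served by prometheus-adapter (assumption)
      target:
        type: AverageValue
        averageValue: "100"              # aim for roughly 100 requests/second per Pod
```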
So the correct tooling combination in the provided choices is Prometheus and the prometheus-adapter, option A.
=========
There is an application running in a logical chain: Gateway API → Service → EndpointSlice → Container.
What Kubernetes API object is missing from this sequence?
Proxy
Docker
Pod
Firewall
In Kubernetes, application traffic flows through a well-defined set of API objects and runtime components before reaching a running container. Understanding this logical chain is essential for grasping how Kubernetes networking works internally.
The given sequence is: Gateway API → Service → EndpointSlice → Container. While this looks close to correct, it is missing a critical Kubernetes abstraction: the Pod. Containers in Kubernetes do not run independently; they always run inside Pods. A Pod is the smallest deployable and schedulable unit in Kubernetes and serves as the execution environment for one or more containers that share networking and storage resources.
The correct logical chain should be:
Gateway API → Service → EndpointSlice → Pod → Container
The Gateway API defines how external or internal traffic enters the cluster. The Service provides a stable virtual IP and DNS name, abstracting a set of backend workloads. EndpointSlices then represent the actual network endpoints backing the Service, typically mapping to the IP addresses of Pods. Finally, traffic is delivered to containers running inside those Pods.
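To make the EndpointSlice step concrete, here is a minimal sketch of one that the control plane might generate for a Service (all names and IPs are hypothetical):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: web-abc12                        # generated name (hypothetical)
  labels:
    kubernetes.io/service-name: web      # ties the slice to its Service
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 8080
endpoints:
- addresses:
  - "10.244.1.17"                        # a Pod IP
  targetRef:
    kind: Pod
    name: web-5d79c8b9d6-kx2pq           # the backing Pod (hypothetical)
```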
Option A (Proxy) is incorrect because while proxies such as kube-proxy or data plane proxies play a role in traffic forwarding, they are not Kubernetes API objects that represent application workloads in this logical chain. Option B (Docker) is incorrect because Docker is a container runtime, not a Kubernetes API object, and Kubernetes is runtime-agnostic. Option D (Firewall) is incorrect because firewalls are not core Kubernetes workload or networking API objects involved in service-to-container routing.
Option C (Pod) is the correct answer because Pods are the missing link between EndpointSlices and containers. EndpointSlices point to Pod IPs, and containers cannot exist outside of Pods. Kubernetes documentation clearly states that Pods are the fundamental unit of execution and networking, making them essential in any accurate representation of application traffic flow within a cluster.
=========
What is an important consideration when choosing a base image for a container in a Kubernetes deployment?
It should be minimal and purpose-built for the application to reduce attack surface and improve performance.
It should always be the latest version to ensure access to the newest features.
It should be the largest available image to ensure all dependencies are included.
It can be any existing image from the public repository without consideration of its contents.
Choosing an appropriate base image is a critical decision in building containerized applications for Kubernetes, as it directly impacts security, performance, reliability, and operational efficiency. A key best practice is to select a minimal, purpose-built base image, making option A the correct answer.
Minimal base images—such as distroless images or slim variants of common distributions—contain only the essential components required to run the application. By excluding unnecessary packages, shells, and utilities, these images significantly reduce the attack surface. Fewer components mean fewer potential vulnerabilities, which is especially important in Kubernetes environments where containers are often deployed at scale and exposed to dynamic network traffic.
Smaller images also improve performance and efficiency. They reduce image size, leading to faster image pulls, quicker Pod startup times, and lower network and storage overhead. This is particularly beneficial in large clusters or during frequent deployments, scaling events, or rolling updates. Kubernetes’ design emphasizes fast, repeatable deployments, and lightweight images align well with these goals.
Option B is incorrect because always using the latest image version can introduce instability or unexpected breaking changes. Kubernetes best practices recommend using explicitly versioned and tested images to ensure predictable behavior and reproducibility. Option C is incorrect because large images increase the attack surface, slow down deployments, and often include unnecessary dependencies that are never used by the application. Option D is incorrect because blindly using public images without inspecting their contents or provenance introduces serious security and compliance risks.
Kubernetes documentation and cloud-native security guidance consistently emphasize the principle of least privilege and minimalism in container images. A well-chosen base image supports secure defaults, faster operations, and easier maintenance, all of which are essential for running reliable workloads in production Kubernetes environments.
Therefore, the correct and verified answer is Option A.
=========
What is a key feature of a container network?
Proxying REST requests across a set of containers.
Allowing containers running on separate hosts to communicate.
Allowing containers on the same host to communicate.
Caching remote disk access.
A defining requirement of container networking in orchestrated environments is enabling workloads to communicate across hosts, not just within a single machine. That’s why B is correct: a key feature of a container network is allowing containers (Pods) running on separate hosts to communicate.
In Kubernetes, this idea becomes the Kubernetes network model: every Pod gets its own IP address, and Pods must be able to communicate with all other Pods across nodes without NAT. Achieving that across a cluster requires a networking layer (typically implemented by a CNI plugin) that can route traffic between nodes so that Pod-to-Pod communication works regardless of placement. This is crucial because schedulers place Pods dynamically; you cannot assume two communicating components will land on the same node.
Option C is true in a trivial sense—containers on the same host can communicate—but that capability alone is not the key feature that makes orchestration viable at scale. Cross-host connectivity is the harder and more essential property. Option A describes application-layer behavior (like API gateways or reverse proxies) rather than the foundational networking capability. Option D describes storage optimization, unrelated to container networking.
From a cloud native architecture perspective, reliable cross-host networking enables microservices patterns, service discovery, and distributed systems behavior. Kubernetes Services, DNS, and NetworkPolicies all depend on the underlying ability for Pods across the cluster to send traffic to each other. If your container network cannot provide cross-node routing and reachability, the cluster behaves like isolated islands and breaks the fundamental promise of orchestration: “schedule anywhere, communicate consistently.”
=========
What is the practice of bringing financial accountability to the variable spend model of cloud resources?
FaaS
DevOps
CloudCost
FinOps
The practice of bringing financial accountability to cloud spending—where costs are variable and usage-based—is called FinOps, so D is correct. FinOps (Financial Operations) is an operating model and culture that helps organizations manage cloud costs by connecting engineering, finance, and business teams. Because cloud resources can be provisioned quickly and billed dynamically, traditional budgeting approaches often fail to keep pace. FinOps addresses this by introducing shared visibility, governance, and optimization processes that enable teams to make cost-aware decisions while still moving fast.
In Kubernetes and cloud-native architectures, variable spend shows up in many ways: autoscaling node pools, over-provisioned resource requests, idle clusters, persistent volumes, load balancers, egress traffic, managed services, and observability tooling. FinOps practices encourage tagging/labeling for cost attribution, defining cost KPIs, enforcing budget guardrails, and continuously optimizing usage (right-sizing resources, scaling policies, turning off unused environments, and selecting cost-effective architectures).
Why the other options are incorrect: FaaS (Function as a Service) is a compute model (serverless), not a financial accountability practice. DevOps is a cultural and technical practice focused on collaboration and delivery speed, not specifically cloud cost accountability (though it can complement FinOps). CloudCost is not a widely recognized standard term in the way FinOps is.
In practice, FinOps for Kubernetes often involves improving resource efficiency: aligning requests/limits with real usage, using HPA/VPA appropriately, selecting instance types that match workload profiles, managing cluster autoscaler settings, and allocating shared platform costs to teams via labels/namespaces. It also includes forecasting and anomaly detection, because cloud-native spend can spike quickly due to misconfigurations (e.g., runaway autoscaling or excessive log ingestion).
So, the correct term for financial accountability in cloud variable spend is FinOps (D).
=========
Can a Kubernetes Service expose multiple ports?
No, you can only expose one port per each Service.
Yes, but you must specify an unambiguous name for each port.
Yes, the only requirement is to use different port numbers.
No, because the only port you can expose is port number 443.
Yes, a Kubernetes Service can expose multiple ports, and when it does, each port should have a unique, unambiguous name, making B correct. In the Service spec, the ports field is an array, allowing you to define multiple port mappings (e.g., 80 for HTTP and 443 for HTTPS, or grpc and metrics). Each entry can include port (Service port), targetPort (backend Pod port), and protocol.
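A minimal multi-port Service sketch (names, ports, and labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: http                # each port gets an unambiguous name
    port: 80
    targetPort: 8080
    protocol: TCP
  - name: metrics
    port: 9090
    targetPort: 9090
    protocol: TCP
```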
The naming requirement becomes important because Kubernetes needs to disambiguate ports, especially when other resources refer to them. For example, an Ingress backend or some proxies/controllers can reference Service ports by name. Also, when multiple ports exist, a name helps humans and automation reliably select the correct port. Kubernetes documentation and common practice recommend naming ports whenever there is more than one, and in several scenarios it’s effectively required to avoid ambiguity.
Option A is incorrect because multi-port Services are common and fully supported. Option C is insufficient: while different port numbers are necessary, naming is the correct distinguishing rule emphasized by Kubernetes patterns and required by some integrations. Option D is incorrect and nonsensical—Services can expose many ports and are not restricted to 443.
Operationally, exposing multiple ports through one Service is useful when a single backend workload provides multiple interfaces (e.g., application traffic and a metrics endpoint). You can keep stable discovery under one DNS name while still differentiating ports. The backend Pods must still listen on the target ports, and selectors determine which Pods are endpoints. The key correctness point for this question is: multi-port Services are allowed, and each port should be uniquely named to avoid confusion and integration issues.
=========
What is the minimum number of etcd members that are required for a highly available Kubernetes cluster?
Two etcd members.
Five etcd members.
Six etcd members.
Three etcd members.
D (three etcd members) is correct. etcd is a distributed key-value store that uses the Raft consensus algorithm. High availability in consensus systems depends on maintaining a quorum (majority) of members to continue serving writes reliably. With 3 members, the cluster can tolerate 1 failure and still have 2/3 available—enough for quorum.
Two members is a common trap: with 2, a single failure leaves 1/2, which is not a majority, so the cluster cannot safely make progress. That means 2-member etcd is not HA; it is fragile and can be taken down by one node loss, network partition, or maintenance event. Five members can tolerate 2 failures and is a valid HA configuration, but it is not the minimum. Six is even-sized and generally discouraged for consensus because it doesn’t improve failure tolerance compared to five (quorum still requires 4), while increasing coordination overhead.
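The arithmetic behind this: with n members, quorum is floor(n/2) + 1, and the cluster tolerates n minus quorum failures:

members (n) = 2 → quorum 2, tolerates 0 failures
members (n) = 3 → quorum 2, tolerates 1 failure
members (n) = 5 → quorum 3, tolerates 2 failures
members (n) = 6 → quorum 4, tolerates 2 failures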
In Kubernetes, etcd reliability directly affects the API server and the entire control plane because etcd stores cluster state: object specs, status, controller state, and more. If etcd loses quorum, the API server will be unable to persist or reliably read/write state, leading to cluster management outages. That’s why the minimum HA baseline is three etcd members, often across distinct failure domains (nodes/AZs), with strong disk performance and consistent low-latency networking.
So, the smallest etcd topology that provides true fault tolerance is 3 members, which corresponds to option D.
=========
What is the role of the ingressClassName field in a Kubernetes Ingress resource?
It defines the type of protocol (HTTP or HTTPS) that the Ingress Controller should process.
It specifies the backend Service used by the Ingress Controller to route external requests.
It determines how routing rules are prioritized when multiple Ingress objects are applied.
It indicates which Ingress Controller should implement the rules defined in the Ingress resource.
The ingressClassName field in a Kubernetes Ingress resource is used to explicitly specify which Ingress Controller is responsible for processing and enforcing the rules defined in that Ingress. This makes option D the correct answer.
In Kubernetes clusters, it is common to have multiple Ingress Controllers running at the same time. For example, a cluster might run an NGINX Ingress Controller, a cloud-provider-specific controller, and an internal-only controller simultaneously. Without a clear mechanism to select which controller should handle a given Ingress resource, multiple controllers could attempt to process the same rules, leading to conflicts or undefined behavior.
The ingressClassName field solves this problem by referencing an IngressClass object. The IngressClass defines the controller implementation (via the controller field), and the Ingress resource uses ingressClassName to declare which class—and therefore which controller—should act on it. This creates a clean and explicit binding between an Ingress and its controller.
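A minimal sketch of that binding (host names and the controller string are illustrative; the controller value must match whatever your Ingress Controller registers):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx       # value registered by the controller (assumption)
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx                # selects the IngressClass above
  rules:
  - host: web.example.com                # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```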
Option A is incorrect because protocol handling (HTTP vs HTTPS) is defined through TLS configuration and service ports, not by ingressClassName. Option B is incorrect because backend Services are defined in the rules and backend sections of the Ingress specification. Option C is incorrect because routing priority is determined by path matching rules and controller-specific logic, not by ingressClassName.
Historically, annotations were used to select Ingress Controllers, but ingressClassName is now the recommended and standardized approach. It improves clarity, portability, and compatibility across different Kubernetes distributions and controllers.
In summary, the primary purpose of ingressClassName is to indicate which Ingress Controller should implement the routing rules for a given Ingress resource, making Option D the correct and verified answer.
=========
What are the characteristics for building every cloud-native application?
Resiliency, Operability, Observability, Availability
Resiliency, Containerd, Observability, Agility
Kubernetes, Operability, Observability, Availability
Resiliency, Agility, Operability, Observability
Cloud-native applications are typically designed to thrive in dynamic, distributed environments where infrastructure is elastic and failures are expected. The best set of characteristics listed is Resiliency, Agility, Operability, Observability, making D correct.
Resiliency means the application and its supporting platform can tolerate failures and continue providing service. In Kubernetes terms, resiliency is supported through self-healing controllers, replica management, health probes, and safe rollout mechanisms, but the application must also be designed to handle transient failures, retries, and graceful degradation.
Agility reflects the ability to deliver changes quickly and safely. Cloud-native systems emphasize automation, CI/CD, declarative configuration, and small, frequent releases—often enabled by Kubernetes primitives like Deployments and rollout strategies. Agility is about reducing the friction to ship improvements while maintaining reliability.
Operability is how manageable the system is in production: clear configuration, predictable deployments, safe scaling, and automation-friendly operations. Kubernetes encourages operability through consistent APIs, controllers, and standardized patterns for configuration and lifecycle.
Observability means you can understand what’s happening inside the system using telemetry—metrics, logs, and traces—so you can troubleshoot issues, measure SLOs, and improve performance. Kubernetes provides many integration points for observability, but cloud-native apps must also emit meaningful signals.
Options B and C include items that are not “characteristics” (containerd is a runtime; Kubernetes is a platform). Option A includes “availability,” which is important, but the canonical cloud-native framing in this question emphasizes the four qualities in D as the foundational build characteristics.
=========
What are the advantages of adopting a GitOps approach for your deployments?
Reduce failed deployments, operational costs, and fragile release processes.
Reduce failed deployments, configuration drift, and fragile release processes.
Reduce failed deployments, operational costs, and learn git.
Reduce failed deployments, configuration drift and improve your reputation.
The correct answer is B: GitOps helps reduce failed deployments, reduce configuration drift, and reduce fragile release processes. GitOps is an operating model where Git is the source of truth for declarative configuration (Kubernetes manifests, Helm releases, Kustomize overlays). A GitOps controller (like Flux or Argo CD) continuously reconciles the cluster’s actual state to match what’s declared in Git. This creates a stable, repeatable deployment pipeline and minimizes “snowflake” environments.
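For example, with Argo CD as the GitOps controller, a single Application object declares which Git path a namespace should be reconciled from; this is a sketch with a hypothetical repository and paths:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-configs   # hypothetical repository
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true        # remove resources deleted from Git
      selfHeal: true     # revert manual drift back to the Git state
```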
Reducing failed deployments: changes go through pull requests, code review, automated checks, and controlled merges. Deployments become predictable because the controller applies known-good, versioned configuration rather than ad-hoc manual commands. Rollbacks are also simpler—reverting a Git commit returns the cluster to the prior desired state.
Reducing configuration drift: without GitOps, clusters often drift because humans apply hotfixes directly in production or because different environments diverge over time. With GitOps, the controller detects drift and either alerts or automatically corrects it, restoring alignment with Git.
Reducing fragile release processes: releases become standardized and auditable. Git history provides an immutable record of who changed what and when. Promotion between environments becomes systematic (merge/branch/tag), and the same declarative artifacts are used consistently.
The other options include items that are either not the primary GitOps promise (like “learn git”) or subjective (“improve your reputation”). Operational cost reduction can happen indirectly through fewer incidents and more automation, but the most canonical and direct GitOps advantages in Kubernetes delivery are reliability and drift control—captured precisely in B.
=========
A Kubernetes _____ is an abstraction that defines a logical set of Pods and a policy by which to access them.
Selector
Controller
Service
Job
A Kubernetes Service is the abstraction that defines a logical set of Pods and the policy for accessing them, so C is correct. Pods are ephemeral: their IPs change as they are recreated, rescheduled, or scaled. A Service solves this by providing a stable endpoint (DNS name and virtual IP) and routing rules that send traffic to the current healthy Pods backing the Service.
A Service typically uses a label selector to identify which Pods belong to it. Kubernetes then maintains endpoint data (Endpoints/EndpointSlice) for those Pods and uses the cluster dataplane (kube-proxy or eBPF-based implementations) to forward traffic from the Service IP/port to one of the backend Pod IPs. This is what the question means by “logical set of Pods” and “policy by which to access them” (for example, round-robin-like distribution depending on dataplane, session affinity options, and how ports map via targetPort).
Option A (Selector) is only the query mechanism used by Services and controllers; it is not itself the access abstraction. Option B (Controller) is too generic; controllers reconcile desired state but do not provide stable network access policies. Option D (Job) manages run-to-completion tasks and is unrelated to network access abstraction.
Services can be exposed in different ways: ClusterIP (internal), NodePort, LoadBalancer, and ExternalName. Regardless of type, the core Service concept remains: stable access to a dynamic set of Pods. This is foundational to Kubernetes networking and microservice communication, and it is why Service discovery via DNS works effectively across rolling updates and scaling events.
Thus, the correct answer is Service (C).
=========
What Linux namespace is shared by default by containers running within a Kubernetes Pod?
Host Network
Network
Process ID
Process Name
By default, containers in the same Kubernetes Pod share the network namespace, which means they share the same IP address and port space. Therefore, the correct answer is B (Network).
This shared network namespace is a key part of the Pod abstraction. Because all containers in a Pod share networking, they can communicate with each other over localhost and coordinate tightly, which is the basis for patterns like sidecars (service mesh proxies, log shippers, config reloaders). It also means containers must coordinate port usage: if two containers try to bind the same port on 0.0.0.0, they’ll conflict because they share the same network namespace and therefore the same port space.
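A minimal sketch of two containers in one Pod communicating over localhost (image tags are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx:1.25                    # listens on port 80 in the shared network namespace
  - name: sidecar
    image: curlimages/curl:8.5.0         # illustrative tag
    command: ["sh", "-c", "while true; do curl -s http://localhost:80 > /dev/null; sleep 5; done"]
```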
Option A (“Host Network”) is different: hostNetwork: true is an optional Pod setting that puts the Pod into the node’s network namespace, not the Pod’s shared namespace. It is not the default and is generally used sparingly due to security and port-collision risks. Option C (“Process ID”) is not shared by default in Kubernetes; PID namespace sharing requires explicitly enabling process namespace sharing (e.g., shareProcessNamespace: true). Option D (“Process Name”) is not a Linux namespace concept.
The Pod model also commonly implies shared storage volumes (if defined) and shared IPC namespace in some configurations, but the universally shared-by-default namespace across containers in the same Pod is the network namespace. This default behavior is why Kubernetes documentation explains a Pod as a “logical host” for one or more containers: the containers are co-located and share certain namespaces as if they ran on the same host.
So, the correct, verified answer is B: containers in the same Pod share the Network namespace by default.
=========
What is the purpose of the CRI?
To provide runtime integration control when multiple runtimes are used.
Support container replication and scaling on nodes.
Provide an interface allowing Kubernetes to support pluggable container runtimes.
Allow the definition of dynamic resource criteria across containers.
The Container Runtime Interface (CRI) exists so Kubernetes can support pluggable container runtimes behind a stable interface, which makes C correct. In Kubernetes, the kubelet is responsible for managing Pods on a node, but it does not implement container execution itself. Instead, it delegates container lifecycle operations (pull images, create pod sandbox, start/stop containers, fetch logs, exec/attach streaming) to a container runtime through a well-defined API. CRI is that API contract.
Because of CRI, Kubernetes can run with different container runtimes—commonly containerd or CRI-O—without changing kubelet core logic. This improves portability and keeps Kubernetes modular: runtime innovation can happen independently while Kubernetes retains a consistent operational model. CRI is accessed via gRPC and defines the services and message formats kubelet uses to communicate with runtimes.
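For example, both the kubelet and crictl (the CRI command-line client) reach a runtime through its CRI socket; a typical /etc/crictl.yaml pointing at containerd looks roughly like this (the socket path can vary by distribution):

```yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
```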
Option B is incorrect because replication and scaling are handled by controllers (Deployments/ReplicaSets) and schedulers, not by CRI. Option D is incorrect because resource criteria (requests/limits) are expressed in Pod specs and enforced via OS mechanisms (cgroups) and kubelet/runtime behavior, but CRI is not “for defining dynamic resource criteria.” Option A is vague and does not state the primary purpose: while CRI does enable runtime integration, its defining goal is to make container runtimes pluggable and interoperable.
This design became even more important as Kubernetes moved away from Docker Engine integration (dockershim removal from kubelet). With CRI, Kubernetes focuses on orchestrating Pods, while runtimes focus on executing containers. That separation of responsibilities is a core container orchestration principle and is exactly what the question is testing.
So the verified answer is C.
=========