
Battle for Resources or the SSA Path to Kubernetes Diplomacy

Introduction

In the world of Kubernetes, resource management is more than just creating, deleting, or updating objects. It's an intricate dance involving numerous tools, operators, and users. As our infrastructure grows, it becomes increasingly challenging to maintain control, necessitating the adoption of more advanced and sophisticated approaches to resource management and control.

In this article, I will not delve into the basic methods of creating resources, as that is a rather trivial task. Instead, I would like to share my experience in optimizing resource update paths, which have proven to be immensely valuable in managing resources within large and complex Kubernetes clusters, as well as during the development of operators.

Update

"How do I update a resource?" This is often the first question that comes to mind for every developer, DevOps engineer, or anyone who has interacted with Kubernetes.

And the first answer that comes to mind is UPDATE - or, in Kubernetes terminology, APPLY.

This approach is entirely correct.

The kubectl apply command is an incredibly powerful and convenient tool: you simply modify the desired part of the resource manifest and apply it, and Kubernetes handles the rest.

Let's apply the following deployment manifest as an example:

> cat test-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpa-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpa-app
  template:
    metadata:
      labels:
        app: gpa-app
    spec:
      containers:
        - name: gpa-app
          image: nginx:1.21.0
          ports:
            - containerPort: 80

> kubectl apply -f test-deployment.yaml

However, complexity arises when we want to automate the regular checking and updating of a resource, for example, by encapsulating the logic within a microservice.

In such a scenario, you always need to keep the source manifest to introduce changes and apply them. Storing the manifest directly within the microservice isn't an ideal solution for several reasons:

  • Any necessary manifest change, unrelated to the microservice's routine logic, would require code modification, building a new microservice image, and redeploying it. This introduces unnecessary downtime and operational inconvenience.
  • If the manifest within the cluster is manually modified, these changes will be overwritten by the stale values from the microservice's internal manifest on its next application cycle.

A more robust solution is to implement logic that first retrieves (GET) the current state of the resource's manifest, updates the necessary fields, and then applies the changes back to the cluster:

  • Executing the GET command to retrieve the current state
> kubectl get deployment gpa-app -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ...
  creationTimestamp: "2025-08-25T23:43:37Z"
  generation: 1
  name: gpa-app
  namespace: default
  resourceVersion: "495672"
  uid: 2dec1474-988d-431b-9cf2-e8f41a624517
spec:
  replicas: 1
  ...
  selector:
    matchLabels:
      app: gpa-app
  ...
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: gpa-app
    spec:
      containers:
      - image: nginx:1.21.0
        imagePullPolicy: IfNotPresent
        name: gpa-app
        ports:
        - containerPort: 80
          protocol: TCP
        ...
      ...
status:
  ...


  • Cleaning the manifest and updating the replicas count
> cat test-deployment-upd.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpa-app
spec:
  replicas: 2   # new replicas count
  selector:
    matchLabels:
      app: gpa-app
  template:
    metadata:
      labels:
        app: gpa-app
    spec:
      containers:
        - name: gpa-app
          image: nginx:1.21.0
          ports:
            - containerPort: 80


  • Applying the changes
> kubectl apply -f test-deployment-upd.yaml 

And this is a very popular solution that's sufficient for most use cases.

However, we're discussing large, complex, and high-load systems where an extra request to the Kubernetes API can be an expensive operation. In the example above, we are making two separate requests every time: a GET followed by an APPLY.

To delve even deeper into this situation, let's consider the presence of microservices or other systems that subscribe to and react to resource events. In this scenario, we would constantly be spamming the cluster with events, even when no actual changes are being made to the resource.

:::info
This situation does not apply to the standard kubectl apply command because the utility itself performs a validation of the applied manifest by using information stored in the resource's annotations. Specifically, it uses the kubectl.kubernetes.io/last-applied-configuration annotation to intelligently determine what has changed and send only the necessary updates.

:::

One of the most straightforward solutions is to first check for changes and then apply the new manifest only if the resource has genuinely been modified.

To summarize everything discussed above, the logic for implementing a microservice or an operator for resource updates should be as follows:

  • Get the current manifest of the resource from the cluster.
  • Modify the retrieved resource.
  • Compare the modified resource with its original state, and if it has changed, apply the new manifest.

Let's name this approach GET-CHECK-APPLY.
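As an illustration (not code from the original article), here is a minimal GET-CHECK-APPLY sketch in Go using client-go; the package name, function name, and the choice of spec.replicas as the reconciled field are assumptions for the example:

package k8spatterns

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getCheckApply implements the GET-CHECK-APPLY flow for a single field
// (spec.replicas) of the gpa-app Deployment:
//  1. GET the live object,
//  2. CHECK whether the desired value differs,
//  3. write the object back only when a real change is needed.
func getCheckApply(ctx context.Context, cs kubernetes.Interface, namespace string, desiredReplicas int32) error {
	deployments := cs.AppsV1().Deployments(namespace)

	// GET: fetch the current state from the cluster.
	dep, err := deployments.Get(ctx, "gpa-app", metav1.GetOptions{})
	if err != nil {
		return err
	}

	// CHECK: skip the write entirely if nothing would change.
	if dep.Spec.Replicas != nil && *dep.Spec.Replicas == desiredReplicas {
		return nil
	}

	// APPLY: send the modified object back (here via a full update).
	dep.Spec.Replicas = &desiredReplicas
	_, err = deployments.Update(ctx, dep, metav1.UpdateOptions{})
	return err
}

The CHECK step is what keeps watchers quiet: if the live value already matches the desired one, no write is issued and no event is generated.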


Collaborative Resource Management

Everything we've discussed so far represents a simple and elegant solution for a scenario where a single microservice or user manages a resource. But what if multiple contributors are involved in modifying that same resource?

This brings us to the main topic of this article: how to resolve the challenges of shared resource management politely and diplomatically.

The first obvious step is to distribute attribute ownership among the contributors. For example, one service might be responsible for watching and updating the image, while another manages the number of replicas.

“service-a”

> cat test-deployment-service-a.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpa-app
spec:
  replicas: 3   # belongs to service-A
  ...

“service-b”

> cat test-deployment-service-b.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpa-app
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: gpa-app
          image: nginx:1.21.0   # belongs to service-B
          ...

Unfortunately, the GET-CHECK-APPLY approach will not be effective in this scenario. Since it operates on the entire resource manifest, a collision can occur when multiple services are working concurrently. Specifically, between the GET and APPLY steps of one service, another service might apply its own changes, which would then be overwritten by the first service's final APPLY.


Patch as a solution for collaborative resource management

The most straightforward and obvious path to solve the problem of collaborative resource management is to use PATCH. This approach works well for two main reasons:

  • Field ownership is already distributed among the contributors. By using PATCH, each service can take responsibility for a specific set of fields, preventing conflicts.
  • PATCH allows targeted update of only the required attributes. Instead of updating the entire manifest, you can send a partial update with just the fields you need to change. This is far more efficient and avoids overwriting modifications made by other services.
> cat test-deployment-service-a.yaml
spec:
  replicas: 3

> kubectl patch deployment gpa-app --patch-file test-deployment-service-a.yaml


> cat test-deployment-service-b.yaml
spec:
  template:
    spec:
      containers:
        - name: gpa-app
          image: nginx:1.22.0

> kubectl patch deployment gpa-app --patch-file test-deployment-service-b.yaml


> kubectl get deployment gpa-app -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ...
  generation: 4   # one of the indicators that the resource has changed
  resourceVersion: "520392"
  name: gpa-app
  ...
spec:
  replicas: 3   # changed replicas count by service-A
  ...
  template:
    ...
    spec:
      containers:
      - image: nginx:1.22.0   # changed image by service-B
        ...
      ...
status:
  ...

Unfortunately, we still can't abandon the GET-CHECK step. Like APPLY, PATCH also triggers a resource version change, which generates an event and creates noise, bothering our "neighbors" (other services and systems).

As a result, we've found that GET-CHECK-PATCH is more convenient than GET-CHECK-APPLY for collaborative work.
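To make the comparison concrete, here is the same flow as a client-go sketch (my illustration, with the same assumed names as before), where the write is a strategic merge patch carrying only the owned field:

package k8spatterns

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// getCheckPatch updates only spec.replicas of the gpa-app Deployment.
// The GET-CHECK part avoids no-op writes; the PATCH part avoids touching
// fields that belong to other services.
func getCheckPatch(ctx context.Context, cs kubernetes.Interface, namespace string, desiredReplicas int32) error {
	deployments := cs.AppsV1().Deployments(namespace)

	// GET + CHECK: bail out early when the live value already matches,
	// so we do not bump resourceVersion and generate events for nothing.
	dep, err := deployments.Get(ctx, "gpa-app", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if dep.Spec.Replicas != nil && *dep.Spec.Replicas == desiredReplicas {
		return nil
	}

	// PATCH: a strategic merge patch carrying only the owned field.
	patch := []byte(fmt.Sprintf(`{"spec":{"replicas":%d}}`, desiredReplicas))
	_, err = deployments.Patch(ctx, "gpa-app", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}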

However, despite these improvements, this logic still feels quite cumbersome:

  • To update a resource, we always make two separate API calls: GET and PATCH (or APPLY).
  • We must implement complex logic to compare the initial state with the new state and decide whether to proceed.

In Kubernetes circles, this GET-CHECK-PATCH(APPLY) approach is known as Client-Side Apply (CSA), where all the logic for merging, conflict resolution, and validation is performed on the client side, and only the final result is applied.

While the client has significant control over the resource management process, many capabilities remain unavailable to it. For example, a client cannot prevent another client from overwriting a set of fields that it owns.


Kubernetes SSA

Server-Side Apply (SSA), a highly effective and powerful declarative mechanism, became generally available in Kubernetes v1.22.

SSA significantly simplifies collaborative resource management by moving the responsibility for updating, validating, and consolidating logic to the API server itself. The client only sends the desired state of the resource, and the Kubernetes API server handles all the complex logic under the hood.

A key feature introduced with SSA is the mechanism of shared field management. The Kubernetes API server now knows which client is managing which field within a resource's specification. When a client sends a manifest using SSA, the API server checks whether that client owns the fields it's trying to modify. If a field is unowned or already belongs to that client, the change is applied successfully. However, if another client owns the field, the API server will return an error, alerting you to the conflict, or will override the existing owner, depending on your settings.

Using SSA completely eliminates the need for the GET-CHECK-PATCH(APPLY) approach. With SSA, you simply send the desired state, specify the client's name (the field manager), and receive the server's response.

It's important to note that sending a partial manifest (a PATCH) rather than applying the entire manifest is still a best practice, as it allows your service to "claim" ownership of only the specific fields it manages.

We can use the same patch files from the previous example, changing the replicas and image, and apply them using SSA.

> kubectl patch deployment gpa-app --patch-file test-deployment-service-a.yaml --field-manager=service-a
> kubectl patch deployment gpa-app --patch-file test-deployment-service-b.yaml --field-manager=service-b


> kubectl get deployment gpa-app -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ...
  generation: 6   # one of the indicators that the resource has changed
  resourceVersion: "534637"
  name: gpa-app
  ...
spec:
  replicas: 1   # replicas count managed by service-a
  ...
  template:
    ...
    spec:
      containers:
      - image: nginx:1.21.0   # image managed by service-b
        ...
      ...
status:
  ...

To view the list of all managed fields, extend the kubectl get command with the --show-managed-fields flag:

> kubectl get deployment gpa-app -o yaml --show-managed-fields
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ...
  ...
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:replicas: {}   # confirmation that spec.replicas belongs to service-a
    manager: service-a
    operation: Update
    time: "2025-08-26T00:23:50Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:template:
          f:spec:
            f:containers:
              k:{"name":"gpa-app"}:
                f:image: {}   # confirmation that spec.template.spec.containers[0].image belongs to service-b
    manager: service-b
    operation: Update
    time: "2025-08-26T00:24:05Z"
  ...
  name: gpa-app
  ...
spec:
  ...

As you've seen, Kubernetes has "claimed" the replicas field for "service-a" and the image field for "service-b".
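The same ownership information is available programmatically: the object's metadata carries a managedFields list. Below is a small, illustrative client-go snippet (not from the article) that prints each manager and the raw set of fields it owns.

package k8spatterns

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printFieldManagers is the programmatic equivalent of
// `kubectl get deployment gpa-app -o yaml --show-managed-fields`.
func printFieldManagers(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	dep, err := cs.AppsV1().Deployments(namespace).Get(ctx, "gpa-app", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, entry := range dep.ManagedFields {
		fields := ""
		if entry.FieldsV1 != nil {
			// FieldsV1.Raw is the JSON structure rendered as fieldsV1 in the YAML output.
			fields = string(entry.FieldsV1.Raw)
		}
		fmt.Printf("manager=%s operation=%s fields=%s\n", entry.Manager, entry.Operation, fields)
	}
	return nil
}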

This is the core of SSA's field management. If you now try to override the entire manifest again with SSA, Kubernetes will return an error because it detects a conflict.

> kubectl apply -f test-deployment.yaml --field-manager=service-c --server-side
error: Apply failed with 1 conflict: conflict with "service-a" using apps/v1: .spec.replicas
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts

It correctly identifies that you, as the new applier, do not own the fields already claimed by "service-a" and "service-b." This behavior is a key advantage of SSA: it prevents unintentional overwrites and ensures that shared resources are updated collaboratively and safely.
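For an operator written in Go, the whole SSA interaction is a single request: send the partial manifest as an apply patch, identify yourself via FieldManager, and optionally set Force, the API-level counterpart of kubectl's --force-conflicts flag. A minimal sketch under the same assumptions as the earlier examples:

package k8spatterns

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// ssaReplicas declares the desired state of the single field this service
// owns and lets the API server merge it. No GET and no client-side diff:
// the server resolves ownership and conflicts.
func ssaReplicas(ctx context.Context, cs kubernetes.Interface, namespace string, force bool) error {
	// Only the fields listed here will be claimed by the field manager.
	desired := []byte(`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpa-app
spec:
  replicas: 3
`)

	_, err := cs.AppsV1().Deployments(namespace).Patch(ctx, "gpa-app",
		types.ApplyPatchType, desired,
		metav1.PatchOptions{
			FieldManager: "service-a", // who is applying
			Force:        &force,      // true behaves like kubectl's --force-conflicts
		})
	return err
}

With Force left false, a conflicting field produces exactly the kind of error shown above; with Force set to true, the applier takes ownership of the conflicting fields, just as --force-conflicts does.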


Conclusion

Diving deep into Kubernetes resource management makes it clear that the evolution from Client-Side Apply to Server-Side Apply is not just a command-line change. SSA is a fundamental shift in the philosophy of interacting with a cluster. While SSA may seem like a silver bullet, it does have its complexities, requiring a deeper understanding of Kubernetes architecture for a successful implementation.

For a long time, CSA was our reliable companion. It gets the job done but comes with certain limitations. Its reliance on the kubectl.kubernetes.io/last-applied-configuration annotation makes it vulnerable to conflicts and errors, especially in complex, automated environments. In the hands of a single developer, CSA can be an effective tool for quick, simple operations. However, as soon as multiple systems or individuals try to manage the same resource simultaneously, its fragility becomes obvious. CSA can lead to unpredictable results, race conditions, and, as a consequence, cluster instability.

SSA solves these problems by moving complex logic to the API server. The field ownership management feature is a game-changer. The API server is no longer just executing commands; it becomes an intelligent arbiter that knows exactly who is responsible for which fields. This makes collaboration safe by preventing accidental overwrites and conflicts. For developers building operators and controllers, SSA is not just an option but a necessity. It enables the creation of robust and scalable systems that can coexist within the same cluster without interfering with each other.

So, when should you use each approach?

  • CSA can still be helpful in scenarios where you're manually managing resources and don't expect outside interference. It's light and straightforward for one-off operations.
  • SSA is the new standard for all automated systems, operators, and teams working in high-load or shared environments. It is the path toward truly declarative, safe, and predictable state management for your cluster.

Ultimately, understanding these two approaches is key to working effectively and without errors in Kubernetes. By choosing Server-Side Apply, you're not just using a new command; you're adopting a modern, reliable, and smarter way to manage your infrastructure.

Thank you!
