
Battle for Resources or the SSA Path to Kubernetes Diplomacy

Introduction

In the world of Kubernetes, resource management is more than just creating, deleting, or updating objects. It's an intricate dance involving numerous tools, operators, and users. As our infrastructure grows, it becomes increasingly challenging to maintain control, which calls for more sophisticated approaches to resource management.

In this article, I will not delve into the basic methods of creating resources, as that is a rather trivial task. Instead, I would like to share my experience in optimizing resource update paths, which have proven to be immensely valuable in managing resources within large and complex Kubernetes clusters, as well as during the development of operators.

Update

"How do I update a resource?" This is often the first question that comes to mind for every developer, DevOps engineer, or anyone who has interacted with Kubernetes.

And the first answer that comes to mind is UPDATE - or, in Kubernetes terminology, APPLY.

This approach is entirely correct.

The kubectl apply command is an incredibly powerful and convenient tool: you simply modify the desired part of the resource manifest and apply it, and Kubernetes handles the rest.

Let's apply the following deployment manifest as an example:

> cat test-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpa-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpa-app
  template:
    metadata:
      labels:
        app: gpa-app
    spec:
      containers:
        - name: gpa-app
          image: nginx:1.21.0
          ports:
            - containerPort: 80

> kubectl apply -f test-deployment.yaml

However, the complexity arises when we want to automate regular checking and updating of a resource, for example by encapsulating the logic within a microservice.

In such a scenario, you always need to keep the source manifest to introduce changes and apply them. Storing the manifest directly within the microservice isn't an ideal solution for several reasons:

  • Any necessary manifest change, unrelated to the microservice's routine logic, would require code modification, building a new microservice image, and redeploying it. This introduces unnecessary downtime and operational inconvenience.
  • If the manifest within the cluster is manually modified, these changes will be overwritten by the stale values from the microservice's internal manifest on its next application cycle.

A more robust solution is to implement a logic that first retrieves (GET) the current state of the resource's manifest, updates the necessary fields, and then applies the changes back to the cluster:

  • Executing the GET command to retrieve the current state
> kubectl get deployment gpa-app -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ...
  creationTimestamp: "2025-08-25T23:43:37Z"
  generation: 1
  name: gpa-app
  namespace: default
  resourceVersion: "495672"
  uid: 2dec1474-988d-431b-9cf2-e8f41a624517
spec:
  replicas: 1
  ...
  selector:
    matchLabels:
      app: gpa-app
  ...
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: gpa-app
    spec:
      containers:
      - image: nginx:1.21.0
        imagePullPolicy: IfNotPresent
        name: gpa-app
        ports:
        - containerPort: 80
          protocol: TCP
        ...
      ...
status:
  ...


  • Cleaning the manifest and updating the replicas count
> cat test-deployment-upd.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpa-app
spec:
  replicas: 2   # new replicas count
  selector:
    matchLabels:
      app: gpa-app
  template:
    metadata:
      labels:
        app: gpa-app
    spec:
      containers:
        - name: gpa-app
          image: nginx:1.21.0
          ports:
            - containerPort: 80


  • Applying the changes
> kubectl apply -f test-deployment-upd.yaml 

And this is a very popular solution that's sufficient for most use cases.

However, we're discussing large, complex, and high-load systems where an extra request to the Kubernetes API can be an expensive operation. In the example above, we are making two separate requests every time: a GET followed by an APPLY.

To delve even deeper into this situation, let's consider the presence of microservices or other systems that subscribe to and react to resource events. In this scenario, we would constantly be spamming the cluster with events, even when no actual changes are being made to the resource.

:::info
This situation does not apply to the standard kubectl apply command, because the utility itself validates the applied manifest against information stored in the resource's annotations. Specifically, it uses the kubectl.kubernetes.io/last-applied-configuration annotation to determine what has changed and sends only the necessary updates.

:::

One of the most straightforward solutions is to first check for changes and then apply the new manifest only if the resource has genuinely been modified.

To summarize everything discussed above, the logic for implementing a microservice or an operator for resource updates should be as follows:

  • Get the current manifest of the resource from the cluster.
  • Modify the retrieved resource.
  • Compare the modified resource with its original state, and if it has changed, apply the new manifest.

Let's name this approach GET-CHECK-APPLY.
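To make this concrete, here is a minimal sketch of the GET-CHECK-APPLY loop as a microservice might implement it with client-go. The namespace, deployment name, and desired replica count are illustrative assumptions taken from the examples above, and the final write is a plain Update call standing in for the apply step:

package main

import (
    "context"
    "fmt"

    "k8s.io/apimachinery/pkg/api/equality"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// reconcileReplicas implements GET-CHECK-APPLY for a single field:
// fetch the live object, mutate a copy, compare, and write back only
// when something actually changed.
func reconcileReplicas(ctx context.Context, cs kubernetes.Interface, desired int32) error {
    // GET the current state from the cluster.
    current, err := cs.AppsV1().Deployments("default").Get(ctx, "gpa-app", metav1.GetOptions{})
    if err != nil {
        return err
    }

    // Modify a copy of the retrieved resource.
    updated := current.DeepCopy()
    updated.Spec.Replicas = &desired

    // CHECK: skip the write if nothing changed, so we do not bump
    // resourceVersion or generate needless watch events.
    if equality.Semantic.DeepEqual(current.Spec, updated.Spec) {
        fmt.Println("gpa-app is already up to date, skipping")
        return nil
    }

    // APPLY: write the modified object back to the cluster.
    _, err = cs.AppsV1().Deployments("default").Update(ctx, updated, metav1.UpdateOptions{})
    return err
}

func main() {
    cfg, err := rest.InClusterConfig() // assumes the service runs inside the cluster
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    if err := reconcileReplicas(context.Background(), cs, 2); err != nil {
        panic(err)
    }
}

The key detail is the comparison step: if the desired spec matches the live spec, the service skips the write entirely and produces no extra events.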


Collaborative Resource Management

Everything we've discussed so far represents a simple and elegant solution for a scenario where a single microservice or user manages a resource. But what if multiple contributors are involved in modifying that same resource?

This brings us to the main topic of this article: how to resolve the challenges of shared resource management politely and diplomatically.

The first obvious step is to distribute attribute ownership among the contributors. For example, one service might manage the number of replicas, while another watches and updates the container image.

“service-a”

> cat test-deployment-service-a.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpa-app
spec:
  replicas: 3  # belongs to service-A
  ...

“service-b”

> cat test-deployment-service-b.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpa-app
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: gpa-app
          image: nginx:1.21.0  # belongs to service-B
          ...

Unfortunately, the GET-CHECK-APPLY approach will not be effective in this scenario. Since it operates on the entire resource manifest, a collision can occur when multiple services are working concurrently. Specifically, between the GET and APPLY steps of one service, another service might apply its own changes, which would then be overwritten by the first service's final APPLY.


Patch as a solution for collaborative resource management

The most straightforward and obvious path to solve the problem of collaborative resource management is to use PATCH. This approach works well for two main reasons:

  • Field ownership is already distributed among the contributors. By using PATCH, each service can take responsibility for a specific set of fields, preventing conflicts.
  • PATCH allows targeted update of only the required attributes. Instead of updating the entire manifest, you can send a partial update with just the fields you need to change. This is far more efficient and avoids overwriting modifications made by other services.
> cat test-deployment-service-a.yaml
spec:
  replicas: 3

> kubectl patch deployment gpa-app --patch-file test-deployment-service-a.yaml


> cat test-deployment-service-b.yaml
spec:
  template:
    spec:
      containers:
        - name: gpa-app
          image: nginx:1.22.0

> kubectl patch deployment gpa-app --patch-file test-deployment-service-b.yaml


> kubectl get deployment gpa-app -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ...
  generation: 4  # one of the indicators that the resource has changed
  resourceVersion: "520392"
  name: gpa-app
  ...
spec:
  replicas: 3  # changed replicas count by service-A
  ...
  template:
    ...
    spec:
      containers:
      - image: nginx:1.22.0  # changed image by service-B
        ...
      ...
status:
  ...

Unfortunately, we still can't abandon the GET-CHECK step. Like APPLY, PATCH also triggers a resource version change, which generates an event and creates noise, bothering our "neighbors" (other services and systems).

As a result, we've found that GET-CHECK-PATCH is more convenient than GET-CHECK-APPLY for collaborative work.
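Inside a service, the same GET-CHECK-PATCH flow can be sketched with client-go by sending a targeted strategic merge patch for the single field the service owns. This is only an illustration; the deployment name, container name, and patch shape are assumptions matching the examples above:

package updater

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// patchImage updates only the container image that this service owns.
func patchImage(ctx context.Context, cs kubernetes.Interface, image string) error {
    // GET + CHECK: read the live object and bail out if our field
    // already has the desired value, so no event is generated.
    current, err := cs.AppsV1().Deployments("default").Get(ctx, "gpa-app", metav1.GetOptions{})
    if err != nil {
        return err
    }
    for _, c := range current.Spec.Template.Spec.Containers {
        if c.Name == "gpa-app" && c.Image == image {
            return nil // nothing to do
        }
    }

    // PATCH: send only the fields this service is responsible for.
    patch := fmt.Sprintf(
        `{"spec":{"template":{"spec":{"containers":[{"name":"gpa-app","image":%q}]}}}}`,
        image,
    )
    _, err = cs.AppsV1().Deployments("default").Patch(
        ctx, "gpa-app",
        types.StrategicMergePatchType,
        []byte(patch),
        metav1.PatchOptions{},
    )
    return err
}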

However, despite these improvements, this logic still feels quite cumbersome:

  • To update a resource, we always make two separate API calls: GET and PATCH (or APPLY).
  • We must implement complex logic to compare the initial state with the new state and decide whether to proceed.

In Kubernetes circles, this GET-CHECK-PATCH(APPLY) approach is known as Client-Side Apply (CSA), where all the logic for merging, conflict resolution, and validation is performed on the client side, and only the final result is applied.

While the client has significant control over the resource management process, some capabilities remain out of its reach. For example, a client cannot prevent another client from overwriting the set of fields that it owns.


Kubernetes SSA

Kubernetes offers a highly effective and powerful declarative mechanism for this: Server-Side Apply (SSA), which became generally available in v1.22.

SSA significantly simplifies collaborative resource management by moving the update, validation, and merge logic to the API server itself. The client only sends the desired state of the resource, and the Kubernetes API server handles all the complex logic under the hood.

A key feature introduced with SSA is the mechanism of shared field management. The Kubernetes API server now knows which client manages which field within a resource's specification. When a client sends a manifest using SSA, the API server checks whether that client owns the fields it is trying to modify. If a field is unowned or already belongs to that client, the change is applied successfully. However, if another client owns the field, the API server either returns a conflict error or, if you explicitly force the operation, transfers ownership to you.

SSA usage completely eliminates the need to use the GET-CHECK-PATCH(APPLY) approach. With SSA, you send the desired state, specify the client's name (field manager), and you receive the server's response.

It's important to note that patching only the fields you manage, instead of applying the entire manifest, is still a best practice, as it allows your service to "claim" ownership of only the specific fields it manages.

We can use the same patch files from the previous example, changing the replicas and image, and apply them using SSA.

> kubectl patch deployment gpa-app --patch-file test-deployment-service-a.yaml --field-manager=service-a
> kubectl patch deployment gpa-app --patch-file test-deployment-service-b.yaml --field-manager=service-b


> kubectl get deployment gpa-app -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ...
  generation: 6  # one of the indicators that the resource has changed
  resourceVersion: "534637"
  name: gpa-app
  ...
spec:
  replicas: 1  # changed replicas count by service-A
  ...
  template:
    ...
    spec:
      containers:
      - image: nginx:1.21.0  # changed image by service-B
        ...
      ...
status:
  ...

To view the list of all managed fields, extend the kubectl get command with the --show-managed-fields flag:

> kubectl get deployment gpa-app -o yaml --show-managed-fields
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ...
  ...
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:replicas: {}   # confirmation that spec.replicas belongs to service-a
    manager: service-a
    operation: Update
    time: "2025-08-26T00:23:50Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:template:
          f:spec:
            f:containers:
              k:{"name":"gpa-app"}:
                f:image: {}  # confirmation that spec.template.spec.containers[0].image belongs to service-b
    manager: service-b
    operation: Update
    time: "2025-08-26T00:24:05Z"
  ...
  name: gpa-app
  ...
spec:
  ...

As you've seen, Kubernetes has "claimed" the replicas field for "service-a" and the image field for "service-b".

This is the core of SSA's field management. If you now try to apply the entire original manifest with SSA under a new field manager, Kubernetes will return an error because it detects a conflict.

> kubectl apply -f test-deployment.yaml --field-manager=service-c --server-side
error: Apply failed with 1 conflict: conflict with "service-a" using apps/v1: .spec.replicas
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts

It correctly identifies that you, as the new applicant, do not own the fields that are already claimed by "service-a" and "service-b." This behavior is a key advantage of SSA, as it prevents unintentional overwrites and ensures that shared resources are updated collaboratively and safely.
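For operators and controllers, the same server-side apply flow is available through client-go's typed apply configurations. Below is a hedged sketch, assuming the service owns only spec.replicas and identifies itself as the field manager service-a; a single Apply call replaces the whole GET-CHECK-PATCH dance:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    appsv1ac "k8s.io/client-go/applyconfigurations/apps/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    cfg, err := rest.InClusterConfig() // assumes in-cluster execution
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // The apply configuration lists only the fields this manager owns.
    desired := appsv1ac.Deployment("gpa-app", "default").
        WithSpec(appsv1ac.DeploymentSpec().WithReplicas(3))

    // One request: the API server merges the change, records "service-a"
    // as the owner of spec.replicas, and returns a conflict error if
    // another manager owns a field we are trying to change
    // (unless Force is set to true).
    _, err = cs.AppsV1().Deployments("default").Apply(
        context.Background(),
        desired,
        metav1.ApplyOptions{FieldManager: "service-a", Force: false},
    )
    if err != nil {
        panic(err)
    }
}

Re-running the same Apply with an unchanged desired state should be a no-op on the server side, which is exactly what removes the need for a client-side CHECK step.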


Conclusion

Diving deep into Kubernetes resource management makes it clear that the evolution from Client-Side Apply to Server-Side Apply is not just a command-line change. SSA is a fundamental shift in the philosophy of interacting with a cluster. While it may seem like a silver bullet, it has its complexities and requires a deeper understanding of Kubernetes architecture for a successful implementation.

For a long time, CSA was our reliable companion. It gets the job done but comes with certain limitations. Its reliance on the kubectl.kubernetes.io/last-applied-configuration annotation makes it vulnerable to conflicts and errors, especially in complex, automated environments. In the hands of a single developer, CSA can be an effective tool for quick, simple operations. However, as soon as multiple systems or individuals try to manage the same resource simultaneously, its fragility becomes obvious. CSA can lead to unpredictable results, race conditions, and, as a consequence, cluster instability.

SSA solves these problems by moving complex logic to the API server. The field ownership management feature is a game-changer. The API server is no longer just executing commands; it becomes an intelligent arbiter that knows exactly who is responsible for which fields. This makes collaboration safe by preventing accidental overwrites and conflicts. For developers building operators and controllers, SSA is not just an option but a necessity. It enables the creation of robust and scalable systems that can coexist within the same cluster without interfering with each other.

So, when should you use each approach?

  • CSA can still be helpful in scenarios where you're manually managing resources and don't expect outside interference. It's light and straightforward for one-off operations.
  • SSA is the new standard for all automated systems, operators, and teams working in high-load or shared environments. It is the path toward truly declarative, safe, and predictable state management for your cluster.

Ultimately, understanding these two approaches is key to working effectively and without errors in Kubernetes. By choosing Server-Side Apply, you're not just using a new command; you're adopting a modern, reliable, and smarter way to manage your infrastructure.

Thank you!
