Level Up Your Kubernetes with Professional Policy-as-Code

Kubernetes has become the standard for deploying containerized systems. But with great flexibility comes great risk: without proper policies, clusters can easily become mismanaged…
Do you remember the feeling when GitHub Copilot was first released? The day the world was thrilled by an AI that could complete lines of code as if by magic. But today, you can forget that image. What Microsoft announced at Build 2025 has elevated the term "Copilot" to a level we never imagined.

Picture this: you toss an issue on GitHub to an AI, briefly say "Do this!", and walk away to make a coffee. When you return, the feature has been developed, tested, and a Pull Request has been opened, waiting for your review. This isn't a fantasy. It's what the GitHub Copilot Agent can actually do. This is the heart of a new era called Agentic AI, where AI doesn't just wait for commands but takes responsibility for entire projects.

The Great Leap: From "Assistant" to "Teammate"

To see the full picture, let's compare them head-to-head.

The Old Copilot (The Smart Assistant): Think of it as a magic tool in your toolbox. It helps with immediate tasks, like an electric drill that precisely makes a hole for you, but it doesn't know what you're building. Ultimately, you still have to be the designer and decision-maker, figuring out which parts go where. This is a Human-in-the-Loop process, where you control every step.

Copilot Agent 2025 (The AI Teammate): This is like an automated assembly line. You just send in the blueprint (the Issue), and the system handles everything else: from selecting raw materials (planning) and cutting and assembling parts (writing code) to quality control (creating tests). It then delivers a finished product back to you for final inspection. This shifts to a Human-on-the-Loop model, where humans transition to the role of reviewers and final decision-makers.

So, how does it work? The process is simple, just 4 steps.

The 4-Step Process of the Copilot Agent

1. Delegate Task: It starts with a GitHub Issue you create or a command in Chat.
2. Plan: The Agent analyzes the goal and plans which files need to be modified or created.
3. Execute: It writes the code according to the plan, complete with Unit Tests.
4. Deliver: When finished, it creates a Pull Request with a summary of changes, waiting for the team to review.

https://www.youtube.com/watch?v=EPyyyB23NUU

What Are the Benefits of the Copilot Agent?

- Work Faster: Repetitive tasks that used to take hours, like writing code or creating tests, can now be done in minutes, allowing for much faster delivery.
- Better Quality: The AI helps write code that adheres to set standards, reducing human error.
- More Time for Innovation: With less time spent on tedious work, the team has the energy and time to focus on design, complex problem-solving, and creating new innovations.

Sounds good, right? But how do you get started? The Copilot Agent is still in Public Preview and has the following conditions:

- Subscription Required: a GitHub Copilot Enterprise or GitHub Copilot Business plan.
- Permission Required: an organization's Admin must enable it.

How to Use

- For Admins: go to Organization Settings > Copilot > Policies and enable the "Copilot Agent" feature, or follow this link: Adding Copilot coding agent to your organization.
- For Developers: the easiest way is to open the desired GitHub Issue, go to Assignees, and select github-copilot to delegate the task. Alternatively, in VS Code Chat, type /agent to enter Agent Mode and issue commands.

For those who want to learn more about advanced usage, you can study further on Microsoft Learn, a direct guide for developers. Learn more at: Building Applications with GitHub Copilot Agent Mode

How AI Agents Work and What You Should Know Behind the Scenes

The Copilot Agent doesn't work alone. It's part of a larger architecture that Microsoft is building to allow all AI to work together seamlessly, such as:

- Copilot Workspace: think of it as a workspace that allows the AI to understand the full context of your project.
- Open Protocols (MCP): imagine a "USB-C for AI." It's a common language that allows agents, tools, and services to communicate and work together.

https://www.youtube.com/watch?v=pkotufZchjE

It's Time to Truly Work with AI

The era of AI-Native Development has begun. AI is no longer just a tool; it has become an integral part of the development team. The key question for organizations and developers today is not "Will AI replace us?" but rather, "Is your team ready to work with AI?"

Finally, SCB TechX is ready to be your Tech Partner who understands your business. From our experience managing large-scale systems, we can help you establish an automated DevOps Flow that supports future growth. For service inquiries, please contact us at https://bit.ly/4etA8Ym Read more details here: https://bit.ly/4dpGl6U

References

- Microsoft Build 2025 Official Blog Post: The age of AI agents and building the open agentic web
- GitHub Blog Post: GitHub Copilot: Meet the new coding agent
- GitHub Documentation: Adding Copilot coding agent to your organization
- Microsoft Learn Module: Building Applications with GitHub Copilot Agent Mode
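As a practical footnote to the developer workflow above: the "assign the issue to github-copilot" step can also be scripted. The sketch below uses GitHub's standard add-assignees REST endpoint (POST /repos/{owner}/{repo}/issues/{number}/assignees). Whether the Copilot coding agent accepts assignment through this endpoint, and under which login, depends on your organization's preview setup, so treat the assignee name here as an assumption rather than documented behavior.

```python
# Hedged sketch: delegating a GitHub issue by assigning it via the REST API.
# The assignee login "github-copilot" mirrors the UI step described in the
# article and is an assumption; verify how the agent appears in your org.

import json
import urllib.request

API_ROOT = "https://api.github.com"


def build_assign_request(owner: str, repo: str, number: int, assignee: str):
    """Return (url, body) for the standard add-assignees endpoint."""
    url = f"{API_ROOT}/repos/{owner}/{repo}/issues/{number}/assignees"
    body = {"assignees": [assignee]}
    return url, body


def assign_issue(owner: str, repo: str, number: int, assignee: str, token: str):
    """Send the request. Requires a token with write access to issues."""
    url, body = build_assign_request(owner, repo, number, assignee)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:  # network call; needs credentials
        return json.load(resp)
```

For example, assign_issue("your-org", "your-repo", 42, "github-copilot", token) would delegate issue #42 the same way the Assignees dropdown does (repository name and issue number here are hypothetical).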
Don’t Let Your S3 Turn Into an Abandoned Warehouse. Your AWS S3 or any cloud-based file storage might look harmless at a glance. But files that “no one touches” could be quietly burning your budget…
In today’s fast-paced world of software deployment, even a small gap in your CI/CD pipeline can cause major production issues and cost your business more than expected. A green Success on Jenkins doesn’t always mean every step ran as it should…
As a team working closely with Platform, Cloud, and Infrastructure systems that support development and deployment processes, we’re responsible for the tools that our developers rely on to build services and deliver them to production. Our goal is to ensure that end users enjoy fast, user-friendly, and high-quality digital experiences.
A developer’s job is to build features that deliver the best experience to users. But for platform engineers, it’s about building tools and systems that help developers innovate faster, easier, and more efficiently…
“30% of Cloud Spending is Potentially Wasted” – this statement is particularly relevant for platform engineers, as most of our core tools run on the cloud. It challenges us to reflect on our current work and consider how we can make better use of the resources we have…
Cloud technology makes it easier than ever to build and scale applications quickly and flexibly. But one of the biggest challenges organizations face is cost management. Controlling…
In today’s world, AI is no longer the exclusive domain of data scientists or machine learning engineers. We are entering an era where anyone can harness AI to work smarter and faster, especially professionals responsible for infrastructure management or platform development. Among the most noteworthy tools in this revolution is AWS Bedrock, a service from Amazon Web Services that lets you access top-tier generative AI models from multiple providers without having to manage the infrastructure yourself. This article explains what AWS Bedrock is, why it’s impressive, whom it suits best, and how it can benefit platform engineers like us. Additionally, we share a brief firsthand experience to help those who might be interested in experimenting with it.

What Is AWS Bedrock and Why Is It So Exciting?

In simple terms, AWS Bedrock is a service that lets you access world-class generative AI models from several major vendors with just a click. In other words, it serves as a “one-stop shop” for high-quality AI models sourced from various providers such as Anthropic (Claude), AI21 Labs, Stability AI, Meta (Llama), and Amazon Titan (AWS’s own model). Each model offers unique strengths: some excel at summarizing information, others at translation or even image generation. Moreover, you can interact with these models through a single, unified API.

The key benefits of AWS Bedrock include:

- No Infrastructure Hassles: forget about setting up servers, tweaking GPUs, or managing auto-scaling; AWS handles it all.
- Flexibility in Model Selection: if you fancy using Claude one day and want to try Llama the next, you can switch models without any vendor lock-in.
- Seamless Integration: it works effortlessly through APIs or SDKs, making it easy to connect with Lambda, API Gateway, or your own custom applications.
- Security and Compliance: benefit from the robust security standards of AWS’s trusted infrastructure.
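The "one-stop shop" idea above is easy to see from code as well. Here is a minimal sketch, assuming Boto3 and AWS credentials with permission to call Bedrock's ListFoundationModels operation; the helper merely regroups the documented modelSummaries output by provider.

```python
# Hedged sketch: enumerating the Bedrock model catalog with Boto3's
# control-plane client, grouped by provider to illustrate that models
# from many vendors sit behind one API.

def summarize_models(model_summaries):
    """Group model IDs by provider from ListFoundationModels output."""
    by_provider = {}
    for model in model_summaries:
        by_provider.setdefault(model["providerName"], []).append(model["modelId"])
    return by_provider


def list_bedrock_models(region: str = "us-east-1"):
    """Fetch and group the catalog; requires AWS credentials."""
    import boto3  # AWS SDK; only needed for the live call

    client = boto3.client("bedrock", region_name=region)
    response = client.list_foundation_models()
    return summarize_models(response["modelSummaries"])
```

Calling list_bedrock_models() in a region where Bedrock is available returns a dict mapping provider names (Anthropic, Meta, Amazon, …) to their model IDs.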
Beyond these fundamental advantages, AWS Bedrock comes with a suite of advanced features designed to prevent misinformation and ensure the accuracy and suitability of responses:

- Foundation Model APIs: access various AI models from different providers through a unified API.
- Custom Model Fine-Tuning: tailor select models to fit your specific use cases.
- Agents for Bedrock: create agents capable of utilizing a toolchain, executing chain-of-thought reasoning, or interfacing with external data sources such as DynamoDB or other external APIs.
- Knowledge Bases: allow models to reference data from S3 or RDS and respond to queries based on that information.
- Guardrails: set safety boundaries, for instance preventing the discussion of sensitive subjects or the provision of incorrect data.

Getting Started with AI the Easy Way: Using Amazon Nova Micro on AWS Bedrock

After discussing the theory behind AWS Bedrock, it’s time to share a hands-on experience to illustrate just how accessible this tool is. In our trial, we opted to use an AI model directly from AWS: the lightweight and cost-effective Amazon Nova Micro. Note that this model is currently available only in the N. Virginia region (it has yet to be launched in Thailand).

Our Use Case

We set up a scenario where Nova Micro was tasked with creating the list of subtasks needed when opening a Jira ticket. The goal was to kick-start tasks quickly and minimize repetitive work. Here’s how we prepared:

Prompt Example

prompt = "Create a detailed list of subtasks needed to complete this task, including testing and documentation. Format the response as a list where each item starts with '- ' and contains a single clear, actionable subtask. The number of items may vary, but do not exceed 10 items. Task: "

Task Message

message = 'Install Jenkins on AWS EC2'

Nova Micro, our little model, works faster than expected!
Just by entering the prepared prompt, the model can generate the tasks completely and accurately, ready for real-world use. For example, it can instantly open a Jira task without the need to waste time thinking it through repeatedly. What I really like is the Chat mode, which is both easy and convenient: there is no need to write code to give it a try. And if the answer from this model isn’t quite what we’re looking for, we can easily switch to other models available in Bedrock to get the answer that best matches our needs, ready to be used for further tasks.

In real-world work, we see these models becoming key assistants in enhancing efficiency, especially when used in combination with automation systems. And when we talk about automation, we can’t skip the topic of coding, which allows us to instruct the models to perform tasks systematically on our behalf. So today, let’s try writing a simple piece of code to call AWS Bedrock via its API.

What’s really great is that we can view the API request format directly from the AWS Console under the Bedrock section. By simply selecting the model we want to use, the system displays a ready-to-use API example right away. This makes it much easier to start coding without having to flip through multiple documentation pages or guess the parameters on our own.

We chose to use Boto3 to experiment with calling a model via AWS Bedrock, and here are the results. It must be said: it was both simple and very convenient. You could say it opens the door to using AI in daily work without complexity. AWS Bedrock truly makes AI feel much more accessible and no longer something distant or difficult to approach.

It’s just the beginning, but a crucial first step in bringing AI into your workflow. From our trial of AWS Bedrock with a lightweight model like Amazon Nova Micro, we’ve seen that getting started with AI through Bedrock is incredibly simple and truly accessible.
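The Boto3 experiment described above can be sketched roughly as follows. This is a minimal sketch, not a definitive implementation: it assumes the Converse API, the model ID amazon.nova-micro-v1:0, and the us-east-1 (N. Virginia) region mentioned in the article; adjust all three for your own account. The parsing helper simply follows the "- " list format the prompt asks for.

```python
# Hedged sketch: asking Nova Micro on AWS Bedrock for a list of Jira subtasks
# via Boto3's Converse API. Model ID and region are assumptions matching the
# article's trial setup.

PROMPT_TEMPLATE = (
    "Create a detailed list of subtasks needed to complete this task, "
    "including testing and documentation. Format the response as a list "
    "where each item starts with '- ' and contains a single clear, "
    "actionable subtask. Do not exceed 10 items. Task: "
)


def build_messages(task: str) -> list:
    """Build the single-turn message list expected by the Converse API."""
    return [{"role": "user", "content": [{"text": PROMPT_TEMPLATE + task}]}]


def parse_subtasks(text: str) -> list:
    """Extract the '- ' bullet lines from the model's reply."""
    return [line[2:].strip() for line in text.splitlines() if line.startswith("- ")]


def list_subtasks(task: str, region: str = "us-east-1") -> list:
    """Call the model; requires AWS credentials with Bedrock access."""
    import boto3  # AWS SDK; only needed for the live call

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(
        modelId="amazon.nova-micro-v1:0",
        messages=build_messages(task),
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return parse_subtasks(response["output"]["message"]["content"][0]["text"])
```

With credentials configured, list_subtasks("Install Jenkins on AWS EC2") returns a plain Python list of subtasks, ready to feed into a Jira ticket-creation step.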
Even though this was just a basic use case, it clearly helped improve both speed and efficiency in our workflow — especially when combined with code through Boto3, which allows the results to be seamlessly integrated… Continue reading AWS Bedrock: An Innovative AI Tool That Empowers You to Create with Ease
Jenkins is a popular tool for Continuous Integration/Continuous Deployment (CI/CD) across the Software Development Lifecycle, from building and testing to deployment and automation. Today, Khun Aom, Platform Services Engineer at SCB TechX, reveals 5 powerful…