GitLab and AWS announce a collaboration – what it means for AWS DevTools and who gains from it


Introduction

At AWS re:Invent 2024 in Las Vegas, AWS and GitLab announced a collaboration, with a particular focus on making Amazon Q Developer available within GitLab.

In this blog post I'm going to take a closer look at the announcement and at what I think it means for existing AWS DevTools like CodePipeline, CodeBuild and especially CodeCatalyst.

TL;DR – What has been announced

On December 3rd, 2024, Matt Garman, CEO of AWS, announced a collaboration between AWS and GitLab.

The announcement makes several Amazon Q Developer features available within GitLab and strengthens the relationship between the two partners.

Why GitLab?

One of the first questions that came to my mind was "Why GitLab?" – and why a collaboration at all? With GitHub being one of the most frequently used DevTools, why is AWS interested in collaborating with GitLab?

My personal take is that – with GitLab being primarily focused on building strong DevTools that cover all parts of the product lifecycle – the relationship might be a win-win for both partners. GitHub is already too strong and too big, and most probably does not "need" AWS as a collaborator, as its platform already attracts more than enough organizations and individuals. GitLab, on the other hand, has been challenged by a couple of things in the past months (and years) and will benefit from the additional traction coming its way.

In fact, I've heard rumours about GitLab looking for investors – and we don't know what happens "behind" the official announcements…

As an organization, GitLab is also flexible and agile enough to react to the changing demands of AWS customers that might come with this collaboration. With its focus on open source, GitLab can also have AWS teams contribute changes to its code base.

What does this mean for AWS DevTools?

Well, “It depends” 🙁

What are AWS DevTools?

Let's first define what we mean by "AWS DevTools". This in itself is difficult, as views on it differ. I personally count CodeCommit (deprecated), Cloud9 (deprecated), CodePipeline, CodeBuild, CodeArtifact, CodeCatalyst and ECR in the "DevTools" category, but if you look at things a little more broadly, Amplify, AWS CDK and SAM could also be seen as part of the "DevTools". The only one of these that offers integrated, end-to-end tools for the product lifecycle is CodeCatalyst. As you most probably know, this has been my favorite service for a few years.

DevTools at re:Invent 2024

If you look at the re:Invent session catalog, however, there seems to be a pattern of services that get – or do not get – love. Unfortunately, I have not been able to find many sessions on the AWS DevTools in the catalog. In particular, I found only three sessions that mention AWS CodeCatalyst – which is a pity, as most of the features announced for the GitLab integration were already available in CodeCatalyst in 2023. This was totally different at re:Invent 2022 and 2023.

So, what does this mean?

CodePipeline, CodeBuild and CodeArtifact are essential building blocks and are most probably also used intensively inside AWS – but they do not "compete" with the GitLab integration, and CodeCommit and Cloud9 have already been deprecated.

Because of this, I do not expect this new collaboration to have a major impact on the development of these services.

Now, for CodeCatalyst, I am not sure.

There are a lot of open questions. As I already wrote in a previous article, CodeCatalyst did not have any major announcements in the second half of 2024. It is also unclear whether the new functionalities that are now available in GitLab have launched in CodeCatalyst as well.

As I discussed with someone from the CodeCatalyst team in this video, the /dev feature in CodeCatalyst is implemented with a backend that runs on Bedrock underneath. I assume that the same or similar backend services power both the GitLab and the CodeCatalyst implementations – at least that's what I personally would do. I will need to test and verify whether that is correct.

Still, without major updates and announcements, it seems unlikely that there is much active development going into CodeCatalyst right now – the pool of expertise for building DevTools at AWS has always been… let's call it… "small sized". So the next weeks and months are going to decide the path that CodeCatalyst will take.

Are you an active CodeCatalyst user? Please reach out to me and share your experiences with me!

Why am I disappointed by the announcement?

Maybe I am judging this collaboration too early, but hey – an infrastructure and "building blocks" provider like AWS is now integrating its services into a 3rd-party product? This sounds – a tiny bit – odd to me, and I am not sure what to expect next. AWS is entering the space of building software and tools for developers, but without being able to control everything end to end – as it would be able to with CodeCatalyst.

If you are a subscriber to my YouTube channel, you might remember that, after the deprecation announcement of CodeCommit and Cloud9, I tried to deploy "integrated DevTools services" to see what could be used as an alternative to CodeCommit. I managed to get things deployed for two other tools, but for GitLab I never published the video – because, after spending hours (and days) on it, I gave up. I didn't get it to run properly on ECS, and I did not want to pursue the EC2 path suggested by the GitLab documentation.

What I am trying to point out is that I would have loved to get a "managed service" to stand up GitLab in my own AWS account – supported, maintained and managed by AWS. This would have made a huge difference in how I look at the collaboration between GitLab and AWS. It would have looked like a complete partnership, enabling AWS customers to use GitLab as an integrated DevTool.

Also, it would have given AWS the power to control the infrastructure and network connectivity for the Amazon Q Developer features that are now available through GitLab.

What’s next and what stretch goals do I see?

If the integration between AWS and GitLab is meant to "fly" and create additional traction for Amazon Q Developer, the AWS team has some homework to do. I already mentioned the "managed service" dream that I have, but I would also encourage additional integration options between GitLab and AWS – for example with certain aspects of the AWS console or other DevTools.

What about the possibility to convert GitLab pipelines to CodePipeline V2 on demand?

What about accessing AWS services and verifying "drift" against deployed AWS resources?

There are way more things that could come out of a closer collaboration between AWS and GitLab!

And now, what is an "AWS DevTools Hero" in 2025?

If I look at my role as a DevTools Hero, I tend to get a little bit nervous looking at the recent developments. What is a "DevTools Hero" at the end of 2024 and the beginning of 2025? Should I become a "Q Developer expert" and give guidance on the best Q Developer prompts ever? Or should I rather focus on CodePipeline or AWS Amplify?

What do you think the role of an AWS DevTools Hero should be in 2025?

Please let me know in the comments!

Some tasks to do after re:Invent 🙂

Now, reflecting after re:Invent 2024, I believe there is a bunch of things I should look at. I'm not promising that I'll have enough time to do all of it – but I think I should:

  1. Look at the current functionalities in GitLab and review how they work
  2. Discuss with the AWS teams to find better options for integration
  3. Set up GitLab 🙂 and enable Q Developer in my own account
  4. Plan a migration strategy for all of my projects "off" CodeCatalyst?

Feedback?

Do you have feedback or thoughts on my reasoning? Please let me know in the comments or reach out to me on LinkedIn.


The modern CI/CD toolbox: Strategies for consistency and reliability

Introduction

Welcome to the blogpost supporting the AWS re:Invent 2024 session “DEV335 – The modern CI/CD toolbox: Strategies for consistency and reliability”.

We aim not only to summarize but also to enhance your session experience with this blog post.

Please do not hesitate to ask any questions you might have in the comments, or reach out to us directly on social media.

If you're an old-school person, reach out to us by email.

Session walk-through and contents

CI/CD foundations

Continuous integration (CI) involves developers merging code changes frequently, ensuring that the combined code is tested regularly – preferably multiple times a day. Continuous delivery (CD) still requires a manual approval before deploying to production, while continuous deployment is entirely automated.

Unified CI/CD pipelines

Thorsten emphasized the importance of having a single, unified pipeline for all kinds of changes—whether they are application code updates, infrastructure modifications, or configuration changes. This helps maintain consistency, reduces risks, and simplifies compliance.

Code signing and attestation

According to Gunnar, ensuring the integrity of the code through signing and artifact attestation is paramount. This practice verifies that the code hasn’t been altered improperly, tracing each change back to a trusted source, which significantly reduces the risk of tampering and supply chain attacks.

GitOps: a new look at operations

Johannes took an in-depth look at how GitOps integrates Git with operations, streamlining deployment decision-making. GitOps supports a fast, automated transition into production environments without manual intervention, making it powerful for Kubernetes and other cloud-native projects. The main takeaway: with GitOps, the decision to deploy a change to production is taken by the team members close to the context of the change, instead of by a "Change Advisory Board" or managers who are far away from the actual change.

Deployment strategies for minimizing risks

Several deployment strategies, including rolling deployments, blue-green deployments, and canary deployments, were outlined by Gunnar. Each strategy offers a different balance of speed and risk, with options to revert to the previous version quickly if issues arise. You will need to choose the strategy that fits your business needs and your application's requirements.
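
To make one of these strategies concrete, here is a minimal CDK sketch (my addition, not taken from the session) of a canary deployment for a Lambda-backed service: CodeDeploy shifts 10% of traffic for five minutes and rolls back automatically if the error alarm fires.

```typescript
import { Duration } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as codedeploy from 'aws-cdk-lib/aws-codedeploy';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import { Construct } from 'constructs';

// Canary deployment for a Lambda function: shift 10% of traffic, wait 5 minutes,
// then shift the rest - and roll back automatically if the alarm fires.
export function addCanaryDeployment(scope: Construct, fn: lambda.Function): void {
  const alias = new lambda.Alias(scope, 'LiveAlias', {
    aliasName: 'live',
    version: fn.currentVersion,
  });

  const errors = new cloudwatch.Alarm(scope, 'ErrorAlarm', {
    metric: alias.metricErrors({ period: Duration.minutes(1) }),
    threshold: 1,
    evaluationPeriods: 1,
  });

  new codedeploy.LambdaDeploymentGroup(scope, 'CanaryDeploy', {
    alias,
    deploymentConfig: codedeploy.LambdaDeploymentConfig.CANARY_10PERCENT_5MINUTES,
    alarms: [errors], // CodeDeploy reverts to the previous version if this alarm fires
  });
}
```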

Drift – avoid it at all costs

In this section, Johannes highlighted the challenges that come with "drift" in deployments – defined as any kind of manual change made to your cloud deployment without going through Infrastructure as Code (IaC) and CI/CD. Our guidance is that no one should get access to the target account to perform manual changes; instead, you should implement a "break-glass" pipeline that is focused on speed, so you recover from application downtime by rolling forward through CI/CD.
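
If you still want to check whether someone has touched your resources manually, a small drift report is easy to build. The following sketch (my addition, not part of the session) uses the CloudFormation drift-detection APIs via the AWS SDK for JavaScript v3:

```typescript
import {
  CloudFormationClient,
  DetectStackDriftCommand,
  DescribeStackDriftDetectionStatusCommand,
  DescribeStackResourceDriftsCommand,
} from '@aws-sdk/client-cloudformation';

const cfn = new CloudFormationClient({});

// Kick off drift detection for a stack and report resources that were changed manually.
export async function reportDrift(stackName: string): Promise<void> {
  const { StackDriftDetectionId } = await cfn.send(
    new DetectStackDriftCommand({ StackName: stackName }),
  );

  // Poll until CloudFormation has finished comparing deployed resources to the template.
  let status = 'DETECTION_IN_PROGRESS';
  while (status === 'DETECTION_IN_PROGRESS') {
    await new Promise((resolve) => setTimeout(resolve, 5000));
    const result = await cfn.send(
      new DescribeStackDriftDetectionStatusCommand({ StackDriftDetectionId }),
    );
    status = result.DetectionStatus ?? 'DETECTION_FAILED';
  }

  const drifts = await cfn.send(
    new DescribeStackResourceDriftsCommand({
      StackName: stackName,
      StackResourceDriftStatusFilters: ['MODIFIED', 'DELETED'],
    }),
  );

  for (const drift of drifts.StackResourceDrifts ?? []) {
    console.log(`${drift.LogicalResourceId}: ${drift.StackResourceDriftStatus}`);
  }
}
```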

Ensuring consistency across pipelines

Thorsten introduced an innovative approach to maintaining pipeline consistency using constructs. By centralizing the standard pipeline templates and allowing teams to extend them, organizations can adapt to specific needs without sacrificing consistency. This method also helps in managing migrations between various CI/CD platforms effectively.
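
As an illustration of the idea – not the exact constructs shown in the session – a centrally owned pipeline construct built on CDK pipelines could look like this; teams consume it and only append their own application stages:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as pipelines from 'aws-cdk-lib/pipelines';
import { Construct } from 'constructs';

export interface StandardPipelineProps {
  /** Repository in "owner/repo" form - illustrative, your org's source may differ. */
  readonly repo: string;
  readonly branch?: string;
}

// A centrally owned pipeline construct: every team gets the same synth step and
// promotion order, but can append its own application stages.
export class StandardPipeline extends Construct {
  public readonly pipeline: pipelines.CodePipeline;

  constructor(scope: Construct, id: string, props: StandardPipelineProps) {
    super(scope, id);

    this.pipeline = new pipelines.CodePipeline(this, 'Pipeline', {
      synth: new pipelines.ShellStep('Synth', {
        input: pipelines.CodePipelineSource.gitHub(props.repo, props.branch ?? 'main'),
        commands: ['npm ci', 'npm run build', 'npx cdk synth'],
      }),
    });
  }

  // Teams extend the standard pipeline with their own application stages.
  public addApplicationStage(stage: cdk.Stage): void {
    this.pipeline.addStage(stage);
  }
}
```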

The role of security and compliance

Security and compliance are non-negotiable, integral parts of any CI/CD process. Integrating these practices from the beginning ensures that both security and compliance standards are maintained throughout the development lifecycle.

Feature flags and progressive delivery

Gunnar highlighted the importance of feature flags and progressive delivery in decoupling deployment from feature activation. With feature flags, changes can be made dynamically without redeployment, enhancing agility and reducing risk. This approach, used by companies like Netflix, enables controlled risk management and early detection of issues.
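
One possible implementation (my example, not necessarily what was shown in the session) is AWS AppConfig feature flags, read at runtime so that a new code path can be toggled without a redeployment. The application, environment and profile identifiers below are placeholders:

```typescript
import {
  AppConfigDataClient,
  StartConfigurationSessionCommand,
  GetLatestConfigurationCommand,
} from '@aws-sdk/client-appconfigdata';

const client = new AppConfigDataClient({});

// Read a feature flag at runtime so a code path can be enabled or disabled
// without redeploying. Identifiers are illustrative placeholders.
export async function isFeatureEnabled(flagName: string): Promise<boolean> {
  const session = await client.send(
    new StartConfigurationSessionCommand({
      ApplicationIdentifier: 'my-app',            // hypothetical AppConfig application
      EnvironmentIdentifier: 'prod',
      ConfigurationProfileIdentifier: 'feature-flags',
    }),
  );

  const config = await client.send(
    new GetLatestConfigurationCommand({
      ConfigurationToken: session.InitialConfigurationToken,
    }),
  );

  const raw = new TextDecoder().decode(config.Configuration ?? new Uint8Array());
  if (!raw) return false;

  const flags = JSON.parse(raw);
  return flags[flagName]?.enabled === true;
}
```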

Avoiding vendor lock-in with projen-pipelines

Thorsten presented a way for CI/CD practitioners to adopt an open source project called projen-pipelines, which empowers developers to switch between different CI/CD vendors by defining pipelines in TypeScript and using a renderer that generates pipeline code for GitLab, GitHub, CodeCatalyst and bash.
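
Conceptually, the approach looks like the sketch below. Note that the types and function names here are made up for illustration and are not the real projen-pipelines API – the point is that one vendor-neutral definition gets rendered into vendor-specific pipeline code:

```typescript
// Conceptual illustration only - the names below are NOT the real projen-pipelines API;
// see the project's documentation for the actual types.
interface PipelineStep {
  name: string;
  commands: string[];
}

interface PipelineDefinition {
  steps: PipelineStep[];
}

// One vendor-neutral pipeline definition ...
const pipeline: PipelineDefinition = {
  steps: [
    { name: 'build', commands: ['npm ci', 'npm run build'] },
    { name: 'deploy', commands: ['npx cdk deploy --require-approval never'] },
  ],
};

// ... rendered into vendor-specific pipeline code, e.g. a GitLab CI YAML string.
function renderGitlabCi(def: PipelineDefinition): string {
  return def.steps
    .map((step) =>
      [`${step.name}:`, '  script:', ...step.commands.map((c) => `    - ${c}`)].join('\n'),
    )
    .join('\n\n');
}

console.log(renderGitlabCi(pipeline));
```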

Conclusions

The insights from this session highlighted the ever-evolving nature of CI/CD practices, where automation, innovation, and stringent security measures play crucial roles. As we continue to refine these practices, it’s clear that the right blend of technology and methodology can significantly impact the efficiency and reliability of software delivery processes.

Next steps

To dive deeper into these strategies, check out the resources and links provided below. Engage with the wider community to exchange ideas and best practices, and continue evolving your CI/CD processes to meet future challenges.

Thank you for attending this session and for taking the time to read the additional information provided here.

Please do not hesitate to ask any questions you might have in the comments, or reach out to us directly on social media.

If you're an old-school person, reach out to us by email.

The links mentioned in the session


A first look at AWS EKS Auto Mode – hits, misses and possible improvements


Introduction

Today, AWS announced the availability of a new feature for AWS EKS called “Auto Mode”. With this, AWS focuses on solving some of the challenges that users have been mentioning ever since the release of EKS and Fargate.
In this article, we’ll explore the hits and misses (from my perspective) and where I think that the team still has some work left to do.

Feature TL;DR

EKS Auto Mode makes use of EC2 Managed Instances and simplifies the management of the underlying compute resources for EKS clusters. In addition, it enables a Karpenter-backed, fully k8s-API-compliant way of scaling EKS data planes. The AWS EKS team takes responsibility for managing not only the infrastructure but also the AMIs that power the k8s cluster.

What changes for EKS Operations Engineers with this change?

With this change, EKS operations engineers no longer need to scale EKS clusters in and out themselves. Karpenter scales node infrastructure much faster than EC2 Auto Scaling, and operations engineers can focus on their applications instead of managing the underlying infrastructure.

How does the new feature change the responsibility model for EKS?

With this change, AWS takes on a lot more responsibility within the EKS space. The EKS team will now manage the underlying AMIs, ensuring that they follow security best practices and are secure to use. AWS will also manage node rotation and upgrades where required.

How do users interact with the new feature?

The new feature is available through the AWS console, the AWS CLI and through Infrastructure as Code – with CloudFormation and Terraform supported right from the start.

In the AWS console, the new feature also simplifies the setup of a new EKS cluster by offering a "quick start" mode. In this mode, the cluster creation process automatically selects sensible defaults for the VPC and other settings.
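
As a rough idea of what this looks like in IaC, here is a hedged CDK sketch that enables Auto Mode on the underlying AWS::EKS::Cluster resource via property overrides. The property names follow the CloudFormation documentation for Auto Mode and may need adjusting; role ARNs and subnet IDs are placeholders:

```typescript
import * as eks from 'aws-cdk-lib/aws-eks';
import { Construct } from 'constructs';

// Sketch: turn on EKS Auto Mode on the raw CloudFormation resource.
// Property names follow the AWS::EKS::Cluster docs for Auto Mode and may need adjusting.
export function autoModeCluster(
  scope: Construct,
  clusterRoleArn: string,
  nodeRoleArn: string,
  subnetIds: string[],
): eks.CfnCluster {
  const cluster = new eks.CfnCluster(scope, 'AutoModeCluster', {
    name: 'auto-mode-demo',
    version: '1.31',
    roleArn: clusterRoleArn,
    resourcesVpcConfig: { subnetIds },
  });

  // Auto Mode: managed compute (Karpenter-backed node pools), block storage and load balancing.
  cluster.addPropertyOverride('ComputeConfig', {
    Enabled: true,
    NodePools: ['general-purpose', 'system'],
    NodeRoleArn: nodeRoleArn,
  });
  cluster.addPropertyOverride('StorageConfig', { BlockStorage: { Enabled: true } });
  cluster.addPropertyOverride('KubernetesNetworkConfig.ElasticLoadBalancing', { Enabled: true });

  return cluster;
}
```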

Hits – where the feature is good

As far as I have seen, the feature finally gives AWS an auto-scaling implementation that is based on the k8s API standards and definitions. EKS Fargate was always an attempt to simplify interacting with EKS, but due to the nature of the feature – not being compliant with the k8s API – you missed out on possibilities like using a different CNI, running sidecars, etc.

EKS Auto Mode changes this and simplifies the EKS experience.

The additional responsibility that AWS is taking on for managing and securing the underlying infrastructure will also help organizations build faster.

The feature also simplifies upgrading the control plane: by taking ownership of the underlying nodes, the team can guarantee that the infrastructure setup stays compliant with new k8s versions – and this includes some of the add-ons that are now built into the underlying deployment, which is powered by the Bottlerocket OS.

Misses – what did the team fail to simplify?

The team did not simplify the network infrastructure setup. The feature also does not make the management of networking and integrations for the clusters any easier.

Other wishes for EKS

As already mentioned, I’m not a fan of the current possibilities for network management and the defaults taken for the EKS networking setup. The troubleshooting experience could also be better.

As a next step, I'd also love the EKS team to take on additional responsibilities for add-on management, empower us to build a real service mesh for east/west traffic management, and offer further out-of-the-box integrations with other AWS services or managed service providers.

An example of that could be a managed Crossplane service or add-on, as this k8s-based tool is becoming more popular, not only for k8s but also for managing AWS infrastructure.

The possibility to add ArgoCD or FluxCD as a component to your EKS management plane “out of the box” also seems appealing to me.

And then there is the other thing that constantly gets on my nerves: with the idea of using ephemeral EKS clusters, the importance of faster cluster provisioning times rises. This could be achieved by optimizations on the EKS side or by allowing the usage of vCluster on EKS clusters out of the box.

Wrap up

This was my initial take on the newly announced AWS EKS Auto Mode. I’ll need to play around with it a bit more to be able to give a better assessment.

What did I miss, what do you think?

Please let me know and start a conversation, I’m eager to hear your thoughts and feedback!


The art of simplification when building on AWS

Introduction

AWS has existed for more than a decade, and as of today there are more than 200 AWS services (and counting), even after a few "de-prioritizations" in 2024. The landscape of building cloud applications on AWS is big and ever growing, and as builders we need to make hundreds of decisions every day.

One of the most common phrases I have heard from "Cloud Architects" in the past weeks is "It depends…" when they are asked how to build or solve a specific challenge. I personally believe that deciding on the technology (or service) to use has become very complex and difficult, and that we as the AWS community need to do a better job of explaining consistently how to make specific decisions for a specific application or architecture.

If we add the option of deploying a k8s cluster on AWS, the number of choices becomes even bigger, as you can… build "anything" on k8s 🙂

I believe that it is too difficult to make these choices and that we need to start looking at the "simplification" of building applications on AWS.

"A good cloud architect" knows when to use which service and which architecture, weighing simplicity, complexity, costs, security footprint and extensibility.

Let's have a look at the current landscape and the challenges we see.

(This article was written before re:Invent 2024 so some of the details might be outdated by the time you read this 🙂
I’ll try to update this article if there are any related announcements at the conference.)

A few examples of things that could be simpler

In preparation for this blog post, I asked a few AWS Heroes and Community Builders where they think AWS is too difficult and complex in November 2024. The answers I got vary based on the focus and role of each individual. In this blog I'll cluster them by topic.

Upgrading documentation, showcasing best practices

The most common input I received by far is the ask for more supported and maintained example implementations, best-practice documentation and recommendations. Most of the best practices for different services are presented in sessions at re:Invent or re:Inforce, or in AWS blog posts. Partly they are shared within the service documentation or on GitHub – in the awslabs or aws organizations. Unfortunately, a lot of them become outdated fast and are not actively maintained.
In our area of business, technology changes rapidly, and best practices presented today are already outdated tomorrow.

AWS needs to do a better job at keeping documentation and best-practice implementations up to date. This also includes more frequent and better collaboration in open source projects. Some of the AWS-owned open source projects (like AWS CDK or the "containers-roadmap") are losing momentum because of missing engagement from the service teams in 2024.

When CodeCatalyst was announced in 2022, I had high hopes that the "Blueprints" functionality would become the go-to place for best-practice implementations – but AWS unfortunately failed to deliver on that promise.
Blueprints are barely maintained, and even though the "genai-chatbot" blueprint has produced a large number of views on my YouTube channel, it feels a bit like they have been abandoned by AWS in the past months.

Simplify costs and cost management

As organizations mature in their usage of AWS and in building applications on AWS, a lot of them put a focus on understanding and analyzing the costs produced by their applications running in the cloud. AWS currently lets you track costs mainly based on usage and resources consumed.

This often makes it hard to track the costs allocated to a certain business functionality. Especially if you're building multi-tenant applications on AWS, it can be really hard to understand and verify what each tenant is actually costing you.

We'd love to see simplified cost allocation per application or even per transaction, to be able to properly understand the consumption of our budget. This also includes examples like Athena, where you're billed for using Athena but the same transaction also triggers S3 API calls, which are then not allocated correctly to your Athena-based application.
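
Today, the closest approximation is grouping costs by a cost allocation tag. Here is a small sketch using the Cost Explorer SDK – the tag key "application" is hypothetical and has to be activated as a cost allocation tag in the billing console first:

```typescript
import { CostExplorerClient, GetCostAndUsageCommand } from '@aws-sdk/client-cost-explorer';

const ce = new CostExplorerClient({});

// Group last month's unblended cost by the (hypothetical) "application" cost allocation tag.
export async function costPerApplication(): Promise<void> {
  const response = await ce.send(
    new GetCostAndUsageCommand({
      TimePeriod: { Start: '2024-11-01', End: '2024-12-01' },
      Granularity: 'MONTHLY',
      Metrics: ['UnblendedCost'],
      GroupBy: [{ Type: 'TAG', Key: 'application' }],
    }),
  );

  for (const result of response.ResultsByTime ?? []) {
    for (const group of result.Groups ?? []) {
      console.log(group.Keys?.join(','), group.Metrics?.UnblendedCost?.Amount);
    }
  }
}
```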

Another example I recently encountered myself is an EKS cluster deployed in a VPC with a Network Firewall attached and GuardDuty activated. The EKS cluster itself was only a portion of the total allocated costs – roughly 20% for EKS, but – due to some application deployment challenges – about 60% for the Network Firewall and 20% for GuardDuty.

I wish AWS would auto-discover my applications (e.g. by using myApplications) and transactions, and output information that helps me understand the costs of my applications.

k8s and containers

Even in the containers world, AWS has too many options to choose from: besides the prominent options like ECS and EKS, we have Beanstalk, App Runner and even Lambda to run containers. I understand that all of these building blocks empower builders to build applications using the service they want – but you still need to make choices, and migrating from one to another is often hard, complex and difficult. And – even worse – you need to be an expert in each service to be able to make the right choice for your use case.

I wish this decision were simpler, if not to say seamless. Builders potentially don't want to make decisions about the service; they want their applications to adapt automatically to changing requirements. Having the possibility to switch from one service to another automatically, without (much) human intervention, would empower us to invent and simplify!

AWS EKS – my top challenges

I’ve been experimenting with AWS EKS lately – and to be honest, every time I start a new cluster, it is a real pain.

Everything is "simple" if you are able to work with defaults – like creating a new VPC in a non-enterprise environment. However, the default creation process allows you to create public EKS clusters, which should be forbidden by default. Triaging network challenges on EKS is also still very complicated, and getting support for these kinds of problems can be a painful experience.

I would love to get an "auto-fix" button that solves my networking problems on EKS clusters or verifies for me whether my setup is correct.

In addition to that, now that EKS supports IPv6, it might be the right time to solve the never-ending IP address problem that a lot of organizations have, by enabling this by default and setting up EKS clusters using IPv6 for private subnets and internal networking.

Another thing that EKS Fargate currently doesn't solve is the possibility to use the full k8s API and its scalability options. If you want to implement something like Karpenter for your workloads, you always need to fall back to "self-managed" EC2 compute – and this is always painful, because it requires you to start managing your own AMIs and infrastructure. In this case, you also need to take care of the scalability of your cluster infrastructure, which seems an outdated thing to do in 2024.

Creating, running and deploying EKS clusters should become a commodity and a "simple thing" – no one should be worried about it, as it is really only the starting point for building on Kubernetes.

I hope that AWS takes away some of these challenges and helps organizations that are building on Kubernetes to focus on what they want to build – on their business value – instead of managing infrastructure for their clusters.

Investing in cross-service integrations for serverless

The serverless landscape has evolved a lot over the past years. We've seen new functionalities and integrations become available, but similar to the containers space, the number of choices you can and need to make has increased.

At the same time, the integration between the services has not evolved much. The Infrastructure as Code (IaC) landscape is massively fragmented, with AWS CDK, CloudFormation, the Serverless Application Model (SAM), Terraform and newer players like Pulumi all growing. Lately, I've also encountered Crossplane as a "serious" option to write Infrastructure as Code and deploy infrastructure on AWS.

The observability landscape is also big – with OpenTelemetry, AWS X-Ray and missing integrations to other observability tools, it is difficult to build observability into serverless applications that span a lot of different services. Not all of the services support OpenTelemetry integration out of the box – I believe this would be a great addition. Auto-discovering transactions and giving developers insight into what's happening within their applications across multiple services would make application development easier.

Another piece of feedback I got during my conversations was the wish to simplify the setup and definition of API Gateway integrations with load balancers. Defining routes, paths and payloads still seems difficult within API Gateway, and the differences between a "REST" and an "HTTP" API endpoint are sometimes confusing. And then there is AppSync (hosted GraphQL)… I see a lot of potential to simplify this setup and make it easier for developers to build APIs on AWS.
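
For what it's worth, CDK already takes some of the pain out of wiring an HTTP API to a load balancer. A minimal sketch – the module paths assume a recent aws-cdk-lib; older versions ship these classes in the @aws-cdk/aws-apigatewayv2-alpha packages:

```typescript
import * as apigwv2 from 'aws-cdk-lib/aws-apigatewayv2';
import { HttpAlbIntegration } from 'aws-cdk-lib/aws-apigatewayv2-integrations';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import { Construct } from 'constructs';

// HTTP API in front of an existing ALB listener: one route, one integration.
export function albBackedApi(scope: Construct, listener: elbv2.IApplicationListener): apigwv2.HttpApi {
  const api = new apigwv2.HttpApi(scope, 'OrdersApi');

  api.addRoutes({
    path: '/orders',
    methods: [apigwv2.HttpMethod.GET],
    integration: new HttpAlbIntegration('OrdersIntegration', listener),
  });

  return api;
}
```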

Enterprises & GovCloud

When talking about enterprises in general, and enterprises building for GovCloud (and, going forward, the European Sovereign Cloud), users would love to get features and services rolled out to GovCloud environments more frequently than they are today. They also complain that not all parts of the AWS console and the tooling are aware of the different partitions ("normal" AWS vs. GovCloud). This should be improved as well.

On the optimization and simplification front, I regularly hear the feedback that switching between different AWS accounts is a big issue – as we call out "multi-account" deployments as a best practice, it becomes increasingly important to switch between accounts easily and to simplify the integration.

Interview partners say the same about multi-region deployments, where the console does not support interacting with applications that are deployed in multiple regions. There is also not a lot of out-of-the-box support for these kinds of deployments within AWS.

When I recently listened to the [AWS Developers Podcast]() episode focused on IAM Identity Center, I heard a lot of very positive things on how to use it and integrate it within your organization's landscape. I do agree that it makes a lot of things simpler than IAM, but improving the user experience and allowing additional automations would be helpful.

General simplifications

Looking at the never-ending announcements of new releases focused on generative AI – Amazon Bedrock, Amazon Q, Amazon Q Developer, Amazon Q Business, … – it becomes difficult to navigate the landscape, even only 18 months after generative AI became a hype.

From the outside, AWS' messaging is not clear and is distracting. With many teams exploring different options, the confusion will only grow. We need to clarify which names, technologies and services to use for which use case. And it needs to be clearer what AWS wants to be in the generative AI landscape: a "building blocks" provider (through Bedrock and the Converse API), or a "player" in the field of applied generative AI – competing with OpenAI and others. This message is not yet clear – at least to me.

Making things simpler – helping architects make better decisions

If I look at the AWS landscape as a cloud architect, I would love to be able to make decisions better and faster, supported by AWS. A tool or a service that supports making decisions based on business requirements and scalability would be awesome, allowing me to focus on building applications and services instead of having to become an expert in choosing the "correct" compute mode for my applications. There are simply too many possible ways to build applications on AWS. [Serverlessland]() is a great step toward making these decisions easier, but we need more than that!

Thanks to the contributors 😉

While some of the participants of my small survey do not want to be named, I can thank Benjamin, Ran, Matt Morgan, Matt Martz and Andres for their contributions to this blog post. Your detailed feedback and input helped me a lot to shape this post – thank you for all you do for the AWS community.

Wrap up – the art of simplification

In November 2024, I believe that being a "great" cloud architect means being able to make smart decisions and knowing when and why to choose specific combinations of services – an art that a lot of us still need to learn.

k8s is not always the right answer, but sometimes it might be. AWS Lambda and serverless applications are also not the best choice for everyone.

Simplifying your architecture decision tree makes your role as a cloud architect easier.

What do you think? Where can AWS make your life as a builder and cloud architect simpler and easier?
