Gitlab and AWS announce a collaboration – what it means for AWS DevTools and who gains from it


Introduction

At AWS re:Invent 2024 in Las Vegas, AWS and Gitlab announced a collaboration, with a particular focus on making Amazon Q Developer available within Gitlab.

In this blog I’m going to take a closer look at the announcement and at what I think it means for existing AWS DevTools like CodePipeline, CodeBuild and especially CodeCatalyst.

TL;DR – What has been announced

On December 3rd, 2024, Matt Garman, CEO of AWS, announced a collaboration between AWS and Gitlab.

This announcement makes Amazon Q Developer, with a range of features, available within Gitlab and strengthens the relationship between the two partners.

Why Gitlab?

One of the first questions that came to my mind was “Why Gitlab?” – and why a collaboration at all? With Github being one of the most frequently used DevTools, why is AWS interested in collaborating with Gitlab?

My personal opinion on this: with Gitlab primarily focused on building strong DevTools that cover all parts of the product lifecycle, the relationship might be a win-win for both partners. Github is already too strong and too big, and most probably does not “need” AWS as a collaborator, as its platform is already attracting more than enough organizations and individuals. Gitlab, on the other hand, has been challenged by a couple of things in the past months (and years) and will benefit from some additional traction coming its way.

In fact, I’ve heard rumours about Gitlab looking for investors – and we don’t know what happens “behind” the official announcements…

Gitlab is, as an organization, also flexible and agile enough to react to the changing demands of AWS customers that might come with this collaboration. With its focus on open source, Gitlab can also have AWS teams contributing changes to its code base.

What does this mean for AWS DevTools?

Well, “It depends” 🙁

What are AWS DevTools?

Let’s first define what we mean by “AWS DevTools”. This in itself is difficult, as everyone’s view on it differs. I personally count CodeCommit (deprecated), Cloud9 (deprecated), CodePipeline, CodeBuild, CodeArtifact, CodeCatalyst and ECR in the “DevTools” category, but if you look at things a little more broadly, Amplify, the AWS CDK and SAM could also be seen as part of the “DevTools”. The only one of these that offers integrated, end-to-end tools for the product lifecycle is CodeCatalyst. As you most probably know, this has been my favorite service for a few years.

DevTools at re:Invent 2024

If you look at the re:Invent session catalog, however, there seems to be a pattern of “services that get or do not get love”. Unfortunately, I have not been able to find a lot of sessions on the AWS DevTools in the catalog. In particular, I found only three sessions that mention AWS CodeCatalyst – which is a pity, as most of the features announced for the Gitlab integration were already available in CodeCatalyst in 2023. This was totally different at re:Invent 2022 and 2023.

So, what does this mean?

CodePipeline, CodeBuild and CodeArtifact are essential building blocks and are most probably also used intensively inside AWS – and they do not “compete” with the Gitlab integration in the way that the now-deprecated CodeCommit & Cloud9 did.

Because of this, I do not expect this new collaboration to have a big impact on the development of these services.

Now, for CodeCatalyst, I am not sure.

There are a lot of open questions. As I already wrote in a previous article, CodeCatalyst did not have any major announcements in the second half of 2024. It is also unclear whether the new functionalities that are now available in Gitlab have also launched in CodeCatalyst.

As I discussed with someone from the CodeCatalyst team in this video, the /dev feature in CodeCatalyst is implemented with a backend that runs Bedrock underneath. I assume that the same or similar backend services power both the Gitlab and the CodeCatalyst implementation – at least, that’s what I personally would do. I will need to test and verify whether that is correct.
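
To illustrate that assumption: such a backend would, in essence, wrap calls to Bedrock – for example through the Converse API. A minimal sketch in TypeScript (the model ID and prompt are placeholders and my own assumption, not what AWS actually runs underneath):

import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Hypothetical "/dev"-style request: send a feature description to a Bedrock
// model and receive a proposed implementation back.
const response = await client.send(new ConverseCommand({
  modelId: "anthropic.claude-3-5-sonnet-20240620-v1:0", // placeholder, any Converse-capable model
  messages: [{
    role: "user",
    content: [{ text: "Add input validation to the signup form" }],
  }],
}));

console.log(response.output?.message?.content?.[0]?.text);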

Still, without major updates and announcements, it’s unlikely that there is active development on CodeCatalyst currently, as the expertise to build DevTools at AWS has always been… let’s call it… “small sized”. So the next weeks and months are going to decide the path that CodeCatalyst will take.

Are you an active CodeCatalyst user? Please reach out to me and share your experiences with me!

Why I am disappointed by the announcement

Maybe I am judging this collaboration too early, but hey – an infrastructure and “building blocks” provider like AWS now “integrating their services into a 3rd-party product”? This sounds – a tiny bit – odd to me, and I am not sure what to expect next. AWS is entering the space of building software & tools for developers, but without being able to control everything end to end – as it would be able to with CodeCatalyst.

If you are a subscriber to my YouTube channel, you might remember that, after the deprecation announcement of CodeCommit and Cloud9, I tried to deploy “integrated DevTools services” to see what could be used as an alternative to CodeCommit. I managed to get things deployed for two other tools, but for Gitlab I never published the video – because, after spending hours (and days) on it, I gave up. I didn’t get it to run properly on ECS, and I did not want to pursue the EC2 path suggested by the Gitlab documentation.

What I am trying to point out is that I would have loved to get a “managed service” to stand up Gitlab in my own AWS account – supported, maintained and managed by AWS. This would have made a huge difference in how I look at the collaboration between Gitlab and AWS. It would have looked like a complete partnership, enabling AWS customers to use Gitlab as an integrated DevTool.

Also, it would have given AWS the power to control the infrastructure and network connectivity for the Amazon Q Developer features that are now available through Gitlab.

What’s next and what stretch goals do I see?

If the integration between AWS and Gitlab is meant to “fly” and create additional traction for Amazon Q Developer, the AWS team has some homework to do. I already mentioned my “managed service” dream, but I would also encourage additional integration options with AWS from within Gitlab. What about integrations between Gitlab and certain aspects of the AWS console or other DevTools?

What about the possibility to convert Gitlab pipelines to CodePipeline V2 on demand?

What about accessing AWS services and verifying “drift” against deployed AWS resources?

There are way more things that could come out of a closer collaboration between AWS and Gitlab!

And now, what is an “AWS DevTools Hero” in 2025?

If I look at my role as a DevTools Hero, I tend to get a little nervous when I look at the recent developments. What is a “DevTools Hero” at the end of 2024 and the beginning of 2025? Should I become a “Q Developer expert” and give guidance on the best Q Developer prompts ever? Or should I rather focus on CodePipeline or AWS Amplify?

What do you think the role of an AWS DevTools Hero should be in 2025?

Please let me know in the comments!

Some tasks to do after re:Invent 🙂

Now, reflecting after re:Invent 2024, I believe there is a bunch of things I should look at. I am not promising that I will have enough time for all of it – but I think I should:

  1. Look at the current functionalities in Gitlab and review how they work
  2. Discuss with the AWS teams to find better options on integration
  3. Set up Gitlab 🙂 and enable Q Developer in my own account
  4. Plan a migration strategy for all of my projects “off” CodeCatalyst?

Feedback?

Do you have feedback or thoughts on my thought process? Please let me know in the comments or reach out to me on LinkedIn.


The modern CI/CD toolbox: Strategies for consistency and reliability

Introduction

Welcome to the blogpost supporting the AWS re:Invent 2024 session “DEV335 – The modern CI/CD toolbox: Strategies for consistency and reliability”.

We aim to not only summarize but also enhance your session experience with this blog post.

Please do not hesitate to reach out and ask any questions that you might have in the comments or reach out to us directly on socials.

If you’re an old-school person, reach out to us by email.

Session walk-through and contents

CI/CD foundations

Continuous integration (CI) involves developers merging code changes frequently, ensuring that the cumulative code is tested regularly—preferably multiple times a day. Continuous delivery (CD), on the other hand, requires manual approval before deploying to production, while continuous deployment is entirely automated.

Unified CI/CD pipelines

Thorsten emphasized the importance of having a single, unified pipeline for all kinds of changes—whether they are application code updates, infrastructure modifications, or configuration changes. This helps maintain consistency, reduces risks, and simplifies compliance.
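
As an illustration of the idea, a single CDK Pipelines definition can carry application code, infrastructure and configuration changes through the same tested path – a minimal sketch (repository name and connection ARN are placeholders):

import * as cdk from "aws-cdk-lib";
import { CodePipeline, CodePipelineSource, ShellStep } from "aws-cdk-lib/pipelines";

const app = new cdk.App();
const stack = new cdk.Stack(app, "PipelineStack");

// One pipeline for every kind of change: application code, IaC and
// configuration all flow through the same tested path into production.
new CodePipeline(stack, "UnifiedPipeline", {
  synth: new ShellStep("Synth", {
    input: CodePipelineSource.connection("my-org/my-repo", "main", {
      connectionArn: "arn:aws:codeconnections:eu-central-1:123456789012:connection/placeholder",
    }),
    commands: ["npm ci", "npm run build", "npx cdk synth"],
  }),
  selfMutation: true, // the pipeline updates itself when its own definition changes
});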

Code signing and attestation

According to Gunnar, ensuring the integrity of the code through signing and artifact attestation is paramount. This practice verifies that the code hasn’t been altered improperly, tracing each change back to a trusted source, which significantly reduces the risk of tampering and supply chain attacks.

GitOps: a new look at operations

Johannes took an in-depth look at how GitOps integrates Git with operations, streamlining deployment decision-making. GitOps supports a fast, automated transition into production environments without manual intervention, making it powerful for Kubernetes and other cloud-native projects. The main takeaway: with GitOps, the decision to deploy a change to production is made by the team members closest to the context of the change, instead of by a “Change Advisory Board” or managers far away from the actual change.

Deployment strategies for minimizing risks

Several deployment strategies, including rolling deployments, blue-green deployments, and canary deployments, were outlined by Gunnar. Each strategy offers a different balance of speed and risk, with options to revert to previous versions quickly if issues arise. You will need to choose the strategy that fits your business needs and your application’s requirements.
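
As one concrete example, a canary strategy for a Lambda-based application takes only a few lines of CDK – a sketch, with an inline dummy function standing in for your real code:

import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as codedeploy from "aws-cdk-lib/aws-codedeploy";

class CanaryStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const fn = new lambda.Function(this, "Fn", {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: "index.handler",
      code: lambda.Code.fromInline("exports.handler = async () => 'ok';"),
    });

    const alias = new lambda.Alias(this, "Live", {
      aliasName: "live",
      version: fn.currentVersion,
    });

    // Shift 10% of traffic to the new version, wait five minutes, then shift
    // the rest - CodeDeploy rolls back automatically if alarms fire in between.
    new codedeploy.LambdaDeploymentGroup(this, "CanaryDeploy", {
      alias,
      deploymentConfig: codedeploy.LambdaDeploymentConfig.CANARY_10PERCENT_5MINUTES,
    });
  }
}

new CanaryStack(new cdk.App(), "CanaryDemo");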

Drift – avoid it at all costs

In this section, Johannes highlighted the challenges that come with “drift” in deployments – defined as any kind of manual change made to your cloud deployment without going through Infrastructure as Code (IaC) and CI/CD. Our guidance: ensure that no one gets access to the target account to perform manual changes; instead, implement a “break-glass” pipeline that is focused on speed, so you can recover from application downtime by rolling forward through CI/CD.
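
Drift in CloudFormation-based deployments can at least be detected programmatically – a sketch with the AWS SDK for JavaScript (the stack name is a placeholder):

import {
  CloudFormationClient,
  DetectStackDriftCommand,
  DescribeStackResourceDriftsCommand,
} from "@aws-sdk/client-cloudformation";

const cfn = new CloudFormationClient({});

// Kick off drift detection for a stack. In a real pipeline you would poll
// DescribeStackDriftDetectionStatus with the returned id until detection
// finishes before reading the results.
await cfn.send(new DetectStackDriftCommand({ StackName: "my-app-stack" }));

// List every resource that was changed or deleted outside of IaC.
const drifts = await cfn.send(new DescribeStackResourceDriftsCommand({
  StackName: "my-app-stack",
  StackResourceDriftStatusFilters: ["MODIFIED", "DELETED"],
}));

for (const drift of drifts.StackResourceDrifts ?? []) {
  console.log(`${drift.LogicalResourceId}: ${drift.StackResourceDriftStatus}`);
}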

Ensuring consistency across pipelines

Thorsten introduced an innovative approach to maintaining pipeline consistency using constructs. By centralizing the standard pipeline templates and allowing teams to extend them, organizations can adapt to specific needs without sacrificing consistency. This method also helps in managing migrations between various CI/CD platforms effectively.
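
A sketch of what such a centrally owned construct could look like – names and the extension point are illustrative, not taken from the session:

import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";
import { CodePipeline, ShellStep } from "aws-cdk-lib/pipelines";

// Centrally maintained base pipeline: every team gets the same structure,
// compliance checks and conventions "for free".
export class StandardPipeline extends Construct {
  readonly pipeline: CodePipeline;

  constructor(scope: Construct, id: string, synth: ShellStep) {
    super(scope, id);
    this.pipeline = new CodePipeline(this, "Pipeline", { synth });
    // ...centrally defined waves, security scans, notifications...
  }

  // Extension point: teams add their own stages without touching the template.
  addApplicationStage(stage: cdk.Stage): void {
    this.pipeline.addStage(stage);
  }
}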

The role of security and compliance

Security and compliance are non-negotiable, integral parts of any CI/CD process. Integrating these practices from the beginning ensures that both security and compliance standards are maintained throughout the development lifecycle.

Feature flags and progressive delivery

Gunnar highlighted the importance of feature flags and progressive delivery in decoupling deployment from feature activation. With feature flags, changes can be made dynamically without redeployment, enhancing agility and reducing risk. This approach, used by companies like Netflix, enables controlled risk management and early detection of issues.
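
To make this concrete, here is a sketch of a runtime flag check using AWS AppConfig feature flags – the application, environment and profile identifiers are placeholders, and in production you would cache the session instead of starting one per lookup:

import {
  AppConfigDataClient,
  StartConfigurationSessionCommand,
  GetLatestConfigurationCommand,
} from "@aws-sdk/client-appconfigdata";

const client = new AppConfigDataClient({});

// Read the current state of a feature flag at runtime, without redeploying.
async function isFlagEnabled(flagName: string): Promise<boolean> {
  const session = await client.send(new StartConfigurationSessionCommand({
    ApplicationIdentifier: "checkout-service",       // placeholder
    EnvironmentIdentifier: "production",             // placeholder
    ConfigurationProfileIdentifier: "feature-flags", // placeholder
  }));
  const config = await client.send(new GetLatestConfigurationCommand({
    ConfigurationToken: session.InitialConfigurationToken,
  }));
  const flags = JSON.parse(new TextDecoder().decode(config.Configuration));
  return flags[flagName]?.enabled === true;
}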

Avoiding vendor lock-in with projen-pipelines

Thorsten presented a possibility for CI/CD practitioners to adopt an open source project called projen-pipelines, which empowers developers to switch between different CI/CD vendors by letting them define pipelines in TypeScript and implementing a renderer that can generate pipeline code for Gitlab, Github, CodeCatalyst and Bash.
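
Conceptually the pattern looks like this – a hypothetical sketch of the vendor-agnostic idea, deliberately not the actual projen-pipelines API:

// One neutral pipeline model...
interface PipelineModel {
  steps: { name: string; commands: string[] }[];
}

// ...and one renderer per CI/CD vendor.
interface PipelineRenderer {
  render(model: PipelineModel): string; // returns vendor-specific pipeline code
}

class GitlabRenderer implements PipelineRenderer {
  render(model: PipelineModel): string {
    return model.steps
      .map((s) => `${s.name}:\n  script:\n${s.commands.map((c) => `    - ${c}`).join("\n")}`)
      .join("\n");
  }
}

const model: PipelineModel = {
  steps: [{ name: "build", commands: ["npm ci", "npm run build"] }],
};
console.log(new GitlabRenderer().render(model));

Switching vendors then means swapping the renderer, not rewriting the pipeline definition.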

Conclusions

The insights from this session highlighted the ever-evolving nature of CI/CD practices, where automation, innovation, and stringent security measures play crucial roles. As we continue to refine these practices, it’s clear that the right blend of technology and methodology can significantly impact the efficiency and reliability of software delivery processes.

Next steps

To dive deeper into these strategies, check out the resources and links provided below. Engage with the wider community to exchange ideas and best practices, and continue evolving your CI/CD processes to meet future challenges.

Thank you for attending this session and for taking the time to read the additional information provided here.

Please do not hesitate to reach out and ask any questions that you might have in the comments or reach out to us directly on socials.

If you’re an old-school person, reach out to us by email.

The links mentioned in the session


A first look at AWS EKS Auto Mode – hits, misses and possible improvements


Introduction

Today, AWS announced the availability of a new feature for AWS EKS called “Auto Mode”. With this, AWS focuses on solving some of the challenges that users have been mentioning ever since the release of EKS and Fargate.
In this article, we’ll explore the hits and misses (from my perspective) and where I think that the team still has some work left to do.

Feature TL;DR

EKS Auto Mode makes use of EC2 managed instances and simplifies the management of the underlying compute resources for EKS clusters. In addition, it enables a Karpenter-backed, fully k8s-API-compliant way of scaling EKS data planes. The AWS EKS team takes responsibility for managing not only the infrastructure but also the AMIs that power the k8s cluster.

What changes for EKS operations engineers?

With this change, EKS operations engineers no longer need to scale EKS clusters in and out. Karpenter scales node infrastructure much faster than EC2 Auto Scaling, and operations engineers can focus on the applications instead of managing the underlying infrastructure.

How does the new feature change the responsibility model for EKS?

With this change, AWS takes on a lot more responsibility within the EKS space. The EKS team will now manage the underlying AMIs, ensuring that they follow security best practices and are secure to use. AWS will also manage node rotation and upgrades where required.

How do users interact with the new feature?

The new feature is available through the AWS console, the AWS CLI and through infrastructure as code – with CloudFormation and Terraform supported right from the start.

In the AWS console, the new feature also simplifies the setup of a new EKS cluster through a “quick start” mode. In this mode, the cluster creation process automatically selects sensible defaults for the VPC and other settings.
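
Based on my reading of the announcement, enabling Auto Mode through the SDK boils down to a few extra flags at cluster creation – treat the exact parameter names as an assumption until you have checked the current API reference (role ARNs and subnets are placeholders):

import { EKSClient, CreateClusterCommand } from "@aws-sdk/client-eks";

const eks = new EKSClient({});

// Create a cluster with Auto Mode: compute, load balancing and block storage
// are enabled so that EKS manages the data plane for you.
await eks.send(new CreateClusterCommand({
  name: "auto-mode-demo",
  roleArn: "arn:aws:iam::123456789012:role/eks-cluster-role", // placeholder
  resourcesVpcConfig: {
    subnetIds: ["subnet-0abc", "subnet-0def"], // placeholders
  },
  computeConfig: {
    enabled: true,
    nodePools: ["general-purpose", "system"],
    nodeRoleArn: "arn:aws:iam::123456789012:role/eks-node-role", // placeholder
  },
  kubernetesNetworkConfig: { elasticLoadBalancing: { enabled: true } },
  storageConfig: { blockStorage: { enabled: true } },
}));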

Hits – where the feature is good

As far as I have seen, the feature finally gives AWS an auto-scaling implementation based on the k8s API standards and definitions. EKS Fargate was always an attempt to simplify interacting with EKS, but due to the nature of the feature – it is not compliant with the k8s API – you missed out on possibilities like using a different CNI, running sidecars, etc.

EKS Auto Mode changes this and simplifies the EKS experience.

The additional responsibility that AWS is taking on for managing and securing the underlying infrastructure will also help organizations build faster.

With the feature, the team also simplifies upgrading the control plane: by taking ownership of the underlying nodes, it can guarantee that the infrastructure setup stays compliant with new k8s versions – including some of the addons that are now built into the underlying deployment, which is powered by the Bottlerocket OS.

Misses – what did the team not simplify?

The team did not simplify the network infrastructure setup, and the feature does not make the management of networking and integrations for the clusters any easier.

Other wishes for EKS

As already mentioned, I’m not a fan of the current possibilities for network management and of the defaults chosen for the EKS networking setup. The troubleshooting experience could also be better.

As a next step, I’d also love the EKS team to take on additional responsibilities for addon management, to empower us to build a real service mesh for east/west traffic management, and to offer further out-of-the-box integrations with other AWS services or managed service providers.

An example could be a managed Crossplane service or addon, as this k8s-based tool is becoming more popular – not only for k8s but also for managing AWS infrastructure.

The possibility to add ArgoCD or FluxCD as an out-of-the-box component of your EKS management plane also seems appealing to me.

And then there is the other thing that is constantly getting on my nerves: with the idea of using ephemeral EKS clusters, the importance of “faster” cluster provisioning times rises. This could be achieved by optimizations on the EKS side or by supporting vClusters on EKS out of the box.

Wrap up

This was my initial take on the newly announced AWS EKS Auto Mode. I’ll need to play around with it a bit more to be able to give a better assessment.

What did I miss, what do you think?

Please let me know and start a conversation, I’m eager to hear your thoughts and feedback!


The art of simplification when building on AWS

Introduction

AWS has existed for more than a decade, and as of today there are more than 200 AWS services (and counting), even after a few “de-prioritizations” in 2024. The landscape of building cloud applications on AWS is big and ever-growing, and as builders we need to make hundreds of decisions every day.

One of the most common sentences I have heard from “Cloud Architects” in the past weeks starts with “It depends…”, when they are asked how to build or solve a specific challenge. I personally believe that it has become very complex and difficult to decide on the technology (or service) to use, and that we as the AWS community need to do a better job of explaining consistently how to make specific decisions for a specific application or architecture.

If we add the option of deploying a k8s cluster on AWS, the number of choices becomes even bigger as you can…build “anything” on k8s 🙂

I believe that it is too difficult to make these choices and that we need to start looking at the “simplification” of building applications on AWS.

“A good cloud architect” knows when to use which service and which architecture, weighing simplicity, complexity, costs, security footprint and extensibility.

Let’s have a look at the current landscape and the challenges we see.

(This article was written before re:Invent 2024 so some of the details might be outdated by the time you read this 🙂
I’ll try to update this article if there are any related announcements at the conference.)

A few examples of things that could be simpler

In preparation for this blog post, I asked a few AWS Heroes and Community Builders where they think AWS is too difficult and complex in November 2024. The answers I got vary based on each individual’s focus and role. In this blog I’ll cluster them by topic.

Upgrading documentation, showcasing best practices

The most common input I’ve received, by far, is the ask for more supported and maintained example implementations, best-practice documentation and recommendations. Most best practices for the different services are presented in sessions at re:Invent or re:Inforce, or in AWS blog posts. Some are shared within the service documentation or on Github – under awslabs or aws. Unfortunately, a lot of them become outdated fast and are not actively maintained.
In our area of business, technology changes rapidly, and best practices that are presented today are already outdated tomorrow.

AWS needs to do a better job at keeping documentation and best-practice implementations up to date. This also includes more frequent and better collaboration in open source projects. Some of the AWS-owned open source projects (like the AWS CDK or the “containers-roadmap”) are losing momentum because of missing engagement from the service teams in 2024.

When CodeCatalyst was announced in 2022, I had high hopes that the “Blueprints” functionality would become the “go-to” place for best-practice implementations – but AWS unfortunately failed to deliver on that promise.
Blueprints are barely maintained, and even though the “genai-chatbot” blueprint has produced a large number of views on my YouTube channel, it feels like they have been abandoned by AWS in the past months.

Simplify costs and cost management

As organizations mature in their usage of AWS and in building applications on AWS, a lot of them put a focus on understanding and analyzing the costs produced by their applications running in the cloud. AWS currently lets you track costs mainly based on usage and resources consumed.

This often makes it hard to track the costs allocated to a certain business functionality. Especially if you’re building multi-tenant applications on AWS, it can be really hard to understand and verify what each tenant actually costs you.

We’d love a simpler way to allocate costs per application, or even per transaction, to properly understand the consumption of our budget. This also includes examples like Athena, where you’re billed for using Athena, but the same transaction also triggers S3 API calls, which are then not allocated correctly to your Athena-based application.

Another example that I recently encountered myself was an EKS cluster deployed in a VPC with a Network Firewall attached and GuardDuty activated. The EKS cluster itself was only a portion of the total allocated costs – 20% for EKS, but – due to some application deployment challenges – 60% for the Network Firewall and 20% for GuardDuty.

I wish AWS would auto-discover my applications (e.g. by using myApplications) and transactions, and give me the information that helps me understand the costs of my applications.
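
The closest you can get today is grouping costs by a cost-allocation tag – a sketch using the Cost Explorer API, assuming your resources carry a hypothetical “application” tag:

import { CostExplorerClient, GetCostAndUsageCommand } from "@aws-sdk/client-cost-explorer";

const ce = new CostExplorerClient({});

// Monthly unblended cost, grouped by the "application" cost-allocation tag.
const result = await ce.send(new GetCostAndUsageCommand({
  TimePeriod: { Start: "2024-11-01", End: "2024-12-01" },
  Granularity: "MONTHLY",
  Metrics: ["UnblendedCost"],
  GroupBy: [{ Type: "TAG", Key: "application" }],
}));

for (const group of result.ResultsByTime?.[0]?.Groups ?? []) {
  console.log(group.Keys, group.Metrics?.UnblendedCost?.Amount);
}

Note that this only covers resources you can tag – the indirect costs from the Athena/S3 example above still escape this kind of grouping.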

k8s and containers

Even in the containers world, AWS has too many options to choose from: besides the prominent options like ECS and EKS, we have Beanstalk, AppRunner and even Lambda to run containers. I understand that all of these building blocks empower builders to build applications using the service they want – but you still need to make choices, and migrating from one to another is often hard, complex and difficult. And – even worse – you need to be an expert in the service to make the right choice for your use case.

I wish for this decision to be simpler, if not to say seamless. Builders potentially don’t want to make decisions about the service; they want their applications to adapt automatically to changing requirements. Having the possibility to switch from one service to another automatically, without (much) human intervention, would empower us to invent and simplify!

AWS EKS – my top challenges

I’ve been experimenting with AWS EKS lately – and to be honest, every time I start a new cluster, it is a real pain.

Everything is “simple” if you are able to work with defaults – like creating a new VPC in a non-enterprise environment. However, the default creation process allows creating public EKS clusters, which should be forbidden by default. Triaging network challenges on EKS is also still very complicated, and getting support for these kinds of problems can be a painful experience.
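
Until the defaults change, you can at least enforce a private API endpoint yourself – a sketch in CDK (the Kubernetes version and capacity settings are illustrative):

import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as eks from "aws-cdk-lib/aws-eks";

class PrivateClusterStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);
    const vpc = new ec2.Vpc(this, "Vpc", { maxAzs: 3 });

    // The API endpoint is reachable only from within the VPC, not publicly.
    new eks.Cluster(this, "Cluster", {
      vpc,
      version: eks.KubernetesVersion.V1_30,
      endpointAccess: eks.EndpointAccess.PRIVATE,
      defaultCapacity: 0, // assumption: capacity is managed separately
    });
  }
}

new PrivateClusterStack(new cdk.App(), "PrivateEksDemo");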

I would love to get an “auto-fix” button that solves my networking problems on EKS clusters or verifies for me if my setup is correct.

In addition to that, now that EKS supports IPv6, it might be the right time to solve the never-ending IP address problem that a lot of organizations have, by enabling IPv6 by default and setting up EKS clusters to use it for private subnets and internal networking.

Another thing that EKS Fargate currently doesn’t solve is the possibility to use the full k8s API and its scalability options. If you want to implement something like Karpenter for your workloads, you always need to fall back to “self-managed” EC2 compute – and this is always painful, because it requires you to start managing your own AMIs and infrastructure. In that case, you also need to take care of the scalability of your cluster infrastructure yourself, which seems an outdated thing to do in 2024.

Creating, running and deploying EKS clusters should become a commodity and a “simple thing” – no one should have to worry about it, as it is really only the starting point for building on Kubernetes.

I hope that AWS takes away some of these challenges and helps organizations that are building on Kubernetes focus on what they want to build – on their business value – instead of managing infrastructure for their clusters.

Investing in cross-service integrations for serverless

The serverless landscape has evolved a lot over the past years. We’ve seen new functionalities and integrations become available, but similar to the containers space, the number of choices you can and need to make has increased.

At the same time, the integration between the services has not evolved a lot. The Infrastructure as Code (IaC) space is massively fragmented, with AWS CDK, CloudFormation, the Serverless Application Model (SAM), Terraform and newer players like Pulumi all growing. Lately, I’ve also encountered Crossplane as a “serious” option for writing Infrastructure as Code and deploying infrastructure on AWS.

The observability landscape is also big – with OpenTelemetry, AWS X-Ray and missing integrations to other observability tools, it is difficult to build observability into serverless applications that span a lot of different services. Not all of the services support OpenTelemetry integration out of the box – I believe this would be a great addition. Auto-discovering transactions and giving developers insights into what’s happening within their applications, across multiple services, would make application development easier.

Another piece of feedback I got during my conversations was the wish to simplify the setup and definition of API Gateway integrations with load balancers. Defining routes, paths and payloads still seems difficult within API Gateway, and the differences between a “REST” and an “HTTP” API endpoint are sometimes confusing. And then there is AppSync (hosted GraphQL)… I see a lot of potential to simplify this setup and make it easier for developers to build APIs on AWS.

Enterprises & GovCloud

When talking about enterprises in general, and enterprises building for GovCloud (and, going forward, the European Sovereign Cloud), users would love to get features and services rolled out to GovCloud environments more frequently than today. They also complain that not all parts of the AWS console and tooling are aware of the different partitions (“normal” AWS vs. GovCloud). This should be improved as well.

On the optimization and simplification front, I regularly hear the feedback that switching between different AWS accounts is a big issue – as we call out “multi-account” deployments as a best practice, it becomes increasingly important to switch between accounts easily and to simplify the integration.

Interview partners say the same about multi-region deployments: the console does not support interacting with applications that are deployed in multiple regions, and there is not a lot of out-of-the-box support for these kinds of deployments within AWS.

When I recently listened to the AWS Developers Podcast episode focused on IAM Identity Center, I heard a lot of very positive things about how to use it and integrate it within your organization’s landscape. I agree that it makes a lot of things simpler than plain IAM, but improving the user experience and allowing additional automations to be implemented would be helpful.

General simplifications

Looking at the never-ending announcements about new releases focused on Generative AI – Amazon Bedrock, Amazon Q, Amazon Q Developer, Amazon Q for Business, … – it becomes difficult to navigate the landscape, even only 18 months after Generative AI became a hype.

From the outside, AWS’ messaging is unclear and distracting. With many teams exploring different options, the confusion will only grow. It needs to be clarified which names, technologies and services to use for which use case. And it needs to be clearer what AWS wants to be in the Generative AI landscape: a “building blocks” provider (through Bedrock and the Converse API), or a “player” in the field of user-facing generative AI, competing with OpenAI and others. This message is not yet clear – at least not to me.

Making things simpler – helping architects make better decisions

If I look at the AWS landscape as a cloud architect, I would love to be able to make decisions better and faster, supported by AWS. A tool or a service that supports making decisions based on business requirements and scalability would be awesome, allowing me to focus on building applications and services instead of making me an expert in choosing the “correct” compute mode for my applications. There are just too many possible options to build applications on AWS. Serverlessland is a great step toward making these decisions easier, but we need more than that!

Thanks to the contributors 😉

While some of the participants of my small survey do not want to be named, I can thank Benjamin, Ran, Matt Morgan, Matt Martz and Andres for their contributions to this blog post. Your detailed feedback and input helped me a lot to shape this post – thank you for all you do for the AWS community.

Wrap up – the art of simplification

In November 2024, I believe that being a “great” cloud architect means being able to make smart decisions and knowing when and why to choose specific combinations of services – an art that a lot of us still need to learn.

k8s is not always the right answer, but sometimes it might be. AWS Lambda and serverless applications are also not the best choice for everyone.

Simplifying your architectural decision tree makes your role as a cloud architect easier.

What do you think? Where can AWS make your life as a builder and cloud architect simpler and easier?


Dear AWS, how do I build & develop purely on AWS right now?

The announcements from AWS around deprecating certain services have raised a bunch of questions and concerns in the AWS community. 

As Jeff Barr wrote, these are the services:

S3 Select, CloudSearch, Cloud9, SimpleDB, Forecast, Data Pipeline, and CodeCommit.

This post will focus on Cloud9 and CodeCommit… and on how I think this announcement impacts the “end-to-end” developer story on AWS. We’ll also look at how the announcements impact my “go-to” service, Amazon CodeCatalyst.

It is written from the perspective of a builder who mainly uses AWS tools for smaller side projects and who can be seen as a “startup” that needs to be up & running quickly without much hassle.

Introduction

These announcements – and the way these deprecations were announced:

Blog for CodeCommit

Blog for Cloud9

– are, in my humble opinion, among the worst possible ways to handle this. I know the teams at AWS have seen the feedback, and I hope there will be a clearer communication strategy going forward.

For me, the combination of those posts with the assumption that CodeCatalyst is built on top of these services gives a very strange feeling about how much AWS is currently invested in developers on AWS.

Let’s look at why I see a lot of impact from these announcements for builders, and think about alternatives if you are using CodeCommit or Cloud9 for certain aspects today.

Tools required for SDLC

A few weeks ago I even dedicated a complete Shorts playlist to all of the Code* tools, looking at their usage and at the approach of covering the full Software Development Lifecycle (SDLC) when building on AWS.

In this series I drafted the diagram:

AWS Tools part of your SDLC until recent announcements

This being an end-to-end flow, AWS had at least two options to implement this process using their tools:

Either CodeCatalyst, or a combination of different AWS services

When CodeCatalyst was announced, I wrote about how CodeCatalyst can be used to cover all parts of your Software Development Lifecycle (SDLC) process on AWS. Ever since then, there has been an alternative on AWS using a combination of different building blocks: CodeCommit, CodeBuild, CodeDeploy, CodePipeline and others.

CodeCommit was a good, reliable managed Git server. For the purpose it solved, there weren’t many features to add. It was a managed service you didn’t need to think about – it just “served its purpose”.

Cloud9 was a hosted IDE, a development environment that users were able to access through their browser. This enabled builders to have a real IDE even on old or underpowered computers, anywhere – even on vacation.

Developers on AWS were able to use CodeCatalyst to cover all parts of their product lifecycle, or they had the alternative of composing their SDLC process from the different “building blocks”. Both options provided value and helped AWS customers solve certain aspects and problems.

Now, officially, only one option is left — CodeCatalyst.
CodeCatalyst is an integrated DevTools service that unites all of the building blocks under an opinionated, structured user interface. It was announced at re:Invent 2022 and went GA in early 2023. With the custom blueprints feature, it also enables builders to create project templates and share them with their teammates or dependent teams – very powerful possibilities for teams to collaborate better and to share their best practices with other teams.

Those that didn’t need a “reliable managed Git server” were most probably using existing alternatives – which might solve the “job” better than CodeCommit – like Github, Gitlab or Atlassian. These users and AWS customers are not affected by the change.

What has changed with the July 2024 announcements — the builder’s perspective

Now, the system landscape has changed.

Developers cannot use Cloud9 anymore to develop software; they need to fall back to alternatives like Github Codespaces, Coder or Gitpod.

Developers cannot store their source code in CodeCommit anymore; they need to fall back to alternatives like Github, Gitlab or Bitbucket.

And given that CodeCatalyst might be using CodeCommit under the hood and is using Cloud9 for its Dev Environments – can I really build something on top of CodeCatalyst going forward?

So this deprecation announcement – without a “real” AWS-native alternative – puts everyone building and developing software on AWS in the situation of needing to look for alternative setups.

In particular, it forces you – if you are a small organization (or a startup) – to engage with more than just one vendor as part of your SDLC process. I see this as a critical point to talk about as well.

And if you are building software or platforms on AWS where CodeCommit in particular is part of the application or the deployed architecture itself – you are now left without any option. If you want to integrate a Git server into your application on AWS, you now need to self-host the Git server instead of using a managed service.

If you “just” needed a Git repository – quickly, fast and reliable – CodeCommit was the way to go. Now, you need to use a 3rd-party alternative.

Now: What options on AWS do we have as builders?

What changed with the July 2024 announcements — the business perspective

Looking at the announced changes from a different perspective, we need to acknowledge that AWS is a 90+ billion (90,000,000,000) dollar company. It is clear that, being a business that aims to “make money”, AWS needs to focus on services and solutions that are widely used and adopted and that earn a good margin.

The reason might be that Cloud9 and CodeCommit were just not profitable enough to drive the expected growth of the business – especially as there are other services that do the same job better. So it might have been “just” a business decision to stop investing in these services and to focus instead on Amazon Q, which promises to help developers and builders on AWS.

This raises the question of which other services might, soon or in the future, be hit by exactly the same challenge. And: how does AWS measure the “success” of a service? Is it “just” revenue, or are other points considered as well?

But still — How this feels for me and questions I have (emotionally)

It feels like AWS has given up on engaging with its “builders” and is now focused on the “buyers” that “host” their applications on AWS.

If you think about how AWS started and if you look at how much effort AWS has spent this year on making us think that “Amazon Q Developer” is going to make our lives as developers easier…

How can I, as an advocate for AWS as a platform, be confident that I am valued as a “builder” on AWS? Will other services also disappear if they do not get enough traction?

And how much can I trust in Werner’s “Now, go build“?

How much “trust” can I put in the other Code* tools (CodeBuild, CodePipeline, …) on AWS?
With CodePipeline and CodeBuild getting a lot of notable updates right now (macOS support, Github Actions runners, stage rollbacks, …), the outsider’s view is that at least these services are there to stay… but how much trust has the AWS team lost with builders around the globe?

I’m eager to see how the different workshops, best-practice documents and open source projects that use either CodeCommit or Cloud9 (especially the AWS-owned ones) will be adjusted and updated in the next weeks and months.

How central is CodeCatalyst going to be for developers on AWS? How many updates will we see there?

How does this affect you – I would love to know!

I am really interested to hear how these announcements have affected your perspective on AWS and your view on the different AWS services.

Please share your thoughts either as a comment to this post or reach out to me personally!

What YOU can do next

You could now follow the advice from AWS and “migrate” away from CodeCommit or Cloud9 — but is this really what you want to do?
If you need a “Git server” or “Git repository” close to your applications on AWS, how do you do that?
You might need to host your own Git server on AWS… or give up on that premise and fall back to alternative Git providers like Github, Gitlab, …

If you insist on having your own hosted Git within your AWS environment, there are a few possible solutions…

…and potentially others that I am not aware of….

In order to host a “simple” Git setup, I’ve recently made this repository public, which deploys Gitness as a Git repository on ECS. It will cost you roughly 50 USD/month. See also the related blog post.
Inspired by this, Jakub Wolynko did the same thing for Onedev – please see https://github.com/3sky/onedev-on-ecs if you would like to try that out.

As an alternative to Cloud9, you can use vscode.dev, which runs VS Code in the browser, or other alternatives that are more integrated and personalized, like gitpod.io or Github Codespaces.

But is this REALLY what you want to do if you are working on AWS only?

What I hope to get from the AWS team

As re:Invent is approaching fast, and re:Invent usually sets the direction for a lot of AWS services, I really hope to get reliable information and roadmap clarifications around the AWS developer tools.

I’d like to understand if I can rely on CodeCatalyst, CodePipeline, CodeBuild, CodeArtifact, CodeDeploy, … and other AWS services that help developers to build software on AWS.

Does anyone know if this page ever mentioned CodeCatalyst? Please let me know!

In addition to that, I would love to get a better and more detailed overview of the level of support that customers of the “deprecated” services will get: security updates? Priority support?
One page that summarizes this for all “deprecated” services would be amazing!

And – last but not least – make sure that Amazon Q knows which services you are deprecating!

Screenshot taken on 6th of September, 4pm CEST

If you’ve read this post all the way to here, I would love to get your view and your feedback on this topic!

Thanks for the feedback I got before publishing this article – and while I know you don’t agree with everything I wrote, it’s great to get your input, Monika, Raphael, Ran, Markus and others 🙂

Please let me know either in the comments or directly on my social channels — LinkedIn, X being the ones I still use mostly 😉 


A self-hosted CodeCommit alternative

A few weeks ago, AWS CodeCommit became a deprecated service on AWS. This means that customers cannot create new repositories anymore – refer to this announcement for all details: Blog for CodeCommit

There are obviously a lot of alternatives to CodeCommit (Github, Gitlab, …), but if you need a “self-hosted” Git repository in your own AWS account, this can become a little bit harder to provision.

If you’re looking for a “self-hosted” Git server, a bunch of tools come up:

  • Gitea
  • Gitness
  • Gitlab
  • …and obviously also others or just a “git” server running on EC2

As I wanted to be able to deploy something that works “out of the box” I looked at how to provision one of these alternatives on my own AWS account.

Gitness

Gitness is an open source development platform packed with the power of code hosting and automated DevOps pipelines. Its biggest maintainer is Harness. It includes “way more” than just Git – e.g. Gitspaces, Pipelines, etc.

Deployment of Gitness

If you want to deploy Gitness, the docs point you to running it locally using Docker, or to deploying it on EC2 or k8s.

My aim was to make “only” the Git component available, and because of that I chose Amazon ECS with EFS as storage to provision Gitness.

Open Source Code for the deployment of Gitness

I’ve set up a project on Github where all of the code examples are available for you to look at:

https://github.com/Lock128/setup-gitness

In the rest of the article, I’ll try to walk you through the most important parts of the project.

Please note that this code is not production-ready; it is more of a PoC to showcase the direction that you could take.

Source Overview

We’re using AWS CDK and Typescript for Infrastructure As Code (IaC).

We have a “bin” directory where the main application is and a “lib” directory where the “Cloudformation Stack” is.

The “real code” is in setup-gitness-stack.ts, where we create the ECS cluster and Fargate service, the EFS file system for persistent storage, and the load balancer that exposes Gitness.
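
Condensed, the approach looks roughly like this – a simplified sketch of the stack, not the actual repository code (image reference, port and paths are illustrative):

import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as ecsPatterns from "aws-cdk-lib/aws-ecs-patterns";
import * as efs from "aws-cdk-lib/aws-efs";

export class GitnessSketchStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const vpc = new ec2.Vpc(this, "Vpc", { maxAzs: 2 });
    const fileSystem = new efs.FileSystem(this, "Data", { vpc });

    // Fargate service behind an ALB; EFS keeps the repositories across restarts.
    const service = new ecsPatterns.ApplicationLoadBalancedFargateService(this, "Gitness", {
      vpc,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry("harness/gitness"), // illustrative
        containerPort: 3000,
      },
    });

    service.taskDefinition.addVolume({
      name: "gitness-data",
      efsVolumeConfiguration: { fileSystemId: fileSystem.fileSystemId },
    });
    service.taskDefinition.defaultContainer?.addMountPoints({
      containerPath: "/data",
      sourceVolume: "gitness-data",
      readOnly: false,
    });
    fileSystem.connections.allowDefaultPortFrom(service.service);
  }
}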

Required modifications?

You only need to set up “Secrets” in your repository with the names AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

After the initial deployment, you will need to modify the GITNESS_URL_BASE in https://github.com/Lock128/setup-gitness/blob/main/lib/setup-gitness-stack.ts#L104 to point it at the Load Balancer URL that has been set up for you.

Deployment

If you “fork” this repository and then push something to the “main” branch, a Github Actions workflow will deploy it to your AWS account.

The deployed CloudFormation stack will contain everything that you need.

The expected costs for this are roughly 50 USD / month.

Next steps from here

As mentioned above, the code presented here is not production-ready, and it does not allow you to use all of the functionalities of Gitness.

Things that you will need to consider / think about when using this code as a starting point:

  • What’s the backup schedule for the data on EFS?
  • What are the required scaling policies?
  • Do you need a custom domain name?
  • Do we need to do security hardening – on the image, on the infrastructure, etc.?
  • Do we need to have multiple environments (development / production) for Gitness?
  • How to connect to this Git server from your VPC?
  • How to automate user creation or tie it to an existing IAM Identity Center?

As you can see, this is just the starting point of your journey to hosting your own Git server on AWS 🙂

Please let me know if you have better ideas, suggestions or alternatives!


The state of CodeCatalyst in July 2024

I am personally using CodeCatalyst regularly for a lot of private projects; I also work a lot with other CodeCatalyst users, and I give feedback to the CodeCatalyst team regularly. In this post I look at the state of the tool in July 2024 and at how I make use of it on a regular basis.


A few more months in…

CodeCatalyst was officially announced in December 2022 and reached GA in April 2023. Since then, it has been getting a lot of updates and changes, some of which you have potentially never had a look at.
In December 2023, major updates for enterprise customers were announced, alongside other features like packages and Amazon Q integration functionalities.

The best new CodeCatalyst updates as of July 2024

Since last re:Invent, CodeCatalyst has gradually increased its third-party integrations, with the option to have your source code stored in Gitlab, Github or Bitbucket. We have also seen Custom Blueprints expand to code generation for repositories stored outside of CodeCatalyst itself.

Just recently, we have also seen the possibility to attach more than one space to a single IAM Identity Center instance, which enables further usage of CodeCatalyst by more enterprise customers.

CodeCatalyst also announced expanded packages support beyond just npm – you are now also able to store Maven-based artifacts or OCI-based images in packages.

Major updates to Custom Blueprints, and additional blueprints, enable you on the one side to import source code into CodeCatalyst, and on the other side to create a custom blueprint out of an existing project. This should make creating blueprints more accessible.

For a few months now it has also been possible to include “approval gates” in CodeCatalyst workflows. This is a very limited functionality, but it still enables some important use cases.

Is CodeCatalyst ready for prime time?

It still depends.

While CodeCatalyst has drastically improved and matured over the last 12 months, there are still a few things that need to get better before I would 100% recommend using it.

The things that mainly concern me as of now: the CI/CD capabilities and the integration with AWS services.

The CI/CD capabilities are still limited and need to become more flexible and better integrated. Approval rules need to be more sophisticated and allow more fine-grained specification.

If you already have CI/CD workflows or branch permissions set up in a tool of your choice, “import” functionalities that translate existing Github Actions, Jenkins pipelines or Gitlab workflows into CodeCatalyst workflows are missing, as is the option to automatically set up branch permissions.

Other than that, CodeCatalyst is pretty much ready for prime time, and it has some outstanding functionalities that should be marketed more.

Next steps? What I think could come next…

The brave option

I still believe that the most underrated functionality of CodeCatalyst is Custom Blueprints. If you’re living in a k8s world, Backstage has been leading, together with others, the field of “Internal Developer Portals” that empower developers to perform actions quicker and more efficiently in their day-to-day work. Backstage in particular starts with the possibility of scaffolding projects and generating code. However, Backstage does not allow you to keep track of changes to the relevant templates later.

Custom Blueprints – and also the “existing blueprints” – empower developers to do exactly the same thing.

Given that CodeCatalyst has already been opening itself up with third-party integrations like the full Github, Gitlab and Bitbucket support, I can see the potential to open CodeCatalyst up even further.

With the marketplace that is already available in CodeCatalyst – and not yet used very much – this could be opened up to allow other providers to add additional integrations, actions and blueprints.

Still, the team would need to add additional functionalities like dashboards, widgets, … to make CodeCatalyst feel like an “Internal Developer Portal”.

What is unclear to me is whether AWS will be brave enough to invest another 1-2 years into CodeCatalyst before it can become the central place for developers on AWS. I am also not sure whether AWS will finally go all-in on CodeCatalyst, or whether it will continue to invest in the existing Code* tools (CodeCommit/CodePipeline/CodeBuild/CodeArtifact).

The usual way for AWS developer tools

AWS will continue to invest half-focused and try to stay “on track”, helping a huge customer base achieve the simple things with CodeCatalyst. Integrations with other AWS services will be missing, and the adoption rate will stay small. With this kind of investment, AWS will have multiple Developer Tools solutions in the portfolio (CodeCatalyst vs. CodeCommit/CodePipeline/CodeBuild/CodeArtifact) that each do not solve “all” problems and use cases but serve different customer bases.

What I think will happen

Given that CodeCatalyst is built by different service teams, we will see some teams heavily investing into making “their” part of the product successful (e.g. “Packages”, “CI/CD” or “Amazon Q in CodeCatalyst”). We will start seeing these unique capabilities reach other AWS services or potentially also other platforms. CodeCatalyst as a product will continue to exist, but the different service teams will focus on where they can make more “money”. CodeCatalyst will not be able to deliver on the promise it carried when it was announced as the “central place for DevOps teams on AWS”. CodeCatalyst functionalities will be made available through the AWS console. With that, CodeCatalyst as “the product” I was hoping for will cease to exist.

What do you think about my ideas and assumptions? Do you think I am wrong?

Drop me a comment or a note, I’d love to hear what your take on the future of CodeCatalyst is!


Another year in the community – Thank you, AWS community team #thankfulforest2024 #firevalleyrocks

The year 2023 is close to its end and we’re approaching the “holiday season” – which is one more reason to take a few minutes to say THANKS to those who work every single day to empower the AWS community.

We did this last year, too – so it was about time to try something else – and the community did it again: All the trees of the Community forest

Saying “thanks” with my speciality service

I’ve found a way to make a tree shine in CodeCatalyst using the workflows – it’s not as colorful as what Jenn did and not as detailed as Brian’s approach… and it should definitely not be taken as a replica of the team structure or org chart, but it shows that all of the work the AWS Community team does – all of the support, guidance and investment – makes the AWS community a strong foundation for everyone who wants to be part of it!

The community is open to everyone – you can even start your own Meetup easily.

Thank you, AWS Community Team

I am really thankful to be part of the AWS community, and it’s energizing to see the ideas, the sessions and the discussions that we all have together. You, Ross & team, make this possible every single day. Thank you for empowering us, for guiding us and for enabling us to be successful.

Here’s the code for my CodeCatalyst workflow:

Name: firevalleyrocks
SchemaVersion: "1.0"
Triggers:
  - Type: Push
    Branches:
      - main
Actions:
  Ross:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Ross!"
        - Run: echo "Thanks for all of your support in 2023!"
  Taylor:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Taylor!"
        - Run: echo "Thank you for making the Heroes a true community!"
    Compute:
      Type: Lambda
    DependsOn:
      - Ross
  Jason:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Jason!"
        - Run: echo "Thank you for making me start my Community Journey and for making the Community Builders what they are!"
        - Run: echo "Sorry, but you're red!"
        - Run: xxx
    Compute:
      Type: Lambda
    DependsOn:
      - Ross
  Maria:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Maria!"
    Compute:
      Type: Lambda
    DependsOn:
      - Ross
  Ernesto:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Ernesto!"
    Compute:
      Type: Lambda
    DependsOn:
      - Taylor
  Farrah:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Farrah!"
    Compute:
      Type: Lambda
    DependsOn:
      - Taylor
  Lily:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Lily!"
    Compute:
      Type: Lambda
    DependsOn:
      - Jason
  Thembile:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Thembile!"
    Compute:
      Type: Lambda
    DependsOn:
      - Maria
  Susan:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Susan!"
    Compute:
      Type: Lambda
    DependsOn:
      - Maria
  Albert:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Albert!"
    Compute:
      Type: Lambda
    DependsOn:
      - Ernesto
  Shafraz:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Shafraz!"
    Compute:
      Type: Lambda
    DependsOn:
      - Ernesto
  Wesley:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Wesley!"
    Compute:
      Type: Lambda
    DependsOn:
      - Lily
  Ben:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Ben!"
    Compute:
      Type: Lambda
    DependsOn:
      - Lily
  Will:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Will!"
    Compute:
      Type: Lambda
    DependsOn:
      - Susan
  Nelly:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Nelly!"
    Compute:
      Type: Lambda
    DependsOn:
      - Susan
  Community:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Community!"
    Compute:
      Type: Lambda
    DependsOn:
      - Nelly
      - Will
      - Ben
      - Wesley
      - Shafraz
      - Albert
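  # the intentionally varied spellings below extend the trunk of the tree, since action names must be unique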
  COmmunity:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Community!"
    Compute:
      Type: Lambda
    DependsOn:
      - Community
  Commonity:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Community!"
    Compute:
      Type: Lambda
    DependsOn:
      - COmmunity

CodeCatalyst at re:Invent 2023, YouTube and a Speakers Directory

In 2023 I’ve been lucky. I’ve started my own YouTube channel, where I present all of the re:Invent 2023 release highlights for CodeCatalyst, and I’ve become an AWS Hero – but more important than that, I’ve made a lot of friends around the globe. I’ve empowered others to become part of the community and I’ve challenged others with questions, tasks and ideas like the Speakers Directory.

Thank you for making my year 2023 unforgettable and for making me smile when I think about what we achieved together!

Application Composer levels up a lot and adds amazing IDE integration capabilities

In this post we’re going to look at the new functionalities that have been added to Application Composer by re:Invent 2023. After announcing support for all CloudFormation resources earlier in the year, Application Composer now allows editing Step Functions within the same user interface and – even cooler – gets an IDE plugin that allows developers to build serverless functions locally.

Application Composer as a serverless, rapid prototyping service adds additional capabilities to empower developers building serverless applications

Application Composer, which was originally announced last year at re:Invent 2022, has gotten a lot of major improvements throughout 2023. As we are right at re:Invent 2023, it’s time to look back at which new capabilities have been added and how they influence building serverless applications using AppComposer.

Supporting all CloudFormation resources

Already a few weeks ago the team announced that over 1,000 CloudFormation resources are now supported by AppComposer. This was a big update and makes it simpler to build all kinds of serverless applications. However, as this only allows AppComposer to expose the resources, the developer still needs to know all of the required connections between the different resources. I personally would love to see more “supported” resources (just like L2 constructs in CDK) made available as part of AppComposer, and I hope that this will be an additional functionality soon.
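To make this concrete, here is a minimal sketch (all resource names are invented) of the kind of connection knowledge the developer still needs today: even a plain Lambda function in raw CloudFormation requires an execution role that has to be created and wired up by hand.

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  # The execution role has to be defined explicitly...
  MyFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  # ...and referenced explicitly from the function - exactly the kind of
  # glue that an L2-like pattern could generate for you.
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !GetAtt MyFunctionRole.Arn
      Code:
        ZipFile: |
          def handler(event, context):
              return {"statusCode": 200}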

Integrating additional services

With the integration of the Step Functions Workflow Studio within the same interface, developers can now build an end-to-end application within Composer before using the generated SAM or CDK templates to trigger the deployment. As a next step I think it would be great to also be able to define EventBridge Rules & Pipes within the same interface.
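I haven’t inspected the exact template that Composer generates for this, but as a hedged sketch (all names invented): in a SAM template, such a state machine typically lands as an AWS::Serverless::StateMachine resource with its definition inlined and connected to the functions from the same template.

Transform: AWS::Serverless-2016-10-31
Resources:
  ProcessOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.12
      Handler: app.handler
      CodeUri: src/
  OrderWorkflow:
    Type: AWS::Serverless::StateMachine
    Properties:
      Definition:
        StartAt: ProcessOrder
        States:
          ProcessOrder:
            Type: Task
            # Invokes the Lambda function defined above.
            Resource: !GetAtt ProcessOrderFunction.Arn
            End: true
      Policies:
        # SAM policy template granting permission to invoke the function.
        - LambdaInvokePolicy:
            FunctionName: !Ref ProcessOrderFunction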

Local development and IDE integration

AppComposer announced a Visual Studio Code integration that makes it possible to build and design serverless applications right from your IDE!

With this feature, you can visualize your serverless applications without opening your browser or the AWS console – start building, wherever you are and whenever you want!

I have not been able to try out this functionality yet, but especially the integration with sam connect, which allows you to directly deploy the changes you made to your diagram / template, will make a big difference in building applications using AppComposer.

We should also not underestimate the possibility this offers to visualize existing CloudFormation templates through either the IDE plugin or the AWS Console. This will help to explain big and complex existing applications and empowers teams to have a fruitful conversation about changes they would like to implement in existing templates, as having a visualization makes the conversation easier.

What’s next for Application Composer? What are my wishes?

Already last year I asked for AppComposer to be integrated into CodeCatalyst, and I believe that this would be an awesome way to quickly start serverless projects. Application Composer today feels like a playground – to make the service more usable, it needs a “deployment” component that allows you to automate the lifecycle of your serverless application (including a full CI/CD pipeline).

Last year I also asked for the creation of CDK out of Application Composer – or even importing it – but instead of investing in that direction, AWS recently announced the CDK Builder Tool. Wouldn’t it be better to merge those initiatives?

As already mentioned above, supporting additional “CDK-L2-like” patterns – or maybe the “Patterns” from serverlessland.com – would be amazing: if users no longer need to manually set up IAM roles, connections between API Gateway and Lambda, …, this becomes a much more usable product!
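SAM’s existing shorthand already gives a taste of what such pattern-level support could feel like. As a hedged sketch (all names invented): a single AWS::Serverless::Function with an Api event implicitly creates the REST API, the route, the invoke permission and the execution role that you would otherwise have to wire up yourself.

Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.12
      Handler: app.handler
      CodeUri: src/
      Events:
        # This single event source generates the API Gateway,
        # the GET /hello route and the invoke permission.
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get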

What are your thoughts around the recent announcements of AppComposer? What are your experiences with it?

re:Capping re:Invent 2023 – Not everything that happens in Vegas should stay there! Let’s go and build!

In this article I will try to re:Cap a few of the announcements at re:Invent 2023, but also share my personal experiences and learnings – the things that I think should be shared with the world…!

What happens in Vegas…

…should not always stay in Las Vegas! This year’s re:Invent has been another great experience for me and it was amazing to meet AWS enthusiasts from all over the world. I’ve learned a ton of stuff, saw a bunch of cool sessions and also experienced being part of a big family. All the friendships that have been built in the past few days and all the knowledge and experiences that have been shared have a big influence on me and shape me.

The technical aspects of re:Invent

This year the technical aspects of re:Invent existed, but were not as important to me as they used to be in my previous attendances. Of course AWS had a bunch of important announcements – some of them bigger, some smaller. Renato has them written up at InfoQ and the AWS News Blog has them covered too. Luc, the winner of this year’s “Now, Go Build” award 2023, has created a web application that helps you to read all of them and not miss a single one.

For me, there are a few that stand out:

Of course, there were a bunch of other announcements, major and minor ones, but these are the ones that I remembered and thus they are meaningful to me. Now let’s move on to the more important aspects of re:Invent!

The community aspects of re:Invent

re:Invent 2023 has been once more a gathering of the AWS Community in one place and it has brought a lot of us together to talk, laugh and align. Not everyone was able to join us, for different reasons – but I am sure that you have felt the power of the community throughout the week by following us on the different social channels.

Being part of the AWS Heroes

As I posted last year, going to re:Invent means meeting with friends and getting together. Being an AWS Hero made it more intense than before: we feel community from the heart and that’s what makes us strong. Wherever I went in Las Vegas, I saw a fellow Hero.

We all have superpowers and our powers are different. One of my superpowers is connecting people – and I hope that I was able to show this in the last few days.

Others have other powers – a few of us were able to present one of their talks: Anahit with her speciality around MSK, Anurag around data patterns and Ran on Lambda Powertools. Others are great listeners, and others have the vision of how things need to or should look in a few years – it was great to see everyone’s powers in one place, and I know that by combining them we can make things better!

Thank you, Taylor and the rest of the team, for creating this group and bringing us together again!

Working with Builders, User Group leaders and others from the community

The AWS Community consists of so much more than the Heroes. Thank you, dear Community Builders – led by Jason and the team – for being an unbelievable source of power throughout the week. Your enthusiasm, your great ideas and your dedication are what make us stronger. I’ve been reading a lot of the posts from Builders around the globe that were not able to make it to Las Vegas, and it is energizing to see that.

The User Group Leaders that we have worldwide, on the other hand, help the AWS Community thrive across the whole year and bring us together regularly — to learn, to play or to share knowledge. Thank you all for helping to shape where the community goes and for making the community successful. I was glad to be able to meet a lot of you and to share my experiences as well as listen to yours.

I had the great pleasure of getting the whole team of core contributors of the Speakers Directory together, and we were able to present our project as well as take a picture of all of us 🙂

We are going to continue our investment and will help user group leaders to find speakers through our tool!

Working with AWS employees

This year, I’ve joined the club of the many other Heroes that go to sessions where they can meet AWS service team members that they have worked with before 🙂

I attended a few CodeCatalyst sessions to meet the team that I’ve been working with for more than 12 months “live and in person” and loved to see the energy and innovation live on stage – but I also attended other sessions just to say HI to certain speakers.

Employees at AWS are smart and can often tell you WHY something has been built, and it’s great to know more of the background of a new feature. Thank you all for spending time with me and sharing your thoughts and passion!

To those AWS employees in the community and DevRel team – another big THANKS for making the event unforgettable with all of your dedication and support – I love spending time with you and creating new ideas on how to make the AWS community stronger and more engaging than ever before!

A look ahead…

As I use my time on the flight to wrap my head around what I am taking away from the last few days and from re:Invent 2023, I’m still digesting, like many others, what we have all learned and heard.

A few key take-aways:

  • AWS doesn’t feel “secure” in its market-leader position anymore
  • innovation at AWS is coming (Q), but it’s still early stages
  • AWS keeps listening to their customers (see the DB2 on RDS announcement and the Step Functions HTTP integration)
  • Community sessions (COM or DEV track) are the ones to attend at re:Invent, or sessions that pair AWS with a customer (level 300/400)

What I’m considering doing in the next 3 months

First of all, I’m planning to cover the CodeCatalyst announcements on my YouTube channel to explain the impact of the new features to interested enterprise customers.

I’m also going to try out a lot of the cool things that have been announced within our AWS Speakers Directory project, besides hosting multiple meetups of the AWS User Group Bergstrasse.

What I’m considering doing in 2024

Of course I will continue my engagement in the AWS Förderverein DACH – we are planning another AWS Community Day in Munich next year!

I also plan to continue my work with the CodeCatalyst team to shape the product – please let me know if YOU have input on what the next important steps are.

I would love to work with the AWS team to organize another pre:Invent Community Hike for re:Invent 2024, and to talk about the possibility of hosting a complete track at re:Invent where Community Members join forces with AWS employees. I listened to a session like that (by Ran Isenberg and Heitor Lessa) and it sent a very powerful message.

Last but not least, I would like to help community members grow and shape their careers in the cloud – if you have questions or need help, do not hesitate to reach out; I’m happy to help or to connect you with someone who can!

Thank you for reading until this point, if you have any feedback, let me know!
