This post introduces you to Lockatmo – a small home project displaying Netatmo weather data on an ePaper.
I started using my Netatmo weather station (#reklame #marketing) in 2017, so nearly 8 years ago. The device I bought at that time is still active and nowadays has two additional indoor devices as well as a rain measurement station. It’s still up and running!
It comes integrated with Apple Homekit, Amazon Alexa and Google Assistant.
The Netatmo Weather mobile application is decent and it actually displays all of the collected weather data in a nice format. There is also a Web App for it that you can access. For a nerd like me, that’s all I need! 🙂
But… not everyone in my family is a nerd and sometimes pulling out a mobile device takes time…that you don’t have, because you want to know the temperature NOW and not in 3 seconds…
…and Legrand (Netatmo manufacturer) does not yet have an “official” physical display… (if you know one, please send me a message!)
And as I am an engineer and builder, I built something on my own 😉
The initial, simple version
I built an initial version of this in 2018, here’s the architecture diagram for it:
As you can see, that one was pretty simple:
A small Python script accessed the Netatmo API using an application and a secret, retrieved the data and displayed the basic information on a Waveshare ePaper display. I was using the netatmo-api-python open source library to access the API.
This worked very well until 2023, when Legrand decided to enforce the usage of OAuth2 – see the API documentation for details. This meant I also needed to move to OAuth2 – which I was initially able to do using the same library.
But then, in 2024, Legrand changed something on the API…
…and the OAuth2 tokens started to expire regularly, with errors that I was not able to understand at that time… As my Python skills are worse than my TypeScript skills and I was not able to debug the Python script as required, I decided to move the project to TypeScript using the open source project netatmo-api-client. This project also works great and does exactly what it should, but I was not able to fix the expiring-tokens problem.
Every time the tokens expired, I did not have direct access to the log files and errors on the Raspberry Pi and thus was not able to get down to the root cause.
So, I decided to move ahead and build a cloud-native, serverless solution!
The current version and how I solved the expiring tokens
So, I started off building this solution with the simplest possible implementation:
A regularly triggered AWS Lambda function would call the Netatmo API and store the data in DynamoDB (a minimal sketch of that function follows below this list)
The Python script on the Raspberry Pi would call an API Gateway endpoint that returns the Netatmo data through a Lambda function
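To make the first building block a bit more concrete, here is a minimal sketch of what the data-collection Lambda could look like in TypeScript. The getstationsdata endpoint is Netatmo’s public API, but the table name, environment variables and attribute names are assumptions for illustration, not my exact implementation.

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Hypothetical table name – replace with whatever your IaC creates.
const TABLE_NAME = process.env.TABLE_NAME ?? "lockatmo-measurements";

export const handler = async (): Promise<void> => {
  // The access token is assumed to be resolved elsewhere (e.g. from Parameter Store).
  const accessToken = process.env.NETATMO_ACCESS_TOKEN;

  // Netatmo's public endpoint for weather station data.
  const response = await fetch("https://api.netatmo.com/api/getstationsdata", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });

  if (response.status === 403) {
    // Token expired or usage limit exceeded – this is where the
    // self-healing flow described later kicks in.
    throw new Error("Netatmo returned 403 – token needs to be refreshed");
  }

  const data: any = await response.json();
  const station = data.body.devices[0];

  // Store one measurement per run; the attribute names are illustrative.
  await ddb.send(
    new PutCommand({
      TableName: TABLE_NAME,
      Item: {
        stationId: station._id,
        timestamp: new Date().toISOString(),
        temperature: station.dashboard_data?.Temperature,
        humidity: station.dashboard_data?.Humidity,
        co2: station.dashboard_data?.CO2,
      },
    })
  );
};
```

The script on the Raspberry Pi then only needs to GET the API Gateway endpoint and render whatever JSON it receives on the ePaper.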
And well… the same problem happened again, now in AWS Lambda…
But now, I was able to triage the problem better using CloudWatch!
And with these new possibilities, I was able to find out that my calls were regularly failing with a “Usage limit exceeded” error message and that, because of this, all subsequent API calls returned a 403 error…
The only way to fix that was to (manually) log into the Netatmo Developer Homepage, create a new token and update the information in the Systems Manager Parameter Store.
I actually needed to do that regularly… and as I am always too busy and lazy… this resulted in multiple hours or days of “downtime” of the service… and the display was showing old data…
So, my clients (my family) were once again not able to use this properly…
The new architecture
As you can see, this is a “tiny” bit more complicated than before 🙂
It took me a while (and a few /dev runs and prompts with Amazon Q Developer) to implement it. (If you want to know more about Q, watch this video…)
Now, the AWS Lambda function that retrieves the data from Netatmo will still occasionally fail with a 403 error.
If it does, it now automatically starts a Github Actions workflow that runs a full “OAuth flow” using Playwright, simulating the user actions in the browser!
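For illustration, the “self-healing” trigger boils down to one call to the Github REST API’s workflow_dispatch endpoint. This is only a sketch: the owner, repository and workflow file name are placeholders, and I’m assuming a token with permission to start workflows is available to the function.

```typescript
// Minimal sketch: kick off the Playwright-based token-refresh workflow.
// Owner, repo and workflow file name are placeholders, and GITHUB_TOKEN is
// assumed to be a token with "actions: write" permission on the repository.
export async function triggerTokenRefresh(): Promise<void> {
  const response = await fetch(
    "https://api.github.com/repos/OWNER/REPO/actions/workflows/refresh-netatmo-token.yml/dispatches",
    {
      method: "POST",
      headers: {
        Accept: "application/vnd.github+json",
        Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      },
      body: JSON.stringify({ ref: "main" }),
    }
  );

  // The dispatch endpoint answers with 204 No Content on success.
  if (response.status !== 204) {
    throw new Error(`Failed to start workflow: ${response.status}`);
  }
}
```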
My family is now happy as the data is always up to date on the physical display :-).
Benefits and next steps
So, what are the benefits of this approach?
I finally do not need to update the tokens manually anymore, and the project “self-heals” once they expire.
In addition, I also get my “own” history of the data retrieved from Netatmo which I can potentially analyze if I want to. I’m not doing it yet, but who knows? 🙂
And what are the next steps?
Everything that I’ve built so far is written in code, using Github Actions to deploy the required infrastructure.
The only things that need to be set up manually are:
the OIDC access for Github Actions to the AWS Account
the Netatmo App to retrieve clientId, clientSecret and tokens
the secrets in Github and the secrets and tokens in the Systems Manager Parameter Store (a small sketch of how the functions read them at runtime follows below)
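As an illustration of that last point, this is roughly how a Lambda function reads such a token from the Parameter Store using the AWS SDK for JavaScript v3 – the parameter name is a made-up example, not the one I actually use:

```typescript
import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

// The parameter name is illustrative – use whatever your setup stores.
export async function getNetatmoAccessToken(): Promise<string> {
  const result = await ssm.send(
    new GetParameterCommand({
      Name: "/lockatmo/netatmo/access-token",
      WithDecryption: true, // SecureString parameters are decrypted on read
    })
  );
  return result.Parameter?.Value ?? "";
}
```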
The code is not yet open source…
Would you be curious to see me make this available on Github?
Please let me know in the comments or reach out to me on LinkedIn!
At AWS re:Invent 2024 in Las Vegas, AWS and Gitlab announced a collaboration, with a particular focus on making Amazon Q Developer available within Gitlab.
In this blog I’m going to take a closer look at the announcement and at what I think it means for existing AWS DevTools like CodePipeline, CodeBuild and especially CodeCatalyst.
TL;DR – What has been announced
On December 3rd, 2024, Matt Garman, CEO of AWS, announced a collaboration between AWS and Gitlab.
This announcement makes Amazon Q Developer, with different features, available within Gitlab and strengthens the relationship between the two partners.
Why Gitlab?
One of the first questions that came into my mind was “Why Gitlab?” and why a collaboration? With Github being one of the most frequently used DevTools, why is AWS interested in collaborating with Gitlab?
My personal opinion on this is that – with Gitlab being primarily focused on building strong DevTools that cover all parts of the product lifecycle – the relationship might be a win-win for both partners. Github is already too strong and too big, and they most probably do not “need” AWS as a collaborator, as their platform is already attracting more than enough organizations and individuals. Gitlab, on the other hand, has been challenged by a couple of things in the past months (and years) and will benefit from some additional traction coming its way.
In fact, I’ve heard rumors about Gitlab looking for investors – and we don’t know what happens “behind” the official announcements…
Gitlab as an organization is also flexible and agile enough to react to the changing demands of AWS customers that might come with this collaboration. With its focus on Open Source, Gitlab can also have AWS teams supporting changes in its code base.
What does this mean for AWS DevTools?
Well, “It depends” 🙁
What are AWS DevTools?
Let’s first define what we see as “AWS DevTools”. This in itself is difficult, as everyone views it differently. I personally count CodeCommit (deprecated), Cloud9 (deprecated), CodePipeline, CodeBuild, CodeArtifact, CodeCatalyst and ECR in the “DevTools” category, but if you look at things a little more broadly, Amplify, the AWS CDK and SAM could also be seen as part of the “DevTools”. The only one of these that offers integrated, end-to-end tools for the product lifecycle is CodeCatalyst. As you most probably know, this has been my favorite service for a few years.
DevTools at re:Invent 2024
If you look at the re:Invent session catalog, however, there seems to be a pattern of “services that get or do not get love”. Unfortunately, I have not been able to find a lot of sessions on the AWS DevTools in the catalog. In particular, I have only found three sessions that mention AWS CodeCatalyst – which is a pity, as most of the features announced for the Gitlab integration were already available in CodeCatalyst in 2023. This was totally different at re:Invent 2022 and 2023.
So, what does this mean?
CodePipeline, CodeBuild and CodeArtifact are essential building blocks and are most probably also used intensively inside AWS – but they do not “compete” with the Gitlab integration, as CodeCommit & Cloud9 have already been deprecated.
Because of this, I do not expect this new collaboration to have a big impact on the development of these services.
Now, for CodeCatalyst, I am not sure.
There are a lot of open questions, and as I already wrote in a previous article, CodeCatalyst did not have any major announcements in the 2nd half of 2024. This also means that it is unclear whether the new functionalities that are now available in Gitlab have also launched in CodeCatalyst.
As I discussed with someone from the CodeCatalyst team in this video, the /dev feature in CodeCatalyst is implemented with a backend that runs on Bedrock underneath. I assume that the same or similar backend services power both the Gitlab and the CodeCatalyst implementation – at least that’s what I personally would do. I will need to test and verify whether that is correct.
Still, without major updates and announcements, it’s unlikely that there is much active development going into CodeCatalyst currently, as the expertise to build DevTools at AWS has always been… let’s call it… “small sized”. So, the next weeks and months are going to decide the path that CodeCatalyst will take.
Are you an active CodeCatalyst user? Please reach out to me and share your experiences with me!
Why am I disappointed by the announcement?
Maybe I am judging this collaboration too early, but hey – an infrastructure and “building blocks” provider like AWS now “integrating their services into a 3rd-party provider”? This sounds – a tiny bit – odd to me and I am not sure what to expect next. AWS is entering the space of building software & tools for developers, but without being able to control everything end to end – like they would be able to with CodeCatalyst.
If you are a subscriber to my YouTube channel you might remember that I, after the deprecation announcement of CodeCommit and Cloud9, tried to deploy “integrated devtools services” to see what could be used as an alternative to CodeCommit. I managed to get things deployed for two other tools, but for Gitlab I never published the video – because, after spending hours (and days) on it, I gave up – I didn’t get it to run properly on ECS and I did not want to pursue the EC2 path as suggested by the Gitlab documentation.
What I am trying to point out is that I would have loved to get a “managed service” to stand up Gitlab in my own AWS account, supported, maintained and managed by AWS. This would have made a huge difference in terms of how I look at the collaboration between Gitlab and AWS. It would have looked like a complete partnership, enabling AWS customers to use Gitlab as an integrated DevTool.
Also, it would have given AWS the power to control the infrastructure and network connectivity for the Amazon Q developer features that are now available through Gitlab.
What’s next and what stretch goals do I see?
If the integration between AWS and Gitlab is meant to “fly” and create additional traction for Amazon Q Developer, the AWS team has some homework to do. I already mentioned the “managed service” dream that I have, but I would also encourage additional integration options with AWS from Gitlab. What about deeper integrations between Gitlab and certain aspects of the AWS console or other DevTools?
What about a possibility to convert Gitlab pipelines to CodePipeline V2 on demand?
What about accessing AWS services and verifying “drift” against deployed AWS resources?
There are way more things that could come out of a closer collaboration between AWS and Gitlab!
And now, what is a “AWS DevTools Hero” in 2025?
If I look at my role as a DevTools Hero, I tend to get a little bit nervous when I look at the recent developments. What is a “DevTools Hero” at the end of 2024 and the beginning of 2025? Should I become a “Q Developer expert” and give guidance on the best Q Developer prompts ever? Or should I rather focus on CodePipeline or AWS Amplify?
What do you think the role of an AWS DevTools Hero should be in 2025?
Please let me know in the comments!
Some tasks to do after re:Invent 🙂
Now, reflecting after re:Invent 2024, I believe that there is a bunch of things that I should look at. I’m not promising that I’ll have enough time to do all of it – but I think I should:
Look at the current functionalities in Gitlab and review how they work
Discuss with the AWS teams to find better options on integration
Set up Gitlab 🙂 and enable Q Developer in my own account
Plan a migration strategy for all of my projects “off” CodeCatalyst?
Feedback?
Do you have feedback or thoughts around my thought process? Please let me know in the comments or reach out to me on LinkedIn.
Welcome to the blogpost supporting the AWS re:Invent 2024 session “DEV335 – The modern CI/CD toolbox: Strategies for consistency and reliability”.
We aim to not only summarize but also enhance your session experience with this blog post.
Please do not hesitate to reach out and ask any questions that you might have in the comments or reach out to us directly on socials.
If you’re an old-school person, reach out to us by eMail.
Session walk-through and contents
CI/CD foundations
Continuous integration (CI) involves developers merging code changes frequently, ensuring that the cumulative code is tested regularly—preferably multiple times a day. Continuous delivery (CD), on the other hand, requires manual approval before deploying to production, while continuous deployment is entirely automated.
Unified CI/CD pipelines
Thorsten emphasized the importance of having a single, unified pipeline for all kinds of changes—whether they are application code updates, infrastructure modifications, or configuration changes. This helps maintain consistency, reduces risks, and simplifies compliance.
Code signing and attestation
According to Gunnar, ensuring the integrity of the code through signing and artifact attestation is paramount. This practice verifies that the code hasn’t been altered improperly, tracing each change back to a trusted source, which significantly reduces the risk of tampering and supply chain attacks.
GitOps: a new look at operations
Johannes took an in-depth look at how GitOps integrates Git with operations, streamlining deployment decision-making. GitOps supports a fast, automated transition into production environments without manual intervention, making it powerful for Kubernetes and other cloud-native projects. The main takeaway is that by implementing GitOps, the decision to deploy a change to production is taken by the team members close to the context of the change, instead of by a “Change Advisory Board” or managers that are far away from the actual change.
Deployment strategies for minimizing risks
Several deployment strategies, including rolling deployments, blue-green deployments, and canary deployments, were outlined by Gunnar. Each strategy offers a different balance of speed and risk, with options to revert to previous versions quickly if issues arise. You will need to choose the strategy that fits your business needs and your application’s requirements.
Drift – Avoid it at any cost
In this section, Johannes highlighted the challenges that come with “drift” in deployments – defined as any kind of manual change made to your cloud deployment without going through Infrastructure as Code (IaC) and CI/CD. Our guidance: no one should get access to the target account to perform manual changes; instead, you should implement a “break-glass” pipeline that is focused on speed, recovering from application downtime by rolling forward through CI/CD.
Ensuring consistency across pipelines
Thorsten introduced an innovative approach to maintaining pipeline consistency using constructs. By centralizing the standard pipeline templates and allowing teams to extend them, organizations can adapt to specific needs without sacrificing consistency. This method also assists in managing migration between various CI/CD platforms effectively.
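As a rough illustration of the idea – not the exact constructs shown in the session – a central platform team could publish a base pipeline definition with a single, controlled extension point, and application teams would subclass it. All names here are hypothetical:

```typescript
// Sketch of a centrally owned pipeline definition with a controlled
// extension point. The class and method names are illustrative.
export interface PipelineStage {
  name: string;
  commands: string[];
}

export abstract class StandardPipeline {
  // Mandatory stages every team gets, in a fixed order.
  protected baseStages(): PipelineStage[] {
    return [
      { name: "build", commands: ["npm ci", "npm run build"] },
      { name: "test", commands: ["npm test"] },
      { name: "security-scan", commands: ["npm audit --audit-level=high"] },
    ];
  }

  // Teams override this to add their own stages without touching the defaults.
  protected additionalStages(): PipelineStage[] {
    return [];
  }

  // A renderer would translate this model into Github Actions, Gitlab CI, ...
  public stages(): PipelineStage[] {
    return [...this.baseStages(), ...this.additionalStages()];
  }
}

export class MyTeamPipeline extends StandardPipeline {
  protected override additionalStages(): PipelineStage[] {
    return [{ name: "e2e", commands: ["npm run test:e2e"] }];
  }
}
```

Because the consistent parts live in one place, changing a standard (for example adding a new security scan) is a single change in the base class instead of a change in every team’s pipeline.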
The role of security and compliance
Security and compliance are non-negotiable, integral parts of any CI/CD process. Integrating these practices from the beginning ensures that both security and compliance standards are maintained throughout the development lifecycle.
Feature flags and progressive delivery
Gunnar highlighted the importance of feature flags and progressive delivery in decoupling deployment from feature activation. With feature flags, changes can be made dynamically without redeployment, enhancing agility and reducing risk. This approach, used by companies like Netflix, enables controlled risk management and early detection of issues.
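As a small, provider-independent illustration of the concept, a feature flag check separates “the code is deployed” from “the feature is active”. In practice the flag values would come from a service such as AWS AppConfig or a feature-flag SaaS rather than from environment variables:

```typescript
// Minimal feature flag lookup – the flag source is an assumption for the sketch.
type FeatureFlag = "new-checkout-flow" | "beta-dashboard";

function isEnabled(flag: FeatureFlag): boolean {
  // The deployed code contains both paths; activation is a runtime decision.
  const envName = `FLAG_${flag.toUpperCase().replace(/-/g, "_")}`;
  return (process.env[envName] ?? "false") === "true";
}

export function renderCheckout(): string {
  if (isEnabled("new-checkout-flow")) {
    return renderNewCheckout(); // rolled out progressively, e.g. per user segment
  }
  return renderLegacyCheckout(); // safe fallback without a redeployment
}

function renderNewCheckout(): string {
  return "new checkout";
}

function renderLegacyCheckout(): string {
  return "legacy checkout";
}
```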
Avoiding vendor lock-in with projen-pipelines
Thorsten presented a possibility for CI/CD practitioners to adopt an open source project called projen-pipelines, which empowers developers to switch between different CI/CD vendors by defining their pipelines in TypeScript and implementing a renderer process that is able to generate pipeline code for Gitlab, Github, CodeCatalyst and bash.
Conclusions
The insights from this session highlighted the ever-evolving nature of CI/CD practices, where automation, innovation, and stringent security measures play crucial roles. As we continue to refine these practices, it’s clear that the right blend of technology and methodology can significantly impact the efficiency and reliability of software delivery processes.
Next steps
To dive deeper into these strategies, check out the resources and links provided below. Engage with the wider community to exchange ideas and best practices, and continue evolving your CI/CD processes to meet future challenges.
Thank you for attending this session and for taking the time to read the additional information provided here.
Please do not hesitate to reach out and ask any questions that you might have in the comments or reach out to us directly on socials.
If you’re an old-school person, reach out to us by eMail.
Today, AWS announced the availability of a new feature for AWS EKS called “Auto Mode”. With this, AWS focuses on solving some of the challenges that users have been mentioning ever since the release of EKS and Fargate. In this article, we’ll explore the hits and misses (from my perspective) and where I think that the team still has some work left to do.
Feature TL;DR
EKS Auto Mode makes use of EC2 Managed Instances and simplifies the management of the underlying compute resources for EKS clusters. In addition to that, it enables the use of a Karpenter-backed, fully k8s-API-compliant way of scaling EKS data planes. The AWS EKS team takes responsibility for managing not only the infrastructure but also the AMIs that power the k8s cluster.
What changes for EKS Operations Engineers with this change?
With this change, EKS operations engineers will not need to scale EKS clusters in and out anymore. Karpenter scales infrastructure for nodes way faster than EC2 AutoScaling. Operations Engineers can focus on the applications instead of managing the underlying infrastructure.
How does the new feature change the responsibility model for EKS?
With this change, AWS takes on a lot more responsibility within the EKS space. The EKS team will now manage the underlying AMIs, ensuring that they follow security best practices and are secure to use. AWS will also manage node rotation and upgrades where required.
How do users interact with the new feature?
The new feature is available through the AWS console, the AWS CLI and through infrastructure as code – with CloudFormation and Terraform supported right from the start.
In the AWS console, the new feature also simplifies the setup of a new EKS cluster by offering a “quick start” mode for EKS. In this mode, the new EKS cluster creation process automatically selects sensible defaults for the VPC and other settings.
Hits – where the feature is good
As far as I have seen, the feature finally gives AWS the possibility to offer automatic scaling based on the k8s API standards and definitions. EKS Fargate was always an attempt to simplify interacting with EKS, but due to the nature of that feature – not being compliant with the k8s API – you were missing out on possibilities like using different CNIs, running sidecars, etc.
EKS Auto Mode changes this and simplifies the EKS experience.
The additional responsibility that AWS is taking on managing and securing the underlying infrastructure will also help organizations to build faster.
With the feature, the team also simplifies upgrading the control plane: by taking ownership of the underlying nodes, the team can guarantee that the infrastructure setup stays compliant with new k8s versions – and this includes some of the add-ons that are now built into the underlying deployment, which is powered by the Bottlerocket OS.
Misses – what did the team not simplify?
The team did not simplify the network infrastructure setup. The feature also does not make the management of networking and integrations for the clusters any easier.
Other wishes for EKS
As already mentioned, I’m not a fan of the current possibilities for network management and the defaults taken for the EKS networking setup. The troubleshooting experience could also be better.
As a next step, I’d also love the EKS team to take on additional responsibilities for add-on management, to empower us to build a real service mesh for east/west traffic management and to offer further out-of-the-box integrations with other AWS services or managed service providers.
An example of that could be a managed Crossplane service or add-on, as this k8s-based tool is becoming more popular, not only for k8s but also for managing AWS infrastructure.
The possibility to add ArgoCD or FluxCD as a component to your EKS management plane “out of the box” also seems appealing to me.
And then there is the other thing that constantly gets on my nerves: with the idea of using ephemeral EKS clusters, the importance of “faster” cluster provisioning times rises. This could be achieved by optimizations on the EKS side or by allowing the usage of vClusters on EKS clusters out of the box.
Wrap up
This was my initial take on the newly announced AWS EKS Auto Mode. I’ll need to play around with it a bit more to be able to give a better assessment.
What did I miss, what do you think?
Please let me know and start a conversation, I’m eager to hear your thoughts and feedback!
AWS has existed for more than a decade, and as of today there are more than 200 AWS services (and counting), even after a few “de-prioritizations” in 2024. The landscape for building cloud applications on AWS is big and ever growing, and as builders we need to make hundreds of decisions every day.
One of the most common sentences I have heard from “Cloud Architects” in the past weeks starts with “It depends…” when they are asked how to build or solve a specific challenge. I personally believe that it has become very complex and difficult to decide on the technology (or service) to use, and that we as the AWS community need to do a better job of explaining consistently how to make specific decisions for a specific application or architecture.
If we add the option of deploying a k8s cluster on AWS, the number of choices becomes even bigger as you can…build “anything” on k8s 🙂
I believe that it is too difficult to make these choices and that we need to start looking at the “simplification” of building applications on AWS.
“A good cloud architect” knows when to use which service and which architecture, weighing simplicity, complexity, costs, security footprint and extensibility.
Let’s have a look at the current landscape and the challenges we see.
(This article was written before re:Invent 2024 so some of the details might be outdated by the time you read this 🙂 I’ll try to update this article if there are any related announcements at the conference.)
A few examples for things that could be simpler
As a preparation for this blog post, I asked a few AWS Heroes and Community Builders where they think AWS is too difficult and complex in November 2024. The answers I got vary based on the focus and role of each individual. In this blog I’ll cluster them by topic.
The most common input that I’ve received by far is the ask for more supported and maintained example implementations, best practice documentation and recommendations. Most of the best practices for different services are presented in sessions at re:Invent or re:Inforce or in AWS blog posts. Partly they are shared within the service documentation or on Github – awslabs or aws. Unfortunately, a lot of them become outdated fast and are not actively maintained. In our area of business, technology changes rapidly, and thus best practices that are presented today are already outdated tomorrow.
AWS needs to do a better job at keeping documentation and best practice implementations up to date. This also includes more frequent and better collaboration in open source projects. Some of the AWS-owned open source projects (like the AWS CDK or the “containers-roadmap”) are losing momentum because of missing engagement from the service teams in 2024.
When CodeCatalyst was announced in 2022, I had high hopes that the “Blueprints” functionality would become the “go-to” place for best practice implementations – but AWS unfortunately failed to deliver on that promise. Blueprints are barely maintained, and even though the “genai-chatbot” blueprint has produced a large number of views on my YouTube channel, it feels a bit like they have been abandoned by AWS in the past months.
Simplify costs and cost management
As organizations mature in their usage of AWS and in building applications on AWS, a lot of them put a focus on understanding and analyzing the costs produced by their applications running in the cloud. AWS currently allows you to track costs mainly based on usage and resources consumed.
This often makes it hard to track the costs allocated to a certain business functionality. Especially if you’re building multi-tenant applications on AWS, it can be really hard to understand and verify what each of the tenants is actually costing you.
We’d love to see simplified cost allocation per application or even per transaction, to be able to properly understand the consumption of our budget. This also includes examples like Athena, where you’re billed for using Athena, but the same transaction also triggers S3 API calls which are then not allocated correctly to your Athena-based application.
Another example that I recently encountered myself is an EKS cluster that was deployed in a VPC with a Network Firewall attached and GuardDuty activated. The EKS cluster itself was only a portion of the total allocated costs – 20% of the costs were for EKS, but – due to some application deployment challenges – 60% went to the Network Firewall and 20% to GuardDuty.
I wish for AWS to auto-discover my applications (e.g. by using myApplications) and transactions and to output the information that helps me understand the costs of my applications.
k8s and containers
Even in the containers world, AWS has too many options to choose from: besides the prominent options like ECS and EKS, we have Beanstalk, App Runner and even Lambda to run containers. I understand that all of these building blocks empower builders to build applications using the service that they want to – but you still need to make choices, and migrating from one to another is often hard, complex and difficult. And – even worse – you need to be an expert in each service to be able to make the right choice for your use case.
I wish for this decision to be simpler, if not seamless. Builders potentially don’t want to make decisions about the service; they want their applications to adapt automatically to changing requirements. Having the possibility to switch from one service to another automatically, without (much) human intervention, would empower us to invent and simplify!
AWS EKS – my top challenges
I’ve been experimenting with AWS EKS lately – and to be honest, every time I start a new cluster, it is a real pain.
Everything is “simple” if you are able to work with defaults – like creating a new VPC in a non-enterprise environment. However, the default creation process allows you to create public EKS clusters, which should be forbidden by default. Triaging network challenges for EKS is also still very complicated, and getting support for these kinds of problems can be a painful experience.
I would love to get an “auto-fix” button that solves my networking problems on EKS clusters or verifies for me if my setup is correct.
In addition to that, now that EKS supports IPv6, it might be the right time to solve the never-ending IP address problem that a lot of organizations have by enabling this by default and setting up EKS clusters using IPv6 for private subnets and internal networking.
Another thing that EKS Fargate currently doesn’t solve is the possibility to use the full k8s API and its scalability options. If you want to implement something like Karpenter for your workloads, you will always need to fall back to “self-managed” EC2 compute – and this is always painful, because it requires you to start managing your own AMIs and infrastructure. In this case, you also need to take care of the scalability of your cluster infrastructure yourself, which seems like an outdated thing to do in 2024.
Creating, running and deploying EKS clusters should become a commodity and a “simple thing” – no one should be worried about it, as it is really only the starting point for building on Kubernetes.
I hope that AWS takes away some of these challenges and helps organizations that are building on Kubernetes to focus on what they want to build – on their business value – instead of managing infrastructure for their clusters.
Investing into cross service integrations for serverless
The serverless landscape has evolved a lot over the past years. We’ve seen new functionalities and integrations become available but similar to the containers space, the amount of choices you can and need to take have increased.
At the same time, the integration between the services has not evolved a lot. The Infrastructure as Code (IaC) landscape is massively fragmented, with AWS CDK, CloudFormation, the Serverless Application Model (SAM), Terraform and newer players like Pulumi growing. Lately, I’ve also encountered Crossplane as a “serious” option to write Infrastructure as Code and deploy infrastructure on AWS.
The observability landscape is also big – with OpenTelemetry, AWS X-Ray and missing integrations with other observability tools – and it is difficult to build observability into serverless applications that span a lot of different services. Not all of the services support OpenTelemetry integration out of the box – I believe this would be a great addition. Auto-discovering transactions and giving developers insights into what’s happening within their applications across multiple services would make application development easier.
Another piece of feedback I got during my conversations was the wish to simplify the setup and definition of API Gateway integrations with load balancers. The definition of routes, paths and payloads still seems to be difficult within API Gateway, and the differences between a “REST” and an “HTTP” API endpoint are sometimes confusing. And then there is AppSync (hosted GraphQL)… I see a lot of potential to simplify this setup and make it easier for developers to build APIs on AWS.
Enterprises & Govcloud
When talking about enterprises in general and enterprises building for Govcloud (and, going forward, the European Sovereign Cloud), users would love to get features and services rolled out to Govcloud environments more frequently than they are today. They also complain that not all parts of the AWS console and tooling are aware of the different partitions (“normal” AWS vs. “govcloud”). This should be improved as well.
On the optimization and simplification front, I am regularly hearing the feedback that switching between different AWS accounts is a big issue – as we call out “multi-account” deployments as a best practice, it becomes increasingly important to be able to switch between accounts easily and to simplify the integration.
Interview partners say the same about multi-region deployments, where the console does not support interacting with applications that are deployed in multiple regions. There’s also not a lot of out-of-the-box support for these kinds of deployments within AWS.
When I recently listened to the [AWS Developers Podcast]() episode focused on IAM Identity Center, I heard a lot of very positive things about how to use it and integrate it within your organization’s landscape. I do agree that it makes a lot of things simpler than plain IAM, but improving the user experience and allowing additional automations to be implemented would be helpful.
General simplifications
Looking at the never-ending announcements about new releases focused on Generative AI – Amazon Bedrock, Amazon Q, Amazon Q Developer, Amazon Q for Business, … – it becomes difficult to navigate the landscape, even only 18 months after Generative AI became a hype.
From the outside, AWS’ messaging is unclear and distracting. With many teams exploring different options, the confusion will only get bigger. We need clarity on which names, technologies and services to use for which use case. And it needs to be clearer what AWS wants to be in the Generative AI landscape: a “building blocks” provider (through Bedrock and the Converse API), or a “player” in the field of end users using generative AI – competing with OpenAI and others. This message is not yet clear – at least to me.
Making things simpler – helping architects to take better decisions
If I look at the AWS landscape as a cloud architect, I would love to be able to make decisions better and faster, supported by AWS. A tool or a service that supports making decisions based on business requirements and scalability would be awesome, allowing me to focus on building applications and services instead of making me an expert in choosing the “correct” compute mode for my applications. There are simply too many possible options to build applications on AWS. [Serverlessland]() is a great step towards making these decisions easier, but we need more than that!
Thanks to the contributors 😉
While some of the participants of my small survey do not want to be named, I can thank Benjamin, Ran, Matt Morgan, Matt Martz and Andres for their contributions to this blog post. Your detailed feedback and input helped me a lot to shape this post – thank you for all you do for the AWS community.
Wrap up – the art of simplification
In November 2024, I believe that being a “great” cloud architect means being able to make smart decisions and knowing when and why to choose specific combinations of services. This is an art that a lot of us still need to learn.
k8s is not always the right answer, but sometimes it might be. AWS Lambda and serverless applications are also not the best choice for everyone.
Simplifying your architecture decision tree makes your role as a cloud architect easier.
What do you think? Where can AWS make your life as a builder and cloud architect simpler and easier?
S3 Select, CloudSearch, Cloud9, SimpleDB, Forecast, Data Pipeline, and CodeCommit.
This post will focus on Cloud9 and CodeCommit … and how I think this announcement impacts the “end to end” developer story for developers on AWS. We’ll also look at how the announcements impact my “go-to” service, Amazon CodeCatalyst.
It is written from the perspective of a builder that mainly uses AWS tools for smaller side projects and can be seen as a “startup” that needs to quickly be up & running without much hassle.
Introduction
These announcements – and the way these deprecations were announced – were, in my humble opinion, handled in one of the worst possible ways. I know the teams at AWS have seen the feedback and I hope that there will be a clearer communication strategy going forward.
For me, the combination of those posts with the assumption that CodeCatalyst is built on top of these services gives a very strange feeling about how much AWS is currently invested in developers on AWS.
Let’s look at why I see a lot of impact of these announcements for builders and think about alternatives if you are using CodeCommit or Cloud9 for certain aspects today.
Tools required for SDLC
A few weeks ago I even dedicated a complete Shorts playlist to all of the Code* tools, looking at their usage and the approach to cover a full Software Development Lifecycle (SDLC) building on AWS.
In this series I drafted the diagram:
AWS Tools part of your SDLC until recent announcements
This being an end-to-end flow, AWS had at least two options to implement this process using their tools:
Either CodeCatalyst or a combination of different AWS Services
When CodeCatalyst was announced, I wrote about how CodeCatalyst can be used to cover all parts of your Software Development Lifecycle (SDLC) process on AWS. Ever since then, there has been an alternative on AWS using a combination of different building blocks: CodeCommit, CodeBuild, CodeDeploy, CodePipeline and others.
CodeCommit was a good, reliable managed Git server. For the purposes it solved, there weren’t many features to add. It was a managed service you didn’t need to think about – it just “served its purpose”.
Cloud9 was a hosted IDE, a development environment that users were able to access through their browser. This enabled builders to have a real IDE, even on old or underpowered computers, anywhere — even while on vacation.
Developers on AWS were able to use CodeCatalyst to cover all parts of their product lifecycle, or they had the alternative of using the different “building blocks” to compose their SDLC process. Both options provided value and helped AWS customers solve certain aspects and problems.
Now, officially, only one option is left — CodeCatalyst. CodeCatalyst is an integrated DevTools service that unites all of the building blocks under an opinionated, structured user interface. It was announced at re:Invent 2022 and went GA in early 2023. With the custom blueprints feature, it also enables builders to create project templates and share them with their teammates or dependent teams. These are very powerful possibilities for teams to collaborate better and also share their best practices with other teams.
Those that didn’t need a “reliable managed Git server” were most probably already using existing alternatives — which might solve the “job” better than CodeCommit — like Github, Gitlab or Atlassian. These users and AWS customers are not affected by the change.
What has changed with the July 2024 announcements — builders perspective
Now, the system landscape has changed.
Developers cannot use Cloud9 anymore to develop software; they need to fall back to alternatives like Github Codespaces, Coder or Gitpod.
Developers cannot store their source code in CodeCommit anymore; they need to fall back to alternatives like Github, Gitlab or Bitbucket.
And given that CodeCatalyst might be using CodeCommit under the hood and is using Cloud9 for its Dev Environments – can I really build something on top of CodeCatalyst going forward?
So this deprecation announcement — without a “real” AWS-native alternative — puts everyone building and developing software on AWS in the situation of needing to look for alternative setups.
In particular, it forces you — if you are a small organization (or a startup) — to engage with more than just one vendor as part of your SDLC process. I see this as a critical point to talk about as well.
And, if you are building software or platforms on AWS where CodeCommit in particular is part of the application or the deployed architecture itself — you are now left without any option. If you want to integrate a Git server into your application on AWS, you will now need to self-host the Git server instead of using a managed service.
If you “just” needed a Git repository — quick, fast and reliable — CodeCommit was the way to go. Now you need to use a 3rd-party alternative.
Now: What options on AWS do we have as builders?
What changed with the July 2024 announcements — business perspective
Looking at the announced changes from a different perspective, we need to acknowledge that AWS is a 90+ billion (90,000,000,000) dollar company. It is clear, for a business that aims to “make money”, that AWS needs to focus on services and solutions that are widely used and adopted and that earn a good margin.
The reason might be that Cloud9 and CodeCommit were just not profitable enough to drive the expected growth of the business – especially as there are other services that do the same job better than Cloud9 and CodeCommit. So it might have been “just” a business decision to stop investing in these services and focus instead on Amazon Q, which promises to help developers and builders on AWS.
This raises the question of which other services might be hit by exactly the same challenge soon or in the future. And – how does AWS measure the “success” of its services? Is it “just” revenue, or are there other points being considered?
But still — How this feels for me and questions I have (emotionally)
It feels like AWS has given up the game of engaging with their “Builders” and is now focused on the “Buyers” that “host” their applications on AWS.
If you think about how AWS started and if you look at how much effort AWS has spent this year on making us think that “Amazon Q Developer” is going to make our lives as developers easier…
How can I as an advocate for AWS as a platform be confident that I am valued as “Builder” on AWS? Will other services also disappear if they do not get enough traction?
And how much can I trust in Werner’s “Now, go build“?
How much “trust” can I put in the other Code* tools (CodeBuild, CodePipeline, …) on AWS? With CodePipeline and CodeBuild getting a lot of notable updates right now (macOS, Github Actions runners, stage rollbacks, …), the outsider’s view is that at least these services are here to stay… but how much trust has the AWS team lost with builders around the globe?
I’m eager to see how the different workshops, best practice documents and open source projects that use either CodeCommit or Cloud9 (especially the AWS-owned ones) will be adjusted and updated in the next weeks and months.
How much will CodeCatalyst be the central place for developers on AWS going forward? How many updates will we see there?
How does this affect you – I would love to know!
I am really interested to hear how these announcements have affected your perspective on AWS and your view on the different AWS services.
Please share your thoughts either as a comment to this post or reach out to me personally!
What YOU can do next
You could now follow the advice from AWS and “migrate” away from CodeCommit or Cloud9 — but is this really what you want to do? If you need a “Git server” or “Git repository” close to your applications on AWS, how do you do that? You might need to host your own Git server on AWS… or you need to give up on that premise and fall back to alternative Git providers like Github, Gitlab, …
If you insist on having your own hosted Git within your AWS environment, there are a few possible solutions…
As an alternative for Cloud9, you can use vscode.dev, which runs VS Code in the browser, or other alternatives that are more integrated and personalized, like gitpod.io or Github Codespaces.
But is this REALLY what you want to do if you are working on AWS only?
What I hope to get from the AWS team
As re:Invent is approaching fast and that usually sets the direction for a lot of AWS services, I really hope to get reliable information and roadmap clarifications around the AWS developer tools.
I’d like to understand if I can rely on CodeCatalyst, CodePipeline, CodeBuild, CodeArtifact, CodeDeploy, … and other AWS services that help developers to build software on AWS.
Does anyone know if this page ever mentioned CodeCatalyst? Please let me know!
In addition to that, I would love to get a better and more detailed overview of the level of support that customers of the “deprecated” services will get: security updates? Priority support? Creating one page that summarizes this for all “deprecated” services would be amazing!
And – last but not least – make sure that Amazon Q knows which services you are deprecating!
Screenshot taken on 6th of September, 4pm CEST
If you’ve read this post until here, I would love to get your view and your feedback on this topic!
Thanks for the feedback I got before publishing this article and while I know you don’t agree with everything I wrote, it’s great to get your feedback, Monika, Raphael, Ran, Markus and others 🙂
Please let me know either in the comments or directly on my social channels — LinkedIn, X being the ones I still use mostly 😉
A few weeks ago AWS CodeCommit became a deprecated service on AWS. This means customers cannot create new repositories anymore – refer to this announcement for all the details: Blog for CodeCommit
There are obviously a lot of alternatives to CodeCommit (Github, Gitlab, …) but if you need a “self-hosted” Git repository in your own AWS account this can become a little bit harder to provision.
If you’re looking for a “self-hosted” Git server, a bunch of tools come up:
…and obviously also others or just a “git” server running on EC2
As I wanted to be able to deploy something that works “out of the box” I looked at how to provision one of these alternatives on my own AWS account.
Gitness
Gitness is an open source development platform packed with the power of code hosting and automated DevOps pipelines. Its biggest maintainer is Harness. It includes “way more” than just Git – e.g. Gitspaces, Pipelines, etc.
Deployment of Gitness
If you want to deploy Gitness, the docs point you to running it locally using Docker or deploying it on EC2 or k8s.
My aim was to “only” make the “Git” component available, and because of that I chose Amazon ECS with EFS as storage to provision Gitness.
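The core of that deployment looks roughly like the following CDK sketch (trimmed down for readability; the container image name, port and data path are assumptions – please verify them against the current Gitness documentation). The full, working code is in the repository linked below.

```typescript
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as efs from "aws-cdk-lib/aws-efs";
import { Construct } from "constructs";

export class GitnessStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, "Vpc", { maxAzs: 2 });
    const cluster = new ecs.Cluster(this, "Cluster", { vpc });

    // EFS keeps the repositories when the Fargate task is replaced.
    const fileSystem = new efs.FileSystem(this, "GitnessData", { vpc });

    const taskDefinition = new ecs.FargateTaskDefinition(this, "TaskDef", {
      cpu: 512,
      memoryLimitMiB: 1024,
    });
    taskDefinition.addVolume({
      name: "gitness-data",
      efsVolumeConfiguration: {
        fileSystemId: fileSystem.fileSystemId,
        transitEncryption: "ENABLED",
      },
    });

    const container = taskDefinition.addContainer("gitness", {
      // Image name and port are assumptions – check the Gitness docs.
      image: ecs.ContainerImage.fromRegistry("harness/gitness"),
      portMappings: [{ containerPort: 3000 }],
      logging: ecs.LogDrivers.awsLogs({ streamPrefix: "gitness" }),
    });
    container.addMountPoints({
      sourceVolume: "gitness-data",
      containerPath: "/data",
      readOnly: false,
    });

    const service = new ecs.FargateService(this, "Service", {
      cluster,
      taskDefinition,
      desiredCount: 1,
    });

    // Allow the service to reach the file system on the NFS port.
    fileSystem.connections.allowDefaultPortFrom(service);
  }
}
```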
Open Source Code for the deployment of Gitness
I’ve set up a project on Github where all of the code examples are available for you to look at:
I am personally using CodeCatalyst regularly for a lot of private projects, I also work a lot with other users of CodeCatalyst and I give feedback to the CodeCatalyst team regularly. In this post I look at the state of the tool in July 2024 and at how I make use of it on a regular basis.
A few more months in…
CodeCatalyst was officially announced in December 2022 and reached GA in April 2023. Since then, it has been getting a lot of updates and changes – some of which you’ve potentially never had a look at.
In December 2023, major updates for enterprise customers were announced alongside other features like packages and Amazon Q integration functionalities.
CodeCatalyst also announced the possibility to expand packages usage beyond just npm – you are now also able to store Maven-based artifacts or OCI-based images in packages.
Major updates to custom blueprints and additional blueprints enable you, on the one side, to import source code into CodeCatalyst and, on the other side, to create a custom blueprint out of an existing project. This should make creating blueprints more accessible.
For a few months it has also been possible to include “approval gates” in CodeCatalyst workflows. This is a very limited functionality, but it still allows some important use cases.
Is CodeCatalyst ready for prime time?
It still depends.
While CodeCatalyst has drastically improved and matured over the last 12 months, there are still a few things that need to get better before I would 100% recommend using it.
Things that mainly concern me as of now: CI/CD capabilities and integration with AWS services.
The CI/CD capabilities are still limited and need to be improved to be more flexible and integrated. Approval rules need to be more sophisticated and allow some more specification.
If you already have CI/CD workflows or branch permissions set up in a tool of your choice, “import” functionality that translates existing Github Actions, Jenkins pipelines or Gitlab workflows into CodeCatalyst workflows is missing, as is the option to automatically set up branch permissions.
Other than that, CodeCatalyst is pretty much ready for prime time, and it has some functionalities that are outstanding and should be marketed more.
Next steps? What I think could come next…
The brave option
I still believe that the most underrated functionality of CodeCatalyst is Custom Blueprints. If you’re living in a k8s world, Backstage has been leading, together with others, the field of “Internal Developer Portals” that empower developers to perform actions quicker and more efficiently in their day-to-day work. Backstage in particular starts with the possibility of scaffolding projects and generating code. However, Backstage does not allow you to keep track of changes to the relevant templates later.
Custom Blueprints – and also the “existing” blueprints – empower developers to do exactly the same thing.
Given that CodeCatalyst has already been opening itself up with third-party integrations – like allowing full Github, Gitlab and Bitbucket integration – I can see the potential of opening CodeCatalyst up even further.
With the already available marketplace in CodeCatalyst – which is not yet used very much – this could be opened up to allow other providers to add additional integrations, actions and blueprints.
Still, the team would need to add additional functionalities like dashboards, widgets, … to make CodeCatalyst feel like an “Internal Developer Portal”.
What is unclear to me is whether AWS will be brave enough to invest another 1-2 years into CodeCatalyst before it can become the central place for developers on AWS. I am also not sure whether AWS will finally go all-in on CodeCatalyst or whether they will continue to invest in the existing Code* tools (CodeCommit/CodePipeline/CodeBuild/CodeArtifact).
The usual way for AWS developer tools
AWS will continue to invest half-focused and try to stay “on track” to help a huge customer base achieve the simple things with CodeCatalyst. Integrations with other AWS services will be missing and the adoption rate will stay small. With this kind of investment, AWS will have multiple Developer Tools solutions (CodeCatalyst vs. CodeCommit/CodePipeline/CodeBuild/CodeArtifact) in the portfolio, neither of which solves “all” problems and use cases, but which serve different customer bases.
What I think will happen
Given that CodeCatalyst is built by different service teams, we will see some teams heavily investing in making “their” part of the product successful (e.g. “Packages”, “CI/CD” or “Amazon Q in CodeCatalyst”). We will start seeing these unique capabilities reach other AWS services or potentially also other platforms. CodeCatalyst as a product will continue to exist, but the different service teams will start to focus on where they can make more “money”. CodeCatalyst will not be able to deliver on the promise it had when it was announced as the “central place for DevOps teams on AWS”. CodeCatalyst functionalities will be made available through the AWS console. With that, CodeCatalyst as “the product” that I was hoping for will cease to exist.
What do you think about my ideas and assumptions? Do you think I am wrong?
Drop me a comment or a note, I’d love to hear what your take on the future of CodeCatalyst is!
The year 2023 is close to its end and we’re approaching “Holiday Season” – which is one more reason to take a few minutes to say THANKS to the ones that work every single day to empower the AWS Community.
I’ve found a way to make a tree shine in CodeCatalyst using Workflows – it’s not as colorful as what Jenn did and not as detailed as Brian’s approach … and it should definitely not be seen as a replica of the team structure or org chart, but it shows that all of the work that the AWS Community team does – all of the support, guidance and investments – makes the AWS Community a strong foundation for everyone that wants to be part of it!
The community is open for everyone, you can even start your own Meetup easily.
Thank you, AWS Community Team
I am really thankful to be part of the AWS Community and it’s energizing to see the ideas, the sessions, the discussions that we all have together. You, Ross & team, make this possible every single day. Thank you for empowering us, for guiding us and for enabling us to be successful.
CodeCatalyst at re:Invent 2023, Youtube and a Speakers Directory
In 2023 I got lucky. I started my own YouTube channel, where I presented all of the re:Invent 2023 release highlights for CodeCatalyst; I’ve become an AWS Hero, but more important than that, I’ve made a lot of friends around the globe. I’ve empowered others to become part of the community and I’ve challenged others with questions, tasks and ideas like the Speakers Directory.
Thank you for making my year 2023 unforgettable and for making me smile when I think about what we achieved together!
In this post we’re going to look at the new functionalities that have been added to Application Composer by re:Invent 2023. After announcing the support of all CloudFormation resources earlier in the year, Application Composer now allows editing StepFunctions within the same user interface and – even cooler – announces the integration of an IDE plugin that allows developers to build serverless functions locally.
Application Composer as a serverless, rapid prototyping service adds additional capabilities to empower developers building serverless applications
Application Composer, which was originally announced last year at re:Invent 2022, has gotten a lot of major improvements throughout 2023. As we are right at re:Invent 2023, it’s time to look back at which new capabilities have been added and how they influence building serverless applications using AppComposer.
Supporting all CloudFormation resources
A few weeks ago, the team already announced that all of the over 1,000 CloudFormation resources are now supported by AppComposer. This was a big update and makes it simpler to build all kinds of serverless applications. However, as this only allows AppComposer to expose the resources, it still requires the developer to know all of the required connections between the different resources. I personally would love to see more “supported” resources (just like L2 constructs in CDK) made available as part of AppComposer. I hope that this will become an additional functionality soon.
Integrating additional services
With the integration of the Step Functions Workflow Studio within the same interface, developers can now build an end-to-end application within Composer before using the generated SAM or CDK templates to trigger the deployment. As a next step, I think it would be great to also be able to define EventBridge Rules & Pipes within the same interface.
Local development and IDE integration
AppComposer announced a Visual Studio Code integration that makes it possible to build and design serverless applications right from your IDE!
With this feature, you can visualize your serverless applications without being in your browser or the AWS console – start building, wherever you are and whenever you want!
I have not been able to try out this functionality yet, but especially the integration with sam connect, which allows you to also directly deploy the changes you made to your diagram / template, will make a big difference in building applications using AppComposer.
Also, I think we should not underestimate the possibility this offers to visualize existing CloudFormation templates through either the IDE plugin or the AWS Console. This will help to explain big and complex existing applications and empowers teams to have a fruitful conversation about changes they would like to implement in existing templates, as having a visualization makes the conversation easier.
What’s next for Application Composer? What are my wishes?
Already last year I asked for AppComposer to be integrated into CodeCatalyst, and I believe that this would be an awesome way to quickly start serverless projects. Application Composer today feels like a playground – to make the service more usable, it needs a “deployment” component that allows you to automate the lifecycle of your serverless application (including a full CI/CD pipeline).
Last year I also asked for the creation of CDK out of Application Composer – or even importing it – but instead of investing in that direction, AWS recently announced the existence of the CDK Builder Tool – wouldn’t it be better to merge those initiatives?
As already mentioned above, supporting additional “CDK-L2-like” patterns – or maybe the “Patterns” from serverlessland.com – would be amazing. If users no longer need to know how to manually set up IAM roles, connections between API Gateway and Lambda, and so on, this becomes a much more usable product!
What are your thoughts around the recent announcements of AppComposer? What are your experiences with it?