Dear AWS, how do I build & develop purely on AWS right now?

The announcements from AWS around deprecating certain services have raised a bunch of questions and concerns in the AWS community. 

As Jeff Barr wrote, these are the services:

S3 Select, CloudSearch, Cloud9, SimpleDB, Forecast, Data Pipeline, and CodeCommit.

This post will focus on Cloud9 and CodeCommit … and how I think this announcement impacts the “end-to-end” developer story for developers on AWS. We’ll also look at how the announcements impact my “go-to” service, Amazon CodeCatalyst.

It is written from the perspective of a builder who mainly uses AWS tools for smaller side projects and can be seen as a “startup” that needs to be up and running quickly without much hassle.

Introduction

These announcements, and the way these deprecations were announced

Blog for CodeCommit

Blog for Cloud9

were, in my humble opinion, handled in one of the worst possible ways. I know the teams at AWS have seen the feedback, and I hope that there will be a clearer communication strategy going forward.

For me, the combination of those posts with the assumption that CodeCatalyst is built on top of these services leaves a very strange feeling about how much AWS is currently invested in developers on AWS.

Let’s look at why I see a big impact of these announcements on builders, and think about alternatives if you are using CodeCommit or Cloud9 for certain tasks today.

Tools required for SDLC

A few weeks ago I even dedicated a complete Shorts playlist to all of the Code* tools, looking at their usage and how they cover a full Software Development Lifecycle (SDLC) when building on AWS.

In this series I drafted this diagram:

AWS Tools part of your SDLC until recent announcements

This being an end-to-end flow, AWS had at least two options to implement this process using their tools:

Either CodeCatalyst or a combination of different AWS Services

When CodeCatalyst was announced, I wrote about how CodeCatalyst can be used to cover all parts of your Software Development Lifecycle (SDLC) process on AWS. Ever since then, there has also been an alternative on AWS using a combination of different building blocks: CodeCommit, CodeBuild, CodeDeploy, CodePipeline and others.

CodeCommit was a good, reliable managed Git server. For the purposes it solved, there weren’t many features to add. It was a managed service you didn’t need to think about – it just “served its purpose”.

Cloud9 was a hosted IDE, a development environment that users were able to access through their browser. This enabled builders to have a real IDE, even on old or underpowered computers, anywhere — even while on vacation.

Developers on AWS could either use CodeCatalyst to cover all parts of their product lifecycle or use the different “building blocks” to compose their SDLC process. Both options provided value and helped AWS customers solve certain aspects and problems.

Now, officially, only one option is left — CodeCatalyst.
CodeCatalyst is an integrated DevTools service that unites all of the building blocks under an opinionated, structured user interface. It was announced at re:Invent 2022 and went GA in early 2023. With the custom blueprints feature, it also enables builders to create project templates and share them with their teammates or dependent teams. These are very powerful possibilities for teams to collaborate better and share their best practices with other teams.

Those that didn’t need a “reliable managed Git server” were most probably using existing alternatives — that might solve the “job” better than CodeCommit — like Github, Gitlab or Atlassian. These users and AWS customers are not affected by the change.

What has changed with the July 2024 announcements — a builder’s perspective

Now, the system landscape has changed.

Developers cannot use Cloud9 anymore to develop software; they need to fall back to alternatives like Github Codespaces, Coder or Gitpod.

Developers cannot store their source code in CodeCommit anymore; they need to fall back to alternatives like Github, Gitlab or Bitbucket.

And given that CodeCatalyst might be using CodeCommit under the hood and uses Cloud9 for its Dev Environments – can I really build something on top of CodeCatalyst going forward?

So this deprecation announcement — without a “real” AWS-native alternative — puts everyone building and developing software on AWS in the situation of needing to look for alternative setups.

In particular, it forces you — if you are a small organization (or a startup) — to engage with more than just one vendor as part of your SDLC process. I see this as a critical point to talk about as well.

And, if you are building software or platforms on AWS where CodeCommit in particular is part of the application or the deployed architecture itself — you are now left without any option. If you want to integrate a Git server in your application on AWS, you will now need to self-host the Git server instead of using a managed service.

If you “just” needed a quick, fast and reliable Git repository — CodeCommit was the way to go. Now, you need to use a third-party alternative.

Now: What options on AWS do we have as builders?

What changed with the July 2024 announcements — the business perspective

Looking at the announced changes from a different perspective, we need to acknowledge that AWS is a 90+ billion (90,000,000,000) dollar company. It is clear, being a business that aims to “make money”, that AWS needs to focus on services and solutions that are widely used, widely adopted and earn a good margin.

The reason might be that Cloud9 and CodeCommit were just not profitable enough to drive the expected growth of the business, especially as there are other services that do the same job better than Cloud9 and CodeCommit. So it might have been “just” a business decision to stop investing in these services and focus instead on Amazon Q, which promises to help developers and builders on AWS.

This raises the question of which other services might be hit by exactly the same challenge, soon or in the future. And – how does AWS measure the “success” of its services? Is it “just” revenue, or are there other points being considered?

But still — How this feels for me and questions I have (emotionally)

It feels like AWS has given up the game of engaging with their “Builders” and is now focused on the “Buyers” that “host” their applications on AWS.

If you think about how AWS started and if you look at how much effort AWS has spent this year on making us think that “Amazon Q Developer” is going to make our lives as developers easier…

How can I as an advocate for AWS as a platform be confident that I am valued as “Builder” on AWS? Will other services also disappear if they do not get enough traction?

And how much can I trust in Werner’s “Now, go build”?

How much “trust” can I put in the other Code* (CodeBuild, CodePipeline, …) tools on AWS?
With CodePipeline and CodeBuild getting a lot of notable updates right now (macOS, Github Action runners, stage rollbacks, …), the outsider’s view is that at least these services are here to stay… but how much trust has the AWS team lost with builders around the globe?

I’m eager to see how the different workshops, best practice documents and open source projects that use either CodeCommit or Cloud9 (especially the AWS-owned ones) will be adjusted and updated in the coming weeks and months.

How much is CodeCatalyst going to be the central place for developers on AWS? How many updates will we see there?

How does this affect you? I would love to know!

I am really interested to hear how these announcements have affected your perspective on AWS and your view on the different AWS services.

Please share your thoughts either as a comment to this post or reach out to me personally!

What YOU can do next

You could now follow the advice from AWS and “migrate” away from CodeCommit or Cloud9 — but is this really what you want to do?
If you need a “Git server” or “Git repository” close to your applications on AWS, how do you do that?
You might need to host your own Git server on AWS… or you need to give up on that premise and fall back to alternative Git providers like Github, Gitlab, …

If you insist on having your own hosted Git within your AWS environment, there are a few possible solutions…

…and potentially others that I am not aware of….

In order to host a “simple” Git setup, I’ve recently made this repository public that deploys Gitness as a Git repository on ECS. It will cost you roughly 50 USD/month. See also a relevant blog post.
Inspired by this, Jakub Wolynko did the same thing for Onedev – please see https://github.com/3sky/onedev-on-ecs if you would like to try that out.

As an alternative to Cloud9, you can use vscode.dev, which runs VS Code in the browser, or other alternatives that are more integrated and personalized, like gitpod.io or Github Codespaces.

But is this REALLY what you want to do if you are working on AWS only?

What I hope to get from the AWS team

As re:Invent is approaching fast and that usually sets the direction for a lot of AWS services, I really hope to get reliable information and roadmap clarifications around the AWS developer tools.

I’d like to understand if I can rely on CodeCatalyst, CodePipeline, CodeBuild, CodeArtifact, CodeDeploy, … and other AWS services that help developers to build software on AWS.

Does anyone know if this page ever mentioned CodeCatalyst? Please let me know!

In addition to that, I would love to get a better and more detailed overview of the level of support that customers of the “deprecated” services will get: security updates? Priority support?
Creating one page that summarizes that for all “deprecated” services would be amazing!

And – last but not least – make sure that Amazon Q knows which services you are deprecating!

Screenshot taken on 6th of September, 4pm CEST

If you’ve read this post until here, I would love to get your view and your feedback on this topic!

Thanks for the feedback I got before publishing this article. While I know you don’t agree with everything I wrote, it’s great to get your feedback, Monika, Raphael, Ran, Markus and others 🙂

Please let me know either in the comments or directly on my social channels — LinkedIn, X being the ones I still use mostly 😉 


The state of CodeCatalyst in July 2024

I personally use CodeCatalyst regularly for a lot of private projects, I work a lot with other users of CodeCatalyst, and I give feedback to the CodeCatalyst team regularly. In this post I look at the state of the tool in July 2024 and at how I make use of it on a regular basis.


A few more months in…

CodeCatalyst was officially announced in December 2022 and reached GA in April 2023. Since then, it has been getting a lot of updates and changes, some of which you’ve potentially never had a look at.
In December 2023, major updates for enterprise customers were announced alongside other features like packages and Amazon Q integration functionalities.

CodeCatalyst Best New Updates in July 2024

Since last re:Invent, CodeCatalyst has gradually increased its third-party integrations, with the option to have your source code stored in Gitlab, Github or Bitbucket. We have also seen the expansion of Custom Blueprints to code generation for repositories stored outside CodeCatalyst itself.

Just recently, we have also seen the possibility to have more than one space attached to a single IAM Identity Center instance, which allows further usage of CodeCatalyst by more enterprise customers.

CodeCatalyst also announced the possibility to expand packages usage to other formats than just npm – you are now also able to store Maven-based artifacts or OCI-based images in packages.

Major updates to custom blueprints and additional blueprints enable you, on the one side, to import source code into CodeCatalyst and, on the other side, to create a custom blueprint out of an existing project. This should make creating blueprints more accessible.

For a few months it has also been possible to include “approval gates” in CodeCatalyst workflows. This functionality is still very limited, but it already allows some important use cases.

Is CodeCatalyst ready for prime time?

It still depends.

While CodeCatalyst has drastically improved and matured over the last 12 months, there are still a few things that need to get better before I would 100% recommend using it.

Things that mainly concern me as of now: CI/CD capabilities and integration with AWS services.

The CI/CD capabilities are still limited and need to become more flexible and better integrated. Approval rules need to be more sophisticated and allow more fine-grained configuration.

If you already have CI/CD workflows or branch permissions set up in a tool of your choice, you will miss “import” functionality that translates existing Github Actions, Jenkins pipelines or Gitlab workflows into CodeCatalyst workflows, as well as an option to automatically set up branch permissions.

Other than that, CodeCatalyst is pretty much ready for prime time, and it has some outstanding functionalities that should be marketed more.

Next steps? What I think could come next…

The brave option

I still believe that the most underrated functionality of CodeCatalyst is Custom Blueprints. If you’re living in a k8s world, Backstage, together with others, has been leading the field of “Internal Developer Portals” that empower developers to perform actions quicker and more efficiently in their day-to-day work. Backstage especially starts with the possibility of scaffolding projects and generating code. However, Backstage does not allow you to keep track of changes to the relevant templates later.

Custom Blueprints – and also “existing blueprints” – empower developers to do exactly the same thing.

Given that CodeCatalyst has already been opening itself up with third-party integrations, like allowing full Github, Gitlab and Bitbucket integration, I can see the potential of opening CodeCatalyst up even further.

With the already available marketplace in CodeCatalyst – which is not yet used very much – this could be opened up to allow other providers to add additional integrations, actions and blueprints.

Still, the team would need to add additional functionality like dashboards, widgets, etc. to make CodeCatalyst feel like an “Internal Developer Portal”.

What is unclear to me is whether AWS will be brave enough to invest in CodeCatalyst for another 1-2 years before it can become the central place for developers on AWS. I am also not sure whether AWS will finally go all-in on CodeCatalyst or whether they will continue to invest in the existing Code* tools (CodeCommit/CodePipeline/CodeBuild/CodeArtifact).

The usual way for AWS developer tools

AWS will continue to invest half-focused and try to stay “on track” to help a huge customer base achieve the simple things with CodeCatalyst. Integrations with other AWS services will be missing, and the adoption rate will stay small. With this kind of investment, AWS will have multiple developer tools solutions (CodeCatalyst vs. CodeCommit/CodePipeline/CodeBuild/CodeArtifact) in the portfolio that each do not solve “all” problems and use cases but serve different customer bases.

What I think will happen

Given that CodeCatalyst is built by different service teams, we will see some teams heavily investing in making “their” part of the product successful (e.g. “Packages”, “CI/CD” or “Amazon Q in CodeCatalyst”). We will start seeing these unique capabilities reach other AWS services or potentially also other platforms. CodeCatalyst as a product will continue to exist, but the different service teams will start to focus on where they can make more “money”. CodeCatalyst will not be able to deliver the promise it had when it was announced as the “central place for DevOps teams on AWS”. CodeCatalyst functionalities will be made available through the AWS console. With that, CodeCatalyst as “the product” that I was hoping for will cease to exist.

What do you think about my ideas and assumptions? Do you think I am wrong?

Drop me a comment or a note, I’d love to hear what your take on the future of CodeCatalyst is!


Amazon CodeCatalyst’s packages support – a glimpse at what’s to come for artifact management

In this post you will get a short introduction and my personal assessment of how the new packages component in CodeCatalyst can help you to set up your complete SDLC in Amazon CodeCatalyst.

Only npm supported – an early launch to show what we can expect going forward and to get feedback from users?

With the initial launch, Amazon CodeCatalyst allows you to manage npm package repositories inside CodeCatalyst. When seeing this for the first time, I was a bit concerned that this decision – to initially launch with only npm support – would not help a lot, as I expect that organizations need to store other types of artifacts (jar files, containers, Python packages, …). But I think that at least for cloud-native projects that use TypeScript this will solve the problem of storing artifacts and accessing them natively within CodeCatalyst. Let’s look at how packages can be used today.

Using package repositories in CodeCatalyst

Today you can set up package repositories in CodeCatalyst and connect them to upstream (public or private) repositories. You can also change the sort order and the upstream order of the package repositories. The documentation covers the different possibilities to set up repositories very well. You can access the package repositories from your local machine by setting up the connection locally:

npm config set registry https://packages.region.codecatalyst.aws/npm/space-name/proj-name/repo-name/

After setting up multiple repositories you can set the order of retrieval and usage. You can also set a repository up as a “pass-through” repository and allow access to public repositories.

Using packages in CodeCatalyst

Packages in CodeCatalyst are integrated with workflows. You can read or store packages with native actions or by setting up the npm repository manually.

You can also read and publish packages from other systems by using Personal Access Tokens (PAT). The documentation outlines currently supported client commands.

CodeCatalyst workflows will by default use the repositories that you have set up within CodeCatalyst.
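To make this a bit more concrete, here is a minimal sketch of a workflow action that consumes packages from a CodeCatalyst npm repository by configuring the registry manually in a build step. The space, project and repository names are placeholders, the action identifier follows the shape of CodeCatalyst’s build action, and I’m assuming the workflow is authorized to read from the repository – please check the documentation for the exact authentication setup.

Actions:
  BuildWithPackages:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        # Point npm at the CodeCatalyst package repository (placeholder names)
        - Run: npm config set registry https://packages.region.codecatalyst.aws/npm/space-name/proj-name/repo-name/
        # Install and test using packages resolved through the CodeCatalyst repository
        - Run: npm ci
        - Run: npm test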

What I think we need next in packages

With the launch of the packages component in CodeCatalyst, AWS has added a definitely missing functionality to CodeCatalyst. Users are now able to store artifacts and share them within a space, which makes it possible to create, re-use and deploy immutable artifacts natively within the tool.

This is a much-needed functionality, but as I already mentioned, I believe that limiting this to npm packages only limits its use cases. I would have expected npm, Python and Docker support at launch and was a bit disappointed that this was not included.

I also believe that the “packages” functionality should be better integrated and allow further configuration options, especially when loading/reading packages from upstream repositories – e.g. being able to limit packages to certain licences or ensuring that included packages are still “supported” (and regularly updated) or security-scanned.

These types of functionality would have made the new option “meaningful” and would have empowered developers to build better software, too.

I can imagine that we will see Python, Java and Docker support pretty soon for these reasons, and it would be great to also support “internal” repositories as “upstream” repositories.

How do YOU think this new functionality will help you to adopt CodeCatalyst?
Please drop me a message on socials or via e-mail!


Amazon CodeCatalyst’s integration with IAM Identity Center allows SSO with your own IdP

In this post you will learn how the new integration of IAM Identity Center with Amazon CodeCatalyst allows you to use your own IdP as the single source of truth for users in CodeCatalyst spaces. We will look at how you can connect a space to an IdP and how this can be used to set up certain permissions based on your IdP roles.

Single Sign On (SSO) using IAM Identity Center empowers CodeCatalyst users to use their own IdP to provision user accounts in CodeCatalyst

With the new integration of Amazon CodeCatalyst with IAM Identity Center, organizations can now connect their own existing IdP (AzureAD, Ping, Okta, …) by going through IAM Identity Center.

Benefits of the new integration

With this new integration, users and organizations go to the space settings in CodeCatalyst and choose to integrate with an existing IAM Identity Center instance.

This potentially also connects to any other IdP that uses IAM Identity Center as a proxy. Administrators will only need to manage roles and permissions in one central place. The roles from the IdP can then be mapped to certain team roles in Amazon CodeCatalyst.

What I don’t like about the new integration

There are two things that I didn’t like when looking at this integration:

Why do we need to go through IAM Identity Center instead of allowing a direct SAML or OIDC integration for a specific space? With the “need” for an IAM Identity Center instance, organizations also need to directly connect to an existing AWS account to manage connectivity – and this once again brings back a “tooling” account that I was initially hoping to replace with CodeCatalyst.

The second thing that I missed was the possibility to automatically map existing roles to groups in Amazon CodeCatalyst – today this is a one-time manual effort that needs to be conducted. If you need to do that only once for your organization, that isn’t a problem, but if you decide to have multiple spaces for your organization and need to perform this multiple times, it can be very time-consuming.

Summary

I believe that this change will have a very big impact on the adoption of CodeCatalyst, as administrators and security teams in organizations will find it easier to allow its usage now that there is a central place to manage identities. And with the possibility to actually try out CodeCatalyst in an enterprise context, I hope that we will also see more adoption of CodeCatalyst and, with that, a more streamlined and better communicated roadmap for CodeCatalyst.
If you would like to learn more on how to activate this, please refer to the documentation.


Amazon CodeCatalyst reaches “GA” status and becomes available for general use

The new service announced by Amazon in Las Vegas at re:Invent 2022, an integrated DevOps service to empower development teams to develop and deliver software faster, has finally reached “general availability” status. As I have previously outlined, this achievement is very important for Amazon and the CodeCatalyst team. Congratulations to the team for reaching this goal, which I can imagine is not an easy step for this product. The tool touches a lot of very sensitive parts of a software project, and I can imagine the security standards being really high.

A huge achievement – thank you to everyone in the team for investing in CodeCatalyst and for listening as closely to customer feedback as you do!

What changes were implemented for GA?

As part of the GA release we see a lot of minor improvements in the user interface and color changes. In the last few weeks, we have seen a few “bigger” changes – like the possibility to use Dev Environments for Github-based projects. We also got Graviton-based execution environments for CI/CD workflows which, according to AWS, should reduce our costs.

It is still hard to track down all of the changes in CodeCatalyst, as there is – to my knowledge – no public or semi-public roadmap. This is one of the things that I’d love to see: for an integrated service that is at the core of the developer experience for teams, any minor change can either improve or destroy the “usage experience”. If you as a team invest in adopting a new tool like CodeCatalyst, you will need to know how changes in workflows, features or user interface can influence your day-to-day activities. Let’s see, maybe the team can share “something” like a “changelog” with us (or even an RFC process like Amplify or AppSync)?

Reached “GA” – so who can start using it now?

As of today CodeCatalyst is only available in US regions, which means that it can be adopted mainly by US enterprise customers. CodeCatalyst already gives you the possibility to set up different spaces for your account, and within a space you can manage multiple projects. So in theory, CodeCatalyst is “ready to be used” by everyone.

Practically speaking, it is easier to adopt the service for new projects than for existing projects, as there is no real “import” functionality. Yes, you can integrate existing Github projects, but that only integrates the source code. Unfortunately that does not make all of the “cool” things available right from the start of integrating the source: existing workflows (CI/CD pipelines) are lost and need to be rebuilt, and issues/tickets are not imported into CodeCatalyst (though they can be made available through the JIRA integration).

I have been regularly using CodeCatalyst (both for imported and “new” projects) – and I really think that the tool already works very well. 

The “killer feature” that I see for new projects is the “blueprints”, which essentially get you started within minutes, e.g. to deploy an SPA application or to have a “true” CI/CD pipeline for a full-stack application following the DPRA.

Right now I would recommend using CodeCatalyst for any new project that you start, in order to begin building out your workflows and best practices.

So what do I still need in order to recommend CodeCatalyst for existing projects?

There are a few things that I have already been writing about:

  • “Import” of existing CI/CD workflows (e.g. Github actions, CDK Pipelines or CodePipelines)
  • Fully import projects
    • existing issues from Github or JIRA
    • Git-based projects including the history
  • Tighter security settings and permissions
    • Fine-grained roles to allow or forbid access to specific parts of a project
    • Options to allow or forbid execution of workflows (or of deployments)
  • Additional workflow options
    • Manual approvals are very high on my wish list
    • Native integration of other AWS services

A question for the readers: What do YOU think that you need to adopt CodeCatalyst?

A big question for the CodeCatalyst team – HOW MANY AWS TEAMS ARE USING CODECATALYST FOR PRODUCTION DEPLOYMENTS TODAY?

Where do I see the potential for CodeCatalyst?

CodeCatalyst is a big bet by AWS. There is big potential to really improve the life of development teams, and these are the main things where I believe it can outgrow other existing solutions:

  • Integration of AWS Services / deployments metrics
    • the true integration with AWS APIs
    • Integration into “post-deployment” verifications (e.g. auto roll-back after failed CloudWatch metrics)
  • “At-hand” developer support to improve efficiency
    • with CodeWhisperer (which recently reached GA) AWS already aims to support developers during the development phase, but with CodeCatalyst AWS can take this to the next level:
    • AI support during Pull Request Reviews (or automated approvals for PRs – e.g. by including CodeGuru, etc., automated merges, etc.)
    • AI support during workflow executions (when to approve, when to deploy, when to promote, etc.)
    • With improvement proposals for workflows if the “AI model” recognizes patterns (in issue workflows or CI/CD workflows)
  • With automated improvements for existing projects based on blueprints
    • Best practices change – and so blueprints change – and if the CodeCatalyst team can automatically apply them to existing projects, customers will benefit from it

And last but not least:

I truly believe that every software project should start with a CI/CD pipeline – and with the blueprints including a CI/CD workflow that follows the DPRA and other AWS best practices, we can truly make this possible: empower developers to deliver their software projects in minutes right after starting their project.

Do you see the potential in CodeCatalyst? If you do not see any potential in the tool – why not?


Pipeline strategies for a mono-repo – experiences with our Football Match Center projects in CodeCatalyst

Both Christian and I have been writing about our “Football Match Center” project – and as part of this project we obviously also needed a CI/CD (Continuous Integration and Continuous Deployment) pipeline. Our aim was to be able to integrate the changes that we make regularly and see commits to the main branch being directly and automatically deployed to our environments.

I will first try to define some prerequisites and then talk about learnings and experiences.

What is a mono-repo

A mono-repo is an abbreviation of “mono repository”, which I understand as a single Git repository in which different microservices or components are stored together. These can be various different services, infrastructure or user interface components, or backend services.

A mono-repo has special requirements when building the CI/CD pipeline.

Expectations for our CI/CD pipeline

For our CI/CD pipeline we wanted to be able to push changes to production quickly and be able to iterate fast. We wanted to achieve 100% automation for everything required for our project. As we have been writing, we completely develop this project using Amazon CodeCatalyst, and thus the pipeline should also be built using the Workflows in CodeCatalyst.

Going forward we want to ensure that the pipeline also includes all CI/CD best practices as well as security scans and automated integration or end-to-end tests.

How to structure your pipelines

In this article we will purely focus on the CI/CD pipeline for your “main” or “trunk” branch – the production branch that will be used to deploy your software or product to the production environment.

We will not consider pipelines that should be executed on feature branches or on pull request creation.

The “one-pipeline-to-rule-them-all” approach

In this approach all services are deployed within the same pipeline. This means that there is only a single pipeline for the “main” branch. All services that are independent from each other can be deployed in parallel; services that have a dependency need to be deployed one after another. Dependencies or information from one service to another can be passed through the pipeline using environment variables.

This can lead to longer deployment/execution times but ensures that one commit to the “main” branch is always deployed completely. If tests are included in the pipeline, they will need to cover all aspects of the application.
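As a rough illustration of this approach, here is a minimal sketch of a single CodeCatalyst workflow in which a frontend action waits for the backend deployment via DependsOn. The action names and commands are placeholders, and the environment/account connection configuration is omitted for brevity.

Name: main-one-pipeline
SchemaVersion: "1.0"
Triggers:
  - Type: PUSH
    Branches:
      - main
Actions:
  DeployBackend:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: npm ci
        # Placeholder deployment command for the backend/infrastructure part
        - Run: npx cdk deploy --require-approval never
  DeployFrontend:
    # Only runs after the backend has been deployed successfully
    DependsOn:
      - DeployBackend
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        # Placeholder command that builds and uploads the UI
        - Run: ./scripts/build-and-deploy-ui.sh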

The “context-specific” or “component-specific” approach

Different components or contexts get a different pipeline – which means that e.g. the backend services are deployed in one pipeline and the frontend services in a different pipeline.

In this approach, you automate the deployments per component and need to ensure that, if there are dependencies between the components, the pipeline verifies these dependencies. If one component requires information from another one, you need to pass these dependencies using other means.

This can lead to faster iteration cycles for specific components but increases the complexity of the pipeline dependencies. You also cannot directly see whether a specific commit has been deployed for all components or not.

The “one-pipeline-for-each-service” approach

This is the most decoupled option for building a CI/CD pipeline. Each service (Lambda function, backend, microservice) gets its own pipeline. For each service, you can implement service-specific steps as part of the pipeline.

One of the main requirements for this is that the services are fully decoupled, otherwise managing dependencies can get very difficult. However, this allows a very fast iteration and development cycle for each microservice, as the pipeline execution for each service is usually very fast.

The pipeline needs to verify the dependencies for each service as it executes the deployment.

Football Match Center – our experiences with building our CI/CD pipeline in Amazon CodeCatalyst

For our project we decided to start with a “mono-repo” – in our case today, we have a CDK application (written in TypeScript) that describes the required infrastructure and includes Lambda functions (where required), and a user interface which is written in Flutter.

From a deployment perspective, the CDK application needs to be deployed on AWS, and the Flutter application then needs to be deployed to an S3 bucket to be served as a Single Page Application (SPA) behind CloudFront. Obviously this deployment/upload has the prerequisite that the S3 bucket is already available.

How we started

We started, very classically, with the “one-pipeline-to-rule-them-all” approach. We had one single pipeline that was used to deploy all services that are part of the infrastructure.

This pipeline started with “cdk synth” using the “CDK deploy” action in CodeCatalyst and then had other steps that depended on the first one: executing the “flutter build” and later the “UI deploy” (using the S3 deploy action).

In this first version, the CDK deploy step exposed output variables with the name of the S3 bucket and the CloudFront distribution ID, passing them to the next step, where the output of “flutter build” was then uploaded and the CloudFront distribution invalidation request was triggered.

In this approach a commit to the “main” branch always triggered the same pipeline and this pipeline deployed the complete application.

We also used only natively available CodeCatalyst actions for deployment – “cdk deploy” and “build”. For the Flutter build we used a Github Action for Flutter.

Experiences and pipeline adjustments

With this approach we had the problem that the Flutter build step took ~8 minutes and blocked a new iteration of changes to the CDK application or the Lambda functions. This slowed down our development cycle.

In addition to that, we found out that there was no possibility to influence the CDK version with the CDK deploy action – but we wanted to be able to use the version defined in our Projen project, so that we could deploy to development environments from our local machines with the same version as the CI/CD pipeline.

Both of these findings and experiences brought us to implement some changes to the pipeline:

  • We separated the UI build from the CDK build
  • We moved away from using “cdk deploy” and replaced it with a “build” step – to be able to trigger “projen” as part of the pipeline

So now we have two pipelines:

  1. CDK deployment
    • Triggered on changes to the “cdk-app/*” folder
    • Executing CDK synth, build and deploy steps – not using the “cdk deploy” action but a normal build step instead
    • We adjusted the CDK app to include CloudFormation exports that expose the S3 bucket name and the CloudFront distribution ID
  2. UI deployment
    • Triggered on changes to the “ui/*” folder
    • Reads the values for the S3 bucket and the CloudFront distribution ID from the CloudFormation exports using the AWS CLI (see the sketch below)
    • Executing the Flutter build steps and the S3 deploy action

These changes resulted in faster iterations for the development cycle of the CDK app and allowed decoupling the backend from the UI part. We were also able to pin the CDK version to the version we have selected in Projen.
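Here is a trimmed-down sketch of what the UI workflow could look like, to illustrate the path-based trigger and the export lookup. The export names, paths and commands are placeholders (not a copy of our actual workflow), the Flutter build itself and the environment/account connection block are omitted for brevity, and the trigger fields should be verified against the current workflow reference.

Name: ui-deployment
SchemaVersion: "1.0"
Triggers:
  - Type: PUSH
    Branches:
      - main
    # Only run this workflow when something below ui/ changes
    FilesChanged:
      - "ui/**"
Actions:
  DeployUI:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        # Read the bucket name and distribution ID from the CloudFormation exports,
        # then upload the previously built Flutter web app and invalidate the CDN cache
        - Run: >
            BUCKET=$(aws cloudformation list-exports --query "Exports[?Name=='ui-bucket-name'].Value" --output text) &&
            DISTRIBUTION=$(aws cloudformation list-exports --query "Exports[?Name=='ui-distribution-id'].Value" --output text) &&
            aws s3 sync ui/build/web "s3://$BUCKET" --delete &&
            aws cloudfront create-invalidation --distribution-id "$DISTRIBUTION" --paths "/*"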

In our project we have chosen the “context-specific” approach for the pipeline.

My recommendations for building CI/CD pipelines for a mono-repo

Our CI/CD pipeline is not perfect yet, and we still need to add some important things to it.

From the experiences we have made, I am still not convinced that our “context-specific” approach is the right path.

As of writing this post in early April 2023, I’m inclined to move towards a model where we combine the “context-specific” and the “one-pipeline-to-rule-them-all” approaches: context-specific for “lower”, non-production environments and then a single pipeline that does the promotion to our production environment.

Today we do not yet have a production environment, so we did not answer that question yet 🙂

How do you solve this challenge around building CI/CD pipelines for mono-repos?


What can we expect for the General Availability (GA) of Amazon CodeCatalyst?

At re:Invent 2022, as usual, different new AWS services or functionalities were announced in Preview. Now, at the beginning of April 2023, a few of them have already reached “General Availability” (GA) status – Application Composer (in early March), VPC Lattice (in late March). My favourite new service, Amazon CodeCatalyst, has not yet reached this goal – but I have a feeling that now is the right time to think about what and when we can expect this status.

You wonder what CodeCatalyst is? Watch this video on my YouTube channel or read my two initial posts about it.

Why is reaching the “GA” milestone so important?

Before starting with my assumptions on what we can expect for GA, let’s clarify why reaching this milestone is so important. Being “in Preview” can mean a lot of different things. In a lot of organizations this usually translates to “limited availability”, a service not being available in all regions or not being reliable or scalable. For other organizations, it means that specific aspects of the product can be immature or not reliable. It can also mean that bigger API changes are yet to be implemented or that security guardrails are missing.

In general, this can be seen as a “beta” offering which is not appropriate to use for production workloads.

Because of these reasons and maybe others, a lot of organizations (especially US-based ones) do not allow using or adopting services that are in “Preview”.

For all of my experiments, tests, videos and projects I have so far been able to stay on the free tier. And I assume that this will also be true for most of my readers: you can get a long way using the Free Tier that Amazon CodeCatalyst offers today.

So that’s another big reason for AWS to push this service out of “Preview”: it gives organizations that are forbidden to use the service in “Preview” the possibility to start using and adopting the service – and with that, Amazon can start earning money with the service, which until now might have been difficult.

And as we know, AWS tries to “work backwards” from customer requirements, and the early usage of CodeCatalyst will drive further investments into the service.

What to expect for GA of CodeCatalyst?

Simple: nothing big – most probably only a regional rollout.

I personally do not expect any major new features for the service, as the team has been constantly releasing new features and functionalities on a regular cadence. There was simply no more time to work on bigger features while preparing for “General Availability” (GA).

What the CodeCatalyst team has already delivered until today…

Let’s look at what has been added to CodeCatalyst since its official announcement in December 2022:

  • Additional Reporting auto-discovery
  • Change Tracking – the possibility to see which changes have been deployed to a certain environment
  • Additional workflow native actions and improvements, e.g.
    • a fix for the CDK action that allows defining the “workpath” of a CDK app
    • Additional native actions
  • Linked issues to Pull Requests – you are now able to link issues to a pull request
  • UX improvements
    • Log view more accessible in the UI – at the beginning you were not able to make the log view larger; now this is possible
    • Page title adjustments
  • New blueprints (like the “Textract” one)
  • Development environments for Github based projects

This is not a complete list, but these are the things that I personally noticed and liked to see.

So…when is “the date”?

Hard to guess, but I would expect “soon”. Ideally right before a month starts, which will make the billing cycle easier 🙂 

So I would guess “end of April”, which would bring the service out right in time for the Berlin Summit (3rd of May).

Next steps for CodeCatalyst

In my last posts I have already communicated my thoughts and the features that I would love to see. But what will AWS implement?

Given that reaching “GA” status opens the way to “enterprise clients”, I would expect that one of the first features will be single sign-on functionality, maybe with an integration with Okta, Ping, Azure Active Directory or other already existing IdPs.

In addition to that, I believe that the user interface needs to get some tweaks to streamline the navigation and workflow – that’s something that I personally experience every day: not knowing when and where to click to get to the right place. Also, I think that additional service integrations will be added – e.g. Step Functions or SNS, maybe SQS – see also my post about sending notifications from workflows.

And then there is one last thing which has been getting only limited attention so far: APIs and CLI integrations that can be used – so I would expect a major update there.

I’m really looking forward to seeing CodeCatalyst reach GA – I’ve had various conversations with the team in the last months and I know that they have a true vision to make CodeCatalyst successful as a truly AWS-integrated and fully functional DevOps tool.

Are there features you are missing? Please let me know and I will forward them to the team.


Sending notifications from CodeCatalyst Workflows in March 2023

As Amazon CodeCatalyst is still in Preview, it has only limited integration possibilities with other AWS services or external tools.
Sending notifications from a workflow execution is something that I believe is critical for a CI/CD system – and as I focus on CI/CD at the moment, I’ll focus on notifications from Workflows in this article.

What kind of notifications do I need or expect?

As a user of a CI/CD and Workflow tool there are different levels of notifications that I would like to receive:

  1. Start / End and Status of Workflow execution
  2. State / Stage transitions (for longer running workflows)
  3. Approvals (if required)

In addition to that, based on the context of the notification I would like to get context-specific information:

a) For the “Start” event I would like to know who or which trigger started the workflow, which branch and version it is running on, which project and workflow has been triggered. If possible getting the expected execution time / finish time would be good
b) For the “End” event I would like to know how long the execution took and if it was successful or not. I would also like to know if artifacts have been created or if deployments have been done. If the “End” is because of a failure, I would love to know the failure reason (e.g. tests failed, deployment failed, …)
c) For the state transitions I’d love to know the “time since started” and “expected completion time”. I would also like to, obviously, know the state that has been completed and the one that will now be started.
d) For approvals I’d love to get the information about the approval request and all required information (commit ID, branch) to do the approval

What does CodeCatalyst support today?

Right now CodeCatalyst allows you to set up notifications to Slack.
Please see details on how to set this up here.
These notifications are also minimal right now:

In Slack this looks like this:

How can I enhance the notification possibilities?

Luckily, one of the “core actions” is the possibility to invoke a Lambda function, and this is what we are going to use here to trigger advanced notifications using Amazon SNS.
In our example we are going to use this to send an e-mail to a specific address, but you can also use any other destination supported by SNS, like SMS or AWS Chatbot.

Setting up pre-requisites

Unfortunately we will need to set up an SNS topic and a Lambda function in a dedicated AWS account in order to use these advanced notifications.
This means that we are “breaking” the concept of CodeCatalyst not requiring access to the AWS console, but this is the only way that I have found so far to be able to send additional notifications.

Ideally we would be setting up the SNS topic and the Lambda function using CDK, but that increases the complexity of the workflow and of the setup, and because of that I’m not including it in this blog post.

Setting up the SNS topic

Please create an SNS topic following the AWS documentation through the console.
We assume the topic to be in “eu-central-1” and the name to be “codecatalyst-workflow-topic“.

After the topic has been set up, you will need to subscribe your e-mail address to it.

Setting up the lambda function

You can follow this blog post to manually set up the Lambda function through the AWS console; please ensure you give the Lambda function permission to publish to the SNS topic.
The required code using Python will look like this:

import boto3

# SNS client used to publish the notification message
sns = boto3.client('sns')

def lambda_handler(event, context):
    try:
        # The message text is passed in via the workflow's RequestPayload
        message = event['message']
        # ARN of the SNS topic created earlier - adjust account ID and region to your setup
        topic_arn = 'arn:aws:sns:eu-central-1:<accountID>:codecatalyst-workflow-topic'

        # Publish the message to the topic; SNS fans it out to all subscribers (e.g. e-mail)
        response = sns.publish(
            TopicArn=topic_arn,
            Message=message
        )
        print('Message sent to SNS topic:', response['MessageId'])
    except Exception as e:
        print('Error sending message: ', e)

Obviously the same can be achieved using TypeScript, Go or any other supported runtime.
Please adjust the topic_arn to match the topic that you just created.
After creation, this Lambda function will have an ARN which should look similar to this:
arn:aws:lambda:eu-central-1:<accountId>:function:send-sns-notification-python

We will need this ARN when setting up the notification in our Workflow.

Integration into the workflow

Integrating this Lambda function into a workflow is easy:

  NotifyMe:
    Identifier: aws/lambda-invoke@v1
    Environment:
      Connections:
        - Role: CodeCatalystPreviewDevelopmentAdministrator-wzkn0l
          Name: "<connection>"
      Name: development
    Inputs:
      Sources:
        - WorkflowSource
    Compute:
      Type: Lambda
    Configuration:
      RequestPayload: '{"message":"branchName: ${WorkflowSource.BranchName}\nCommitID: ${WorkflowSource.CommitId}\nWorkflow-Name: NOT-AVAILABLE\nSTATUS: EXECUTED"}'
      ContinueOnError: false
      AWSRegion: eu-central-1
      LogType: Tail
      Function: arn:aws:lambda:eu-central-1:<accountId>:function:send-sns-notification-python

As you can see, we are integrating an “aws/lambda-invoke@v1” action which then points to the lambda function that we just created.

In the “RequestPayload” we are passing some information to the Lambda function, which will then be passed on to the SNS topic as part of the message.
This is how the message will look when received as an e-mail:

Missing information and next steps for enhanced notifications

As you can see, with this option we are able to send notifications from CodeCatalyst to multiple targets, including e-mail.

What we are missing – and I am not sure whether that’s possible or not – is all of the “metadata” of the workflow execution, like:

  • Workflow-Name
  • State-Name
  • Project Name and additional information

In the documentation I was not able to find the available environment variables for this information…. If you have any ideas on how to access this metadata, please let me know!


A second look at Amazon CodeCatalyst – CI/CD natively on AWS to empower developers to deliver faster and reduce heavy lifting for small to medium software engineering and DevOps teams

A few weeks ago, on December 1st, 2022, Werner Vogels announced Amazon CodeCatalyst. I’ve previously shared my initial thoughts and findings in a blog post. In this post, I’m going to share a few more findings and insights into using Amazon CodeCatalyst and will also see if any of my wishes from the wishlist for CI/CD on AWS have been resolved with CodeCatalyst.

What I have been playing around with…

CodeCatalyst login page

One of my personal projects that I am working on together with a few friends is pegasus-galaxy.net, and the CI/CD pipeline that I had built with CDK Pipelines (and that I also presented at re:Invent 2022) was the first one I tried to move over.
For context, we’re talking about a Flutter application for Web running behind CloudFront, deployed using CDK.

I decided to try CodeCatalyst out and go “all in” – and that means moving the code from Bitbucket into CodeCatalyst, setting up the other users in CodeCatalyst and moving the workflows (= CI/CD pipelines) over to CodeCatalyst.

CodeCatalyst Menu

In this article I am going to go through each of the sections in CodeCatalyst and will share my experiences, thoughts and findings.

Where I have ideas on how to improve the day-to-day work with the tool, I will try to share them.

Before going into details, let’s start with the most important thing:

Amazon CodeCatalyst works very well and reliably, and the current version of the service is a great foundation for moving all of your CI/CD and development practices to AWS.

The CodeCatalyst team has been very supportive on re:Post, so if you have a question, feel free to ask it there!

CodeCatalyst Overview – Spaces and Projects

Spaces are the top-level option to organize your CodeCatalyst account. You will need to associate an AWS account for billing the used AWS resources. Each AWS (billing) account can be associated with only one CodeCatalyst space.

CodeCatalyst Spaces Billing page

While this seems like a limitation, as you will need to create a different billing account for a second space, I currently cannot see an impact on my day-to-day work. For anything that I run in the same AWS account, I would assume that using a project within the same space should be good enough.

You can manage projects, members and AWS account connections on the space page. In the “extensions”, CodeCatalyst currently only allows a connection to JIRA Cloud. I would expect that additional third-party extensions will be supported in the GA version of CodeCatalyst.

Projects Overview and options

A project is a “unit of work” in your product or software that you are building.
Within projects, you can manage issues, manage your code repositories, execute workflows (CI/CD) and review report results.

Projects are associated with a space – and you can create as many projects in a space as you want. You can add team members to a project who are not able to access all projects in the space. Unfortunately I have not yet found an option to “hide” projects from team members that are added on the space itself.

Managing issues / tickets

CodeCatalyst currently provides two options to manage your issues or tasks:
1) Link to JIRA Cloud Project
2) Internal issue management

If you use the option to link to a JIRA cloud project, the “issues” link is replaced by a link to your JIRA Cloud project.

Internal issue management

The internal issue management system currently offers everything that is required for a simple Kanban workflow. You can create issues, add them to a backlog or a Kanban board, assign them to project members and track their current status.
I personally think that the current functionalities are good enough for small teams and simple projects – I’m actually already working with it in a small project and will add additional feedback as soon as I gain more experience.

Code

Within the “Source” part of a project, you can manage source repositories or connections to source repositories in Github. I expect that other providers will be added going forward (e.g. Gitlab, CodeCommit, Bitbucket, …).
You can also manage pull requests and approvals – I was only able to test this using internal source repositories, not using a linked repository.

The last option – the Dev Environments – is the most exciting functionality – it gives you the possibility to host development environments (similar to Gitpod) on AWS using Cloud9 but also, and this is really cool, using Visual Studio or JetBrains IDEs.
When using that option, the IDE on your local PC is only the “presentation layer”, the source code is stored and run on an AWS instance and the IDE uses remote connectivity to talk to the Dev Environment in the background.

CI/CD

CodeCatalyst currently uses the same approach as Github Actions to manage your workflows or CI/CD pipelines – you are able to manage your Workflows using YAML files. The syntax is simple and understandable. There is a minimal set of Actions available as part of the preview. You are also able to use existing Github Actions as part of your workflow.
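To illustrate how a Github Action can be embedded, here is a minimal sketch of a workflow action that runs the community Flutter action. The “GitHub Actions runner” identifier and its fields are written from memory and should be verified against the action catalog, and the action version is only an example.

Actions:
  FlutterBuild:
    # Runs public Github Actions inside a CodeCatalyst workflow
    Identifier: aws/github-actions-runner@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - name: Set up Flutter
          uses: subosito/flutter-action@v2
        - name: Build the web app
          run: flutter build web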

Workflow overview in CodeCatalyst

The workflow functionality is very powerful. In my tests I have not yet been able to test all parts of the capabilities. Workflows can be defined for certain directories, for certain triggers or branches. Test reports will be exposed in the “reports” functionality.

CodeCatalyst offers a graphical overview for workflows and allows you to edit them in the UI, too. This functionality works pretty well and helps to quickly get you started building your first workflow in CodeCatalyst.

I’ll need to test the workflows more to be able to give additional insights into how well they currently run. My simple pipeline that builds a Flutter application, deploys my Infrastructure as Code using CDK and then publishes the new version of the Flutter app runs without problems.

One of my main concerns so far is the execution time; however, the team has been working on the possibility to use Lambda as an execution environment.
This option, however, does not yet support the execution of Github Actions and also has some other limitations.

The other features that are part of “CI/CD” – Environments, Compute and Secrets – I did not have time to play around with yet. If you have any experience with them, please add your thoughts in a comment on this article!

Reports

The reports today only support test reports. I have not used the functionality enough to assess this, but I am sure that the CodeCatalyst team is going to add additional reporting options going forward.

Things I like most about CodeCatalyst (Preview) after 6 weeks of usage

Just a short list of things that I already like:
– Integration of Github Actions as workflow actions
– Managing workflows using UI & code

Things I miss in CodeCatalyst (Preview) after 6 weeks of usage

– macOS builds (e.g. for Flutter iOS apps) are still not possible
– granular permissions for workflow and Pull Request triggers
– and….

Let’s talk about Open Source Projects

Right now there is no option to share a project or a repository that is hosted within CodeCatalyst as an Open Source project. This is really a limitation if you want to use CodeCatalyst for Open Source projects – or if I would like to share a CodeCatalyst repository with example workflows.
I hope this functionality will be added soon.

Wrap up and next steps for me with CodeCatalyst

I need to admit – writing this post took longer than expected 🙂
I wanted to publish it before Christmas, and now it seems to be a bit “late” already, as I am sure that a lot of you have had your own experiences with CodeCatalyst by now – please SHARE your findings with me – links to blogs that you have written or other content you have created, I am eager to consume it!

My next steps with CodeCatalyst

I am working on migrating my project pegasus-galaxy.net completely to CodeCatalyst and collaborating with my team on it there. With that, I will be able to prove CodeCatalyst in a “real world” application that is a multi-platform application – using Flutter for Web, Android and iOS – and a serverless AWS-based backend.
If you’re interested in joining this project, please do not hesitate to reach out – skills that we need right now:
AppSync, DynamoDB and development/software engineering (Flutter, Typescript, Java, or Node?)
