As a few of you might have seen, AWS has today launched PartyRock – an Amazon Bedrock Playground that can be used to generate and build applications using GenAI technologies.
PartyRock is an educational tool for providing any builder with low-friction access to learn through experimentation in a foundation model playground built on Amazon Bedrock. It is not a product or service in the traditional AWS definition and should never be referred to as such. The preferred descriptor is playground, though in most cases tool is also acceptable.
AWS
German engineering – Party from Berlin
I’ve been fortunate to know some of the people behind this launch and I’m excited for it – not only because a bunch of the engineering team members are part of the AWS Development Center in Germany, but also because I had tried to not touch anything related to GenAI until this tool was made available.
After looking at the tool and playing around with it, I can see a lot of benefits in using Generative AI going forward, and I see a lot of the value that this technology can bring us beyond “simple ChatGPT-like” chatbots.
Build your first GenAI App with PartyRock
It’s almost too simple: click on the “Generate app” button, add a prompt for Amazon Bedrock, and within seconds you have a working application that uses GenAI under the hood!
Pretty cool, isn’t it? Even though the underlying model does not have access to the session catalog (which is a pity), I liked the outcome, which you can look at in this snapshot.
Holding to its promises – it allows you to experiment with GenAI
As the introduction from the AWS team says, PartyRock really makes Bedrock accessible and gives everyone a great opportunity to “try out” how the different models behave with different prompts. I can only encourage you to try it out and gather your own experiences with it. It’s worth your time! Being part of the AWS Community (in this case, being a Hero or a Community Builder) gave us the advantage of a few hours to try this out before the official release… and this gives me the chance to already NOW present a few cool use cases that other builders have created with this tool 🙂
PartyRock apps & use cases that made me smile or brought value
It’s amazing to see how creative AWS Community Builders and Heroes are 🙂 Here are a bunch of the apps that I’ve looked at and played around with and that I think are worth sharing:
Generates questions for a specific topic that you can use in a quiz – the original idea is from Dixit; I’ve linked my version, which is “remixed” and adds a language option
Did you find an exciting app that I should include? Reach out to me and let me know 🙂
Where do we take it from here? How PartyRock helps and what I would love to get
PartyRock is a great starting point for experimenting with GenAI!
To take this to the next level, there are a few things I’d love to get:
Make your PartyRock app “yours”
Deploy to my AWS Account Button
Export to CDK / IaC Option
Export to CodeCatalyst Project
Additional UI options
Radio Boxes, ComboBoxes
Make the applications “user aware”
Generation options other than text/image
The first point especially would help developers take action after creating an app – you could run the generated app directly in your own AWS account, which would help you understand how Bedrock fits into your existing AWS architecture.
What do you think of it? I’d love to hear your feedback and thoughts!
As Matt already wrote in his project introduction post, we have been using Amazon CodeCatalyst to handle all of our project activities. This gives a very good indication of where CodeCatalyst is already “ready for prime time” and where further adjustments need to be made in order to develop the service into a fully fledged, all-in-one, integrated DevOps service. The one pushing for using CodeCatalyst was myself, as I wanted to try out the new Amazon service – which recently went “GA” after being announced at re:Invent 2022 – in a bigger team and in a real project. It was great fun – and we learned a lot, too! Let’s look at some details.
How we planned our tasks using “issues”
Our journey with CodeCatalyst started with a – very agile and intense – planning session using the “issues” component of CodeCatalyst. Given the nature of a hackathon, we created roughly the first ten issues in the system. Three of us were new users of CodeCatalyst, but they were able to quickly interact with the issues component – by default, this is a simple Kanban board with a well-structured user interface. One thing we missed right away was the ability to copy/paste images into comments within the issues… and to attach files to the issues themselves. Linking issues to one another is also barely possible right now. We used a simple workflow, did a regular planning (twice per week) and created/updated/closed more than 50 issues (some of very small scope) throughout the course of less than four weeks. Overall, the available functionality was good enough for our project and supported our way of working.
Collaborating on source code – working with Git repositories and pull requests
The four of us are (Community) Builders at heart, and all we need to be happy is a Git repository that we can push code to 🙂 Of course that is not completely correct, but CodeCatalyst allows you to host your own, private Git repositories, and you can have multiple repositories per project. In our case, we had decided to go with a mono-repo approach, so we strictly worked in a single repository. Working with the source code repository was “just fine” and worked as expected. We also decided to merge as early and as often as possible to our “main” branch. The team, however, struggled with setting up pull requests and with the UI when reviewing them. The most problematic things: UI response time, working with comments on pull requests (e.g. threaded comments) and creating pull requests. The whole UI does not yet feel as “structured” as in other competitors (e.g. GitHub) – examples: PR title/description proposals when creating a PR, or a link in the terminal/command line that allows you to create a PR. Another thing we missed is “git blame” online – not to blame each other, but to understand what changed and broke our “hacked” hackathon source code 🙂 Cool: the markdown rendering for files named *.md. Missing here: including local images is not possible.
Working with Workflows – GitHub Actions to the rescue!
We started off building our workflows with natively available actions in Amazon CodeCatalyst – using the “cdk deploy” action to deploy our infrastructure and the “s3 deploy” action to deploy our front end – but quickly shifted away from that and switched over to using GitHub Actions within the workflow to allow building our Flutter application as a CDK asset. As our continuous integration pieces matured within our npm implementation, there was less reason to go back to the native actions, as the CI part was handled through npm. Our current workflow and CI/CD pipeline:
The main goal here is to allow continuous deployment to our production environment. This goes through an initial “sandbox” deployment, promotes to a “test” environment and then deploys to “production”. The pipeline is prepared to execute integration tests, but they are not yet implemented – which is the nature of a _hack_athon 🙂 This would definitely be one of the next things to change – adding real integration tests and also adding some security verifications.
Our workflow is, as I mentioned, pretty basic – here is an excerpt of it, up to the deployment to our sandbox environment:
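In essence, it looks similar to this (a simplified sketch – the action and script names are illustrative, and the actual CI logic lives in our npm scripts):

Name: main-branch-workflow
SchemaVersion: "1.0"
Triggers:
  - Type: PUSH
    Branches:
      - main
Actions:
  DeploySandbox:
    Identifier: aws/build@v1
    Environment:
      Name: sandbox
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: npm ci
        # Build, test and deploy are task definitions in package.json
        - Run: npm run build
        - Run: npm run deploy:sandbox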
As you can see, a lot of the CD (and also the “deployment”) runs within the corresponding task definitions in package.json.
No “Open Source” or “Readonly” view
We would really love to open-source our whole project, to give you the possibility to get involved and learn from what we built, but unfortunately Amazon CodeCatalyst does currently not offer an Open Source or a “read-only” view and forces you to create a CodeCatalyst account – this makes it difficult for us, as we cannot link to our sources. If you are an AWS Community Builder, please reach out to either of us in Slack and we can give you access to the project.
What did we really miss in the workflow component?
manual approvals before promotion to our “production” environment
Notifications from workflows – this could have been done through this implementation, but we did not take the time to implement it
Integration of deployments executed through “npm – cdk deploy”, making the changes to the infrastructure visible
Importing existing pipelines or rendering CDK pipelines to CodeCatalyst workflows
The “killer feature” of the workflows, at least as far as we see it, is the possibility to use GitHub Actions as part of the workflow. With only minor adjustments an existing GitHub workflow can be transformed into a CodeCatalyst workflow. If any of you know of an existing tool that converts existing GitHub workflows to CodeCatalyst workflows in an automated way – please let me know.
The hidden gem: Development Environments
One of the “hidden gems” of CodeCatalyst are the Dev Environments – also known as Development Environments. Essentially, it’s a “service in the service”, similar to Gitpod, where you can host your IDE or the environment that you develop on. In our project none of us used it, as the architecture was small and we only had this single project. We were also developing mainly on our main machines, without switching between machines too often. Last but not least, Flutter is not natively supported in the Dev Environments, which would have forced us to prepare a devfile that includes the Flutter dependencies – and that would have taken too much time out of the project. Still, the Dev Environments within CodeCatalyst are a hidden gem that I believe should get more traction than it has gotten so far.
A summary of what we think works and what doesn’t – and our biggest wishes
For a hackathon like this (3-6 weeks of working time), CodeCatalyst seems to be a very good choice for a project that starts with a “new” or “fresh” codebase. The tool has most of the minimal requirements implemented to allow simple project management and good integration with AWS. It also allowed us to quickly get started and to collaborate on our sources.
We really missed workflow notifications as well as manual approvals in the workflows. As the tool matures, I personally hope that more and more AWS internal teams will be using CodeCatalyst to develop, plan and deploy their internal AWS services, and with that the “flow” in the user interface will definitely improve, too – as today there are some hiccups in a real developer’s workflow.
And, last but not least – we would love to share this project with you, but given that CodeCatalyst does not allow you to “open source” a project, that’s not possible 🙂
Maybe something for the next hackathon?
If you would like to get involved in this project and help us shape and build our AWS Speakers directory to make it easier for AWS User Group Leaders to find speakers – please reach out to any of us four about collaborating!
This blog post is part of a series of posts that explains details and technical challenges that we (Danielle, Matt, Julian and myself) faced during a hackathon that was focused on the Transformers Huggingface tools – please read Matt’s article for additional information and details. In this post we are focusing on the experiences we had in the project using the Amplify SDK for Flutter.
Project Setup – architecture dependencies for the project setup
Initially, our project targets being available to all AWS User Group Leaders on the web, from any browser. As the project matures, I at least am hoping to also publish this application as an Android and iOS application using cross-platform capabilities. Here is a bird’s-eye view of our UI architecture:
Why did we choose Flutter for the UI?
Flutter is a trending cross-platform development toolkit; although it is mainly sponsored by Google, there is a vibrant open source community around it. I personally have had some exposure to Flutter in the past and I liked the developer experience and the short iteration cycles. The AWS Amplify team has also been investing in making their SDK available to developers, so we thought this was a good opportunity to try out our use case and implement the web app in Flutter, using the Amplify SDK for Flutter to connect to the backend resources which we were planning to implement using the AWS CDK. With this approach, we also want to draw some attention to the fact that the Amplify SDKs can be used without an Amplify-owned backend. This is cool and opens up a lot of possibilities – but also challenges, as we will see later. I convinced Danielle, Matt and Julian to also use Flutter for our frontend. They also saw this as a good opportunity to learn a bit of Flutter themselves.
Amplify SDK for Flutter – Developer experience and challenges
The Amplify SDK for Flutter recently announced “GA” for Web and Desktop, which is a big milestone for the team. As we outlined in Matt’s blog post, we are using TypeScript in the backend. Amplify Flutter has native support for AppSync/GraphQL – we needed to connect the Flutter app to existing AppSync endpoints. The AppSync schema, however, was written manually. So we needed to use “amplify codegen” to generate the Dart models for the GraphQL types – but we also needed to write a type model in TypeScript to be able to work with the same objects on the backend. This turned out to be more difficult than expected: the [amplify codegen](https://docs.amplify.aws/cli/graphql/client-code-generation/#shared-schema-modified-elsewhere-eg-console-or-team-workflows) functionality is available for Flutter, too, but it was difficult to get it to work. We ended up creating a new Flutter application using amplify init and then copy/pasted our schema(s) into the expected location. Then we needed to manually copy the generated models into our project. Oh, I nearly forgot: when using amplify codegen you need to ensure that your schema is annotated correctly – types need to have an @model annotation, for example. But if you keep this annotation in the schema when deploying to AppSync, the deployment fails… so we also needed to manually adjust the schema before we were able to execute amplify codegen.
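To illustrate with a minimal, hypothetical type (the annotation is only needed for codegen and has to be removed again before deploying the schema to AppSync):

# Required for "amplify codegen" – remove again before deploying to AppSync
type Event @model {
  pk: String!
  sk: String!
  title: String
  description: String
}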
Using the Amplify SDK for Flutter
In our use case, all of the backend infrastructure is created using the AWS CDK. We are not using Amplify to create backend resources – and this use case has – until recently – not really been promoted by the Amplify team. Thanks to one of my last blog posts around the same topic, this new documentation page has been added, which simplifies the setup of your amplifyconfiguration.dart – but we were missing the same documentation for the “Authentication” library. Once again, we worked around this problem by using a temporary Amplify Flutter project. That allowed us to copy/paste the configuration and adjust it to our needs.
Environment aware connections
In our current setup, we have at least three environments to test and promote our application. In order to be able to execute and test the Flutter application on all environments without code changes, we needed to make the amplifyconfiguration.dart environment aware:
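In essence, it looks similar to this (a simplified sketch – the API name and variable names are illustrative):

// Values are provided at build time via --dart-define, e.g.:
// flutter build web --dart-define=APPSYNC_ENDPOINT=https://<api-id>.appsync-api.eu-central-1.amazonaws.com/graphql
const appsyncEndpoint = String.fromEnvironment('APPSYNC_ENDPOINT');
const awsRegion = String.fromEnvironment('AWS_REGION', defaultValue: 'eu-central-1');

const amplifyconfig = '''{
  "api": {
    "plugins": {
      "awsAPIPlugin": {
        "speakersApi": {
          "endpointType": "GraphQL",
          "endpoint": "$appsyncEndpoint",
          "region": "$awsRegion",
          "authorizationType": "AMAZON_COGNITO_USER_POOLS"
        }
      }
    }
  }
}''';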
Within our CI/CD workflow we then pass the correct values for these variables, and during “flutter build web” they are baked into the application.
After solving these challenges, using the Amplify Flutter library worked without noteworthy problems or hiccups.
Using the Amplify Flutter Library to authenticate a user with Cognito
The documentation for Amplify Flutter is really good and we decided to also use the Authenticator widget – in the end, this project was born through a hackathon, and we did not have much time to implement the authentication flow ourselves.
In our main.dart we needed to include the Amplify configuration:
Future<void> _configureAmplify() async {
  // Register the API plugin with the generated ModelProvider
  final api = AmplifyAPI(modelProvider: ModelProvider.instance);
  await Amplify.addPlugin(api);

  // Add the Cognito plugin for authentication
  final authPlugin = AmplifyAuthCognito();
  await Amplify.addPlugin(authPlugin);

  try {
    await Amplify.configure(amplifyconfig);
  } on AmplifyAlreadyConfiguredException {
    safePrint(
        'Tried to reconfigure Amplify; this can occur when your app restarts on Android.');
  }
}
and then we only needed to adjust the build method to distinguish between AuthenticatedViews and “normal” views:
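A minimal sketch of what this looks like (using the Authenticator and AuthenticatedView widgets from the amplify_authenticator package – HomeScreen is just a hypothetical screen from our app):

import 'package:amplify_authenticator/amplify_authenticator.dart';
import 'package:flutter/material.dart';

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return Authenticator(
      child: MaterialApp(
        // Screens wrapped in AuthenticatedView require a signed-in user;
        // everything else stays publicly accessible.
        home: AuthenticatedView(
          child: HomeScreen(), // hypothetical screen
        ),
      ),
    );
  }
}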
We would have loved to open source our code, but unfortunately that option is currently not available in Amazon CodeCatalyst. If you’re an experienced Flutter developer and don’t like the code you see above – that’s fine, the four of us have been learning Flutter with this project – approach us and tell us how to do this better! 😉
Using the Amplify Flutter Library to execute AppSync / GraphQL queries
Accessing AppSync was also really easy by following the Amplify Flutter documentation. Here is our code for retrieving events from the backend:
Future<List<EventRow?>> _listEvents() async {
  try {
    // GraphQL query for the event fields we need
    String graphQLDocument = '''query GetEvents {
      listEvents {
        pk
        sk
        title
        description
        tags
        length
      }
    }''';
    var operation = Amplify.API
        .query(request: GraphQLRequest<String>(document: graphQLDocument));

    List<EventRow> events = [];
    var response = await operation.response;
    var data = response.data;
    if (data != null) {
      Map<String, dynamic> userMap = jsonDecode(data);
      print('Query result: ' + data.toString());
      List<dynamic> matches =
          userMap["listEvents"] != null ? userMap["listEvents"] : [];
      matches.forEach((element) {
        if (element != null) {
          // The generated model expects an "id" field; fall back to a
          // placeholder when the backend does not return one
          if (element["id"] == null) {
            element["id"] = "rnd-id";
          }
          var event = Event.fromJson(element);
          events.add(EventRow(event, context));
        }
      });
      return events;
    }
  } on ApiException catch (e) {
    print('Query failed: $e');
  }
  return <EventRow?>[];
}
This uses the previously generated models. All of the queries are authenticated using Cognito authentication – and this is the identity that we can also use in the backend to authorize access. The Amplify documentation is pretty good, hence I will not add more details to this post. Please ask if you have any specific questions.
Wishes for the Amplify SDK for Flutter
We already mentioned most of them: better amplify codegen support (including the option to generate both Flutter and TypeScript models at the same time), and better documentation or support for setting up the Amplify configuration completely without an Amplify backend. Another thing we did not talk about here, but Julian mentions in his blog post, are the AppSync Merged APIs, which are currently not supported – thus we needed to execute amplify codegen for each of our microservices’ schemas and then for the merged API – and then manually bring the models into a usable structure (e.g. copy/pasting from the different ModelProviders).
What’s next for our project
So, what’s next? After the hackathon we are thinking about making this a “kind of” open source, collaborative project. Here we are looking for contributors – please contact us if you are interested. Besides that, Matt covers a good bunch of roadmap items already – and we definitely need someone with more Flutter experience to review our UI code, e.g. to introduce the BLoC pattern for cleaner code, add some styling and expand existing functionalities. And we still need to play the “cross-platform” card – we need a build step to generate iOS and Android versions of our application and need to make sure that the apps land in the app store(s)! Please reach out to us if you would like to get involved!
Amazon CodeCatalyst – the service announced by Amazon in Las Vegas at re:Invent 2022, an integrated DevOps service to empower development teams to develop and deliver software faster – has finally reached “general availability” status. As I have previously outlined, this achievement is very important for Amazon and the CodeCatalyst team. Congratulations to the team for reaching this goal, which I can imagine is not an easy step for this product. The tool touches a lot of very sensitive parts of a software project and I can imagine the security standards being really high.
A huge achievement – thank you everyone in the team for investing in CodeCatalyst and for listening as closely to customer feedback as you do!
What changes did get implemented for GA?
As part of the GA release we see a lot of minor improvements in the user interface as well as color changes. In the last weeks, we have also seen a few “bigger” changes – like the possibility to use Dev Environments for GitHub-based projects. We also got “Graviton-based” execution environments for CI/CD workflows which, according to AWS, should reduce our costs.
It is still hard to track down all of the changes in CodeCatalyst, as there is – to my knowledge – no public or semi-public roadmap. This is one of the things that I’d love to see: for an integrated service that is at the core of the developer experience for teams, any minor change can either improve or destroy the “usage experience”. Teams that invest in adopting a new tool like CodeCatalyst will need to know how changes in workflow, features or user interface can influence their day-to-day activities. Let’s see, maybe the team can share “something” like a “changelog” with us (or even an RFC process like Amplify or AppSync)?
Reached “GA” – so who can start using it now?
As of today, CodeCatalyst is only available in US regions, and this means that it can be adopted mainly by US enterprise customers. CodeCatalyst already gives you the possibility to set up different Spaces for your account, and within a Space you can manage multiple projects. So in theory, CodeCatalyst is “ready to be used” by everyone.
Practically speaking, it is easier to adopt the service for new projects than for existing projects, as there is no real “import” functionality. Yes, you can integrate existing GitHub projects, but that only integrates the source code. Unfortunately that does not make all of the “cool” things available right from the start: existing workflows (CI/CD pipelines) are lost and need to be rebuilt, and issues/tickets are not imported into CodeCatalyst (though they can be made available through the JIRA integration).
I have been regularly using CodeCatalyst (both for imported and “new” projects) – and I really think that the tool already works very well.
The “killer feature” that I see for new projects are the blueprints, which essentially get you started within minutes, e.g. to deploy a SPA application, or to have a “true” CI/CD pipeline for a full-stack application following the DPRA (the AWS Deployment Pipeline Reference Architecture).
Right now I would recommend using CodeCatalyst for any new project that you start, to begin building out your workflows and best practices.
So what do I still need in order to recommend CodeCatalyst for existing projects?
There are a few things that I have already been writing about:
“Import” of existing CI/CD workflows (e.g. GitHub Actions, CDK Pipelines or CodePipelines)
Full import of projects
existing issues from GitHub or JIRA
Git-based projects including their history
Tighter security settings and permissions
Fine-grained roles to allow or forbid access to specific parts of a project
Options to allow or forbid the execution of workflows (or of deployments)
Additional workflow options
Manual approvals are very high on my wish list
Native integration of other AWS services
A question for the readers: What do YOU think that you need to adopt CodeCatalyst?
A big question for the CodeCatalyst team – HOW MANY AWS TEAMS ARE USING CODECATALYST FOR PRODUCTION DEPLOYMENTS TODAY?
Where do I see the potential for CodeCatalyst?
CodeCatalyst is a big bet by AWS. It has big potential to really improve the life of development teams, and these are the main areas where I believe it can outgrow other existing solutions:
Integration of AWS Services / deployments metrics
the true integration with AWS APIs
Integration into “post-deployment” verifications (e.g. auto roll-back after failed CloudWatch metrics)
“At-hand” developer support to improve efficiency
with CodeWhisperer (which recently reached GA), AWS already aims to support developers during the development phase, but with CodeCatalyst AWS can take this to the next level:
AI support during pull request reviews (or automated approvals for PRs – e.g. by including CodeGuru – and automated merges)
AI support during workflow executions (when to approve, when to deploy, when to promote, etc.)
With improvement proposals for workflows if the “AI model” recognizes patterns (in issue workflows or CI/CD workflows)
With automated improvements for existing projects based on blueprints
Best practices change – and so blueprints change – and if the CodeCatalyst team can automatically apply them to existing projects, customers will benefit from it
And last but not least:
I truly believe that every software project should start with a CI/CD pipeline – and with blueprints including a CI/CD workflow that follows the DPRA and other AWS best practices, we can truly make this possible: empower developers to deliver their software projects in minutes, right after starting their project.
Do you see the potential in CodeCatalyst? If you do not see any potential in the tool – why not?
Both Christian and I have been writing about our “Football Match Center” project – and as part of this project we obviously also needed a CI/CD (Continuous Integration and Continuous Deployment) pipeline. Our aim was to integrate the changes that we make regularly, and to see commits to the main branch being directly and automatically deployed to our environments.
I will first try to define some pre-requisites and then talk about learnings and experiences.
What is a mono-repo
A mono-repo is short for “mono repository”: a single Git repository in which different microservices or components are stored together. These can be various services, infrastructure or user interface components, or backend services.
A mono-repo has special requirements when building the CI/CD pipeline.
Expectations for our CI/CD pipeline
For our CI/CD pipeline we wanted to be able to push changes to production quickly and to iterate fast. We wanted to achieve 100% automation for everything required in our project. As we have been writing, we develop this project completely in Amazon CodeCatalyst, and thus the pipeline should also be built using the Workflows in CodeCatalyst.
Going forward we want to ensure that the pipeline also includes all CI/CD best practices as well as security scans and automated integration or end to end tests.
How to structure your pipelines
In this article we will purely focus on the CI/CD pipeline for your “main” or “trunk” branch – the production branch that will be used to deploy your software or product to the production environment.
We will not consider pipelines that should be executed on feature branches or on pull request creation.
The “one-pipeline-to-rule-them-all” approach
In this approach all services are deployed within the same pipeline. This means that there is only a single pipeline for the “main” branch. All services that are independent from each other can be deployed in parallel; services that have a dependency need to be deployed one after another. Dependencies or information from one service to another can be passed through the pipeline using environment variables.
This can lead to longer deployment/execution timelines but ensures that “one commit” to this “main” branch is always deployed completely after a commit. If tests are included in the pipeline, they will need to cover all aspects of the application.
The “context-specific” or “component-specific” approach
Different components or contexts get a different pipeline – which means that e.g. the backend services are deployed in one pipeline and the frontend services in a different pipeline.
In this approach, you automate the deployments per component and need to ensure that, if there are dependencies between the components, the pipeline verifies them. If one component requires information from another, you need to pass these dependencies using other options.
This can lead to faster iteration cycles for specific components, but it increases the complexity of the pipeline dependencies. You also cannot directly see whether a specific commit has been deployed for all components or not.
The “one-pipeline-for-each-service” approach
This is the most decoupled option for building a CI/CD pipeline. Each service (lambda function, backend, microservice) gets its own pipeline. For each service, you can implement service specific steps as part of the pipeline.
One of the main requirements for this is that the services are fully decoupled, otherwise managing dependencies can get very difficult. However, this allows a very fast iteration and development cycle for each microservice, as the pipeline execution for each service is usually very fast.
The pipeline needs to verify the dependencies for each service as it executes the deployment.
Football Match Center – our experiences with building our CI/CD pipeline in Amazon CodeCatalyst
For our project we decided to start with a mono-repo – in our case today, we have a CDK application (written in TypeScript) that describes the required infrastructure and includes Lambda functions (where required), and a user interface which is written in Flutter.
From a deployment perspective, the CDK application needs to be deployed on AWS, and the Flutter application then needs to be deployed to an S3 bucket to be served as a Single Page Application (SPA) behind CloudFront. Obviously this deployment/upload has the prerequisite that the S3 bucket already exists.
How we started
We started, very classically, with the “one-pipeline-to-rule-them-all” approach. We had one single pipeline that was used to deploy all services that are part of the infrastructure.
This pipeline started with “cdk synth” using the “CDK deploy” action in CodeCatalyst and then had other steps that depended on the first one – executing the “flutter build” and later the “UI deploy” (using the S3 deploy action).
In this first version, the CDK deploy step exposed variables/outputs with the name of the S3 bucket and the CloudFront distribution ID, passing them to the next step, where the output of “flutter build” was then uploaded and the CloudFront distribution invalidation request was triggered.
In this approach a commit to the “main” branch always triggered the same pipeline and this pipeline deployed the complete application.
We also used only natively available CodeCatalyst actions for deployment – “cdk deploy” and “build”. For the Flutter build we used a GitHub Action for Flutter.
Experiences and pipeline adjustments
With this approach we had the problem that the Flutter build step took ~8 minutes and blocked a new iteration of changes in the CDK application or the Lambda functions. This slowed down our development cycle.
In addition to that, we found out that there was no possibility to influence the CDK version used by the CDK deploy action – but we wanted to use the version defined in our Projen project, to be able to deploy to development environments from our local machines with the same version as from the CI/CD pipeline.
Both of these findings and experiences brought us to implement some changes to the pipeline:
We separated the UI build from the CDK build
We moved away from using “cdk deploy” and replaced it with a “build” step – to be able to trigger “projen” as part of the pipeline
So now we have two pipelines:
CDK deployment
Triggered on changes to the “cdk-app/*” folder
Executing CDK synth, build and deploy steps – using a normal build step instead of the “cdk deploy” action
We adjusted the CDK app to include CloudFormation exports that export the S3 bucket name and the CloudFront distribution ID (see the sketch after this list)
UI deployment
Triggered on changes to the “ui/*” folder
Reads the values for the S3 bucket and the CloudFront distribution ID from the CloudFormation exports using the AWS CLI
Executing the Flutter build steps and the S3 deploy action
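To sketch the two halves (simplified – assuming uiBucket and distribution are the S3 bucket and CloudFront distribution constructs, and the export names are illustrative), in the CDK app we export the values:

import { CfnOutput } from 'aws-cdk-lib';

// Inside the stack: expose the bucket name and the distribution ID
// as CloudFormation exports for the UI pipeline
new CfnOutput(this, 'UiBucketName', {
  value: uiBucket.bucketName,
  exportName: 'fmc-ui-bucket-name',
});
new CfnOutput(this, 'UiDistributionId', {
  value: distribution.distributionId,
  exportName: 'fmc-ui-distribution-id',
});

The UI pipeline then reads them back before the upload, e.g. with aws cloudformation list-exports --query "Exports[?Name=='fmc-ui-bucket-name'].Value" --output text.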
These changes resulted in faster iterations for the development cycle of the CDK app and allowed us to decouple the backend from the UI part. We were also able to pin the CDK version to the version we selected in Projen.
In our project we have chosen the “context-specific” approach for the pipeline.
My recommendations for building CI/CD pipelines for a mono-repo
Our CI/CD pipeline is not perfect yet and we’re yet to add some important things to our pipeline.
From the experiences we have made so far, I am still not convinced that our “context-specific” approach is the right path.
As of writing this post in early April 2023 I’m inclined to move towards a model where we combine the “context specific” and the “one-pipeline-to-rule-them-all” approach: context-specific for “lower”, non production environments and then a single pipeline that does the promotion to our production environment.
Today we do not yet have a production environment, so we did not answer that question yet 🙂
How do you solve this challenge around building CI/CD pipelines for mono-repos?
At re:Invent 2022, as usual, different new AWS services and functionalities were announced in preview. Now, at the beginning of April 2023, a few of them have already reached “General Availability” (GA) status – Application Composer (in early March), VPC Lattice (in late March). My favourite new service, Amazon CodeCatalyst, has not yet reached this goal – but I have a feeling that now is the right time to think about what and when we can expect for this status.
You wonder what CodeCatalyst is? Watch this video on my YouTube channel or read my two initial posts about it.
Why is reaching the “GA” milestone so important?
Before starting with my assumptions on what we can expect for GA, let’s clarify why reaching this milestone is so important. Being “in Preview” can mean a lot of different things. In a lot of organizations it usually translates to “limited availability” – a service not being available in all regions, or not being reliable or scalable. For other organizations, it means that specific aspects of the product can be immature or unreliable. It can also mean that bigger API changes are yet to be implemented, or that security guardrails are missing.
In general, this can be seen as a “beta” offering which is not appropriate for production workloads.
Because of these reasons and maybe others, a lot of organizations (especially US-based ones) do not allow using or adopting services that are in “Preview”.
For all of my experiences, tests, videos and projects, I have so far been able to stay on the free tier. And I assume that this will also be true for most of my readers: you can get a long way using the Free Tier that Amazon CodeCatalyst offers today.
So that’s another big reason for AWS to push this service out of “Preview”: it gives organizations that are forbidden to use the service in “Preview” the possibility to start using and adopting it – and with that, Amazon can start earning money with the service, which until now might have been difficult.
And as we know, AWS tries to “work backwards” from customer requirements, and the early usage of CodeCatalyst will drive further investments into the service.
What to expect for GA of CodeCatalyst?
Simple: Nothing big – most probably only regional rollout.
Personally, I do not expect any major new features for the service, as the team has been constantly releasing new features and functionalities on a regular cadence. There was simply no more time to work on bigger features while preparing the “General Availability” (GA).
What the CodeCatalyst team has already delivered until today…
Let’s look at what has been added to CodeCatalyst since its official release in December 2022:
Additional Reporting auto-discovery
Change Tracking – the possibility to see which changes have been deployed to a certain environment
Additional native workflow actions and improvements, e.g.
a fix for the CDK action that allows defining the “workpath” of a CDK app
Additional native actions
Linked issues to Pull Requests – you are now able to link issues to a pull request
UX improvements
Log files more accessible in the UI – at the beginning you were not able to make the log view larger; now this is possible
This is not a complete list, but the things that I personally noticed and that I liked to see.
So…when is “the date”?
Hard to guess, but I would expect “soon”. Ideally right before a month starts, which will make the billing cycle easier 🙂
So I would guess “end of April”, which would bring the service right in time for the Berlin Summit (3rd of May).
Next steps for CodeCatalyst
In my last posts I have already communicated my thoughts and the features that I would love to see. But what will AWS implement?
Given that reaching “GA” status opens the way to “enterprise clients”, I would expect one of the first features to be Single Sign-On functionality, maybe with an integration to Okta, Ping, Azure Active Directory or other existing IdPs.
In addition to that, I believe that the user interface needs to get some tweaks to streamline the navigation and workflow – that’s something that I personally experience every day: not knowing when and where to click to get to the right place. I also think that additional service integrations will be added – e.g. Step Functions or SNS, maybe SQS – see also my post about sending notifications from workflows.
And then there is one last thing which has gotten only limited attention so far: APIs and CLI integrations that can be used – so I would expect a major update there.
I’m really looking forward to seeing CodeCatalyst reach GA – I’ve had various conversations with the team in the last months and I know that they have a true vision to make CodeCatalyst successful as a truly AWS-integrated and fully functional DevOps tool.
Are there features you are missing? Please let me know and I will forward them to the team.
This article starts at the very beginning of my own, personal story to the cloud and to where I am today in my career: back in 2015, when I barely knew that something as big as “AWS” existed.
Of course, being a tech nerd since I started my career, I knew what “AWS” was, and I also had a glimpse of its tremendous power and opportunities, but I was not aware of the details and of the possibilities I would see.
How we started
At that time, my team and I had built out a lift-and-shift solution on EC2 instances, where our product was deployed manually. We aimed to grow our business, but we knew that this would not be possible without automation. Our product had different automation requirements, and we automated these by running the required jobs through the Windows Task Scheduler.
Now, as we decided to offer our service to additional customers, we needed to find further possibilities for automation and better operational support.
The first thing that we did was to move away from manually provisioning EC2 instances towards using CloudFormation for bringing up the instances and all of the required infrastructure (VPC, subnets, load balancers, etc.). This already helped us a lot towards being able to deliver our solution faster. But we still had those Windows tasks, which needed to be set up on the different instances. And this is where we looked at additional AWS services to replace these Windows tasks with other possibilities.
Adding Serverless capabilities to the mix
At that time, we looked at using AWS Lambda and Step Functions – which were a “brand new thing” back then – to automate and orchestrate our workflows. With Step Functions being really new, we recorded a “This is my architecture” video at re:Invent 2018. To be honest, when we started to look at Lambda and Step Functions, I personally was not very convinced. Coming from a Windows / Java background, moving orchestration capabilities “outside” of the “server” (= EC2 instance) felt wrong, as I had not thought about orchestrated workloads that could run across multiple infrastructure components before.
Through Step Functions, we orchestrated retrieving data, starting a new EC2 instance, automatically installing our software on it and then running the required workload. AWS Lambda helped us in this case to start EC2 instances programmatically. At the same time, Step Functions gave us the possibility to get an overview of the current status of the executions through the AWS Console. The integration with CloudWatch, which was already available at that time, allowed us to implement alarms and enabled monitoring of execution times.
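To give an idea of the Lambda part, here is a simplified sketch (not our original code – the AMI ID and instance type are placeholders):

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Start a fresh worker instance for the workload described in the event;
    # Step Functions receives the instance id for the following states
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: image with our software
        InstanceType="m5.large",          # placeholder instance type
        MinCount=1,
        MaxCount=1,
    )
    return {"instanceId": response["Instances"][0]["InstanceId"]}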
During the process of testing and implementing this orchestration, we regularly hit new obstacles – e.g. a specific instance type not being available in an availability zone, or a different error while reading data from S3. We often thought about giving up on our approach, but moved on after seeing the benefits of automation and being less dependent on a specific EC2 instance.
Orchestrating workflows using AWS Step Functions is way easier today than at the time I was part of this project. In 2017, there were only minimal possibilities for direct integrations with other AWS services from AWS Step Functions (like Lambda). Today, AWS Step Functions offers more than 200 service integrations, “standard” workflows (which are quite expensive) and “express” workflows.
I have been using Step Functions in different projects lately, including my project around building my own online & mobile game using serverless technologies at pegasus-galaxy.net.
How do you orchestrate your serverless workflows?
What are your experiences with AWS Step Functions?
As Amazon CodeCatalyst is still in preview, it has only limited integration possibilities with other AWS services or external tools. Sending notifications from a workflow execution is something that I believe is critical for a CI/CD system – and as I focus on CI/CD at the moment, I’ll focus on notifications from Workflows in this article.
What kind of notifications do I need or expect?
As a user of a CI/CD and Workflow tool there are different levels of notifications that I would like to receive:
Start / End and Status of Workflow execution
State / Stage transitions (for longer running workflows)
Approvals (if required)
In addition to that, based on the context of the notification I would like to get context-specific information:
a) For the “Start” event I would like to know who or which trigger started the workflow, which branch and version it is running on, and which project and workflow have been triggered. If possible, getting the expected execution time / finish time would be good.
b) For the “End” event I would like to know how long the execution took and whether it was successful or not. I would also like to know if artifacts have been created or if deployments have been done. If the “End” is due to a failure, I would love to know the failure reason (e.g. tests failed, deployment failed, …).
c) For the state transitions I’d love to know the “time since started” and the “expected completion time”. I would also, obviously, like to know the state that has been completed and the one that will now be started.
d) For approvals I’d love to get the information about the approval request and all required information (commit ID, branch) to perform the approval.
What does CodeCatalyst Support today?
Right now CodeCatalyst allows you to set up notifications to Slack. Please see details on how to set this up here. These notifications are also minimal right now:
In Slack this looks like this:
How can I enhance the notification possibilities?
Luckily, one of the “core actions” is the possibility to trigger a Lambda function, and this is what we are going to use here to trigger advanced notifications using Amazon SNS. In our example we are going to use this to send an email to a specific address, but you can also use any other destination supported by SNS, like SMS or AWS ChatBot.
Setting up pre-requisites
Unfortunately we will need to set up an SNS topic and a Lambda function in a dedicated AWS account in order to use these advanced notifications. This means that we are “breaking” the concept of CodeCatalyst not requiring access to the AWS Console, but this is the only way that I have found so far to send additional notifications.
Ideally we would set up the SNS topic and the Lambda function using CDK, but that increases the complexity of the workflow and of the setup, and because of that I’m not including it in this blog post.
Setting up the SNS topic
Please create an SNS topic through the console, following the AWS documentation. We assume the topic to be in “eu-central-1” and its name to be “codecatalyst-workflow-topic”.
You can follow this blog post to manually set up the Lambda function through the AWS console; please ensure you give the Lambda function permission to publish to the SNS topic.
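The required code, using Python, will look similar to this minimal sketch (the subject line and return value are illustrative):

import json
import boto3

sns = boto3.client("sns")
# Adjust the topic_arn to match the topic that you just created
topic_arn = "arn:aws:sns:eu-central-1:<accountId>:codecatalyst-workflow-topic"

def lambda_handler(event, context):
    # Forward the payload passed by the CodeCatalyst workflow to the SNS topic
    sns.publish(
        TopicArn=topic_arn,
        Subject="CodeCatalyst workflow notification",
        Message=json.dumps(event, indent=2),
    )
    return {"statusCode": 200}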
Obviously the same can be achieved using TypeScript, Go or any other supported runtime. Please adjust the topic_arn to match the topic that you just created. After creation, this Lambda function will have an ARN which should look similar to this: arn:aws:lambda:eu-central-1:<accountId>:function:send-sns-notification-python
We will need this ARN when setting up the notification in our Workflow.
Integration into the workflow
Integrating this Lambda function into a workflow is easy:
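A simplified sketch of the action definition (the environment, connection and payload values are illustrative):

  Invoke_Notification_Lambda:
    Identifier: aws/lambda-invoke@v1
    Environment:
      Name: notifications
      Connections:
        - Name: my-aws-account
          Role: CodeCatalystWorkflowDevelopmentRole
    Configuration:
      Function: arn:aws:lambda:eu-central-1:<accountId>:function:send-sns-notification-python
      AWSRegion: eu-central-1
      RequestPayload: '{"workflow": "main-branch-workflow", "status": "SUCCEEDED"}'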
As you can see, we are integrating an “aws/lambda-invoke@v1” action which then points to the lambda function that we just created.
In the “RequestPayload” we are passing some information to the Lambda function, which will then be passed on to the SNS topic as part of the message. This is how the message will look when received as an email:
Missing information and next steps for enhanced notifications
As you can see, with this option we are able to send notifications from CodeCatalyst to multiple targets, including email.
What we are missing – and I am not sure whether it’s possible or not – is all of the “metadata” of the workflow execution, like:
Workflow-Name
State-Name
Project Name and additional information
In the documentation I was not able to find the environment variables available for this information… If you do have any ideas on how to access this metadata, please let me know!
In the last weeks – or rather months – I’ve been working together with Christian, also an AWS Community Builder, on our project named “Football Match Center”. Christian has already been writing a lot about our project on LinkedIn:
Today, I want to put the attention on our chosen UI framework and the way that we are connecting from the UI to the backend. Our backend in this project is a GraphQL API endpoint hosted on AWS AppSync.
Building our UI in Flutter
Since last year, Amplify Flutter includes support for Web and Desktop. As we are looking to reach users both on mobile and on the desktop, choosing a cross-platform development tool like Flutter seemed an obvious choice. Christian and I are a small team, and we want to focus on building a simple UI quickly, without the need to implement it for multiple platforms – and Flutter allows exactly that.
Flutter provides easily extendable widgets that can be used on all major platforms.
Connecting to our GraphQL backend
Our project is not based on an Amplify backend, but on AWS infrastructure written in AWS CDK. This made it rather difficult to use the Amplify Flutter SDK, as most of the documentation and blog posts expect you to connect the Amplify SDK to an Amplify backend (which can then include a GraphQL API).
But that’s not the only thing that made it difficult – I also had very little experience with Amplify or the Amplify SDK when starting to work on the connection.
Using the Flutter SDK for Amplify, we will be connecting to our Cognito instance for authentication and to our existing GraphQL endpoint. In this post I am going to look at the GraphQL connection, not at the integration of Cognito as an authentication endpoint.
Setting up the Amplify SDK for Flutter can be done through the Amplify CLI if you are starting a new project.
This will then also create the required amplifyconfiguration.dart and some example code through amplify init.
You can then set up the Amplify SDK for Flutter from within your main widget using this code:
import 'package:amplify_flutter/amplify_flutter.dart';
import 'package:amplify_api/amplify_api.dart';
import 'amplifyconfiguration.dart';
import 'models/ModelProvider.dart';
….
Future<void> _configureAmplify() async {
  final api = AmplifyAPI(modelProvider: ModelProvider.instance);
  await Amplify.addPlugin(api);
  // Amplify.configure should only be called once
  try {
    await Amplify.configure(amplifyconfig);
  } on AmplifyAlreadyConfiguredException {
    safePrint(
        'Tried to reconfigure Amplify; this can occur when your app restarts on Android.');
  }
}
While this looks easy when reading the documentation (and a lot of very good blog posts), it was rather difficult for me, as I was not able to use the amplify init command. Finding out the structure of the “amplifyconfiguration.dart” and the implementation of the “ModelProvider” were my main challenges.
Lately, the related documentation has been updated and it is now easier to work with existing resources.
The Amplify Configuration file
The Amplify configuration (amplifyconfiguration.dart) configures all of the required Amplify plugins. In our implementation we started with the GraphQL backend:
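The structure looks similar to this (a shortened sketch – the endpoint, the API key and the API name are placeholders):

const amplifyconfig = '''{
  "api": {
    "plugins": {
      "awsAPIPlugin": {
        "matchcenterApi": {
          "endpointType": "GraphQL",
          "endpoint": "https://<api-id>.appsync-api.eu-central-1.amazonaws.com/graphql",
          "region": "eu-central-1",
          "authorizationType": "API_KEY",
          "apiKey": "da2-<api-key>"
        }
      }
    }
  }
}''';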
This tells the Amplify SDK to talk to a specific API endpoint when “Amplify.API” is invoked. As far as I understand this GitHub issue, right now only one API can be queried from a specific Amplify instance.
When using the apiKey to authenticate with the API, we will need to regularly update the Flutter application, as the default API key expires after 7 days.
This documentation was not available when we started to work on the project and I have the suspicion that Salih made this happen 🙂 (if not, still THANKS for the help you gave me! 🙂)
The ModelProvider
The ModelProvider is a generated file, which you can generate from an existing GraphQL API. If you are using a schema that is not managed by Amplify, you will need to use “amplify codegen” based on an existing schema file.
The command expects a schema.graphql to be available in the “root” folder of the Amplify Flutter project. If you execute “amplify codegen models”, the required Dart files will be generated in the “lib/models” directory.
The result should be a file similar to this one:
import 'package:amplify_core/amplify_core.dart';
import 'Match.dart';
import 'PaginatedMatches.dart';
import 'PaginatedTeams.dart';
import 'Team.dart';
export 'Match.dart';
export 'PaginatedMatches.dart';
export 'PaginatedTeams.dart';
export 'Team.dart';
class ModelProvider implements ModelProviderInterface {
@override
String version = "4ba35f5f4a47ee16223f0e1f4adace8d";
@override
List<ModelSchema> modelSchemas = [Match.schema, PaginatedMatches.schema, PaginatedTeams.schema, Team.schema];
static final ModelProvider _instance = ModelProvider();
@override
List<ModelSchema> customTypeSchemas = [];
static ModelProvider get instance => _instance;
ModelType getModelTypeByModelName(String modelName) {
switch(modelName) {
case "Match":
return Match.classType;
case "PaginatedMatches":
return PaginatedMatches.classType;
case "PaginatedTeams":
return PaginatedTeams.classType;
case "Team":
return Team.classType;
default:
throw Exception("Failed to find model in model provider for model name: " + modelName);
}
}
}
Querying our GraphQL API
Now that we have been able to connect to our GraphQL AWS AppSync endpoint, we can start querying data.
Luckily, thanks to the preparations we made, the Amplify for Flutter SDK provides convenience methods that return typed data structures that we can directly work with.
You only need to write the GraphQL query that you are interested in, and you can directly read data from the endpoint. In my example below, I’m creating a Flutter widget out of the returned elements and then adding them to a list of widgets that I can display in a Column widget:
Future<List<TeamWidget>> _getMatchesByCountry(String country) async {
  List<TeamWidget> teamsWidgetList = [];
  try {
    String graphQLDocument = '''query ListTeams {
      getTeamsByCountry(country: "${country}") {
        nextToken
        teams {
          PK
          PrimaryColor
          SK
          SecondaryColor
          TeamName
        }
      }
    }''';
    var operation = Amplify.API
        .query(request: GraphQLRequest<String>(document: graphQLDocument));
    var response = await operation.response;
    var data = response.data;
    if (data != null) {
      Map<String, dynamic> userMap = jsonDecode(data);
      List<dynamic> matches = userMap["getTeamsByCountry"]["teams"];
      matches.forEach((element) {
        if (element != null) {
          // The generated model expects an "id" field; inject a placeholder
          // when the backend does not return one
          if (element["id"] == null) {
            element["id"] = "rnd-id";
          }
          var team = Team.fromJson(element);
          teamsWidgetList.add(TeamWidget(team));
        }
      });
    }
  } on ApiException catch (e) {
    print('Query failed: $e');
  }
  return teamsWidgetList;
}
Just today, we merged a feature that adds a “subscription” to our AppSync endpoint – as a next step we plan to integrate this within the Amplify Flutter application, which will then allow us to implement notifications to the end users. Unfortunately, the Amplify SDK for Flutter does not yet support in-app messaging as it does for JavaScript.
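A sketch of what that integration could look like with the API category of the Amplify SDK (the subscription name and fields are made up):

Stream<GraphQLResponse<String>> subscribeToMatchUpdates() {
  const graphQLDocument = '''subscription OnMatchUpdated {
    onMatchUpdated {
      PK
      SK
      TeamName
    }
  }''';

  // Returns a stream of GraphQL responses; listening to it allows us
  // to push notifications to the user
  return Amplify.API.subscribe(
    GraphQLRequest<String>(document: graphQLDocument),
    onEstablished: () => safePrint('Subscription established'),
  );
}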
What YOU learned – and what I learned
Through this blog post you have learned how to connect a Flutter application with Amplify using the Flutter SDK for Amplify. You have also gotten to know our project, the “Football Match Center” – and you’ve seen some code to make your start easier when talking to a GraphQL (AppSync) backend.
I have learned to work with the Amplify for Flutter SDK and also how code generators can help you to speed up your implementation. I’ve also gained experiences in accessing data from AppSync and on working with the returned data in Flutter.
Unfortunately, I have also found out that with the Flutter SDK for Amplify I currently cannot implement the planned in-app notifications that Christian and I wanted to build for our Football Match Center, to notify users about upcoming or recently completed games.
At re:Invent 2022 AWS announced Amazon CodeCatalyst and as you might have read on my blog or seen on my YouTube Channel I have been playing around with the service a lot. A few days ago, Brian asked me a few interesting questions, one of them being:
What’s the diff between CodeCatalyst and AppComposer?
Lately we had a Community Builders session with the Amazon CodeCatalyst team, and similar questions came up regarding comparing CodeCatalyst with other, already existing services.
And to be honest, the number of AWS services related to building, managing or deploying software projects on AWS has grown a lot in the last years, and it gets difficult to keep an overview of how these services play together and which tool has which functionality.
In this post we are aiming to compare and place CodeCatalyst in relation to other (new or already existing) AWS services. We are also going to look at functionality that is currently available in other services but missing in CodeCatalyst.
Please be aware that these are all our personal opinions and based on our own understanding – some of it being assumptions.
This post was Co-Authored with AWS Community Hero Brian Tarbox – Thanks for your support!
AWS Services that we are going to compare CodeCatalyst with:
Amplify
Amplify was released at re:Invent 2018 and has been improved gradually since then.
Amplify is a complete solution that lets frontend web and mobile developers easily build, ship, and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as use cases evolve.
With that, AWS positions Amplify as a service that is able to reduce the heavy lifting for web and mobile developers who want to get started on AWS. AWS has extended Amplify into a service that offers nearly all building blocks required as part of your SDLC process. It does not offer source code repositories, but it does offer CI/CD capabilities. You are able to configure the CI/CD pipeline and also provide your own build images. With the release of Amplify Studio in 2021, AWS extended the capabilities to include a “No-Code/Low-Code” capability that allows rapid prototyping for web and mobile applications. The target audience for Amplify are front-end and mobile developers with no or minimal experience on AWS.
Application Composer
This is a new AWS service announced at re:Invent 2022, mainly focused on “rapid prototyping”: it helps you quickly “paint” serverless applications – you build out your architecture visually, and Application Composer creates the required starting code (CloudFormation, but also Lambda code) in the background. As output you get a project in code that you can commit to a Git repository or deploy to AWS. Application Composer enables serverless developers to quickly prototype serverless applications and convert them into code that can then be used as a starting point for a project. Application Composer does not provide source code management or CI/CD capabilities.
The service, which reached GA on March 8th, 2023, targets developers starting new serverless projects who quickly want to get both an architecture diagram and a starting point for further development.
App Runner
This is an AWS service announced in 2021 that can be used to build, deploy, and run web applications based on containerized workloads. It allows you to stay focused on your application, with the service taking responsibility for provisioning and hosting it. It also takes care of creating a container from your source code. You can connect App Runner either to your source code management system or to a container registry.
Beanstalk
This is one of the “ancient” AWS services – it was announced in 2011 and has been around ever since. In the community I have heard more than once that “Beanstalk is dead” and no longer actively developed – but it still works and can be used to provision your web applications. At the same time, you retain access to the infrastructure that is required to host your service. The “message” is similar to App Runner: it helps developers focus on writing business code and ignore the deployment strategy. Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications. In order to use Beanstalk, you need to upload a source bundle – it is not possible to connect Beanstalk directly to a Git repository, but you can update the source bundle automatically using APIs.
The Code* services (CodeCommit, CodeBuild, CodePipeline, CodeStar, CodeArtifact)
We treat these services as one group, as they belong together from a strategic point of view. They have been around for a few years, and the teams that built them are now involved in CodeCatalyst, which partly uses them “under the hood”. CodeCommit is a managed Git hosting service, CodeBuild is a managed build system, and CodeStar is a project management tool. CodePipeline allows combining multiple CodeBuild steps to form a pipeline – CDK Pipelines integrate with CodePipeline today. With CodeArtifact, users are able to store artifacts and software packages.
All of these services are tied to a specific AWS account and live within the AWS Console. This has forced organizations and AWS customers to create “toolchain accounts” that centrally host these services. These tools should be considered building blocks rather than a full solution.
CodeCatalyst
As we are comparing the other services with CodeCatalyst, we also need to define what CodeCatalyst is: a new AWS service, announced at re:Invent 2022, that covers the full lifecycle of product development on AWS, from source code management up to deployment. It is an “all-in-one” solution to help you build software on AWS efficiently: you can manage your planning and issue tracking in it, as well as your source code and your CI/CD workflows. I have a few introduction videos available on YouTube. CodeCatalyst lives “outside” the AWS Console, which means you do not need to be logged in to an AWS account to use it – and it can access multiple AWS accounts through an integrated authorization process.
Proton
This is an AWS service announced in 2020. AWS describes Proton as a service that allows central teams to build and provide shared infrastructure components while maintaining the integrity of the deployed infrastructure. With that, the tool is focused on infrastructure provisioning (i.e. deployment) pipelines. Proton allows a central “platform team” to provide templates that application teams can use with only minor changes or configuration.
Which problem(s) does CodeCatalyst address?
CodeCatalyst addresses the needs of developers and development teams that want to cover all parts of the product life cycle – or only some of them – with a tool natively built on AWS. It can be used for issue management and planning as well as source code management, and it has natively built CI/CD capabilities with workflows for Continuous Integration and Deployment. CodeCatalyst offers an opinionated solution for addressing software development best practices on AWS. It also allows online editing of source code with Dev Environments and supports management with reports on the resources and workflows handled within CodeCatalyst. With Blueprints, it allows developers to quickly start a new project and reduces the time needed to get going. Overall, it can be seen as an opinionated approach to development.
So, how does CodeCatalyst relate to the other services?
Out of the six services we looked at, a few cannot, at first glance, compete or be compared with CodeCatalyst, as they target a different audience or address different problems than CodeCatalyst:
Proton – does not help with building or deploying code; it is targeted towards “composing” an application from various pieces. As such, it might be part of a solution, but not the whole solution
Application Composer – while this service can be used for rapid prototyping of serverless architectures, it does not offer source code management or deployment of the built architecture. I hope that we will see Application Composer as a new option for starting a new project in CodeCatalyst going forward
Beanstalk – is not a “developer focused” tool, as it comes with pre-built environments and CI/CD pipelines and expects you to manage the source code externally
Based on this, the services we want to look at in more detail are Amplify and the Code* services.
While Amplify allows you to build CI/CD pipelines and manage deployments for both the front-end and back-end components of an application, the pipelines and deployments are limited to the services supported by Amplify and to the capabilities of the automatically generated CI/CD pipeline – there is not much flexibility to adjust them. In addition to that, Amplify does not allow you to store your source code or manage your software project: it has no built-in issue management or tracking system.
With Amplify Studio and the corresponding tutorials, you can get started on specific use cases very quickly. This is not as flexible as CodeCatalyst Blueprints, but it gets you going fast. Amplify Studio is awesome as a low-code, getting-you-started tool – it allows you to quickly build full-stack applications through a user interface, and for that use case it is definitely better than CodeCatalyst. At the Berlin Summit in 2022 I attended a live demo by Rene Brandle and was amazed by the functionality.
Amplify Studio lives “outside” the AWS Console in the same way as CodeCatalyst, and it also requires an AWS account to be connected for deployments. However, each Amplify project can be connected to only one AWS account – CodeCatalyst is more flexible here.
Still, Amplify misses a lot of what is required from an end-to-end “DevOps” tool that manages all processes and requirements of an agile software development project.
Comparing CodeCatalyst to the Code* services (CodePipeline / CodeCommit / CodeBuild / CodeStar / CodeArtifact) feels a bit like comparing a Tesla Model 3 with Karl Benz’ Patent-Motorwagen 🙂
The Code* services feel complex to use, although, combined, they provide functionality similar to CodeCatalyst. They are “building blocks” that you as a developer can use to assemble “your own version” of an integrated developer toolchain.
In addition to that, they live in a specific AWS account, as mentioned above, which makes access handling complicated and requires an IAM user that is allowed to access them.
The user interface and the possible integrations are minimal and feel “developer unfriendly”. On the plus side, CodeCommit has the CodeGuru Reviewer integration, which is currently not available in CodeCatalyst.
CodeBuild (and with it CodePipeline) is very slow at bringing up fresh build instances – starting a new pipeline execution can take minutes, which is bad for developer productivity. This is something CodeCatalyst addresses with its Lambda execution environment.
Summary, takeaways and our wishes
Based on the messaging, blog posts, and announcements from AWS around CodeCatalyst, we believe that the service today aims to offer an opinionated tool for development teams that want to practice “You build it, you run it” – in line with the DevOps mentality. It also means that AWS has the courage to not only hand builders a tool but also “influence” what they build, through Blueprints that include best practices. The vision for CodeCatalyst, however, could be even more than that: a tool, powered by AI capabilities, that empowers builders to efficiently develop and build high-quality software by reducing manual work and effort through automation.
However, CodeCatalyst is not there yet, and it is going to take some time and effort from the team to reach this vision.
Wishes for Developer Tooling in General
This post has shown that AWS offers a lot of different options for handling software projects on AWS. We made clear that the available tools serve different purposes and target different audiences: while Amplify focuses on web and mobile developers and Application Composer targets serverless developers, CodeCatalyst takes a more generalist approach.
Overall, the “Developer Tools” landscape on AWS needs:
More and better guidance on WHEN to use WHICH service
Better “HOW TOs” instead of hard-to-read documentation or specifications
Wishes for CodeCatalyst
Compiling a wish list for CodeCatalyst could be a big effort, as there are still a lot of features we would like to see. We’ll touch on a few of them here:
General
Single Sign-On without Builder ID (Okta, Ping, etc.)
Support for other regions
Allow “Open Source” projects
Issues / Tracking
Epics
Roadmap / Timeline
Integration with Workflows & Automation
Source
Import projects from Git providers
Automations on Pull Request
CodeGuru
Security Review
Best Practice Review
Support of pre-commit hooks when editing online
Verifications, linting, etc. automated
Workflows
More triggers (e.g. by PR, by schedule, by API)
Conditional Steps
Manual approvals
App Store / Play Store deploy actions
Projen Action
Better integration with AWS services
Import existing CodePipelines
Pipeline as Code – a CDK Pipelines-like option to create workflows from code
What wishes do YOU have for CodeCatalyst? What is your “most hated” or “most loved” feature today?