re:Invent 2022 is about to start in Las Vegas and I am really looking forward to all of the sessions, the gamified learning possibilities and all of the other things that make the conference great. But more than that, AWS re:Invent this year feels for me like “coming home” instead of “going to a conference”.
After that a lot of things changed for me and I would like to share some of them with you!
I got accepted into the Community Builders program, and with that I gained access to great networking opportunities, sessions, talks and events, as well as some information that is not publicly available. Next, I introduced myself to a bunch of folks and quickly interacted and connected with other Builders around the globe. I saw the Call for Papers (CfP) for CDK Day, and we had a great panel discussion with a few Builders – Danielle, Saima, Christian and Matt – about “The local cloud” at CDK Day 2022. Afterwards I attended the AWS Summit in Berlin and got to know a lot of great people of the AWS Community DACH in person (just to mention a few: Linda, Markus, Thorsten, Aaron, Stefan, Nora, Henning…). This summit made me understand how important community work is for me and how much I gain from talking and networking – re:Meet, as Christian recently said.
I kept enjoying conversations with a lot of Builders, getting to know many of them better. Later in the year, I kicked off the “AWS UserGroup Bergstraße” and we started having regular meetups. I also joined the “AWS Community Day DACH” organizational group and helped to found the “AWS Community Support Organization” for the AWS DACH community…and was able to give a presentation at the AWS Community Day 2022 in Dresden. I met more great members of the AWS community, got to know them in person and spent time with them.
As part of the Community Builders program there was also a CfP for talks & sessions at re:Invent 2022 – I submitted four talks and, as I already mentioned, I was fortunate: one of them was selected as a DevChat for this year’s re:Invent.
At that time I decided that I would attend re:Invent in person, to get the chance to give the talk and share my experiences. Back then I expected to take along a few of my close friends and colleagues – I did not know that none of them would be joining me in Las Vegas. Instead, I’m on my way to re:Invent and no one else from my team or organization is attending.
And still, I’m coming home – and I have the feeling it’s going to be the best re:Invent I have ever attended.
I’ll be meeting a lot of Community Builders I have never seen before – even on the flight today there were a few people I knew “from the community” and from my other AWS engagements (Tobias, Oliver, Henning, …). On Sunday, we’re going to be doing the first-ever pre:Invent Community Builders hiking event with more than 10 Builders I’ve never met before. Afterwards, we will be meeting up with more than 20 Builders for a self-organized dinner event.
And then, on Monday, the conference will start – where I will feel like part of the “big AWS builders family” that Werner was talking about in his keynote a few years ago.
The whole week is filled with meetings, 1:1s, sessions – and dinners, parties, …
These things make me feel at home in Las Vegas!
I’m looking forward to meeting all of you in person to talk, learn and have fun. Reach out if you want to meet me “live”. 😊
I couldn’t be more thankful to be part of this great community.
As I’ve shared before, I am very fortunate this year and will be giving a DevChat at the biggest AWS conference in the world – re:Invent 2022 in Las Vegas.
AWS offers different tools for all parts of your CI/CD lifecycle. In this post I am going to cover the set of Code* tools that are available on AWS today – and will share my thoughts about what these tools are missing.
As part of the preparation for the talk, and as part of both my private project (code-name: MPAGA) and my main job @ FICO, I have been researching and learning a lot about CI/CD (Continuous Integration and Continuous Deployment) – and, for the private projects, especially around CI/CD that natively runs on AWS. I’ve found that not everything these tools offer today is perfect and wanted to share some ideas on what could be improved. Where possible or applicable, I will also propose workarounds or alternatives.
We will look at a few of the tools in the order of the “product lifecycle”:
1. Code
2. Build/Test
3. Deploy
4. Release
Tools that are part of the “Code” phase
For the purpose of this post we are going to focus on tools that are natively offered by AWS as already mentioned and part of your CI/CD pipeline.
AWS CodeStar – Integration of projects
AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS and provides a unified interface for your project. It offers different templates that you can choose from to start your project quickly.
It allows you to manage your team and its permissions, and it integrates with your existing JIRA for issue management. It also integrates with your IDE (or with Cloud9), and you can connect an existing GitHub repository.
AWS CodeCommit – hosted Git
AWS CodeCommit is a managed service for Git (just like Bitbucket, GitHub, GitLab, …). It provides a hosted “git” environment that is encrypted at rest and can be accessed using standard Git clients.
Amazon CodeGuru – intelligent code reviews
Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve code quality and identify an application’s most expensive lines of code. You can integrate CodeGuru into your existing software development workflow to automate code reviews during development and to continuously monitor application performance in production. It provides recommendations and visual clues on how to improve code quality and application performance, and how to reduce overall cost.
Tools that are part of the “Build” or “Test” phase
AWS CodePipeline – Tool to manage your CI/CD pipeline
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
AWS CodeBuild – Build tool based on containers
AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages.
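CodeBuild is driven by a buildspec file in your repository. As a minimal illustration – the commands and artifact paths below are made-up placeholders for a Node.js project, not taken from any real pipeline – a buildspec could look roughly like this:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 16        # request a Node.js runtime in the standard image
  build:
    commands:
      - npm ci          # install dependencies
      - npm test        # run the test suite
      - npm run build   # produce the deployable package

artifacts:
  files:
    - "dist/**/*"       # everything under dist/ becomes the build artifact
```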
AWS CodeArtifact – artifact storage
AWS CodeArtifact allows you to store artifacts using popular package managers and build tools like Maven, Gradle, npm, Yarn, Twine, pip, and NuGet.
Tools that are part of the “Deploy” phase
AWS CodeDeploy – automated deployments
AWS CodeDeploy is a fully managed deployment service that automates software deployments to various compute services, such as Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS), AWS Lambda, and your on-premises servers.
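For an ECS blue/green deployment, for example, CodeDeploy is configured through an AppSpec file. A minimal sketch (the container name and port are placeholders):

```yaml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "<TASK_DEFINITION>"   # placeholder, injected by the pipeline
        LoadBalancerInfo:
          ContainerName: "web"                # placeholder container name
          ContainerPort: 80
```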
AWS AppConfig – feature flags and configuration
AWS AppConfig makes it easy for customers to quickly and safely configure, validate, and deploy feature flags and application configuration.
I’ve been able to gain some experience with these tools while working on a few projects, including cdk-codepipeline-flutter. Here is a list of things that I believe could be improved. My main focus is on CodePipeline, as it serves as the glue between all of the other tools.
Native branch support for CodePipelines
Working with Jenkins and the MultiBranch plugin makes it easy for developers to quickly test and deploy the code they are working on using the CI/CD pipeline. Unfortunately, CodePipeline today does not allow automated branch discovery: if you want to enable the automated execution of a pipeline for a branch, you need to manually configure webhooks and then create a new pipeline (or delete an existing one) whenever branches are created (or deleted). This is not easy to implement, and it would be great if CodePipeline natively allowed creating a pipeline automatically for every branch of a linked Git repository.
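To sketch the workaround: an EventBridge rule on the repository’s branch-created/-deleted events can invoke a Lambda function that derives a per-branch pipeline name and then calls the CodePipeline CreatePipeline/DeletePipeline APIs. The event shape and helper names below are my own assumptions for illustration, and the actual SDK calls are omitted:

```typescript
// Sketch: derive a per-branch pipeline name from a (hypothetical)
// "branch created" event detail. Not a real AWS API - illustration only.

interface ReferenceEventDetail {
  repositoryName: string;
  referenceType: string; // "branch" or "tag"
  referenceName: string; // e.g. "feature/login"
}

// CodePipeline names only allow the characters [A-Za-z0-9.@_-],
// so slashes etc. in branch names have to be sanitized.
export function pipelineNameFor(detail: ReferenceEventDetail): string {
  const safeBranch = detail.referenceName.replace(/[^A-Za-z0-9.@_-]/g, "-");
  return `${detail.repositoryName}-${safeBranch}`;
}

export function shouldCreatePipeline(detail: ReferenceEventDetail): boolean {
  // Only react to branches, not tags.
  return detail.referenceType === "branch";
}

// A real Lambda handler would now call the CodePipeline
// CreatePipeline / DeletePipeline APIs with this name (omitted here).
```

This keeps the naming logic pure and testable; the SDK plumbing around it is the easy part.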
Additional Templates and Best Practices
Setting up a CI/CD pipeline on AWS CodePipeline would be easier if additional best practices and templates were available as part of the tool itself. AWS is starting to promote a new open source project called “Deployment Pipeline Reference Architecture“. This is a step in the right direction, but it needs to be expanded with other flavours of a deployment pipeline. The code examples also need to be improved, kept up to date, and extended to cover all languages supported by AWS CDK. This is critical for an efficient adoption of the different tools.
Native integration of 3rd party tools
AWS CodePipeline should natively support integrations with other 3rd party tools that should be part of your CI/CD pipeline – e.g. security scanners like Aquasec and Checkmarx.
Remove dependency for a specific AWS account and support Cross-Account deployments natively
As indicated in this AWS Blog post, the best practice for setting up a CI/CD pipeline and for managing your deployments is to use multiple accounts. The CI/CD toolchain should not be bound to a single account, and this includes managing which users and accounts are able to access and configure the CI/CD tools. A good option here might be an integration with the AWS identity services, which could allow decoupling the CI/CD toolchain from a specific AWS account.
Up to date CodeBuild images
Docker images provided by the CodeBuild team should be updated regularly and should support all “modern” toolkits. The open source project has some activity, but an issue about supporting newer Android versions has now been open for quite some time…
Publishing options for the different mobile stores (App Store, Play Store, Windows Store, etc.) should be possible
I’ve been looking at developing a mobile app using Flutter, but what I have not yet been able to achieve is pushing the built applications to the different app stores. Today, AWS does not support this natively. You CAN integrate with 3rd party tools like CodeMagic, but there is no native option on AWS to publish your application.
This concludes the wish list that I have today for the existing AWS CI/CD tools.
Did I miss anything that you believe should be added?
Use the comments to give feedback or reach out to me on LinkedIn or by E-Mail!
re:Invent 2022 is approaching FAST – faster than you can take screenshots of the official homepage with the counter on it 🙂 We just crossed the “less than 20 days to go” mark, and a lot of AWS community members are as excited as I am for the conference to begin.
In this post you are going to learn some tips & tricks from a few AWS Community Builders, AWS User Group Leaders and Heroes (and of course from myself) about how to “pre:Invent” – to “prepare for re:Invent” in order to get the most out of the conference. I have attended re:Invent remotely as well as in person – and this year I am going to be back there in person.
Some of the Heroes, User Group Leaders & Builders have attended re:Invent more than 10 times (and it has only happened 11 times!) – so this post is a real “source of experiences” – just like Corey Quinn’s post 🙂
The information included here was collected in less than a day – which shows how #AWSome the AWS community is. Thanks for your contributions (in no particular order)!
What’s the most exciting Keynote that we are expecting to see?
As all of the Keynotes are live-streamed in the attendee portal (and later made available on YouTube), this applies both to in-person and to remote attendees.
Personally, I know that all of the Keynotes presented at re:Invent will be great and will contain a lot of interesting content, as I’ve been fortunate to meet Nick Walsh at the AWS Summit Berlin. Now that I know one of the people behind the Keynotes, I understand how they are crafted, scripted and prepared with a high degree of customer input.
Still, the “most loved” Keynote among my interview partners is the one that Werner Vogels delivers on Thursday morning – and this is in line with my personal experience. Werner always has the most “developer” and “builder” oriented keynote, with more technical details, while at the same time putting his insights (and announcements) in the context of broader industry experiences and best practices.
Adam Selipsky’s keynote is second on the leaderboard, especially for the announcements that he, as the current CEO of Amazon Web Services (AWS), usually makes.
Last on our top 3 we have Peter DeSantis’ keynote – he delivers it on Monday evening (a strange time for a Keynote), but it’s usually great fun to watch!
Most important information for builders attending re:Invent virtually / remotely
In general I need to admit that I’ve talked to a “biased” community – most of the people I talked with are actually attending in person. But the pandemic and the “all-virtual” re:Invent in 2020 have proven that AWS is able to deliver a “great” virtual experience as well. In 2021 I attended the “hybrid” re:Invent (when it happened in person in Vegas again) and was able to gain a lot of value out of it for myself.
Of course, not being there in person reduces the “networking” possibilities that you have with other AWS enthusiasts and community members. But you can still learn a lot and invest in your professional career – and there are great reasons to stay remote, like the aim to reduce your carbon footprint (@Brian) or maybe just the long travel time. Don’t feel left out – most of the sessions are going to be available online later.
The available sessions in the session catalog are consistently of very high quality and share important insights into best practices as well as implementation details.
What are your most important tips for preparing for the best re:Invent experience?
Plan your week – in the right time zone! All of the “live” sessions (as of last week, only the “keynotes” and a few “leadership sessions”) are going to take place in the Pacific Time Zone (PST). This means 8 am PST translates to 5 pm CET – and that’s a great opportunity to meet “in person” with other members of your User Group and watch the Keynotes with a few pizzas & drinks.
If you are planning to attend recorded sessions, they are usually available the day after they have been given “live” – so Monday will be “quiet” for you.
Pick a focus topic, project or product that you are interested in – and find the topics in the session catalog that match your “skill level”. This is the session “classification” table:
Level 100 (Foundational): Sessions are focused on providing an overview of AWS services and features, with the assumption that attendees are new to the topic.
Level 200 (Intermediate): Sessions are focused on providing best practices, details of service features and demos, with the assumption that attendees have introductory knowledge of the topics.
Level 300 (Advanced): Sessions dive deeper into the selected topic. Presenters assume that the audience has some familiarity with the topic, but may or may not have direct experience implementing a similar solution.
Level 400 (Expert): Sessions are for attendees who are deeply familiar with the topic, have implemented a solution on their own already, and are comfortable with how the technology works across multiple services, architectures, and implementations.
What are our most important tips for the best re:Invent experience?
Meet in person for the Keynote live streams if you can – that is really more fun than watching them alone.
Don’t sweat it (thanks Edward!) – most of the content will be available on demand; you just need to find time to watch it. So talk to your team at work and to your manager to block some time throughout the week for the talks you are really interested in.
Write blog posts or social postings with questions or remarks – and talk to Builders, Heroes and other AWS Community members that are attending in person if you want specific questions to be answered by a service team!
Most important information for builders attending re:Invent in person in Las Vegas
What are our most important tips for preparing for the best re:Invent experience?
Bring good and comfortable shoes.
Know the campus. You are going to walk “A LOT” if you switch between venues on the campus. The shoes that you wear need to make you feel good!
Be “venue aware” when choosing your sessions – in 2018, I had a day where I needed to walk from the Venetian to the Aria and back three times – that’s about 14 km in a single day!
Time is precious and limited.
Plan every day wisely:
– make time for the “hallway track” (thanks Jennifer for explaining that saying to me!), which means being spontaneous and talking to other attendees
– plan to stay in one or at most two venues per day
– plan your breaks, hydration and meetings with the folks you would like to meet
– a few of us prioritize Chat-Talks and workshops over sessions, others do not attend many sessions at all
Pack light. Expect a lot of (cool) SWAG like this one or bigger things. You might need a lot of room in your suitcase 😉
What are our tips for the best re:Invent experience?
Prioritize networking opportunities over sessions. re:Invent is the best networking opportunity you will get all year. Don’t expect too much from yourself every day – if you meet someone to talk to, don’t feel forced to rush to the next session you had planned!
Attend the Keynotes. At least Werner’s (Thursday morning) and Adam’s (Tuesday morning).
Type down “things to look at later” on your phone – or you will actually forget the “most important thing” that you have learned during re:Invent 🙂
Regularly review the session catalog, as new sessions are added on a daily or even hourly basis. Otherwise you might miss out on the most important one for your future career 😉
Which session are you most interested in/looking forward to?
This question was the most interesting one for me – as there is no “consensus” across the group of Builders, Heroes and UG Leaders that I talked to. Everyone is different and has different interests – a few of us are not going to attend many sessions and will rather meet other builders and talk to them, a few are signed up for more than 10 sessions and can hardly choose their favorites – and others are focused on AI/ML sessions.
This is one of the things I really like about re:Invent – everyone attending will find “something” to learn, experience and take away – regardless of your skill level or role.
I was a little bit sad that none of my interview partners actually mentioned my own session, a DevChat, as the favorite session they are most interested in 🙂
I hope to see a few of you there!
Let’s meet up in person!
For all of you that are attending re:Invent in person – let me know in the comments, by mail, or on LinkedIn if you want to meet up.
I’m looking forward to meeting you in person and having great conversations!
In this article, you are going to learn “HOW” to start and create your own AWS User Group. You will learn about resources that help you get started and a few tips and tricks to get through the first few meetups. The best thing: this is all “real world” information that I’ve gathered myself over the last few months while starting the “AWS User Group Bergstrasse“.
What is an AWS User Group / Meetup?
An AWS User Group – also known as an “AWS Meetup” – is a loosely coupled group of individuals that are interested in connecting, networking, having fun together and…maybe…also talking a little bit about AWS, Amazon Web Services, Cloud Computing, Serverless, re:Invent and millions of other topics! Usually, one to two “talks” (20–30 min technical sessions) with AWS-specific topics or experiences are presented at an event.
Most of the User Groups meet regularly in a 4–8 week cadence. Pre-COVID19, the User Groups were mostly “in-person” events. With COVID19, a lot of the User Groups moved to “virtual” events, and not all of them have re-started “in-person” events.
Why should you start an AWS User Group / Meetup?
AWS User Groups are a great way to build up your professional network and to talk to other people that share the same interest or passion that you have – for AWS, AWS services or any other topic you are interested in. In the User Group meetings, you will be able to learn from other builders and engineers in your area (or maybe from further away) – and you will be able to share experiences and improve your day-to-day work.
Wait..there is more…
Usually meetups are accompanied by drinks & food!
…and if you, as a User Group Leader, do things right, they might be “for free” because you have found a sponsor or host for your User Group 🙂
So how do you get started?
Just DO it!
Don’t wait or ask for permission. Talk to your co-workers, friends or other people in your network, and then “kick off” your User Group on a platform that you want to use to host your events. I personally use Meetup.com, and a lot of other User Group Leaders do the same. Once you have created your “public” group, start sharing and promoting it in your network on Twitter, LinkedIn or other channels.
Now, the users should start “registering” or “signing up” for your User Group.
Ideally, you would have a few people registered to get informed about new events by this stage.
After promoting your group, the next thing is to start planning your FIRST EVENT!
Your first event
Before inviting your User Group members to your first event, it might make sense to use a questionnaire to find the “best day” to meet. I did that and got interesting results – our User Group has chosen “Monday” as its “normal” day to meet. Also ask your User Group members about their “interests” and “cloud usage” experience – that will help you to choose topics for the first sessions.
For your first event, the most important thing that you need is a location. Ask your employer, a co-working space in your area, or other venues whether you can use their space to “host” your event. The AWS Community page has an FAQ that covers a bunch of additional ideas.
Don’t over-prepare – you can get started “as easy as possible”:
– pick “one topic” that will be discussed at the first event
– think about a “cool” way of bringing your members together – we did an “agile game” in our first event, and that was a lot of fun for all of us
Cover introductions and name tags in the first event, so everyone feels comfortable talking and approaching others. But the most important thing is, as I already mentioned:
Just DO it!
Tips & Tricks for the first few meetups
Use the first event to find out the best cadence for your group and additional topics that the group members want to talk about. We were able to find speakers within the group to talk about “getting started” topics: AWS CDK, Terraform, Projen. The most important goal of these talks is to “start discussions” and get attendees talking about their experiences during the “networking” part 🙂
Our second event was in a “Biergarten” and 100% focused on a good networking experience and building relations – that was a great evening! 🙂
Don’t over-plan: if you are able to secure the location for the next two to three events and have at least one speaker, you should be good to go.
Resources to help you
Me 🙂 – feel free to reach out to me on LinkedIn or through E-Mail.
…and tons of other very helpful User Group leaders on LinkedIn, Twitter or other channels.
What do you do next?
After you have started your User Group successfully, please LET ME KNOW – I want to hear about your success story and how I can improve this article.
I will also help you get connected to the AWS Community Management team, which will onboard you to the AWS User Group Leaders Slack and can support you with potential speakers, AWS credits and SWAG! 🙂
This post is a follow-up to the last one, where I showed a CDK project that can be used to build a Flutter application for the web.
In this post, we are going to expand our existing project on GitHub to be able to build an “apk” file for Android and a zip file for iOS. Before I can show you how this is possible, let’s start with some challenges that I’ve faced 🙂
The aim of this CI/CD pipeline is (not yet) to be able to push the apps into the AppStore / PlayStore for testing. That’s something we can add later 😉
Challenges on the way to a full CI/CD pipeline for Flutter on CodeBuild
While preparing this post, I unfortunately faced more problems building the pipeline than expected.
AWS CodeBuild does not support M1 / macOS build images
Currently, AWS CodeBuild unfortunately does not provide the possibility to use the famous M1 Mac minis on AWS as build images. This is a real problem, as it makes it impossible to use CodeBuild for building iOS apps – running Xcode on macOS is a requirement for building a Flutter app for iOS. The M1 minis on AWS are currently pricey as hell for this use case: if you start ONE build, you are directly charged for 24 hours, even if the build takes only a few minutes, and you need to actually get a dedicated instance, … – not usable for our use case of quickly building something for a side project. So we needed to find an alternative… read below! 😉
The current AWS CodeBuild standard runtimes are not able to build (modern) Android applications
The runtimes available and exposed by CodeBuild support Android runtime 29, and the Docker images are provisioned with Java 8. Unfortunately, as of July 2021, the Android Gradle tools (used by Flutter) require Java 11. I have created an issue in the corresponding GitHub repository (see here) but needed to find a workaround to move on – I think I’ve found one, but I hope that anyone reading this might have a better way or idea?
TypeScript dependencies on AWS Lambda can cost you sleep
When implementing the trigger for the iOS app build (see more details below), I decided to “quickly” implement the HTTPS POST call using TypeScript – which turned out to be a bad decision 🙂 I had trouble getting the “axios” dependency that I am using installed correctly. I asked around, especially my fellow AWS Community Builders, and got a lot of great tips and ideas (kudos to Martin and Matt). Martin had the right “gut feeling” – I was missing an “npm install”.
Matt enlightened me with the three different possibilities of making Typescript Lambda functions understand their dependencies:
1. Bundle dependency with your source code (can be achieved using esbuild)
2. Add a package.json and node_modules to the lambda function source – only a good idea if dependencies cannot be minified
3. Put the dependencies in a lambda layer
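Option 1 can be wired up, for example, with an npm script – the file names and the esbuild version below are assumptions for illustration:

```json
{
  "scripts": {
    "bundle": "esbuild src/handler.ts --bundle --platform=node --target=node16 --outfile=dist/handler.js"
  },
  "devDependencies": {
    "esbuild": "^0.16.0"
  }
}
```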
In the end, this challenge was especially difficult because I needed to add the required “npm install” in two places: in the “installCommands” for the CodePipeline itself and in the “installCommands” for the Flutter build step.
CodeBuild is slow, misses conditional steps and integrations – and does not easily allow multi-branch pipelines
While implementing the pipeline and solving the different challenges mentioned above, I lost some time because of CodeBuild being “slow” (>1 min wait time during provisioning of the build containers). That’s understandable given the nature of the service; however, it would be cool to have something like a “warm start” for a pipeline, where the containers are re-used instead of re-provisioned.
There are no conditional steps – no way to run a job based only on environment variables or anything similar, which made me implement a workaround. It would be cool to be able to use something like “branch conditions” in the way Jenkins offers them.
CodeBuild offers only a basic integration with SNS. I would have liked to use a “Lambda build step” to run the CodeMagic integration in parallel to the Flutter build job, but that is not possible, so I needed to run it “at the end” of the pipeline.
Another thing I’d love to have: multi-branch pipelines. I needed to merge everything to main directly in order to test, because I couldn’t figure out how my CDKPipeline would be able to support multiple branches.
Reaching the goal: a full CI/CD pipeline running on AWS CodeBuild to build a Flutter app for Web, Android and iOS
Here is a diagram of the “final result” that I am presenting today:
The “output” artifacts of our pipeline are:
– a Flutter web application (located on S3 and reachable through an HTTP call)
– a Flutter Android APK (that can be side-loaded on Android phones, located in an S3 bucket)
– a Flutter iOS app (that can be side-loaded on iOS phones, located within CodeMagic)
As the diagram shows, we needed to fall back to a 3rd-party, non-AWS service to be able to package the iOS application. After a quick “vendor selection” with a shortlist that included Bitrise and CodeMagic, I decided to integrate CodeMagic in this example – because I liked the API more and it offers more free build credits/minutes. Setting it up took less than 5 minutes – it connects natively to GitHub, and setting up the Flutter pipeline is very easy. The integration is implemented using a Lambda function that calls the “start build” API.
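To give an idea of what such a trigger function can look like, here is a sketch in TypeScript for a Node.js Lambda runtime. The CodeMagic endpoint, header name and payload shape follow my reading of their public API docs and should be verified before use; the environment variable names are my own invention:

```typescript
// Sketch: a Lambda function that triggers a CodeMagic build via its
// "start build" API. Endpoint, header and payload shape are assumptions
// based on CodeMagic's public API docs - verify them before relying on this.
import { request } from "node:https";

export interface BuildRequest {
  appId: string;
  workflowId: string;
  branch: string;
}

// Pure helper so the payload can be inspected and tested in isolation.
export function buildPayload(req: BuildRequest): string {
  return JSON.stringify(req);
}

function postBuild(body: string, token: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const req = request(
      "https://api.codemagic.io/builds",
      {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "x-auth-token": token, // CodeMagic API token
        },
      },
      (res) => resolve(res.statusCode ?? 0)
    );
    req.on("error", reject);
    req.end(body);
  });
}

// Lambda entry point, e.g. wired to the pipeline's SNS notification.
export async function handler(event: { branch?: string }): Promise<void> {
  const body = buildPayload({
    appId: process.env.CODEMAGIC_APP_ID ?? "",
    workflowId: process.env.CODEMAGIC_WORKFLOW_ID ?? "",
    branch: event.branch ?? "main",
  });
  await postBuild(body, process.env.CODEMAGIC_API_TOKEN ?? "");
}
```

Keeping the payload construction separate from the HTTP call makes the function easy to unit-test without hitting the CodeMagic API.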
How did I solve the challenges mentioned above?
The problem of building the iOS app was resolved by integrating the external service CodeMagic.
The Android runtime dependency problem with Java 11 was resolved by switching to a custom Docker container (open source) – and then installing the remaining requirements on top of it (npm/node, awscli, etc.).
What did you learn in this post?
In this post you have learned how to expand the implementation of our CI/CD pipeline for an example Flutter application to build not only a “web” application, but also an Android APK and an iOS zip file. You have also seen an extension of the CodePipeline with SNS notifications, with those events being picked up by a Lambda function to trigger an external HTTPS API. This is a major step – with this pipeline we are able to publish our application for three different “platforms” without any manual intervention – it all happens completely automatically!
I’d be glad to get your input on my GitHub repository as a pull request or just as comments on the project itself.
Further expansions needed for this project:
– CodePipeline already has an SNS topic that it reports to, but right now the built iOS / Android app packages are not exposed anywhere. The idea would be to publish the name of the APK file and the CodeMagic build ID to an SQS queue, and have a Lambda function triggered by the queue update a link on the example application to download the newest version of the app 😉 Today, we need to retrieve both from S3 / CodeMagic itself
– use the CloudFormation exports of the Lambda functions in the Flutter application instead of hardcoding the Lambda Function URLs
– enhance security for the Lambda Function URLs
– add CloudFront in front of S3 to allow HTTPS connections to the Flutter app
– enhance the CI/CD pipeline to package a Windows app using Flutter
– enhance the CI/CD pipeline to push the created apps to the App Store / Play Store
Feel free to contribute to this project in my GitHub repository.
In this post I am going to use CDK Pipelines to build a demo Flutter application hosted on Amazon S3, with a backend powered by AWS Lambda (using Function URLs). The CDK code is written in Java, the Lambda functions in TypeScript and the web app in Dart. Why? Because I love trying out things 🙂
The code used here is not production ready and does not fulfill required security best practices! 🙂
The CI/CD pipeline
The CI/CD pipeline for this project uses CDK Pipelines, which means it is built on top of AWS CodePipeline (which under the hood uses CodeBuild, CodeStar and other services to be functional).
It consists of different stages that build and deploy the corresponding parts of the application:
– one stage for each Lambda function
– one stage for the build and deployment of the Flutter application
The stages required by the CDK Pipeline to update itself are added automatically and are not part of our code.
This is the definition of the CI/CD pipeline: it uses our GitHub repository as the source of the code and automatically starts after a push to the main branch. The “Flutter Build Stage” is the one that currently builds the Flutter web application, deploys it and makes it available to the end user. Going forward, to make the best use of Flutter, we would need to expand this stage to also build an iOS application, an Android application, or an application for any other platform supported by Flutter. As a “goal”, I would personally also want to extend this stage to publish the apps to the corresponding stores (App Store, Play Store, Windows Store, …) – thanks to my friends at cosee for the help and guidance around this process!
The architecture diagram of the application
What we are using to showcase AWS CodePipeline, Flutter as an application and AWS Lambda Function URLs as a backend is not really an “application” – but it can do dynamic things and it can easily be extended to include a database backend, etc.
Infrastructure as Code using AWS CDK in Java
In this section you are going to have a look at the CDK code required to provision the infrastructure on AWS. We are using AWS CDK written in Java – and because of that, Maven as a build tool (Maven is the default tool for CDK Java projects – I’ve already used Gradle as a build tool and that works in the same way).
The possibility of writing infrastructure code in Java is a great thing because it gives us the option to build on top of our existing skills – and I’ve written enough lines of Java in my career to feel comfortable using it to provision the infrastructure. This is one of the best things about CDK: you can write your infrastructure code in Java, Typescript, Python, … – and that helps us build teams that only have “one” language as a “core skill” – one team might choose to develop in Java, another one in Typescript, another team could use Go – this allows the teams to build up mastery in a specific language!
In our example, however, we are not making use of this possibility – I’ve chosen to go the opposite way: combine a few languages, just to show that it works 😉
The CDK code consists of four “stacks” – each stack representing one “component” of this application. In our example, these four stacks are part of one CDK application and one CodePipeline. In bigger projects, you might want to split these out into separate applications – which introduces a lot of things to consider (e.g. how they fit and work together, etc.).
While writing this post, the four stacks combined are 129 lines of source code. With the help of the CDK constructs being used, this translates to over 1,000 lines of CloudFormation. We are only using L2 constructs here – there are many more constructs available in the Construct Hub, and also a lot of guidance regarding the usage of CDK over at CDKPatterns.
Making an S3 bucket available to host our Flutter application becomes a very short piece of Java code, as shown in the picture above.
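For illustration, here is a sketch of what such a bucket definition could look like inside a stack constructor – the asset path `./flutter_app/build/web` is an assumption, and public read access is only acceptable because this is a demo:

```java
import java.util.List;

import software.amazon.awscdk.RemovalPolicy;
import software.amazon.awscdk.services.s3.Bucket;
import software.amazon.awscdk.services.s3.deployment.BucketDeployment;
import software.amazon.awscdk.services.s3.deployment.Source;

// Bucket configured for static website hosting of the Flutter web build.
// Public read access is fine for a demo, not for production workloads.
Bucket webBucket = Bucket.Builder.create(this, "FlutterWebBucket")
        .websiteIndexDocument("index.html")
        .publicReadAccess(true)
        .removalPolicy(RemovalPolicy.DESTROY)
        .build();

// Upload the compiled Flutter web assets to the bucket on deployment.
BucketDeployment.Builder.create(this, "DeployFlutterWeb")
        .sources(List.of(Source.asset("./flutter_app/build/web")))
        .destinationBucket(webBucket)
        .build();
```

The `BucketDeployment` construct takes care of copying the build output into the bucket whenever the stack is deployed.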
Lambda functions behind Function URLs in Typescript and Python
This was definitely one of the most awaited announcements in the Serverless space this year: the GA of “Function URLs” for Lambda on AWS – and this is the reason I am using them as a backend in this showcase. With this announcement, it is possible to directly expose your application running on AWS Lambda behind an HTTPS endpoint! Without an API Gateway, proxy, … Provisioning the infrastructure for the Lambda function with the Function URL functionality activated takes only a few lines in CDK:
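A sketch of those few lines in CDK Java – the runtime version, handler name and asset path are assumptions for illustration:

```java
import software.amazon.awscdk.CfnOutput;
import software.amazon.awscdk.services.lambda.Code;
import software.amazon.awscdk.services.lambda.Function;
import software.amazon.awscdk.services.lambda.FunctionUrl;
import software.amazon.awscdk.services.lambda.FunctionUrlAuthType;
import software.amazon.awscdk.services.lambda.FunctionUrlOptions;
import software.amazon.awscdk.services.lambda.Runtime;

// The Lambda function itself – the Typescript handler is bundled from "lambda/backend".
Function backendFn = Function.Builder.create(this, "BackendFunction")
        .runtime(Runtime.NODEJS_16_X)
        .handler("index.handler")
        .code(Code.fromAsset("lambda/backend"))
        .build();

// Activate the Function URL: an HTTPS endpoint directly on the function.
// AuthType NONE keeps the demo simple – lock this down for real workloads.
FunctionUrl fnUrl = backendFn.addFunctionUrl(FunctionUrlOptions.builder()
        .authType(FunctionUrlAuthType.NONE)
        .build());

// Export the generated URL so it can be referenced by the Flutter app.
CfnOutput.Builder.create(this, "BackendFunctionUrl")
        .value(fnUrl.getUrl())
        .build();
```

The `CfnOutput` at the end exposes the generated URL after deployment – exactly the kind of export that could replace the hardcoded URLs mentioned in the expansion ideas.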
These Lambda functions will now scale horizontally without us as developers getting involved. For more details on Function URLs, there are a lot of good posts around, like this one or this one. On scalability, fellow Community Builder Vlad has written a great guide for containers on his website.
Web Single Page Application built using Flutter and the benefits of using Flutter
Flutter as a multi-platform, developer driven tool gives a lot of flexibility. With a single code base, you are able to publish your application to various targets. In this example, we are using the target platform “web” – which compiles the Dart code to a single page application.
This application is automatically aware of the size of your screen and is interactive – which is another cool thing that Flutter takes care of for you as a developer. A lot of organizations use Flutter today, and the cookbook gives you an easy and good start into developing Flutter applications.
Our example application has three input fields that take input and pass it to our Function URLs – and then automatically update a Text widget with the results of the Function URL call. This implementation works on all platforms.
What did you learn in this post?
You have learned how easy it is to provision AWS infrastructure using AWS CDK. You have also seen that you can easily combine different programming languages in a single CDK application, and gotten a sense of how a CI/CD pipeline can help to automatically deploy your application using AWS CodeBuild. In addition, you have looked at Flutter as a multi-platform development tool.
I’d be glad to get your input on my Github repository as a pull request or just as comments on the project itself.
Further expansions needed for this project:
– use the CloudFormation exports of the Lambda functions in the Flutter application instead of hardcoding the Lambda Function URLs
– enhance security for the Lambda Function URLs
– add CloudFront in front of S3 to allow HTTPS connections to the Flutter app
– enhance the CI/CD pipeline to package an Android app using Flutter
– enhance the CI/CD pipeline to package an iOS app using Flutter
– enhance the CI/CD pipeline to package a Windows app using Flutter
Feel free to contribute and add your contributions to this project.
Listen to Werner Vogels talking about the benefits of CDK
Ever since I attended my first AWS re:Invent in 2017, one of the SWAG items I received has been part of close to every trip I take. Regardless of whether it’s a business trip or a personal one, this AWS bottle always joins me in traveling:
This bottle does not only look good, it also keeps the water cold on longer, warm hiking trips. I have a lot of other SWAG, but this is really the only item that I use regularly. What is your favorite SWAG item? Do you have some SWAG that you like to carry along?
Software Engineers, Developers, etc. are all “Builders” in my mind. Builders try out a lot of things and most of them are eager to try out new technologies and possibilities. While doing that, a lot of them behave like these engineers “in the real world”:
What does that mean?
They go to the top of something, climbing somewhere and taking risks, but a lot of times they forget what is “below” what they are building.
For these workers in real life, it is most probably obvious that they are not risking their lives by climbing up there – because they can see what is below and are aware of the groundwork beneath the wall they are climbing on.
How is that different in our “Cloud-Software/SaaS-industry”?
I believe that the main difference today is that most of our “Engineers” (= Software Developers) are not aware of the infrastructure components that are required to bring their application or microservice up and keep it running consistently.
Why are they not aware?
One of the challenges that I am seeing in my day-to-day job is that we have built a lot of “abstractions” for software developers to make it “easy” to develop and test software. Think of “Docker” or “Kubernetes” (k8s) as making it easy to test applications or microservices locally and make them look, feel and behave the same as in the “target environment”. However, that is not entirely true. During the development cycle, the engineer will test locally – or maybe within a Continuous Integration environment – but both of these environments will usually not have “production-like” data assets and thus will never be comparable to a production environment.
So – it is a real problem: engineers test against infrastructure (and maybe even a deployment strategy) that is not even close to how the service will run in a production environment.
How do we change that?
It should all start with a plan…and everyone that is part of a product’s lifecycle should be part of it.
CDK changes everything
CDK – and for me this includes awscdk, cdktf, cdk8s – gets the engineer where they feel “home”:
We can describe and write infrastructure in “the developers native language” – Java, Typescript, Go, .NET.
With this, everyone can be empowered to write infrastructure code and feel responsible for it. No more excuses: I don’t like YAML / JSON, I don’t know HCL, I don’t know the services, etc. If you are a developer, you can now write infrastructure code.
This opens up new possibilities for building DevOps teams
Now, with CDK “in the game”, we can empower “developers” and “operators” to help each other “in one shared language”. Operators can help developers understand the infrastructure required to bring their service up to speed – and developers can help operators develop infrastructure code.
On the other hand, if you start a new DevOps team, you can directly start building out the infrastructure “as it would look in production” using CDK! This really makes the developers think about how the service should run in the production environment later, and that will help to drive the correct architecture decisions right from the start.
In my last post I wrote about the “T-shape” model of a “great DevOps engineer” – but does that person actually exist?
By the DevOps team, I mean the team that builds & operates “something” in a complex environment. This includes the required software development aspects, the CI/CD pipeline, the monitoring tools required, the database or persistence layer, the infrastructure, … – everything you need to successfully operate what you have built as a team.
Wait… is it only that?
Everything mentioned above is technical, isn’t it? The “software development” might be Java, Typescript or Go code (or anything else), and the CI/CD pipeline is a technical thing – but is this “enough” for the DevOps team? I think that we need to add a business view into the team as well – in the end, anything the DevOps team builds needs to produce business value or be measured against it. A “defect” needs to get a dollar value, a new feature needs to produce new revenue, and an update to our tools (e.g. monitoring) needs to be translated into costs or time & effort saved.
So what is the perfect DevOps engineer?
A “T-shaped” DevOps engineer would need to evolve their skills into a “square-shaped” skills matrix!
What else do they need?
In addition to the “technical skills”, they will also need a business view on the microservice or component they and their team own. During an outage, they also need the communication skills to talk to clients and a sense of the urgency of the problem.
While I believe that there are a lot of great DevOps engineers around, I have not yet met one that was close to the “target” of being “perfect” in all of the points mentioned – and I am sure I forgot a few…. Feel free to comment!
A perfect DevOps engineer does not exist. It’s the perfect combination of different T-Shaped engineers that act as one team that makes a great DevOps team.
Johannes, March 2022
What are key enablers for a great DevOps team?
Everyone on the team has skills they are really good at. The really good DevOps engineers are able to “bend” their T-shaped skills into something closer to the square – maybe not in “all” parts of it, but in a few of them.
But what a “great” DevOps team needs, more than anything else is Vision, Trust & Collaboration.
Recent experience from my professional career
In my professional career, I recently joined a newly formed development team as part of a project. We quickly got up to speed with each other and created an atmosphere of trust in all of our (remote-only) meetings. This made it easy to start collaborating and jointly work on the tasks in our backlog. What we did not have right from the start was a vision of what we wanted to achieve as a team. Once we had one, or at least a sprint goal for the next two sprints, we were able to quickly deliver value.
A team needs visionary engineers
If you want to create a great DevOps team, you will also need visionary engineers who are brave enough to try out new things by themselves and empower the rest of the team to follow them towards their ideas.
What do you think about how to form a great DevOps team? Please share your thoughts in the comment section!
Last year I attended an internal meeting with our UX team, and while talking to the team, we touched on a very interesting question:
How do you define a “great DevOps engineer”?
If you ask five different people, I am sure you would get at least six different answers 🙂
So, without trying to “answer” the question but still covering parts of it, let’s try to look at what DevOps actually is and means.
The conversation in the meeting started with a colleague asking for support with “DevOps” tasks needed for certain release activities. I pushed back on him, pointing out that for me, everyone should have a little bit of “DevOps” knowledge – and re-defining these “DevOps” tasks as “Automation Tasks”.
A “great DevOps engineer” has a T-shaped skills profile.
So what does that mean?
A “T-shaped skills profile” is easily explained: think about the “old” traditional way of composing an engineering team – you had “Analysts”, “Programmers” (Developers), “Test Engineers”, “Web Designers”, “System Engineers” (build automation, scripting, etc.).
In that case, you had the problem that you hit a “bottleneck” with certain skills during different phases of the project, e.g. the “Test Engineers” needed to work very long hours right before the next release of your piece of software. That was obviously bad for the overall outcome of the team.
However, if you manage to compose your team of people that have a very broad knowledge base across different skills and a small set of skills where they are experts, your team becomes more efficient, because you can support each other in these “crunch situations”, e.g. “developers” can pick up a bit of “QA work” right before the release date.
So, what does this mean for you?
Be aware of your “expert” skills
Practice the skills where your team is “weak” to become better at them and broaden your team’s capabilities
Why do we need teams that consist of “T-shape-skilled” engineers?
Because in the “DevOps culture”, it’s all about “collaboration” – and that is easier if every team member understands, at least at a high level, what the “expert” is talking about.