Another year in the community – Thank you, AWS community team #thankfulforest2024 #firevalleyrocks

The year 2023 is close to its end and we’re approaching “Holiday Season” – which is one more reason to take a few minutes to say THANKS to those who work every single day to empower the AWS Community.

We did this last year, too – so it was about time to try something else – and the community did it again: All the trees of the Community forest

Saying “thanks” with my speciality service

I’ve found a way to make a tree shine in CodeCatalyst using the Workflows – it’s not as colorful as the one Jenn did and not as detailed as Brian’s approach … and it should definitely not try to replicate the team structure or org chart, but it shows that all of the work that the AWS Community team does, all of the support, guidance and investments make the AWS Community a strong foundation for everyone that wants to be part of it!

The community is open to everyone – you can even start your own Meetup easily.

Thank you, AWS Community Team

I am really thankful to be part of the AWS Community and it’s energizing to see the ideas, the sessions, the discussions that we all have together. You, Ross & team, make this possible every single day. Thank you for empowering us, for guiding us and for enabling us to be successful.

Here’s the code for my CodeCatalyst workflow:

Name: firevalleyrocks
SchemaVersion: "1.0"
Triggers:
  - Type: Push
    Branches:
      - main
Actions:
  Ross:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Ross!"
        - Run: echo "Thanks for all of your support in 2023!"
  Taylor:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Taylor!"
        - Run: echo "Thank you for making the Heroes a true community!"
    Compute:
      Type: Lambda
    DependsOn:
      - Ross
  Jason:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Jason!"
        - Run: echo "Thank you for making me start my Community Journey and for making the Community Builders what they are!"
        - Run: echo "Sorry, but you're red!"
        - Run: xxx
    Compute:
      Type: Lambda
    DependsOn:
      - Ross
  Maria:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Maria!"
    Compute:
      Type: Lambda
    DependsOn:
      - Ross
  Ernesto:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Ernesto!"
    Compute:
      Type: Lambda
    DependsOn:
      - Taylor
  Farrah:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Farrah!"
    Compute:
      Type: Lambda
    DependsOn:
      - Taylor
  Lily:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Lily!"
    Compute:
      Type: Lambda
    DependsOn:
      - Jason
  Thembile:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Thembile!"
    Compute:
      Type: Lambda
    DependsOn:
      - Maria
  Susan:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Susan!"
    Compute:
      Type: Lambda
    DependsOn:
      - Maria
  Albert:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Albert!"
    Compute:
      Type: Lambda
    DependsOn:
      - Ernesto
  Shafraz:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Shafraz!"
    Compute:
      Type: Lambda
    DependsOn:
      - Ernesto
  Wesley:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Wesley!"
    Compute:
      Type: Lambda
    DependsOn:
      - Lily
  Ben:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Ben!"
    Compute:
      Type: Lambda
    DependsOn:
      - Lily
  Will:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Will!"
    Compute:
      Type: Lambda
    DependsOn:
      - Susan
  Nelly:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Nelly!"
    Compute:
      Type: Lambda
    DependsOn:
      - Susan
  Community:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Community!"
    Compute:
      Type: Lambda
    DependsOn:
      - Nelly
      - Will
      - Ben
      - Wesley
      - Shafraz
      - Albert
  COmmunity:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Community!"
    Compute:
      Type: Lambda
    DependsOn:
      - Community
  Commonity:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Hello, Community!"
    Compute:
      Type: Lambda
    DependsOn:
      - COmmunity

CodeCatalyst at re:Invent 2023, YouTube and a Speakers Directory

2023 has been a lucky year for me. I’ve started my own YouTube channel, where I present all of the release highlights of re:Invent 2023 for CodeCatalyst, and I’ve become an AWS Hero – but more importantly, I’ve made a lot of friends around the globe. I’ve empowered others to become part of the community and I’ve challenged others with questions, tasks and ideas like the Speakers Directory.

Thank you for making my year 2023 unforgettable and for making me smile when I think about what we achieved together!


Application Composer levels up a lot and adds amazing IDE integration capabilities

In this post we’re going to look at the new functionalities that have been added to Application Composer by re:Invent 2023. After announcing support for all CloudFormation resources earlier in the year, Application Composer now allows editing Step Functions within the same user interface and – even cooler – announced the integration of an IDE plugin that allows developers to build serverless functions locally.

Application Composer as a serverless, rapid prototyping service adds additional capabilities to empower developers building serverless applications

Application Composer, which was originally announced last year at re:Invent 2022, has gotten a lot of major improvements throughout 2023. As we are right at re:Invent 2023, it’s time to look back at which new capabilities have been added and how they influence building serverless applications using AppComposer.

Supporting all CloudFormation resources

Already a few weeks ago the team announced that over 1,000 CloudFormation resources are now supported by AppComposer. This was a big update and makes it simpler to build all kinds of serverless applications. However, as this only allowed AppComposer to expose the resources, it still requires the developer to know all required connections between the different resources. I personally would love to see more “supported” resources (just like L2 constructs in CDK) made available as part of AppComposer, and I hope that this will be added soon.

Integrating additional services

With the integration of the Step Functions Workflow Studio within the same interface, developers can now build an end-to-end application within Application Composer before using the generated SAM or CDK templates to trigger the deployment. As a next step I think it would be great to also be able to define EventBridge Rules & Pipes within the same interface.

Local development and IDE integration

AppComposer announced a Visual Studio Code integration that makes it possible to build and design serverless applications right from your IDE!

With this feature, you can visualize your serverless applications without needing your browser or the AWS console – start building, wherever you are and whenever you want!

I have not been able to try out this functionality yet, but especially the integration with sam connect, which allows you to also directly deploy the changes you made to your diagram / template, will make a big difference in building applications using AppComposer.

Also, I think we should not underestimate the possibility that this offers to visualize existing CloudFormation templates through either the IDE plugin or the AWS Console. This will help to explain large and complex existing applications and empowers teams to have a fruitful conversation about changes they would like to implement in existing templates, as having a visualization makes the conversation easier.

What’s next for Application Composer? What are my wishes?

Already last year I asked for AppComposer to be integrated into CodeCatalyst, and I believe that this would be an awesome way to quickly start serverless projects. Application Composer today feels like a playground – to make the service more usable, it needs a “deployment” component that allows you to automate the lifecycle of your serverless application (including a full CI/CD pipeline).

Last year I also asked for the ability to generate CDK out of Application Composer – or even import it – but instead of investing in that direction, AWS recently announced the existence of the CDK Builder Tool – wouldn’t it be better to merge those initiatives?

As already mentioned above, supporting additional “CDK-L2-like” patterns – or maybe the “Patterns” from serverlessland.com – would be amazing: users would no longer need to know how to manually set up IAM roles, connections between API Gateway and Lambda, … and that would make this a much more usable product!

What are your thoughts around the recent announcements of AppComposer? What are your experiences with it?


re:Capping re:Invent 2023 – Not everything that happens in Vegas should stay there! Let’s go and build!

In this article I will try to re:Cap a few of the announcements at re:Invent 2023, but also share my personal experiences and learnings – covering what I think should be shared with the world…!

What happens in Vegas…

…should not always stay in Las Vegas! This year’s re:Invent has been another great experience for me and it was amazing to meet AWS enthusiasts from all over the world. I’ve learned a ton of stuff, saw a bunch of cool sessions and also experienced being part of a big family. All the friendships that were built in the past few days and the knowledge and experiences that were shared have a big influence on me and shape me.

The technical aspects of re:Invent

This year the technical aspects of re:Invent existed but were not as important to me as they used to be in my previous attendances. Of course AWS had a bunch of important announcements – some of them bigger, some smaller. Renato has them written up at InfoQ and the AWS News Blog has them covered too. Luc, the winner of this year’s “Now, Go Build” award 2023, has created a web application that helps you read all of them and not miss a single one.

For me, there are a few that stand out:

Of course, there were a bunch of other announcements, larger and smaller ones, but these are the ones that I have remembered and thus they are meaningful to me. Now let’s move over to the more important aspects of re:Invent!

The community aspects of re:Invent

re:Invent 2023 has once more been a gathering of the AWS Community in one place and it has brought a lot of us together to talk, laugh and align. Not everyone was able to join us, for different reasons – but I am sure that you have felt the power of the community throughout the week by following us on the different social channels.

Being part of the AWS Heroes

As I posted last year, going to re:Invent means meeting with friends and getting together. Being an AWS Hero made it more intense than before: we feel community from the heart and that’s what makes us strong. Wherever I was in Las Vegas, I saw a fellow Hero.

We all have superpowers and our powers are different. One of my superpowers is connecting people – and I hope that I was able to show this in the last few days.

Others have other powers – a few of us were able to present one of their talks – Anahit with her speciality around MSK, Anurag around data patterns and Ran on Lambda Powertools. Others are great listeners and others have the vision of how things need to or should look in a few years – it was great to see everyone’s powers in one place and I know that by combining them we can influence things for the better!

Thank you, Taylor and the rest of the team, for creating this group and bringing us together again!

Working with Builders, User Group leaders and others from the community

The AWS Community consists of so much more than the Heroes. Thank you, dear Community Builders – led by Jason and the team – for being an unbelievable source of power throughout the week. Your enthusiasm, your great ideas and your dedication are what make us stronger. I’ve been reading a lot of the posts from Builders around the globe that were not able to make it to Las Vegas and it is energizing to see that.

The User Group Leaders that we have worldwide, on the other hand, help the AWS Community thrive across the whole year and bring us together regularly – to learn, to play or to share knowledge. Thank you all for helping us shape where the community goes and for making the community successful. I was glad to be able to meet a lot of you and share my experiences as well as listen to yours.

I had the great pleasure to get the whole team of core contributors of the Speakers Directory together and we were able to present our project as well as take a picture of all of us 🙂

We are going to continue our investment and will help user group leaders to find speakers through our tool!

Working with AWS employees

This year, I’ve joined the club of many other Heroes who go to sessions where they can meet AWS service team members that they have worked with before 🙂

I attended a few CodeCatalyst sessions to meet the team that I’ve been working with for more than 12 months “live and in person” and loved to see the energy and innovation live on stage – but I also attended other sessions just to say HI to certain speakers.

Employees at AWS are smart and can often tell you the perspective of WHY something has been built, and it’s great to know some more background on a new feature. Thank you all for spending time with me and sharing your thoughts and passion with me!

To those AWS employees in the community and DevRel team – another big THANKS for making the event unforgettable with all of your dedication and support – I love spending time with you and creating new ideas on how to make the AWS community stronger and more engaging than ever before!

A look ahead…

As I try to use my time on the flight to wrap my head around what I am taking away from the last few days and from re:Invent 2023, I’m still digesting, as many others are, what we have all learned and heard.

A few key take-aways:

  • AWS doesn’t feel “secure” anymore in its position as the market leader
  • innovation at AWS is coming (Q), but it’s still early stages
  • AWS keeps listening to their customers (see the DB2 RDS announcement and the Step Functions HTTP integration)
  • Community sessions (COM or DEV track) are the ones to attend at re:Invent, or sessions that are AWS + a customer (level 300/400)

What I’m considering doing in the next 3 months

First of all I’m planning to cover the CodeCatalyst announcements on my YouTube channel to explain the impact of the new features to interested enterprise customers.

I’m also looking at trying out a lot of the cool things that have been announced within our AWS Speakers Directory project, besides hosting multiple meetups of the AWS User Group Bergstrasse.

What I’m considering doing in 2024

Of course I will continue my engagement in the AWS Förderverein DACH – we are planning another AWS Community Day in Munich next year!

I also plan to continue my work with the CodeCatalyst team to shape the product – please let me know if YOU have input on what the next important steps are.

I would love to work with the AWS team to organize another pre:Invent Community Hike for re:Invent 2024 and to talk about the possibility of hosting a complete track at re:Invent where Community Members join forces with AWS employees. I listened to a session like that (Ran Isenberg and Heitor Lessa) and it carried a very powerful message.

Last but not least, I would like to help community members grow and shape their careers in Cloud – if you need help or have questions, do not hesitate to reach out; I’m happy to help or to connect you with someone who can!

Thank you for reading until this point, if you have any feedback, let me know!


Hey, I’m going to San Francisco and Vegas to meet friends at re:Invent 2023

In this post I’m writing about the road to re:Invent 2023 and about how the last year has made a big impact on my personality and my efforts in the AWS community.

Thinking it couldn’t get better last year, 2023 changed me once more and leveled me up

While going to re:Invent 2022 I wrote down my thoughts and went home from re:Invent expecting things could not get better for me. But this year has proven my expectations wrong, as the last 10 months have been life-changing in a lot of different aspects.

Blogging & Starting a YouTube channel – winning 1.5 hackathons and starting a project

When I reached my home town after last year’s re:Invent I knew that this community is what makes me happy and gives me so much that I also wanted to give something back. And so, I formed a few ideas in my head and started a few initiatives that I would like to share with you today:

Blogging & Writing

I have been blogging both on dev.to and on my personal blog – writing regularly about topics and things that I do or did, trying to share day-to-day experiences and building experiences.

The most important thing for me here is to always be transparent and authentic – speaking out on things that I liked and challenges that I’ve faced.

For you reading this – thank you, for being part of my journey, for giving me feedback and for being you! Please let me know if you have any feedback on what I should change or do better.

Kicking off a YouTube channel

The last three years of my career have been focused on building secure and reliable CI/CD pipelines! But hey, there is so much more to learn. On my travel back to my home town after re:Invent, I had the idea to share my experiences and the experiences of other builders on a “podcast-like” YouTube channel, which is today known as @cicdonaws.

From learning how to produce a YouTube video to setting up good lighting and sound, preparing a script and creating better thumbnails – a lot of learnings, experiences and hours went into the videos that I have been producing.

This only kicked off because of all of the great builders that joined me on the show and shared their projects, learnings and experiences with me. Thank you all, every single one, for your time, dedication and experiences. Keep up sharing your knowledge, as this is how we can all grow!

Winning 1.5 hackathons and making a small project something big

Back in March, Lily announced a Community Builders-internal hackathon where it was possible to win some prizes – and I thought I had a great idea on what to build that actually fit the “rules” of the hackathon. And this is how the project started that has since used up countless hours of my after-work evenings and weekends. Millions of Slack messages have gone from Germany to the US, from the US and Germany to Africa and back – we kicked off a real project with our Speakers Directory that is finally ready for prime time! After coming in second in the first hackathon, we also submitted the same project with a few tweaks to a 2nd hackathon powered by Hashnode and Amplify, and this also brought us an “honourable mention” in the list of the top 10.

We’re ready now to get more of you to add your talks and events! Please sign up and give us feedback, we are very proud of what we have produced so far and eager to get all of you on board!

Thanks to Danielle, Julian, Matt, Raphael and Baimam for actively working with me on this fun project – and thanks to everyone else who has supported us through 2023 to make this a bigger thing!

Organizing a community day – the EU (more technical) edition of re:Invent

Another piece of last year has been organizing the AWS Community Day DACH that happened in Munich this year – it was fascinating to be part of the team that brought more than 500 AWS nerds together, including over 25 AWS Heroes and 30 Community Builders!

Thank you all for attending – and thanks to the sponsors for making this possible!

We’re up for doing this again next year – further details to be announced soon! Looking forward to seeing you there!

Hey, hiking before re:Invent is what you need!

And then, there is this “secret thing”: ever since I started going to re:Invent, I have used the Sunday before re:Invent to go out “hiking” somewhere close to Las Vegas. On the first two visits, together with colleagues, we were three people – a pretty cool small group.

Last year, they did not go with me to re:Invent, so I asked Community Builders to join me – in the end, we were nine people from, I think, 8 different countries hiking together from 10 am till 5 pm – we had a lot of fun, so I decided: let’s do this again!

Somehow, this became a bigger thing – for this year’s hike, we have over 50 Community Builders, Heroes and UG Leaders signed up. The group became so big that the AWS Community team is helping us this year by sponsoring the transportation.

Thank you all for supporting this – and thanks to everyone participating; I am sure this will be an amazing experience for everyone!

How and Why what I do for the community changed

Since June I can call myself an AWS DevTools Hero – which is a great honor but has also changed a bit how and what I do for the AWS community. My work has become a little bit less “public” – a lot of my “free” hours went into talking to AWS service teams, into mentoring other builders and helping them grow. This has led to fewer new videos on YouTube and fewer blog posts – but instead, I’ve helped others to grow and to invest in the community – supporting the “re-start” of the AWS UG Frankfurt, the re-start of the User Groups Mainz, Karlsruhe and Heilbronn and overall making other builders “grow”. This makes me happy. Thank you all for being part of my journey.

Making new friends

This year has also shown me how important relationships are and how you can make friends by supporting each other. Friends I’ve never met in person (Matt), friends I’ve finally met in September (Ran) and others like Markus, Thorsten, Philipp and Raphael who are part of most of my days and with whom I exchange hundreds of Slack messages per week.

This is what community means! Making friends, learning, growing. Thank you, for being you!

What’s next for me?

Changing my role at work

I’ve recently moved to the global Platform Architecture team here at FICO and that gives me a lot of new possibilities and learning opportunities.

The platform that FICO is building is an important one and helps organizations around the globe to make better decisions – I’m proud to be part of the team and looking forward to making this bigger than it is today.

Community Day DACH 2024

We’re already planning next year. Stay tuned for some announcements…soon!

Supporting and mentoring builders around the globe

I’m up for helping other builders grow – if you’re interested in collaborating with me or learning from me, reach out and we can talk!

If you have any CI/CD or CodeCatalyst specific topics that you would like to talk about on my channel – reach out to me!

If you need feedback, help or advice on anything (e.g. a blog post or anything else) – let me know and reach out!

Looking forward to an amazing re:Invent 2023 with a lot of friends in Las Vegas!

Have a great week – don’t be a stranger and say HI if you see me around!


Experiencing GenAI using PartyRock – the best applications I’ve seen so far

As a few of you might have seen, AWS has today launched PartyRock – an Amazon Bedrock Playground that can be used to generate and build applications using GenAI technologies.

PartyRock is an educational tool for providing any builder with low-friction access to learn through experimentation in a foundation model playground built on Amazon Bedrock. It is not a product or service in the traditional AWS definition and should never be referred to as such. The preferred descriptor is playground, though in most cases tool is also acceptable.

AWS

German engineering – Party from Berlin

I’ve been fortunate to know some of the people behind this launch and I’m excited for it, not only because a bunch of the engineering team members are part of the AWS Development Center in Germany but also because I had tried to not touch anything related to GenAI until this tool was made available.

After being able to look at the tool and play around with it, I can see a lot of benefits of using Generative AI going forward, and I see a lot of the value that this technology can bring us in addition to “simple ChatGPT-like” chatbots.

Build your first GenAI App with PartyRock

It’s almost too simple. Click on the “Generate app” button, add a prompt for Amazon Bedrock and within seconds you have a working application that uses GenAI under the hood!

Here’s an example of a prompt that I have used:

And the outcome of that: https://partyrock.aws/u/lockhead/rpZ8z9kG5/re%3AInvent-2023-DevTools-and-CICD-Attendees-Guide

Pretty cool, isn’t it? Even though the underlying model does not have access to the session catalog (which is a pity), I liked the outcome, which you can look at in this snapshot.

Holding to its promises, it allows you to experiment with GenAI

As the AWS team’s introduction says, PartyRock really makes Bedrock accessible and gives everyone a great opportunity to “try out” how the different models behave with different prompts.
I can only encourage you to try it out and gain your own experience with it. It’s worth your time!
Being part of the AWS Community (in this case, being a Hero or a Community Builder) gave us a head start of a few hours to try this out before the official release…and this gives me the chance to present to you, already NOW, a few cool use cases that other builders have created with this tool 🙂

PartyRock Apps & Use cases that have made me smile or bring value

It’s amazing to see how creative AWS Community Builders and Heroes are 🙂
Here are a bunch of the apps that I’ve looked at and played around with and that I think are worth sharing:

  • Content Marketing – AWS DevTools Hero Thorsten Höger – This simple app allows you to use GenAI to create tailored marketing materials and post snippets for social media based on a service input.
  • Quiz Generator – AWS Community Builder Dixit R Jain – Generates questions for a specific topic that you can use in a quiz. The original idea is from Dixit; I’ve linked my version, which is “remixed” and adds a language option.
  • Event Countdown App – AWS DevTools Hero Johannes Koch – A simple event countdown calculator that shows you the time until a specific event starts.
  • Gameday Team Name Generator – AWS Community Hero Markus Ostertag – Generate your team name for your next AWS GameDay – maybe at re:Invent in Las Vegas?
  • Generate Google Calendar and iCal – AWS DevTools Hero Johannes Koch – Generates an iCal and a Google Calendar link for your next event.
  • Social Media Assistant – AWS Community Builder Matteo Depascale – Generates tweets and LinkedIn posts to make your life easier.
  • re:Invent Steps & Calory calculator – AWS DevTools Hero Johannes Koch – Helps you calculate steps and calories burned when walking at re:Invent 2023 (hint: DON’T DO IT!) 🙂
  • Trumpifier – AWS Community Builder Damien Gallagher – No comments, but it was funny.
  • Conversation Starter – AWS Community Hero Markus Ostertag – You have challenges starting a conversation with your PeerTalk expert? This app will help you find the right entry words.
  • AWS Certification Recommender – AWS Community Builder Ganesh Swaminathan – Recommends a specific certification given your personal background.
  • Bill & Ted Quote Generator – AWS Community Builder Jenn Bergstrom – Generates quotes from Bill & Ted based on your mood and situation.
  • AirlineDestinationAdvisor – AWS Community Hero Anders Bjørnestad – Get travel tips and tricks for a destination.

Did you find an exciting app that I should include? Reach out to me and let me know 🙂

Where do we take it from here? How PartyRock helps and what I would love to get

PartyRock is a great starting point for experimenting with GenAI!

To take this to the next level, there are a few things I’d love to get:

  1. Make your PartyRock app “yours”
    • Deploy to my AWS Account Button
    • Export to CDK / IaC Option
    • Export to CodeCatalyst Project
  2. Additional UI options
    • Radio Boxes, ComboBoxes
    • Make the applications “user aware”
    • Other generation options than text/image

Especially the first point would help developers to take action after creating the app – you could directly run the generated app in your own AWS account, and this would help you understand how Bedrock fits into your existing AWS architecture.

What do you think of it? I’d love to hear your feedback and thoughts!



Amplify SDK for Flutter – Developer experience and challenges in a hackathon

Background of this post

This blog post is part of a series of posts that explains details and technical challenges that we (Danielle, Matt, Julian and myself) faced during a hackathon that was focused on the Hugging Face Transformers tools – please read Matt’s article for additional information and details.
In this post we are focusing on the experiences we made in the project using the Amplify SDK for Flutter.

Project Setup – architecture dependencies for project set up

Initially, our project targets being available to all AWS User Group Leaders on the web from any browser. As we mature the project, at least I am hoping to also be able to publish this application as an Android and iOS application using cross-platform capabilities.
Here is a bird’s eye view of our UI architecture:


Why did we choose Flutter as UI?

Flutter is a trending cross-platform development toolkit; although it is mainly sponsored by Google, there is a vibrant open source community. I personally have had some exposure to Flutter in the past and I liked the developer experience and the short iteration cycles. Also, the AWS Amplify team has been investing in making their SDK available to developers, so we thought this was a good opportunity to try out our use case and implement the web app in Flutter, using the Amplify SDK for Flutter to connect to the backend resources, which we were planning to implement using the AWS CDK. With this approach, we also want to draw some attention to the fact that the Amplify SDKs can be used without an Amplify-owned backend. This is cool and opens up a lot of possibilities – but also challenges, as we will see later. I convinced Danielle, Matt and Julian to also use Flutter for our frontend. They also saw this as a good opportunity to learn a bit of Flutter themselves.

Amplify SDK for Flutter – Developer experience and challenges

The Amplify SDK for Flutter recently announced “GA” for Web and Desktop, which is a big milestone for the team. As we outlined in Matt’s blog post, we are using TypeScript in the backend. Amplify Flutter has native support for AppSync/GraphQL – we needed to connect the Flutter app to existing AppSync endpoints.
The AppSync schema, however, was written manually. Now, we needed to use “amplify codegen” to generate the Dart models for the GraphQL types – but we also needed to write a type model in TypeScript to be able to work with the same objects on the backend.
This turned out to be more difficult than expected: the [amplify codegen](https://docs.amplify.aws/cli/graphql/client-code-generation/#shared-schema-modified-elsewhere-eg-console-or-team-workflows) functionality is available for Flutter, too, but it was difficult to get this to work.
We ended up creating a new Flutter application using amplify init and then copy/pasted our schema(s) into the expected location. Then we needed to manually copy the generated models into our project.
Oh, I nearly forgot: when using amplify codegen you need to ensure that your schema is annotated correctly; types e.g. need to have an @model annotation – but if you have this annotation in the schema when trying to deploy to AppSync, the deployment fails…so we also needed to manually adjust the schema before we were able to execute amplify codegen.

Using the Amplify SDK for Flutter

In our use case, all of the backend infrastructure is created using the AWS CDK. We are not using Amplify to create backend resources – and this use case has – until recently – not really been promoted by the Amplify team. Thanks to one of my last blog posts around the same topic, this new documentation page has been added, which simplifies the setup of your amplifyconfiguration.dart – but we were missing the same documentation for the “Authentication” library. Once again, we worked around this problem by using a temporary Amplify Flutter project. That allowed us to copy/paste the configuration and adjust it to our needs.

Environment aware connections

In our current setup, we have at least three environments to test and promote our application. In order to be able to execute and test the Flutter application in all environments without code changes, we needed to make the amplifyconfiguration.dart environment-aware:

class EnvironmentConfig {
    // Compile-time values, provided via --dart-define during the build
    static const WEB_URL = String.fromEnvironment('WEB_URL');
    static const API_URL = String.fromEnvironment('API_URL');
    static const CLIENT_ID = String.fromEnvironment('CLIENT_ID');
    static const POOL_ID = String.fromEnvironment('POOL_ID');
}

These environment variables are then used in our configuration object:

const amplifyconfig = """{
"UserAgent": "aws-amplify-cli/2.0",
    "Version": "1.0",
    "api": {
        "plugins": {
            "awsAPIPlugin": {
                "frontend": {
                    "endpointType": "GraphQL",
                    "endpoint": "${EnvironmentConfig.API_URL}",
                    "region": "us-east-1",
                    "authorizationType": "AMAZON_COGNITO_USER_POOLS"
                }
            }
        }
    },
    "auth": {
        "plugins": {
            "awsCognitoAuthPlugin": {
                "UserAgent": "aws-amplify-cli/0.1.0",
                "Version": "0.1.0",
                "IdentityManager": {
                    "Default": {}
                },
                "CognitoUserPool": {
                    "Default": {
                        "PoolId": "${EnvironmentConfig.POOL_ID}",
                        "AppClientId": "${EnvironmentConfig.CLIENT_ID}",
                        "Region": "us-east-1"
                    }
                },
                "Auth": {
                    "Default": {
                        "authenticationFlowType": "USER_PASSWORD_AUTH",
                        "socialProviders": [],
                        "usernameAttributes": [
                            "EMAIL"
                        ],
                        "signupAttributes": [
                            "BIRTHDATE",
                            "EMAIL",
                            "FAMILY_NAME",
                            "NAME",
                            "NICKNAME",
                            "PREFERRED_USERNAME",
                            "WEBSITE",
                            "ZONEINFO"
                        ],
                        "passwordProtectionSettings": {
                            "passwordPolicyMinLength": 8,
                            "passwordPolicyCharacters": []
                        },
                        "mfaConfiguration": "OFF",
                        "mfaTypes": [
                            "SMS"
                        ],
                        "verificationMechanisms": [
                            "EMAIL"
                        ]
                    }
                }
            }
        }
    }
}""";

Within our CI/CD workflow we then pass the correct values for these variables, and during flutter build web they are baked into the application.
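
As an illustration, a build action in a CodeCatalyst workflow could pass these values roughly like the sketch below – the variable names and values are assumptions for illustration, not our exact configuration:

  BuildUi:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
      Variables:
        # assumed variable names/values - in our setup they differ per environment
        - Name: API_URL
          Value: https://example.appsync-api.us-east-1.amazonaws.com/graphql
        - Name: CLIENT_ID
          Value: example-client-id
        - Name: POOL_ID
          Value: us-east-1_example
    Configuration:
      Steps:
        # --dart-define makes the values available to String.fromEnvironment at compile time
        - Run: flutter build web --dart-define=API_URL=$API_URL --dart-define=CLIENT_ID=$CLIENT_ID --dart-define=POOL_ID=$POOL_ID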

After solving these challenges, using the Amplify Flutter library worked without noteworthy problems or hiccups.

Using the Amplify Flutter Library to authenticate a user with Cognito

The documentation for Amplify Flutter is really good and we decided to also use the Authenticator widget – after all, this project was born through a hackathon – and we did not have much time to implement the authentication flow ourselves.

In our main.dart we needed to include the Amplify configuration:

Future<void> _configureAmplify() async {
  final api = AmplifyAPI(modelProvider: ModelProvider.instance);
  await Amplify.addPlugin(api);

  // Add any Amplify plugins you want to use
  final authPlugin = AmplifyAuthCognito();
  await Amplify.addPlugin(authPlugin);

  try {
    await Amplify.configure(amplifyconfig);
  } on AmplifyAlreadyConfiguredException {
    safePrint(
        'Tried to reconfigure Amplify; this can occur when your app restarts on Android.');
  }
}

and then we only needed to adjust the build method to distinguish between AuthenticatedViews and “normal” views:

Widget build(BuildContext context) {


   final GoRouter router = GoRouter(
      routes: <RouteBase>[
        GoRoute(
          path: '/',
          builder: (BuildContext context, GoRouterState state) {
            return HomeScreen();
          },
        ),
        GoRoute(
          path: '/profile',
          builder: (BuildContext context, GoRouterState state) {
            return AuthenticatedView(
                child: ProfileScreen(),
            );
          },
        ),
        GoRoute(
          path: '/imprint',
          builder: (BuildContext context, GoRouterState state) {
            return ImprintScreen();
          },
        ),
        GoRoute(
          path: '/privacy',
          builder: (BuildContext context, GoRouterState state) {
            return PrivacyScreen();
          },

        ),
        GoRoute(
            path: '/in',
            builder: (BuildContext context, GoRouterState state) {
              return AuthenticatedView(
                  child: LoggedInHomepage(title: 'AWS Speakers Directory'));
            },
        ),
      ],
    );

    return Authenticator(
      initialStep: AuthenticatorStep.signIn,
      signUpForm: SignUpForm.custom(
        fields: [
          SignUpFormField.username(),
          SignUpFormField.password(),
          SignUpFormField.passwordConfirmation(),
          SignUpFormField.email(),
          SignUpFormField.custom(
              title: "First name",
              attributeKey: CognitoUserAttributeKey.givenName),
          SignUpFormField.custom(
              title: "Last name",
              attributeKey: CognitoUserAttributeKey.familyName),
          SignUpFormField.custom(
              title: "City",
              attributeKey: CognitoUserAttributeKey.custom("city")),
          SignUpFormField.custom(
              title: "Country",
              attributeKey: CognitoUserAttributeKey.custom("country")),
          SignUpFormField.birthdate(),
          SignUpFormField.gender(),
        ],
      ),
      child: MaterialApp.router(
        title: 'AWS Speakers Directory',
        routerConfig: router,
      ),
    );
  }

We would have loved to open source our code, but unfortunately that option is currently not available in Amazon CodeCatalyst.
If you’re an experienced Flutter developer and don’t like the code you see above – that’s fine, the four of us have been learning Flutter with this project – approach us and tell us how to do this better! 😉

Using the Amplify Flutter Library to execute AppSync / GraphQL queries

Accessing AppSync was also really easy by following the Amplify Flutter documentation. Here is our code for retrieving events from the backend:

Future<List<EventRow?>> _listEvents() async {
    try {
      String graphQLDocument = '''query GetEvents {
      listEvents {
        pk
        sk
        title
        description
        tags
        length
      }
    }''';

      var operation = Amplify.API
          .query(request: GraphQLRequest<String>(document: graphQLDocument));
      List<EventRow> events = [];
      var response = await operation.response;
      var data = response.data;
      if (data != null) {
        Map<String, dynamic> userMap = jsonDecode(data);
        print('Query result: ' + data.toString());
        List<dynamic> matches =
            userMap["listEvents"] != null ? userMap["listEvents"] : [];
        matches.forEach((element) {
          if (element != null) {
            if (element["id"] == null) {
              element["id"] = "rnd-id";
            }
            var event = Event.fromJson(element);
            events.add(EventRow(event, context));
          }
        });
        return events;
      }
    } on ApiException catch (e) {
      print('Query failed: $e');
    }

    return <EventRow?>[];
  }

This uses the previously generated models. All of the queries are authenticated using Cognito authentication – and this is the identity that we can also use in the backend to authorize access.
The Amplify documentation is pretty good, hence I will not be adding more details to this post.
Please ask if you have any specific questions.

Wishes for the Amplify SDK for Flutter

We already mentioned most of them: better amplify codegen support (including the option to generate both Flutter and TypeScript models at the same time), better documentation or support for setting up the Amplify configuration completely without an Amplify backend.
Another thing we did not talk about here, but which Julian mentions in his blog post, are the AppSync [merged APIs], which are currently not supported – thus we needed to execute amplify codegen for each of our microservice schemas and then for the merged API – and then manually bring the models into a usable structure (e.g. copy/pasting from the different ModelProviders).

What’s next for our project

So, what’s next? After the hackathon we are thinking about making this a “kind of” open source, collaborative project. Here we are looking for contributors – please contact us if you are interested.
Besides that, Matt covers a good bunch of roadmap items already – and we definitely need someone with more Flutter experience to review our UI code, to e.g. introduce the BLoC pattern for cleaner code, add some styling and expand existing functionalities.
And we still need to play the “cross-platform” card – we need a build step to generate iOS and Android versions of our application and need to make sure that the apps land in the app store(s)!
Please reach out to us if you would like to get involved!

Who to reach out to?
Danielle, Matt, Julian or myself.


Amazon CodeCatalyst reaches “GA” status and becomes available for general use

The new service announced by Amazon in Las Vegas at re:Invent 2022 – an integrated DevOps service to empower development teams to develop and deliver software faster – has finally reached “general availability” status. As I have previously outlined, this achievement is very important for Amazon and the CodeCatalyst team. Congratulations to the team for reaching this goal, which I can imagine is not an easy step for this product. The tool touches a lot of very sensitive parts of a software project and I can imagine the security standards being really high.

A huge achievement – thank you, everyone on the team, for investing in CodeCatalyst and for listening as closely to customer feedback as you do!

What changes did get implemented for GA?

As part of the GA release we see a lot of minor improvements in the user interface and color changes. In the last few weeks, we have seen a few “bigger” changes – like the possibility to use Dev Environments for GitHub-based projects. We also got “Graviton-based” execution environments for CI/CD workflows which, according to AWS, should reduce our costs.
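
For reference, selecting an Arm-based (Graviton) compute fleet for a workflow action could look roughly like the minimal sketch below – the fleet name and the EC2 compute type are what I remember from the compute documentation and should be treated as assumptions to verify:

  Build:
    Identifier: aws/build@v1.0.0
    Compute:
      Type: EC2
      Fleet: Linux.Arm64.Large   # assumed name of an Arm64 (Graviton) on-demand fleet
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: echo "Running on Arm-based compute"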

It is still hard to track down all of the changes in CodeCatalyst, as there is – to my knowledge – no public or semi-public roadmap. This is one of the things that I’d love to see: for an integrated service that is at the core of the developer experience for teams, any minor change can either improve or destroy the “usage experience”. If a team invests in adopting a new tool like CodeCatalyst, they will need to know how changes in workflows, features or the user interface can influence their day-to-day activities. Let’s see, maybe the team can share “something” like a “changelog” with us (or even an RFC process like Amplify or AppSync)?

Reached “GA” – so who can start using it now?

As of today CodeCatalyst is only available in US regions, which means that it can be adopted mainly by US enterprise customers. CodeCatalyst already gives you the possibility to set up different Spaces for your account, and within a Space you can manage multiple projects. So in theory, CodeCatalyst is “ready to be used” by everyone.

Practically speaking, it is easier to adopt the service for new projects than for existing projects, as there is no real “import” functionality. Yes, you can integrate existing GitHub projects, but that only integrates the source code. Unfortunately that does not make all of the “cool” things available right from the start of integrating the source: existing workflows (CI/CD pipelines) are lost and need to be re-built, and issues/tickets are not imported into CodeCatalyst (though they can be made available through the Jira integration).

I have been regularly using CodeCatalyst (both for imported and “new” projects) – and I really think that the tool already works very well. 

The “killer feature” that I see for new projects are the “blueprints”, which essentially get you started within minutes, e.g. to deploy a SPA application or to have a “true” CI/CD pipeline for a full-stack application following the DPRA (Deployment Pipeline Reference Architecture).

Right now I would recommend using CodeCatalyst for any new project that you start, to begin building out your workflows and best practices.

So what do I still need in order to recommend CodeCatalyst for existing projects?

There are a few things that I have already been writing about:

  • “Import” of existing CI/CD workflows (e.g. GitHub Actions, CDK Pipelines or CodePipelines)
  • Fully import projects
    • existing issues from Github or JIRA
    • Git-based projects including the history
  • Tighter security settings and permissions
    • Fine granular roles to allow or forbid access to specific parts of a project
    • Options to allow or forbid execution of workflows (or of deployments)
  • Additional workflow options
    • Manual approvals are very high on my wish list
    • Integration of other AWS services natively

A question for the readers: What do YOU think that you need to adopt CodeCatalyst?

A big question for the CodeCatalyst team – HOW MANY AWS TEAMS ARE USING CODECATALYST FOR PRODUCTION DEPLOYMENTS TODAY?

Where do I see the potential for CodeCatalyst?

CodeCatalyst is a big bet by AWS. There is big potential to really improve the life of development teams, and these are the main things where I believe it can out-grow other existing solutions:

  • Integration of AWS Services / deployments metrics
    • the true integration with AWS APIs
    • Integration into “post-deployment” verifications (e.g. auto roll-back after failed CloudWatch metrics)
  • “At-hand” developer support to improve efficiency
    • with CodeWhisperer (which recently reached GA) AWS already aims to support developers during the development phase, but with CodeCatalyst AWS can take this to the next level:
    • AI support during Pull Request Reviews (or automated approvals for PRs – e.g. by including CodeGuru, etc., automated merges, etc.)
    • AI support during workflow executions (when to approve, when to deploy, when to promote, etc.)
    • With improvement proposals for workflows if the “AI model” recognizes patterns (in issue workflows or CI/CD workflows)
  • With automated improvements for existing projects based on blueprints
    • Best practices change – and so blueprints change – and if the CodeCatalyst team can automatically apply them to existing projects, customers will benefit from it

And last but not least:

I truly believe that every software project should start with a CI/CD pipeline – and with the blueprints including a CI/CD workflow that follows the DPRA and other AWS best practices, we can truly make this possible: empower developers to deliver their software projects in minutes right after starting their project.

Do you see the potential in CodeCatalyst? If you do not see any potential in the tool – why not?


Pipeline strategies for a mono-repo – experiences with our Football Match Center projects in CodeCatalyst

Both Christian and I have been writing about our “Football Match Center” project – and as part of this project we obviously also needed a CI/CD (Continuous Integration and Continuous Deployment) pipeline. Our aim was to be able to integrate changes that we make regularly and see commits to the main branch being directly and automatically deployed to our environments.

I will first try to define some pre-requisites and then talk about learnings and experiences. 

What is a mono-repo

A mono-repo is an abbreviation of “mono repository”, which I understand as a single Git repository where different microservices or components are stored and maintained together. These can be various different services, infrastructure or user interface components, or backend services.

A mono-repo has special requirements when building the CI/CD pipeline.

Expectations for our CI/CD pipeline

For our CI/CD pipeline we wanted to be able to push changes to production quickly and be able to iterate fast. We wanted to achieve 100% automation for everything required for our project. As we have been writing, we completely develop this project using Amazon CodeCatalyst and thus the pipeline should also be built using the Workflows in CodeCatalyst.

Going forward we want to ensure that the pipeline also includes all CI/CD best practices as well as security scans and automated integration or end-to-end tests.

How to structure your pipelines

In this article we will purely focus on the CI/CD pipeline for your “main” or “trunk” branch – the production branch that will be used to deploy your software or product to the production environment.

We will not consider pipelines that should be executed on feature branches or on pull request creation.

The “one-pipeline-to-rule-them-all” approach

In this approach all services are deployed within the same pipeline. This means that there is only a single pipeline for the “main” branch. All services that are independent from each other can be deployed in parallel, while services that have a dependency need to be deployed one after another. Dependencies or information from one service to another can be passed through the pipeline using environment variables.

This can lead to longer deployment/execution times but ensures that a commit to the “main” branch is always deployed completely. If tests are included in the pipeline, they will need to cover all aspects of the application.
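
To illustrate how information could be passed from one action to a dependent one in a CodeCatalyst workflow, here is a minimal sketch – the action names, the exported variable and the CloudFormation export name are assumptions for illustration, not a description of our actual pipeline:

Actions:
  DeployBackend:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Outputs:
      Variables:
        - BUCKET_NAME
    Configuration:
      Steps:
        - Run: npx cdk deploy --require-approval never
        # export the bucket name so that dependent actions can reference it (export name is assumed)
        - Run: export BUCKET_NAME=$(aws cloudformation list-exports --query "Exports[?Name=='FrontendBucketName'].Value" --output text)
  DeployFrontend:
    Identifier: aws/build@v1.0.0
    DependsOn:
      - DeployBackend
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        # reference the output variable of the previous action
        - Run: aws s3 sync ./build "s3://${DeployBackend.BUCKET_NAME}"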

The “context-specific” or “component-specific” approach

Different components or contexts get a different pipeline – which means that e.g. the backend services are deployed in one pipeline and the frontend services in a different one.

In this approach, you automate the deployments for the components and need to ensure that, if there are dependencies between the components, the pipeline verifies them. If one component requires information from another one, you need to pass these dependencies through other means.

This can lead to faster iteration cycles for specific components but increases the complexity of the pipeline dependencies. You also cannot directly see whether a specific commit has been deployed for all components or not.

The “one-pipeline-for-each-service” approach

This is the most decoupled option for building a CI/CD pipeline. Each service (Lambda function, backend, microservice) gets its own pipeline. For each service, you can implement service-specific steps as part of the pipeline.

One of the main requirements for this is that the services are fully decoupled, otherwise managing dependencies can get very difficult. However, this allows a very fast iteration and development cycle for each microservice, as the pipeline execution for each service is usually very fast.

The pipeline needs to verify the dependencies for each service as it executes the deployment.

Football Match Center – our experiences with building our CI/CD pipeline in Amazon CodeCatalyst

For our project we decided to start with a “mono-repo” – in our case today, we have a CDK application (written in TypeScript) that describes the required infrastructure and includes Lambda functions (where required), and a user interface which is written in Flutter.

From a deployment perspective, the CDK application needs to be deployed on AWS and the Flutter application then needs to be deployed to an S3 bucket to serve as a Single Page Application (SPA) behind CloudFront. Obviously this deployment/upload has the prerequisite that the S3 bucket already exists.

How we started

We started, very classically, with the “one-pipeline-to-rule-them-all” approach. We had one single pipeline that was used to deploy all services that are part of the infrastructure.

This pipeline started with “cdk synth” using the “CDK deploy” action in CodeCatalyst and then had other steps that depended on the first one – executing the “flutter build” and later the “UI deploy” (using the S3 deploy action).

In this first version, the CDK deploy step had output variables with the name of the S3 bucket and the CloudFront distribution ID, passing them to the next step, where the output of “flutter build” was then uploaded and the CloudFront distribution invalidation request was triggered.

In this approach a commit to the “main” branch always triggered the same pipeline and this pipeline deployed the complete application.

We also used only natively available CodeCatalyst actions for deployment – “cdk deploy” and “build”. For the Flutter action we used a GitHub Action for Flutter.
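
For context, CodeCatalyst can run GitHub Actions inside a workflow; a rough sketch of such a Flutter build step is shown below – the runner identifier and the flutter-action package reflect what I remember from the documentation and the GitHub marketplace, so treat them as assumptions to verify:

  FlutterBuild:
    Identifier: aws/github-actions-runner@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        # GitHub Actions steps are embedded inside the CodeCatalyst action
        - name: Set up Flutter
          uses: subosito/flutter-action@v2
          with:
            channel: stable
        - name: Build the web application
          run: flutter build web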

Experiences and pipeline adjustments

With this approach we had the problem that the Flutter build step took ~8 minutes and blocked a new iteration of changes in the CDK application or the Lambda functions. Thus, this slowed down our development cycle.

In addition to that, we found out that there was no possibility to influence the CDK version with the CDK deploy action – but we wanted to be able to use the version defined in our Projen project, to be able to deploy to development environments from our local machines with the same version as from the CI/CD pipeline.

Both of these findings and experiences brought us to implement some changes to the pipeline:

  • We separated the UI build from the CDK build
  • We moved away from using “cdk deploy” and replaced it with a “build” step – to be able to trigger “projen” as part of the pipeline

So now we have two pipelines:

  1. CDK deployment
    • Triggered on changes to the “cdk-app/*” folder
    • Executes the CDK synth, build and deploy steps – not with the “CDK deploy” action, but with a normal build step instead
    • We adjusted the CDK app to include CloudFormation exports that export the S3 bucket name and the CloudFront distribution ID
  2. UI deployment
    • Triggered on changes to the “ui/*” folder
    • Reads the values for the S3 bucket and the CloudFront distribution ID from the CloudFormation exports using the AWS CLI
    • Executes the Flutter build steps and the S3 deploy action (a sketch of this workflow follows below)
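As an illustration of the second pipeline, the sketch below shows roughly how the UI workflow reads the exported values before deploying. The export names (“ui-bucket-name”, “ui-distribution-id”) are hypothetical and have to match the CloudFormation exports created by the CDK app, and as above the Flutter tooling and the exact trigger syntax are simplified:

Name: ui-deployment
SchemaVersion: "1.0"
Triggers:
  - Type: Push
    Branches:
      - main
    # only run when something below ui/ changed
    FilesChanged:
      - ui/*
Actions:
  BuildAndDeployUI:
    Identifier: aws/build@v1.0.0
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: (cd ui && flutter build web)
        # read the values from the CloudFormation exports and use them in the same step
        - Run: BUCKET_NAME=$(aws cloudformation list-exports --query "Exports[?Name=='ui-bucket-name'].Value" --output text) && aws s3 sync ui/build/web s3://$BUCKET_NAME/ --delete
        - Run: DISTRIBUTION_ID=$(aws cloudformation list-exports --query "Exports[?Name=='ui-distribution-id'].Value" --output text) && aws cloudfront create-invalidation --distribution-id $DISTRIBUTION_ID --paths "/*"

The CDK workflow looks similar, just triggered on “cdk-app/*” and running the Projen/CDK deployment in a normal build step instead.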

These changes resulted in faster iterations for the development cycle of the CDK app and decoupled the backend from the UI part. We were also able to pin the CDK version to the version we have selected in Projen.

In our project we have chosen the “context-specific” approach for the pipeline.

My recommendations for building CI/CD pipelines for a mono-repo

Our CI/CD pipeline is not perfect yet, and we still need to add some important things to it.

From the experiences we have made so far, I am still not convinced that our “context-specific” approach is the right path.

As of writing this post in early April 2023, I’m inclined to move towards a model where we combine the “context-specific” and the “one-pipeline-to-rule-them-all” approaches: context-specific pipelines for “lower”, non-production environments and then a single pipeline that does the promotion to our production environment.

Today we do not yet have a production environment, so we have not had to answer that question yet 🙂

How do you solve this challenge around building CI/CD pipelines for mono-repos?


What can we expect for the General Availability (GA) of Amazon CodeCatalyst?

At re:Invent 2022, as usual, various new AWS services and functionalities were announced in Preview. Now, at the beginning of April 2023, a few of them have already reached “General Availability” (GA) status – Application Composer (in early March), VPC Lattice (in late March). My favourite new service, Amazon CodeCatalyst, has not yet reached this goal – but I have a feeling that now is the right time to think about what and when we can expect this status.

You wonder what CodeCatalyst is? Watch this video on my YouTube channel or read my two initial posts about it.

Why is reaching the “GA” milestone so important?

Before starting with my assumptions on what we can expect for GA, let’s clarify why reaching this milestone is so important. Being “in Preview” can mean a lot of different things. In a lot of organizations this usually translates to “limited availability” – a service not being available in all regions, or not yet being reliable or scalable. For other organizations, it means that specific aspects of the product can be immature or unreliable. It can also mean that bigger API changes are yet to come or that security guardrails are still missing.

In general, this can be seen as a “beta” offering which is not appropriate for production workloads.

For these reasons and maybe others, a lot of organizations (especially US-based ones) do not allow using or adopting services that are in “Preview”.

For all of my experiments, tests, videos and projects I have so far been able to stay within the Free Tier. And I assume that this will also be true for most of my readers: you can get a long way using the Free Tier that Amazon CodeCatalyst offers today.

So that’s another big reason for AWS to push this service out of “Preview”: it gives organizations that are not allowed to use a service in “Preview” the possibility to start using and adopting it – and with that, AWS can start earning money with the service, which until now might have been difficult.

And as we know, AWS tries to “work backwards” from customer requirements, and the early usage of CodeCatalyst will drive further investments into the service.

What to expect for GA of CodeCatalyst?

Simple: Nothing big – most probably only regional rollout.

I personally do not expect any major new features for the service, as the team has been constantly releasing new features and functionalities on a regular cadence. There was simply no time left to work on bigger features while preparing for “General Availability” (GA).

What CodeCatalyst has already delivered until today…

Let’s look at what has been added to CodeCatalyst since its announcement in December 2022:

  • Additional Reporting auto-discovery
  • Change Tracking – the possibility to see which changes have been deployed to a certain environment
  • Additional native workflow actions and improvements, e.g.
    • a fix for the CDK action that makes it possible to define the “workpath” of a CDK app
    • Additional native actions
  • Linked issues to Pull Requests – you are now able to link issues to a pull request
  • UX improvements
    • Wider log view in the UI – at the beginning you were not able to make the log view larger; now this is possible
    • Page title adjustments
  • New Blueprints (like the “Textract” one)
  • Development environments for Github based projects

This is not a complete list, but the things that I personally noticed and that I liked to see.

So…when is “the date”?

Hard to guess, but I would expect “soon”. Ideally right before a month starts, which will make the billing cycle easier 🙂 

So I would guess “end of April”, which would bring the service out right in time for the Berlin Summit (3rd of May).

Next steps for CodeCatalyst

In my last posts I have already shared my thoughts and the features that I would love to see. But what will AWS implement?

Given that reaching “GA” status opens the way to “enterprise clients”, I would expect that one of the first features will be Single Sign-On functionality, maybe with an integration with Okta, Ping, Azure Active Directory or other existing IdPs.

In addition to that, I believe that the user interface needs some tweaks to streamline navigation and workflows – that’s something I personally experience every day: not knowing when and where to click to get to the right place. I also think that additional service integrations will be added – e.g. StepFunctions or SNS, maybe SQS – see also my post about sending notifications from workflows.

And then there is one last thing which has gotten only limited attention so far: usable APIs and CLI integrations – so I would expect a major update there.

I’m really looking forward to seeing CodeCatalyst reach GA – I’ve had various conversations with the team over the last months and I know that they have a true vision to make CodeCatalyst successful as a truly AWS-integrated and fully functional DevOps tool.

Are there features you are missing? Please let me know and I will forward them to the team.


How we automated infrastructure creation and Windows workloads through Lambda and StepFunctions

This article starts at the very beginning of my own, personal journey to the cloud and to where I am today in my career: back in 2015, when I barely knew that something as big as “AWS” existed.

Of course, having been a tech nerd since I started my career, I knew what “AWS” was and had a glimpse of its tremendous power and opportunities, but I was not aware of the details and of the possibilities I would come to see.

How we started

At that time my team and I had built out a lift-and-shift solution on EC2 instances, onto which our product was deployed manually. We aimed to grow our business, but we knew that this would not be possible without automation. Our product had different automation requirements, and we automated these by running the required jobs through the Windows task scheduler.

Now, as we decided to offer our service to other and additional customers, we needed to find further possibilities for automation and better operational support.

The first thing that we did was to move away from manually provisioning EC2 instances towards using CloudFormation to bring up the instances and all of the required infrastructure (VPC, subnets, load balancers, etc.). This already helped us a lot towards delivering our solution faster. But we still had these Windows tasks which needed to be set up on the different instances, and this is where we looked at additional AWS services to replace them with other possibilities.
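Just to illustrate the idea (the resources, AMI ID and CIDR ranges below are placeholders, not our actual setup), a heavily simplified CloudFormation template for this kind of infrastructure could look like this:

AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal sketch - a VPC, a subnet and one Windows instance
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  AppSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder for a Windows AMI
      InstanceType: t3.large
      SubnetId: !Ref AppSubnet

The real templates of course contained much more (load balancers, security groups, etc.), but the principle is the same: the whole environment is described in code and can be created repeatedly.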

Adding Serverless capabilities to the mix

At that time, we looked at using AWS Lambda and StepFunctions – which were a “brand new thing” back then – to automate and orchestrate our workflows. With StepFunctions being really new, we recorded a “This is my Architecture” video at re:Invent 2018. To be honest, when we started to look at Lambda and StepFunctions, I personally was not very convinced. Coming from a Windows / Java background, moving orchestration capabilities “outside” the “server” (= the EC2 instance) felt wrong, as I had not thought about orchestrated workloads which could run across multiple infrastructure components before.

Through StepFunctions, we orchestrated retrieving data, starting a new EC2 instance, automatically installing our software on it and then running the required workload. AWS Lambda helped us in this case to start EC2 instances programmatically. At the same time, StepFunctions gave us the possibility to get an overview of the current status of the executions through the AWS Console. The integration with CloudWatch, which was already available at that time, allowed us to implement alarms and monitor execution times.
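To give an idea of the pattern (all state names and Lambda ARNs below are made up, and the real workflow had more steps and error handling), here is a heavily simplified version of such a state machine – written in YAML for readability, although the Amazon States Language is natively JSON:

Comment: Sketch - retrieve data, start an instance, run the workload
StartAt: RetrieveJobData
States:
  RetrieveJobData:
    Type: Task
    Resource: arn:aws:lambda:eu-central-1:111122223333:function:retrieve-job-data
    Next: StartInstance
  StartInstance:
    Type: Task
    Resource: arn:aws:lambda:eu-central-1:111122223333:function:start-ec2-instance
    Next: WaitForInstance
  WaitForInstance:
    # give the instance time to boot and install our software
    Type: Wait
    Seconds: 300
    Next: RunWorkload
  RunWorkload:
    Type: Task
    Resource: arn:aws:lambda:eu-central-1:111122223333:function:run-workload
    End: true

Each Task state calls a Lambda function; the Wait state is a simple stand-in for the waiting and error handling a real workflow would need.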

During the process of testing and implementing this orchestration, we regularly hit new obstacles – e.g. a specific instance type not being available in an availability zone, or errors while reading data from S3. We often thought about giving up on our approach, but moved on after seeing the benefits of automation and being less dependent on a specific EC2 instance.

Orchestrating workflows using AWS StepFunctions is way easier today than at the time I was part of this project. In 2017, there were only minimal possibilities for direct integrations with other AWS services from AWS StepFunctions (like Lambda). Today, AWS StepFunctions offers more than 200 service integrations, “standard” workflows (which are quite expensive) and “express” workflows.

I have been using StepFunctions in different projects lately, also in my project around building my own online & mobile game using serverless technologies at pegasus-galaxy.net.

How do you orchestrate your serverless workflows?

What are your experiences with AWS Step Functions?
