A self-hosted CodeCommit alternative

A few weeks ago, AWS CodeCommit became a deprecated service on AWS. This means customers can no longer create new repositories – refer to this announcement for all the details: Blog for CodeCommit

There are obviously a lot of alternatives to CodeCommit (Github, Gitlab, …), but if you need a “self-hosted” Git repository in your own AWS account, this becomes a little harder to provision.

If you’re looking for a “self-hosted” Git server, a bunch of tools come up:

  • Gitea
  • Gitness
  • Gitlab
  • …and obviously also others or just a “git” server running on EC2

As I wanted to be able to deploy something that works “out of the box” I looked at how to provision one of these alternatives on my own AWS account.

Gitness

Gitness is an open source development platform packed with the power of code hosting and automated DevOps pipelines. Its biggest maintainer is Harness. It includes “way more” than just Git – e.g. Gitspaces, Pipelines, etc.

Deployment of Gitness

If you want to deploy Gitness, the docs point you at running it locally using Docker, or at deploying it on EC2 or Kubernetes (k8s).

My aim was to “only” make the “Git” component available and because of that I’ve chosen Amazon ECS with EFS as storage to provision Gitness.

Open Source Code for the deployment of Gitness

I’ve set up a project on Github where all of the code examples are available for you to look at:

https://github.com/Lock128/setup-gitness

In the rest of the article, I’ll try to walk you through the most important parts of the project.

Please note that this code is not production-ready; it is more of a PoC to showcase the direction that you could take.

Source Overview

We’re using AWS CDK and Typescript for Infrastructure As Code (IaC).

We have a “bin” directory that contains the main application and a “lib” directory that contains the “CloudFormation stack”.

The “real code” is in setup-gitness-stack.ts, where we create the actual infrastructure: the ECS service running the Gitness container, the EFS file system that stores the repositories, and the load balancer that exposes the service.
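
To give you an idea of the direction without pasting the whole stack here, below is a heavily simplified sketch of the core pattern (an ECS Fargate service with an EFS volume behind an Application Load Balancer). The names, sizing and the GITNESS_URL_BASE placeholder are illustrative, and the container settings follow the Gitness Docker quick-start defaults – the actual stack in the repository differs in the details:

import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as efs from 'aws-cdk-lib/aws-efs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

export class GitnessSketchStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Network and cluster for the Gitness container
    const vpc = new ec2.Vpc(this, 'GitnessVpc', { maxAzs: 2 });
    const cluster = new ecs.Cluster(this, 'GitnessCluster', { vpc });

    // EFS keeps the repositories when tasks are replaced
    const fileSystem = new efs.FileSystem(this, 'GitnessData', { vpc });

    const taskDefinition = new ecs.FargateTaskDefinition(this, 'GitnessTask', {
      cpu: 512,
      memoryLimitMiB: 1024,
    });
    taskDefinition.addVolume({
      name: 'gitness-data',
      efsVolumeConfiguration: { fileSystemId: fileSystem.fileSystemId },
    });

    const container = taskDefinition.addContainer('gitness', {
      image: ecs.ContainerImage.fromRegistry('harness/gitness'),
      portMappings: [{ containerPort: 3000 }],
      logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'gitness' }),
      environment: {
        // Updated after the first deployment to point at the ALB URL (see below)
        GITNESS_URL_BASE: 'http://replace-me-with-your-load-balancer-url',
      },
    });
    container.addMountPoints({
      containerPath: '/data',
      sourceVolume: 'gitness-data',
      readOnly: false,
    });

    // Expose Gitness through an Application Load Balancer
    const service = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'GitnessService', {
      cluster,
      taskDefinition,
      publicLoadBalancer: true,
    });
    fileSystem.connections.allowDefaultPortFrom(service.service);
  }
}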

Required modifications?

You only need to set up “Secrets” in your repository with the names AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

After the initial deployment, you will need to modify the GITNESS_URL_BASE in https://github.com/Lock128/setup-gitness/blob/main/lib/setup-gitness-stack.ts#L104 to point it at the Load Balancer URL that has been set up for you.

Deployment

If you “fork” this repository and then push something to the “main” branch, a Github Actions workflow will deploy it to your AWS account.

The deployed CloudFormation stack will contain everything that you need.

The expected cost for this is roughly 50 USD per month.

Next steps from here

As mentioned above, the code presented here is not production-ready and does not expose all of the functionality of Gitness.

Things that you will need to consider / think of when using this code as a starting point:

  • What’s the backup schedule for the data on EFS?
  • What are the required scaling policies?
  • Do you need a custom domain name?
  • Do you need to do security hardening – on the image, on the infrastructure, etc.?
  • Do you need multiple environments (development / production) for Gitness?
  • How to connect to this Git server from your VPC?
  • How to automate user creation or tie it to an existing AWS IAM Identity Center?

As you can see, this is just a starting point of your journey to host your own Git Server on AWS 🙂

Please let me know if you have better ideas, suggestions or alternatives!


Application Composer levels up a lot and adds amazing IDE integration capabilities

In this post we’re going to look at the new functionalities that have been added to Application Composer

In this post we’re going to look at the new functionalities that have been added to Application Composer by re:Invent 2023. After announcing support for all CloudFormation resources earlier in the year, Application Composer now allows editing Step Functions within the same user interface and – even cooler – announces the integration of an IDE plugin that allows developers to build serverless functions locally.

Application Composer as a serverless, rapid prototyping service adds additional capabilities to empower developers building serverless applications

Application Composer, which was originally announced last year at re:Invent 2022, has gotten a lot of major improvements throughout 2023. As we are right at re:Invent 2023, it’s time to look back at which new capabilities have been added and how they influence building serverless applications using AppComposer.

Supporting all CloudFormation resources

Already a few weeks ago, the team announced that over 1,000 CloudFormation resources are now supported by AppComposer. This was a big update and makes it simpler to build all kinds of serverless applications. However, as this only allows AppComposer to expose the resources, the developer still needs to know all of the required connections between the different resources. I personally would love to see more “supported” resources (just like L2 constructs in CDK) made available as part of AppComposer – I hope that this will be an additional functionality soon.

Integrating additional services

With the integration of the Step Functions Workflow Studio within the same interface, developers can now build an end-to-end application within Composer before using the generated SAM or CDK templates to trigger the deployment. As a next step, I think it would be great to also be able to define EventBridge Rules & Pipes within the same interface.

Local development and IDE integration

AppComposer announced a Visual Studio Code integration that makes it possible to build and design serverless applications right from your IDE!

With this feature, you can visualize your serverless applications without being in your browser or the AWS console – start building, wherever you are and whenever you want!

I have not been able to try out this functionality yet, but especially the integration with sam connect, which allows you to directly deploy the changes you made to your picture / template, will make a big difference in building applications using AppComposer.

I also think we should not underestimate the possibility this offers to visualize existing CloudFormation templates through either the IDE plugin or the AWS Console. This will help to explain big and complex existing applications and empowers teams to have a fruitful conversation about changes they would like to implement in existing templates, as having a visualization makes the conversation easier.

What’s next for Application Composer? What are my wishes?

Already last year I asked for AppComposer to be integrated into CodeCatalyst, and I believe this would be an awesome way to quickly start serverless projects. Application Composer today feels like a playground – to make the service more usable, it needs a “deployment” component that allows you to automate the lifecycle of your serverless application (including a full CI/CD pipeline).

Last year I also asked for the creation of CDK out of Application Composer – or even importing it – but instead of investing in that direction, AWS recently announced the existence of the CDK Builder Tool. Wouldn’t it be better to merge those initiatives?

As already mentioned above, supporting additional “CDK-L2-like” patterns – or maybe the “Patterns” from serverlessland.com – would make this a much more usable product, as users would not need to know how to manually set up IAM roles, connections between API Gateway and Lambda, and so on!

What are your thoughts around the recent announcements of AppComposer? What are your experiences with it?


A real project with CodeCatalyst – Our hackathon gave us a good insight into what works and what doesn’t

As Matt already wrote in his project introduction post, we have been using Amazon CodeCatalyst to handle all of our project activities. This gives a very good indication of where CodeCatalyst is already “ready for prime time” and where further adjustments need to be made in order to develop the service into a fully fledged, all-in-one, integrated DevOps service.
The one pushing for using CodeCatalyst was me, as I wanted to try out the new Amazon service that recently went “GA” after being announced at re:Invent 2022 – in a bigger team and in a real project. It was great fun – and we learned a lot, too!
Let’s look at some details.

How we planned our tasks using “issues”

Our journey with CodeCatalyst started with a – very agile and intense – planning session using the “issues” component of CodeCatalyst. Given the nature of a hackathon, we created roughly the first ten issues in the system. Three of us were “new users” of CodeCatalyst, but they were able to quickly interact with the issues component – by default, this is a simple Kanban board with a well-structured user interface. One thing we directly missed was the ability to copy/paste images into comments within the issues…and to attach files to the issues themselves. Linking issues to one another is also barely possible right now.
We used a simple workflow, did regular planning (twice per week) and created/updated/closed more than 50 issues (some of very small scope) throughout the course of less than four weeks. Overall, the available functionality was good enough for our project and supported it well.

Collaborating source code – working with Git repositories and pull requests

The four of us are (Community) Builders from the bottom and all we need to be happy is a Git repository that we can push code to 🙂
Of course this is not entirely true, but CodeCatalyst allows you to host your own, private Git repositories. You can have multiple repositories per project. In our case, we had decided to go with a mono-repo approach, so we strictly worked in a single repository. Working with the source code repository was “just fine” and worked as expected. We also decided to merge as early and as often as possible to our “main” branch. The team, however, struggled with setting up pull requests and with the UI when reviewing them.
The most problematic things: UI response time, working with comments on pull requests (e.g. threaded comments) and creating pull requests. The whole UI does not yet feel as “structured” as that of competitors (e.g. Github) – examples: PR title/description proposals when creating a PR, or a link in the terminal/command line that allows you to create a PR.
Another thing we missed is “git blame” online – not to blame each other but to understand what changed and broke our “hacked” “hackathon” source code 🙂
Cool: the markdown rendering for files named *.md. Missing here: including local images is not possible.

CodeCatalyst usage increased a lot in this project!

Working with Workflows – Github actions to the rescue!

We started off building our workflows with natively available actions in Amazon CodeCatalyst – using the “cdk deploy” action to deploy our infrastructure and the “s3 deploy” action to deploy our front end – but quickly shifted away from that and switched over to using Github Actions within the workflow to allow building our Flutter application as a CDK asset. As our Continuous Integration pieces matured within our npm implementation, there was less reason to go back to the native actions, as the CI part was handled through npm.
Our current workflow and CI/CD pipeline:



The main preference here is to allow continuous deployment to our production environment. This goes through an initial “sandbox” deployment, promotes to a “test” environment and then deploys to “production”. The pipeline is prepared to execute integration tests, but they are not yet implemented – which is the nature of a _hack_athon 🙂
This would definitely be one of the next things to change – adding real integration tests and also adding some security verifications.

Our workflow is, as I mentioned, pretty basic – here is an excerpt of it, up to the deployment to our sandbox environment:

BuildAndTestCDKApp:
    Identifier: aws/github-actions-runner@v1
    Compute:
      Type: EC2
    Inputs:
      Sources:
        - WorkflowSource
      Variables:
        - Name: AWS_REGION
          Value: us-east-1
        - Name: NODE_ENV
          Value: sandbox
    Outputs:
      AutoDiscoverReports:
        Enabled: true
        ReportNamePrefix: rpt
        IncludePaths:
          - ./coverage/lcov.info
    Configuration:
      Steps:
        - name: Flutter Build Web
          uses: subosito/flutter-action@v2.8.0
          with:
            channel: stable
        - name: Java Install
          uses: actions/setup-java@v2
          with:
            distribution: zulu
            java-version: "11"
        - name: Setup Android SDK
          uses: android-actions/setup-android@v2
        - uses: actions/setup-node@v3
          with:
            node-version: 16
        - run: npm ci
        - run: npm run
        - run: npm run synth
    Environment:
      Connections:
        - Role: CodeCatalystWorkflowDevelopmentRole-AWSCommunityBuilders
          Name: community-builders-sandbox
      Name: speakers_sandbox
  DeploySandboxCDKApp:
    DependsOn:
      - BuildAndTestCDKApp
    Actions:
      DeployBase:
        Identifier: aws/github-actions-runner@v1
        Compute:
          Type: EC2
        Inputs:
          Sources:
            - WorkflowSource
          Variables:
            - Name: AWS_REGION
              Value: us-east-1
            - Name: NODE_ENV
              Value: sandbox
        Configuration:
          Steps:
            - name: Flutter Build Web
              uses: subosito/flutter-action@v2.8.0
              with:
                channel: stable
            - name: Java Install
              uses: actions/setup-java@v2
              with:
                distribution: zulu
                java-version: "11"
            - name: Setup Android SDK
              uses: android-actions/setup-android@v2
            - uses: actions/setup-node@v3
              with:
                node-version: 16
            - run: npm ci
            - run: npm run deploy
        Environment:
          Connections:
            - Role: CodeCatalystWorkflowDevelopmentRole-AWSCommunityBuilders
              Name: community-builders-sandbox
          Name: speakers_sandbox
  IntegrationTestSandbox:
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: ls -al
    Compute:
      Type: Lambda
    DependsOn:
      - DeploySandboxCDKApp

As you can see, a lot of the CD (and also the “deployment”) runs within the corresponding script definitions (npm scripts) in package.json.

No “Open Source” or “Readonly” view

We would really love to open-source our whole project, to give you the possibility to get involved and learn from what we built, but unfortunately Amazon CodeCatalyst currently does not offer an Open Source or a “Readonly” view and forces you to create a CodeCatalyst account – this makes it difficult for us, as we cannot link to our sources.
If you are an AWS Community Builder, please reach out to either of us in Slack and we can give you access to the project.

What did we really miss in the workflow component

  • manual approvals before promotion to our “production” environment
  • Notifications from workflows – could have been done through this implementation but we did not take time to implement that
  • Integration of actions deployed through “npm – cdk deploy” with making the changes in infrastructure visible
  • importing existing pipelines or rendering CDK pipelines into CodeCatalyst workflows

The “killer feature” of the workflows, at least as far as we see it, is the possibility to use Github Actions as part of the workflow. With only minor adjustments, an existing Github workflow can be transformed into a CodeCatalyst workflow. If any of you know about an existing tool that converts existing Github workflows to CodeCatalyst workflows in an automated way – please let me know.

The hidden gem: Development Environments

One of the “hidden gems” of CodeCatalyst are the Dev Environments – also known as Development Environments. Essentially, it’s a “service in the service”, similar to Gitpod, where you can host your IDE or the environment that you develop on. In our project we didn’t use it, as the architecture was small and we only had this single project. Also, we were developing mainly on our main machines without switching between machines too often. That’s why none of us really considered using the Dev Environments for this project. Last but not least, Flutter is not natively supported in the Dev Environments, which would have forced us to prepare a devfile that includes the Flutter dependencies – and that’s something that would have taken too much time out of the project.
Still, the Dev Environments within CodeCatalyst are a hidden gem that I believe should get more traction than it has so far.

A summary of what we think works and what doesn’t – and our biggest wishes

For a hackathon like this (3-6 weeks working time) CodeCatalyst seems to be a very good choice for a project that starts with a “new” or “fresh” codebase. The tool has most of the minimal requirements implemented to allow simple project management and good integration with AWS.
It also allowed us to quickly get started and to collaborate on our sources.

We really missed workflow notifications as well as manual approvals in the workflows.
As the tool matures, I personally hope that more and more AWS internal teams will be using CodeCatalyst to plan, develop and deploy their internal AWS services, and with that the “flow” in the user interface will definitely improve, too – as today there are some hiccups in a real developer’s workflow.

And, last but not least – we would love to share this project with you, but given that CodeCatalyst does not allow you to “Open Source” a project, that’s not possible 🙂

Maybe something for the next hackathon?

If you would like to get involved in this project and help us to shape and build our AWS Speakers directory to make it easier for AWS User Group Leaders to find speakers – please reach out to any of us four about collaborating!

Who to reach out to?
Danielle, Matt, Julian or myself.


Amplify SDK for Flutter – Developer experience and challenges in a hackathon

Background of this post

This blog post is part of a series of posts that explains details and technical challenges that we (Danielle, Matt, Julian and myself) faced during a hackathon that was focused on the Transformers Huggingface tools – please read Matt’s article for additional information and details.
In this post we are focusing on the experiences we had in the project using the Amplify SDK for Flutter.

Project Setup – architecture dependencies for project set up

Initially, our project targets being available to all AWS User Group Leaders on the web from any browser. As the project matures, I am at least hoping to also be able to publish this application as an Android and iOS application using cross-platform capabilities.
Here is a bird’s eye view of our UI architecture:


Why did we choose Flutter as UI?

Flutter is a trending cross-platform development toolkit; although it is mainly sponsored by Google, there is a vibrant open source community around it. I personally have had some exposure to Flutter in the past and liked the developer experience and the short iteration cycles. The AWS Amplify team has also been investing in making their SDK available to developers, so we thought this was a good opportunity to try out our use case and implement the web app in Flutter, using the Amplify SDK for Flutter to connect to the backend resources, which we were planning to implement using the AWS CDK. With this approach, we also want to draw some attention to the fact that the Amplify SDKs can be used without an Amplify-owned backend. This is cool and opens up a lot of possibilities – but also challenges, as we will see later. I convinced Danielle, Matt and Julian to also use Flutter for our frontend. They saw this as a good opportunity to learn a bit of Flutter themselves.

Amplify SDK for Flutter – Developer experience and challenges

The Amplify SDK for Flutter recently announced “GA” for Web and Desktop, which is a big milestone for the team. As outlined in Matt’s blog post, we are using Typescript in the backend. Amplify Flutter has native support for AppSync/GraphQL – we needed to connect the Flutter app to existing AppSync endpoints.
The AppSync schema, however, was written manually. Now we needed to use “amplify codegen” to generate the Dart models for the GraphQL types – but we also needed to write a type model in Typescript to be able to work with the same objects on the backend.
This turned out to be more difficult than expected: the [amplify codegen](https://docs.amplify.aws/cli/graphql/client-code-generation/#shared-schema-modified-elsewhere-eg-console-or-team-workflows) functionality is available for Flutter, too, but it was difficult to get this to work.
We ended up creating a new Flutter application using amplify init and then copy/pasted our schema(s) into the expected location. Then we needed to manually copy the generated models into our project.
Oh, I nearly forgot: when using amplify codegen you need to ensure that your schema is annotated correctly – types, for example, need to have an @model annotation. But if you have this annotation in the schema when trying to deploy to AppSync, that deployment fails…so we also needed to manually adjust the schema before we were able to execute amplify codegen.

Using the Amplify SDK for Flutter

In our use case, all of the backend infrastructure is created using the AWS CDK. We are not using Amplify to create backend resources – and this use case has – until recently – not really been promoted by the Amplify team. Thanks to one of my recent blog posts on the same topic, a new documentation page has been added which simplifies the setup of your amplifyconfiguration.dart – but we were missing the same documentation for the “Authentication” library. Once again, we worked around this problem by using a temporary Amplify Flutter project. That allowed us to copy/paste the configuration and adjust it to our needs.

Environment aware connections

In our current setup, we have at least three environments to test and promote our application. In order to be able to execute and test the Flutter application in all environments without code changes, we needed to make the amplifyconfiguration.dart environment aware:

class EnvironmentConfig {
    static const WEB_URL = String.fromEnvironment('WEB_URL');
    static const API_URL = String.fromEnvironment('API_URL');
    static const CLIENT_ID = String.fromEnvironment('CLIENT_ID');
    static const POOL_ID = String.fromEnvironment('POOL_ID');
}

These environment variables are then used in our configuration object:

const amplifyconfig = """{
"UserAgent": "aws-amplify-cli/2.0",
    "Version": "1.0",
    "api": {
        "plugins": {
            "awsAPIPlugin": {
                "frontend": {
                    "endpointType": "GraphQL",
                    "endpoint": "${EnvironmentConfig.API_URL}",
                    "region": "us-east-1",
                    "authorizationType": "AMAZON_COGNITO_USER_POOLS"
                }
            }
        }
    },
    "auth": {
        "plugins": {
            "awsCognitoAuthPlugin": {
                "UserAgent": "aws-amplify-cli/0.1.0",
                "Version": "0.1.0",
                "IdentityManager": {
                    "Default": {}
                },
                "CognitoUserPool": {
                    "Default": {
                        "PoolId": "${EnvironmentConfig.POOL_ID}",
                        "AppClientId": "${EnvironmentConfig.CLIENT_ID}",
                        "Region": "us-east-1"
                    }
                },
                "Auth": {
                    "Default": {
                        "authenticationFlowType": "USER_PASSWORD_AUTH",
                        "socialProviders": [],
                        "usernameAttributes": [
                            "EMAIL"
                        ],
                        "signupAttributes": [
                            "BIRTHDATE",
                            "EMAIL",
                            "FAMILY_NAME",
                            "NAME",
                            "NICKNAME",
                            "PREFERRED_USERNAME",
                            "WEBSITE",
                            "ZONEINFO"
                        ],
                        "passwordProtectionSettings": {
                            "passwordPolicyMinLength": 8,
                            "passwordPolicyCharacters": []
                        },
                        "mfaConfiguration": "OFF",
                        "mfaTypes": [
                            "SMS"
                        ],
                        "verificationMechanisms": [
                            "EMAIL"
                        ]
                    }
                }
            }
        }
    }
}""";

Within our CI/CD workflow we then pass the correct values for these variables, and during the flutter build web they are baked into the application.

After solving these challenges, using the Amplify Flutter library worked without noteworthy problems or hiccups.

Using the Amplify Flutter Library to authenticate a user with Cognito

The documentation for Amplify Flutter is really good and we decided to also use the Authenticator widget – in the end, this project was born in a hackathon and we did not have much time to implement the authentication flow ourselves.

In our main.dart we needed to include the Amplify configuration:

Future<void> _configureAmplify() async {
  final api = AmplifyAPI(modelProvider: ModelProvider.instance);
  await Amplify.addPlugin(api);

  // Add any Amplify plugins you want to use
  final authPlugin = AmplifyAuthCognito();
  await Amplify.addPlugin(authPlugin);

  try {
    await Amplify.configure(amplifyconfig);
  } on AmplifyAlreadyConfiguredException {
    safePrint(
        'Tried to reconfigure Amplify; this can occur when your app restarts on Android.');
  }
}

and then we only needed to adjust the build method to distinguish between AuthenticatedViews and “normal” views:

Widget build(BuildContext context) {


   final GoRouter router = GoRouter(
      routes: <RouteBase>[
        GoRoute(
          path: '/',
          builder: (BuildContext context, GoRouterState state) {
            return HomeScreen();
          },
        ),
        GoRoute(
          path: '/profile',
          builder: (BuildContext context, GoRouterState state) {
            return AuthenticatedView(
                child: ProfileScreen(),
            );
          },
        ),
        GoRoute(
          path: '/imprint',
          builder: (BuildContext context, GoRouterState state) {
            return ImprintScreen();
          },
        ),
        GoRoute(
          path: '/privacy',
          builder: (BuildContext context, GoRouterState state) {
            return PrivacyScreen();
          },

        ),
        GoRoute(
            path: '/in',
            builder: (BuildContext context, GoRouterState state) {
              return AuthenticatedView(
                  child: LoggedInHomepage(title: 'AWS Speakers Directory'));
            },
        ),
      ],
    );

    return Authenticator(
      initialStep: AuthenticatorStep.signIn,
      signUpForm: SignUpForm.custom(
        fields: [
          SignUpFormField.username(),
          SignUpFormField.password(),
          SignUpFormField.passwordConfirmation(),
          SignUpFormField.email(),
          SignUpFormField.custom(
              title: "First name",
              attributeKey: CognitoUserAttributeKey.givenName),
          SignUpFormField.custom(
              title: "Last name",
              attributeKey: CognitoUserAttributeKey.familyName),
          SignUpFormField.custom(
              title: "City",
              attributeKey: CognitoUserAttributeKey.custom("city")),
          SignUpFormField.custom(
              title: "Country",
              attributeKey: CognitoUserAttributeKey.custom("country")),
          SignUpFormField.birthdate(),
          SignUpFormField.gender(),
        ],
      ),
      child: MaterialApp.router(
        title: 'AWS Speakers Directory',
        routerConfig: router,
      ),
    );
  }

We would have loved to open source our code, but unfortunately that option is currently not available in Amazon CodeCatalyst.
If you’re an experienced Flutter developer and don’t like the code you see above – that’s fine, the four of us have been learning Flutter with this project – approach us and tell us how to do this better! 😉

Using the Amplify Flutter Library to execute AppSync / GraphQL queries

Accessing AppSync was also really easy by following the Amplify Flutter documentation. Here is our code for retrieving events from the backend:

Future<List<EventRow?>> _listEvents() async {
    try {
      String graphQLDocument = '''query GetEvents {
      listEvents {
        pk
        sk
        title
        description
        tags
        length
      }
    }''';

      var operation = Amplify.API
          .query(request: GraphQLRequest<String>(document: graphQLDocument));
      List<EventRow> events = [];
      var response = await operation.response;
      var data = response.data;
      if (data != null) {
        Map<String, dynamic> userMap = jsonDecode(data);
        print('Query result: ' + data.toString());
        List<dynamic> matches =
            userMap["listEvents"] != null ? userMap["listEvents"] : [];
        matches.forEach((element) {
          if (element != null) {
            if (element["id"] == null) {
              element["id"] = "rnd-id";
            }
            var event = Event.fromJson(element);
            events.add(EventRow(event, context));
          }
        });
        return events;
      }
    } on ApiException catch (e) {
      print('Query failed: $e');
    }

    return <EventRow?>[];
  }

This uses the previously generated models. All of the queries are authenticated using Cognito authentication – and this is the identity that we can also use in the backend to authorize access.
The Amplify documentation is pretty good, hence I will not be adding more details into this post.
Please ask if you have any specific questions.
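
Related to the identity mentioned above: when the API uses AMAZON_COGNITO_USER_POOLS authorization, AppSync hands the caller’s Cognito identity to the resolver. Here is a minimal TypeScript sketch of a Lambda resolver reading it – the field names follow the AppSync Cognito identity object, while the handler itself and the authorization logic are made up for illustration:

// Shape of the Cognito identity AppSync passes to a Lambda resolver
// when the API uses AMAZON_COGNITO_USER_POOLS authorization.
interface AppSyncCognitoIdentity {
  sub: string;
  username: string;
  claims: Record<string, unknown>;
}

interface AppSyncEvent {
  identity: AppSyncCognitoIdentity;
  arguments: Record<string, unknown>;
  info: { fieldName: string };
}

export const handler = async (event: AppSyncEvent) => {
  // Illustrative authorization: log who is calling and scope the result to that user.
  console.log(`${event.info.fieldName} called by ${event.identity.username} (${event.identity.sub})`);

  // ...query the data store filtered by event.identity.sub here...
  return [];
};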

Wishes for the Amplify SDK for Flutter

We already mentioned most of them: better amplify codegen support (including the option to generate both Flutter and Typescript models at the same time), better documentation or support for setting up the Amplify configuration completely without an Amplify backend.
Another thing we did not talk about here, but Julian mentions in his blog post, are the AppSync merged APIs, which are currently not supported – thus we needed to execute amplify codegen for each of our microservices’ schemas and then for the merged API – and then manually bring the models into a usable structure (e.g. copy/pasting from the different ModelProviders).

What’s next for our project

So, what’s next? After the hackathon we are thinking about making this a “kind of” OpenSource, collaborative project. Here we are looking for contributors – please contact us if you are interested.
Besides that, Matt already covers a good bunch of roadmap items – and we definitely need someone with more Flutter experience to review our UI code, e.g. to introduce the BLoC pattern for cleaner code, add some styling and expand the existing functionality.
And we still need to play the “cross-platform” card – we need a build step to generate iOS and Android versions of our application and need to make sure that the apps land in the Appstore(s)!
Please reach out to us if you would like to get involved!

Who to reach out to?
Danielle, Matt, Julian or myself.


Continuous Integration and Deployment on AWS – and my wishlist for CI/CD Tools on AWS

As I’ve shared before, I am very fortunate this year and will be giving a DevChat at the biggest AWS conference in the world – re:Invent 2022 in Las Vegas.

AWS offers different tools for all parts of your CI/CD lifecycle.
In this post I am going to cover the set of Code* tools that are available on AWS today – and will share my thoughts about what these tools are missing.

As part of the preparation for the talk, and as part of both my private project (code-name: MPAGA) and my main job @ FICO, I have been researching and learning a lot about CI/CD (Continuous Integration and Continuous Deployment) – and for the private projects, especially around CI/CD that runs natively on AWS.
I’ve found out that not everything that these tools offer today is perfect and wanted to share some ideas on what could be improved. Where possible or applicable, I will also propose workarounds or alternatives.

We will look at a few of the tools in the order of the “product lifecycle”:
1. Code
2. Build/Test
3. Deploy
4. Release

Tools that are part of the “Code” phase

For the purpose of this post, we are going to focus on tools that, as already mentioned, are natively offered by AWS and can be part of your CI/CD pipeline.

AWS CodeStar – Integration of projects

AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS and provides a unified interface for your project. It provides you with different templates that you can choose from to quickly start your project.

It allows you to manage your team, with permissions and integrates with your existing JIRA for issue management. It also integrates with your IDE (or with Cloud9).
You can also integrate with an existing Github repository.

AWS CodeCommit – hosted Git

AWS CodeCommit is a managed service for Git (just like Bitbucket, Github, Gitlab, …). It provides a hosted “git” environment that is encrypted at rest and can be accessed using the usual Git clients.

AWS CodeGuru

Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve code quality and identify an application’s most expensive lines of code. Integrate CodeGuru into your existing software development workflow to automate code reviews during application development, continuously monitor your application’s performance in production, and get recommendations and visual clues on how to improve code quality and application performance and reduce overall cost.

Tools that are part of the “Build” or “Test” phase

AWS CodePipeline – Tool to manage your CI/CD pipeline

AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

AWS CodeBuild – Build tool based on containers

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages.

AWS CodeArtifact – artifact storage

AWS CodeArtifact allows you to store artifacts using popular package managers and build tools like Maven, Gradle, npm, Yarn, Twine, pip, and NuGet.

Tools that are part of the “deploy” phase

AWS CodeDeploy

AWS CodeDeploy is a fully managed deployment service that automates software deployments to various compute services, such as Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS), AWS Lambda, and your on-premises servers.

AWS FIS

AWS Fault Injection Simulator (FIS) is a fully managed service for running fault injection experiments to improve an application’s performance, observability, and resiliency.

Tools that are part of the “Release” phase

AWS AppConfig (part of Systems Manager)

AWS AppConfig makes it easy for customers to quickly and safely configure, validate, and deploy feature flags and application configuration.

Wishlist

I’ve been able to gain some experience with the tools while working on a few projects, including cdk-codepipeline-flutter and here is a list of things that I believe could be improved.
My main focus here is on CodePipeline, as it serves as the glue between all of the other tools.

Native branch support for CodePipelines

Working with Jenkins and the MultiBranch plugin makes it easy to allow developers to quickly test and deploy code that they are working on using the CI/CD pipeline. Unfortunately, CodePipeline today does not allow automated branch discovery, so if you want to enable the automated execution of a pipeline for a branch, you will need to manually configure webhooks and then create a new pipeline (or delete an existing pipeline) when branches are created (or deleted). This is not easy to implement, and it would be great if CodePipeline natively allowed creating a pipeline automatically for every branch of a linked Git repository.
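
One way to approximate multi-branch behaviour today is to wire this up yourself: an EventBridge rule that fires on branch creation or deletion and invokes a Lambda function which then creates or removes a pipeline for that branch. Here is a hedged CDK sketch of just the wiring, assuming a CodeCommit repository and a self-written branch-pipeline-manager function (the Lambda code itself is not shown):

import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as codecommit from 'aws-cdk-lib/aws-codecommit';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';

export class BranchPipelineWiringStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const repository = new codecommit.Repository(this, 'Repo', {
      repositoryName: 'my-service',
    });

    // Hypothetical function that creates or deletes a pipeline per branch,
    // e.g. by deploying a templated pipeline stack for the branch name.
    const branchPipelineManager = new lambda.Function(this, 'BranchPipelineManager', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/branch-pipeline-manager'),
      timeout: Duration.minutes(5),
    });

    // React to branch creation and deletion events of the repository.
    new events.Rule(this, 'BranchLifecycleRule', {
      eventPattern: {
        source: ['aws.codecommit'],
        detailType: ['CodeCommit Repository State Change'],
        resources: [repository.repositoryArn],
        detail: {
          referenceType: ['branch'],
          event: ['referenceCreated', 'referenceDeleted'],
        },
      },
      targets: [new targets.LambdaFunction(branchPipelineManager)],
    });
  }
}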

Additional Templates and Best Practices

Setting up a CI/CD pipeline on AWS CodePipeline would be easier if additional best practices and templates were available as part of the tool itself. AWS is starting to promote a new open source project called “Deployment Pipeline Reference Architecture”. This is a step in the right direction, but it needs to be expanded with other flavours of a deployment pipeline. The code examples also need to be improved, brought up to date and extended to cover all languages supported by the AWS CDK.
This is critical to allow an efficient adoption of the different tools.

Native integration of 3rd party tools

AWS CodePipeline should natively support integrations to other 3rd party tools that should be part of your CI/CD pipeline – e.g. security scans like Aquasec and Checkmarx.

Remove dependency for a specific AWS account and support Cross-Account deployments natively

As indicated in this AWS blog post, the best practice for setting up a CI/CD pipeline and managing your deployments is to use multiple, dedicated accounts. CI/CD should not be bound to a single account, and this includes the management of the accounts that are able to access and configure the CI/CD tools.
Maybe a good option here would be the integration with the AWS Identity service. That might allow decoupling the CI/CD toolchain from the AWS account.
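
On the cross-account part, CDK Pipelines can at least deploy stages into other accounts today, once those accounts are CDK-bootstrapped with a trust relationship to the pipeline account – the pipeline itself still lives in a single account, which is exactly the limitation described above. A hedged sketch with placeholder repository, connection ARN and account IDs:

import { Stack, StackProps, Stage, StageProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';

// One application stage; the stacks of your workload go inside it.
class AppStage extends Stage {
  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props);
    // new MyWorkloadStack(this, 'Workload');
  }
}

export class CrossAccountPipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, 'Pipeline', {
      synth: new ShellStep('Synth', {
        input: CodePipelineSource.connection('my-org/my-repo', 'main', {
          connectionArn: 'arn:aws:codestar-connections:eu-central-1:111111111111:connection/placeholder',
        }),
        commands: ['npm ci', 'npx cdk synth'],
      }),
    });

    // The same application, deployed into two different (placeholder) accounts.
    pipeline.addStage(new AppStage(this, 'Test', {
      env: { account: '222222222222', region: 'eu-central-1' },
    }));
    pipeline.addStage(new AppStage(this, 'Prod', {
      env: { account: '333333333333', region: 'eu-central-1' },
    }));
  }
}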

Up to date CodeBuild images

Docker images provided by the CodeBuild team should be updated regularly and should support all “modern” toolkits. The open source project has some activity, but an issue asking for support of newer Android versions has now been open for some time…

Publishing options to the different mobile stores (AppStore, Play Store, Windows Store, etc….) should be possible

I’ve been looking at developing a mobile app using Flutter, but what I have not yet been able to achieve is pushing the created and built applications to the different app stores. Today, AWS does not support this natively.
You CAN integrate this with 3rd party tools like CodeMagic, but natively there is no option on AWS to publish your application.

Wrap up

This concludes the wish list that I have today for the existing AWS CI/CD tools.

Did I miss anything that you believe should be added?

Use the comments to give feedback or reach out to me on LinkedIn or by E-Mail!


Building a Flutter application for Web, iOS and Android using a CI/CD pipeline on CodeBuild – #cdk4j

This post is a follow up to the last one where I showed a CDK project that can be used to build a Flutter application for Web.

In this post, we are going to expand our existing project on Github to be able to build an “apk” file for Android and a zip file for iOS. Before I can show you how this is possible, let’s start with some challenges that I’ve faced 🙂

The aim of this CI/CD pipeline is (not yet) to be able to push the apps into the AppStore / PlayStore for testing. That’s something we can add later 😉

Challenges on the way to a full CI/CD pipeline for Flutter on CodeBuild

While preparing this post I unfortunately faced more problems building up the pipeline than expected. Several problems.

AWS CodePipeline does not support M1 / macOS build images

Currently, AWS CodePipeline unfortunately does not provide the possibility to use the famous M1 minis on AWS as CodeBuild images. This is a real problem, as it makes it impossible to use CodeBuild for building iOS apps.
Running XCode on macOS is a requirement for building a Flutter app for iOS.
The M1 minis on AWS are currently pricey as hell for this use case – if you start ONE build you are directly charged for 24 hours, even if the build takes only a few minutes! You need to actually get a dedicated instance, … – not usable for our use case of quickly building something for a side project.
So we needed to find an alternative… read below! 😉

The current AWS CodePipeline standard runtimes are not able to build (modern) Android applications

The runtimes available and exposed by CodePipeline support Android runtime 29 – and the Docker images are provisioned using Java 8. Unfortunately, as of July 2021, the Android Gradle tools (used by Flutter) require Java 11. I have created an issue in the corresponding Github repository (see here) but needed to find a workaround to move on – I think I’ve found one, but I hope that anyone reading this might have a better way or idea?

TypeScript dependencies on AWS Lambda can cost you sleep

#awscommunity helps!

When implementing the trigger for the iOS App build (see more details below) I decided to “quickly” implement the HTTPS POST call using TypeScript – which turned out to be a bad decision 🙂
I had trouble getting the “axios” dependency that I am using installed correctly. I asked around, especially among my fellow AWS Community Builders, and got a lot of great tips and ideas (kudos to Martin and Matt). Martin had the right “gut feeling” – I was missing an “npm install”.

Matt enlightened me with the three different possibilities of making Typescript Lambda functions understand their dependencies:

1. Bundle the dependency with your source code (can be achieved using esbuild – see the sketch below)

2. Add a package.json and node_modules to the lambda function source – only a good idea if dependencies cannot be minified

3. Put the dependencies in a lambda layer

Matt Morgan
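
Of the three options, the first one is the least friction in CDK land: the NodejsFunction construct runs esbuild at synth time and bundles imported dependencies such as axios into the deployed asset. A minimal sketch (the entry path and function name are made up; the same construct also exists in the Java CDK used by this project):

import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';

export class TriggerFunctionStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // esbuild bundles everything the entry file imports (e.g. axios)
    // into the Lambda asset, so no separate node_modules upload is needed.
    new NodejsFunction(this, 'TriggerIosBuild', {
      entry: 'lambda/trigger-ios-build.ts', // hypothetical entry file
      handler: 'handler',
      runtime: Runtime.NODEJS_18_X,
    });
  }
}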

In the end, this challenge was especially difficult because I needed to add the required “npm install” in two places: in the “installCommands” for the CodePipeline itself and in the “installCommands” for the Flutter build step.

CodeBuild is slow, misses conditional steps and misses integrations – and does not easily allow multi-branch pipelines

While implementing the pipeline and solving the different challenges mentioned above, I lost some time because of CodeBuild being “slow” (>1min wait time during provisioning of the build containers). That’s understandable given the nature of the service; however, it would be cool to have something like a “warm start” for a pipeline where the containers are re-used instead of re-provisioned.

There are no conditional steps – no chance to run a job only based on environment variables or anything similar. That made me implement a workaround. It would be cool to be able to use something like “branch-conditions” in the way Jenkins offers it.

CodeBuild offers only basic integration with SNS. I wanted to integrate a “lambda build step” to run the CodeMagic integration in parallel to the Flutter build job, but that is not possible, so I needed to run this “at the end” of the pipeline.

Another thing I’d love to have: multibranch pipelines. I needed to merge everything to main directly in order to test, because I couldn’t figure out how my CDK Pipeline would be able to support multiple branches.

Reaching the goal: a full CI/CD pipeline running on AWS CodeBuild to build a Flutter app for Web, Android and iOS

Here is a diagram of the “final result” that I am presenting today:

Overview picture for CI/CD pipeline for Flutter App publishing to Web, Android APK and iOS zip

The “output” artifacts of our pipeline are:
– Flutter Web application (located on S3 and reachable through HTTP call)
– Flutter Android APK (that can be side-loaded on Android phones, located on S3 bucket)
– Flutter iOS App (that can be side-loaded on iOS phones, located within CodeMagic)

As the diagram shows, we needed to fall back to a 3rd-party, non-AWS service to be able to package the iOS application. After doing a quick “vendor selection” with a shortlist that included Bitrise and CodeMagic, I decided to integrate CodeMagic in this example – because I liked the API more and it offers more free build credits/minutes. Setting it up took less than 5 minutes – it connects natively to Github and the setup of the Flutter pipeline is very easy.
The integration is set up using a Lambda function that calls the “start build” API.
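
To make that trigger tangible, here is a hedged sketch of such a function in TypeScript. The endpoint, header name and body fields are assumptions based on CodeMagic’s public build API, and the environment variables are placeholders – this is not the exact code from the repository:

import axios from 'axios';

// Assumed environment variables, set on the Lambda function by the infrastructure code.
const CODEMAGIC_API_TOKEN = process.env.CODEMAGIC_API_TOKEN ?? '';
const CODEMAGIC_APP_ID = process.env.CODEMAGIC_APP_ID ?? '';

export const handler = async (): Promise<void> => {
  // Start a build of the (assumed) iOS workflow via CodeMagic's REST API.
  const response = await axios.post(
    'https://api.codemagic.io/builds',
    {
      appId: CODEMAGIC_APP_ID,
      workflowId: 'ios-release',
      branch: 'main',
    },
    { headers: { 'x-auth-token': CODEMAGIC_API_TOKEN } },
  );
  console.log(`CodeMagic build started: ${JSON.stringify(response.data)}`);
};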

How did I solve the challenges mentioned above?

The problem building the iOS image was resolved by integrating the external Service CodeMagic.

The Android runtime dependency problem with Java 11 was resolved by switching to a custom Docker container (open source) – and then installing the requirements on top of it (npm/node, awscli, etc.).

What did you learn in this post?

In this post you have learned how to expand the implementation of our CI/CD pipeline for an example Flutter application to not only build a “web” application, but also an Android APK and an iOS zip file.
You have also seen an extension and integration of the CodePipeline with SNS for notifications, with those events being picked up by a Lambda function to trigger an external HTTPS API.
This is a major step – with this pipeline we are able to publish our application for three different “platforms” without a single code change – and it will all happen completely automatically!

I’d be glad to get your inputs into my Github repository as a pull request or just as comments on the project itself.

Next steps

Further expansions needed to this project:
– CodePipeline already has an SNS topic that it reports to – but right now the built iOS / Android app packages are not exposed anywhere – the idea would be to publish the name of the APK file and the CodeMagic build id to an SQS queue – and have a Lambda function that is triggered by the queue update a link on the example application to download the newest version of the app 😉 Today, we need to retrieve both from S3 / CodeMagic itself
– use the CloudFormation Exports of the Lambda functions in the Flutter application instead of hardcoding the URLs for the Lambda Function URLs
– enhance security for Lambda Function URLs
– add CloudFront in front of S3 to allow HTTPs connections to the Flutter App
– enhance CI/CD pipeline to package Windows App using Flutter
– enhance CI/CD pipeline to push created apps to App Store / Play Store

Feel free to contribute and add your contributions to this project into my Github repository.


Building a Flutter application (for Web) with AWS Lambda Function URL backend using AWS CDK Pipelines (written in Java)

Wow, this is a long title 😉

In this post I am going to use CDK Pipelines to build a demo Flutter application hosted on AWS S3 with a Backend powered by AWS Lambda (using Function URLs).
The CDK code will be in Java, the Lambda functions in Typescript and the WebApp in Dart. Why? Because I love trying out things 🙂

The code used here is not production ready and does not fulfill required security best practices! 🙂

The CI/CD pipeline

The CI/CD pipeline for this project uses CDK Pipelines, which means it is built on top of AWS CodePipeline (which under the hood uses CodeBuild, CodeStar and other services to be functional).

It consists of different stages that build and deploy the corresponding part of the application:
– one stage for each lambda function
– one stage for the build and deployment of the Flutter application

The stages required by the CDK Pipeline to update itself are automatically added but not part of our code:

public CdkPipelineStack(final Construct parent, final String id, final String branch, final StackProps props) {
		super(parent, id, props);

	String connectionArn = "arn:aws:codestar-connections:eu-central-1:xxx:connection/xxx";
	final CodePipeline pipeline = CodePipeline.Builder.create(this, getCodepipelineName(branch))
			.pipelineName(getCodepipelineName(branch)).dockerEnabledForSynth(true)
			.synth(CodeBuildStep.Builder.create("SynthStep")
					.input(CodePipelineSource.connection("lock128/cdk-codepipeline-flutter", branch,
							ConnectionSourceOptions.builder().connectionArn(connectionArn).build()))
					.installCommands(List.of("npm install -g aws-cdk" // Commands to run before build
					)).commands(List.of("mvn test", "mvn package", // Language-specific build commands
							"npx cdk synth", // Synth command (always same)
							"bash start_codecov.sh"))
					.build())
				.build();

	pipeline.addStage(new CheckAgeLambdaStage(this, "DeployCheckAgeLambda"), getCheckAgeStageOpts());
	pipeline.addStage(new PaperSizeStage(this, "DeployPaperSizeStage"), getPaperSizeStageOpts());
	pipeline.addStage(new CalculatorStage(this, "DeployCalculatorStage"), getCalculatorStageOpts());

	PolicyStatement flutterDeployPermission = getDeployPermissions();
	CodeBuildStep buildAndDeployManual = CodeBuildStep.Builder.create("Execute Flutter Build and CodeCov")
			.commands(getFlutterBuildShellSteps()).rolePolicyStatements(Arrays.asList(flutterDeployPermission))
			.build();

	pipeline.addStage(new FlutterBuildStage(this, "FlutterBuildStage"),
			getFlutterStageOptions(buildAndDeployManual));

}

This is the definition of the CI/CD pipeline: it uses our Github repository as the source of the code and automatically starts after a push to the main branch of the repository.
The “Flutter Build Stage” is the one that currently builds the Flutter web application, deploys it and makes it available to the end user. Going forward, to make best use of Flutter, we would need to expand this stage to also be able to build an iOS application, an Android application or an application for any other platform supported by Flutter. As a “goal”, I would personally also want to extend this stage to publish the apps to the corresponding stores (App Store, Play Store, Windows Store, …) – thanks to my friends at cosee for the help and guidance around this process!

The architecture diagram of the application

What we are using to show-case the usage of AWS CodePipeline, Flutter as an application and AWS Lambda Function URLs as backend is not really an “application” – but it can do dynamic things and it can easily be extended to include a database backend, etc.

Infrastructure as Code using AWS CDK in Java

In this section you are going to have a look at the CDK code required to provision the infrastructure on AWS.
We are using AWS CDK written in Java – and because of that, Maven as the build tool (Maven is the default tool for Java CDK projects – I’ve already used Gradle as a build tool and that works in the same way).

The possibility of writing Infrastructure Code in Java is a great thing because it gives us the option to build on top of our existing skills – and I’ve written enough lines of Java in my career to feel comfortable using it to provision the infrastructure.
This is one of the best things in CDK: You can write your Infrastructure code in Java, Typescript, Python, … – and that helps us to build teams that only have “one” language as a “core skill” – one team might choose to develop in Java, another one in Typescript, another team could use Go – this allows the teams to build up mastery in a specific language!

In our example, however, we are not making use of this possibility – I’ve chosen to rather go the opposite way: combine a few languages, just to show that it works 😉

Stacks in the Example Project

The CDK code consists of four “stacks” – each of the stacks representing one “component” of this application.
In our example these four stacks are part of one CDK application & one CodePipeline. In bigger projects, you might want to split these out into separate applications – which introduces a lot of things to consider (e.g. how do they fit and work together, etc.).

At the time of writing this post, the four stacks combined are 129 lines of source code. With the help of the CDK constructs that are being used, this translates to over 1k lines of CloudFormation. We are only using L2 constructs here – there are way more constructs available that you can use in the Construct Hub, and also a lot of guidance regarding the usage of CDK over at CDKPatterns.

CDK Stack that provisions the infrastructure for the Flutter hosted website

Making an S3 bucket available to host our Flutter application becomes a very short piece of Java code, as shown in the picture above.
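
Since the screenshot is not reproduced here, the same idea expressed as a sketch – in TypeScript rather than the Java used in the repository, with an illustrative asset path:

import { RemovalPolicy, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';

export class FlutterWebsiteStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Public website bucket serving the compiled Flutter web build.
    const websiteBucket = new s3.Bucket(this, 'FlutterWebBucket', {
      websiteIndexDocument: 'index.html',
      publicReadAccess: true,
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ACLS,
      removalPolicy: RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
    });

    // Upload the output of `flutter build web` (path is illustrative).
    new s3deploy.BucketDeployment(this, 'DeployFlutterWeb', {
      sources: [s3deploy.Source.asset('./frontend/build/web')],
      destinationBucket: websiteBucket,
    });
  }
}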

Lambda functions behind Function URLs in Typescript and Python

This was definitely one of the most awaited announcements in the Serverless space this year: the GA of the “Function URLs” for Lambda on AWS – and this is the reason for me using them as a backend in this showcase.
With this announcement, it is possible to directly expose your application running on AWS Lambda behind an HTTPS endpoint! Without an API Gateway, proxy, …
Provisioning the infrastructure for the lambda function with the function URL functionality activated is only a few lines in CDK:

CDK Stack for the Check Age Lambda function with the activation of the Function URL
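
The screenshot of that stack is not reproduced here either, so this is an equivalent sketch – again in TypeScript instead of the repository’s Java, with an illustrative handler path and a wide-open auth setting purely for demonstration:

import { CfnOutput, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';

export class CheckAgeSketchStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const checkAgeFunction = new lambda.Function(this, 'CheckAgeFunction', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/check-age'), // illustrative path
    });

    // Expose the function directly via an HTTPS Function URL - no API Gateway needed.
    const functionUrl = checkAgeFunction.addFunctionUrl({
      authType: lambda.FunctionUrlAuthType.NONE,
      cors: { allowedOrigins: ['*'] },
    });

    new CfnOutput(this, 'CheckAgeUrl', { value: functionUrl.url });
  }
}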

These Lambda functions will now scale horizontally without us as developers getting involved. For more details on Function URLs, there are a lot of good posts around, like this one or this one.
Fellow community builder Vlad has written up a great guide on scalability for containers on his website.

Web Single Page Application built using Flutter and the benefits of using Flutter

Flutter as a multi-platform, developer driven tool gives a lot of flexibility. With a single code base, you are able to publish your application to various targets. In this example, we are using the target platform “web” – which compiles the Dart code to a single page application.

This application is automatically aware of the size of your screen and is interactive – which is another cool thing that Flutter takes off your plate as a developer. A lot of organizations use Flutter today and the cookbook gives you an easy and good start into developing Flutter applications.

Our example application has three input fields that take input and pass it back to our Function URLs – and then automatically update a Text Widget with the results of the Function URL call. This implementation will work on all platforms.

What did you learn in this post?

You have learned how easy it is to provision AWS infrastructure using AWS CDK. You have also seen that you can easily combine different programming languages in a single CDK application, and got an impression of how a CI/CD pipeline can help to automatically deploy our application using AWS CodeBuild.
In addition to that, you have looked at Flutter as a multi-platform development tool.

I’d be glad to get your inputs into my Github repository as a pull request or just as comments on the project itself.

Next steps

Further expansions needed to this project:
– use the CloudFormation Exports of the Lambda functions in the Flutter application instead of hardcoding the URLs for the Lambda Function URLs
– enhance security for Lambda Function URLs
– add CloudFront in front of S3 to allow HTTPs connections to the Flutter App
– enhance CI/CD pipeline to package Android App using Flutter
– enhance CI/CD pipeline to package iOS App using Flutter
– enhance CI/CD pipeline to package Windows App using Flutter
Feel free to contribute and add your contributions to this project.

Listen to Werner Vogels talking about the benefits of CDK


Why CDK changes everything for building DevOps teams that own something end-to-end

Software Engineers, Developers, etc. are all “Builders” in my mind. Builders try out a lot of things and most of them are eager to try out new technologies and possibilities.
While doing that, a lot of them behave like these engineers “in the real world”:

people working on building during daytime

What does that mean?

They go to the top of something, climbing somewhere and taking risks, but a lot of times they forget what is “below” what they are building.

For these workers in real life, it is most probably obvious that they are not risking their life by climbing up there – because they can see what is below and are aware of the groundwork below the wall they are climbing on.

How is that different in our “Cloud-Software/SaaS-industry”?

I believe that the main difference today is that most of our “Engineers” (= software developers) are not aware of the infrastructure components that are required to bring their application or microservice up and to make sure that it runs consistently.

Why are they not aware?

One of the challenges that I am seeing in my day-to-day job is that we have a lot of “abstractions” that we have built for software developers to make it “easy” to develop and test software. Think of “Docker” or “Kubernetes” (k8s) as making it easy to test applications or microservices locally and make them look, feel and behave the same as in the “target environment”.
However, that is not entirely the truth.
During the development cycle, the engineer will test locally – or maybe within a Continuous Integration environment – but both of these environments will usually not have “production like” data assets and thus will never be comparable to a production environment.

So – it is a real problem, because engineers test against infrastructure (and maybe even deployment strategy) that is not even close to how the service will run in a production environment.

How do we change that?

It should all start with a plan…and everyone that is part of a product’s lifecycle should be part of it.

person working on blue and white paper on board

CDK changes everything

CDK – and for me this includes awscdk, cdktf, cdk8s – gets the engineers to where they feel at “home”:

We can describe and write infrastructure in “the developer’s native language” – Java, Typescript, Go, .NET.

With this, everyone can be empowered to write infrastructure code and feel responsible for it. No more excuses: I don’t like YAML / JSON, I don’t know HCL, I don’t know the services, etc.
If you are a developer, you can now write infrastructure code.

This opens up new possibilities for building DevOps teams

Now, with CDK “in the game”, we can empower “developers” and “operators” to talk to and help each other “in one joint language”.
Operators can help Developers understand the infrastructure required to bring their service up to speed – and Developers can help Operators to develop infrastructure code.

On the other side, if you start a new DevOps team, you can directly start building out the infrastructure “as it would look in production” using CDK!
This really makes the developers think about how the service should be running in the production environment later and that will help to drive the correct architecture decisions right from the start.

If you want to learn CDK – look at the CDK Workshop: https://cdkworkshop.com/


More to follow soon on why I believe CDK is making every cloud developer’s and DevOps engineer’s life better!
