AWS: Containers, serverless, and cloud-native computing, oh my!

Amazon Web Services (AWS) CEO Andy Jassy is famous for his hours-long keynotes, but his AWS re:Invent 2020 keynote took the cake. While numerous AWS hardware upgrades and the eye-popping announcement that AWS and Apple are now offering macOS as a desktop-as-a-service got most of the headlines, the most significant update, as I see it, is how AWS is improving its Lambda serverless function service.

Of course, there are still servers somewhere. Serverless services like Lambda must run on something, after all. But serverless functions don’t require you to manage any of that. You simply call these functions as needed; they’re executed, scaled, and billed in response to the exact demand of the moment.
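To make that concrete, here’s a minimal sketch of what such a function can look like in Python. The handler(event, context) signature is Lambda’s standard Python entry point; the event field and the greeting logic are hypothetical stand-ins.

```python
# handler.py - a minimal Lambda function: no server to provision or manage.
import json

def handler(event, context):
    # "name" is a hypothetical field in the invoking event.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```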

And, when AWS says “the moment,” it means the moment down to the millisecond. Until now, Lambda rounded billed duration up to the nearest 100 milliseconds. Starting now, AWS announced, “We are rounding up the duration to the nearest millisecond with no minimum execution time.” This is going to save many AWS customers some serious coin.
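As a back-of-the-envelope sketch of why that matters, compare the old 100ms rounding with per-millisecond billing for a short-running function. The price, memory size, runtime, and invocation count below are illustrative assumptions, not AWS quotes:

```python
# Back-of-the-envelope: 100ms-rounded billing vs. per-millisecond billing.
# All numbers are illustrative; check AWS's published Lambda pricing.
PRICE_PER_GB_SECOND = 0.0000166667   # assumed duration price
MEMORY_GB = 0.5                      # a 512MB function
ACTUAL_RUNTIME_MS = 30               # what the function really uses
INVOCATIONS = 10_000_000             # invocations per month

def monthly_cost(billed_ms):
    return INVOCATIONS * (billed_ms / 1000) * MEMORY_GB * PRICE_PER_GB_SECOND

old = monthly_cost(100)                # old scheme: rounded up to 100ms
new = monthly_cost(ACTUAL_RUNTIME_MS)  # new scheme: billed to the millisecond
print(f"old: ${old:.2f}  new: ${new:.2f}  saved: ${old - new:.2f}")
# Here the duration charge drops from $8.33 to $2.50, a 70% saving.
```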

Making it even more useful, you can now allocate up to 10GB of memory to a Lambda function, more than three times the previous limit. To put all that RAM to good use, Lambda also allocates CPU and other resources linearly, in proportion to the amount of memory configured. In other words, you can now have access to up to six vCPUs in each execution environment. That’s a lot of resources to make even your biggest multithreaded and multiprocess applications run faster than ever. And, remember, since AWS charges you for time, you may actually save money with this approach.
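Here’s a hedged sketch of what that extra muscle could look like in practice: a Python handler fanning CPU-bound work out across the available vCPUs. The crunch() workload is hypothetical, and the code sidesteps a real-world quirk: Lambda’s execution environment lacks /dev/shm, so plain Process and Pipe are used instead of multiprocessing.Pool.

```python
# handler.py - sketch of a CPU-bound handler fanning out across vCPUs.
# Lambda's environment has no /dev/shm, so multiprocessing.Pool fails there;
# Process + Pipe works. crunch() is a hypothetical workload.
import os
from multiprocessing import Pipe, Process

def crunch(conn, n):
    conn.send(sum(i * i for i in range(n)))  # stand-in CPU-bound work
    conn.close()

def handler(event, context):
    chunks = event.get("chunks", [5_000_000] * (os.cpu_count() or 1))
    workers = []
    for n in chunks:
        parent, child = Pipe()
        proc = Process(target=crunch, args=(child, n))
        proc.start()
        workers.append((proc, parent))
    results = [parent.recv() for _, parent in workers]
    for proc, _ in workers:
        proc.join()
    return {"results": results}

if __name__ == "__main__":
    # Local smoke test with a hypothetical event.
    print(handler({"chunks": [1_000_000] * 2}, None))
```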

What’s that? Serverless computing is all very nice and nifty, but you use containers for your cloud applications? No problem, my friend. AWS now enables you to package and deploy Lambda functions as container images of up to 10GB in size.

In this marriage of containers and serverless computing, AWS is providing base images for all the supported Lambda runtimes — Python, Node.js, Java, .NET, Go, and Ruby — so you don’t have to build them yourself. With these, it’s much easier to add your code and dependencies without having to refactor your application to work in a cloud-native, serverless way.
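As a sketch of how little that packaging asks of you, here’s a hypothetical app.py with the matching two-step Dockerfile described in its comments; the base image name and CMD form follow AWS’s published base-image documentation, but treat the details as assumptions to verify:

```python
# app.py - the only code you add on top of an AWS-provided base image.
# A matching Dockerfile sketch (details per AWS's base-image docs; verify):
#   FROM public.ecr.aws/lambda/python:3.8
#   COPY app.py ${LAMBDA_TASK_ROOT}
#   CMD ["app.handler"]
import json

def handler(event, context):
    # Your existing function logic drops in unchanged.
    return {"statusCode": 200, "body": json.dumps(event)}
```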

Or, if you’d rather, AWS is also giving you base images for custom runtimes based on Amazon Linux, which you can extend with your own runtime using the Lambda Runtime API.

Not using Amazon Linux? No sweat. You can deploy your own arbitrary base Linux images to Lambda. All that’s required is that they implement the Lambda Runtime API. For now, AWS is supporting images based on Alpine or Debian Linux.
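To give a feel for what “implementing the Lambda Runtime API” means, here’s a bare-bones Python sketch of the loop at its heart: poll for the next invocation, run your code, post back the result. The endpoints and header name come from AWS’s Runtime API documentation; the handle() body is a stand-in.

```python
# runtime.py - bare-bones sketch of the Lambda Runtime API loop a custom
# image must implement; handle() stands in for your function logic.
import json
import os
import urllib.request

API = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2018-06-01/runtime"

def handle(event):
    return {"echo": event}  # stand-in for real work

while True:
    # 1. Long-poll for the next invocation; this blocks until one arrives.
    with urllib.request.urlopen(f"{API}/invocation/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())
    # 2. Post the result back for this specific request id.
    body = json.dumps(handle(event)).encode()
    urllib.request.urlopen(urllib.request.Request(
        f"{API}/invocation/{request_id}/response", data=body, method="POST"))
```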

You can also skip writing that loop yourself and build your own base images using AWS’s newly released open-source Lambda Runtime Interface Clients, which implement the Runtime API for all the supported runtimes. So, for example, if you’re building on Red Hat Enterprise Linux (RHEL) or Ubuntu, you can easily add the appropriate client with the distribution’s native package manager.

If you want to try this out at home first, rather than burn time and dollars on AWS, Amazon is also releasing an open-source Lambda Runtime Interface Emulator. With this, you can test your images locally to make sure they’ll run when deployed to Lambda. The Lambda Runtime Interface Emulator is included in all AWS-provided base images and can be used with arbitrary images as well.
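Here’s a minimal sketch of such a local test, assuming you’ve started your image with the emulator and mapped its port to 9000 (for example, docker run -p 9000:8080 my-image, where my-image is hypothetical); the invocation URL is the emulator’s documented local endpoint.

```python
# test_local.py - sketch: invoke a Lambda container image running locally
# behind the Runtime Interface Emulator (started beforehand on port 9000).
import json
import urllib.request

# The emulator's documented local invocation endpoint.
URL = "http://localhost:9000/2015-03-31/functions/function/invocations"
event = json.dumps({"name": "local test"}).encode()  # hypothetical test event

req = urllib.request.Request(URL, data=event, method="POST")
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # whatever your handler returned
```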

Speaking of doing things locally, as in your own private cloud or data centers, AWS is also enabling customers to run Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS) on their own servers. These new services, Amazon ECS Anywhere and Amazon EKS Anywhere, will be available in the first half of 2021.

As promised, AWS also released a new container registry to let developers share and deploy container images publicly. This revised Amazon Elastic Container Registry (ECR) Public enables developers to store, manage, share, and deploy container images for anyone to use. You can host both your private and public container images on ECR Public, which also means you no longer have to operate your own container repositories. These images are geo-replicated for reliable availability across the world. No matter where your company’s offices are, you’ll be able to quickly serve up your own custom or preferred images as needed.
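If you’d rather browse your public repositories from code than from the console, here’s a sketch using boto3, assuming a boto3 version new enough to include the ecr-public client and AWS credentials already configured:

```python
# list_public_repos.py - sketch: browse your ECR Public repositories.
import boto3

# ECR Public's API is served out of us-east-1.
client = boto3.client("ecr-public", region_name="us-east-1")

for page in client.get_paginator("describe_repositories").paginate():
    for repo in page["repositories"]:
        print(repo["repositoryName"], repo["repositoryUri"])
```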

Of course, managing all these containers, serverless functions, and cloud-native applications isn’t easy. Kubernetes is great for container orchestration, but it can only take you so far. That’s where AWS Proton comes in.

While Proton is more than a little confusing — see the AWS Proton Hacker News discussion thread if you don’t believe me — the goal seems clear. Since these newfangled cloud-native applications are built from numerous smaller chunks of independently developed and maintained code, stitched together to build and scale an application, it’s hard to keep track of them all, never mind use them.

AWS Proton is meant to provision, deploy, and monitor applications built from containers and serverless functions. How? By letting you define application components as “stacks.” The service comes with a set of curated application stacks with AWS security, architecture, and tooling best practices built in. Behind the scenes, AWS ensures that the stacks stay standardized and up to date. In theory, AWS Proton will thus automate the deployment of infrastructure as code, CI/CD pipelines, and monitoring for container and serverless applications.

That all sounds great, but it leaves open the question of how, exactly, Proton will do this. The closer you look, the more you’re left wondering what, at this point, is actually going on here. Stay tuned. We don’t have the answers yet.

In addition, it’s clear that, even if Proton is the greatest thing since sliced bread (or Kubernetes), using it will lock you into the AWS cloud forever and a day. Maybe that will work for you, maybe it won’t.
