AWS Lambda Container Support Explained and in Practice

 

At AWS re:Invent, AWS Lambda announced support for custom containers (Docker) as a runtime, breaking the previous 250 MB limit and making complex models such as BERT usable. This post uses the SAM CLI to show how to deploy a Lambda function with up to 10 GB of memory and 6 vCPUs, covering ECR integration, Docker image builds, and API Gateway configuration. It is aimed at developers who want to extend what Lambda can do.


It's the most wonderful time of the year. Of course, I'm not talking about Christmas but re:Invent. It is re:Invent time. Due to the current situation in the world, re:Invent does not take place in Las Vegas like every year but is entirely virtual and free. This means that it is possible for everyone to attend. In addition to this, this year it lasts 3 weeks, from 30.11.2020 to 18.12.2020. If you haven't already registered, do it here.

In the opening keynote, Andy Jassy presented the AWS Lambda Container Support, which allows you to use custom container (Docker) images as a runtime for AWS Lambda. With that, we can build runtimes larger than the previous 250 MB limit, be it for "State-of-the-Art" NLP APIs with BERT or complex processing.

photo from the keynote by Andy Jassy, rights belong to Amazon

Furthermore, you can now configure AWS Lambda functions with up to 10 GB of memory and 6 vCPUs.
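If you want to set these new limits programmatically rather than through the console, a minimal boto3 sketch could look like the following; the function name is a placeholder, and the call simply raises an existing function's memory to the new 10 GB ceiling (vCPUs scale with memory automatically).

import boto3

# Hypothetical example: bump an existing function to the new 10 GB maximum.
# "MyCustomDocker" is a placeholder function name.
lambda_client = boto3.client("lambda", region_name="eu-central-1")

lambda_client.update_function_configuration(
    FunctionName="MyCustomDocker",
    MemorySize=10240,  # memory in MB; vCPUs are allocated proportionally
)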

In their blog post, Amazon explains how to use containers as a runtime for AWS Lambda via the console.

But the blog post does not explain how to use custom Docker images with the Serverless Application Model. For these circumstances, I created this blog post.


Services included in this tutorial

AWS Lambda

AWS Lambda is a serverless computing service that lets you run code without managing servers. It executes your code only when required and scales automatically, from a few requests per day to thousands per second.

Amazon Elastic Container Registry

Amazon Elastic Container Registry (ECR) is a fully managed container registry. It allows us to store, manage, and share Docker container images. You can share Docker containers privately within your organization or publicly worldwide for anyone.

AWS Serverless Application Model

The AWS Serverless Application Model (SAM) is an open-source framework and CLI to build serverless applications on AWS. You define the application you want using YAML format. Afterwards, you build, test, and deploy using the SAM CLI.


Tutorial

We are going to build an AWS Lambda function with a Docker container as runtime using the "AWS Serverless Application Model". We create a new custom Docker image using the presented Lambda Runtime API images.

What are we going to do:

    1. Install and set up sam
    2. Create a custom Docker image
    3. Deploy a custom Docker image to ECR
    4. Deploy an AWS Lambda function with a custom Docker image

You can find the complete code in this GitHub repository.


Install and setup sam

AWS provides a 5-step guide on how to install sam. In this tutorial, we are going to skip steps 1-3 and assume you already have an AWS account, an IAM user with the correct permissions set up, and Docker installed and set up; otherwise check out this link. The easiest way is to create an IAM user with AdministratorAccess (but I don't recommend this for production use cases).

We are going to continue with step 4, "installing Homebrew". To install Homebrew, we run the following command in our terminal.

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

Note: Linux users have to add Homebrew to their PATH by running the following commands.

test -d ~/.linuxbrew && eval $(~/.linuxbrew/bin/brew shellenv)
test -d /home/linuxbrew/.linuxbrew && eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)
test -r ~/.bash_profile && echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.bash_profile
echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.profile

Afterwards we can run brew --version to verify that Homebrew is installed.

The fifth and last step is to install sam using homebrew. We can install the SAM CLI using brew install.

brew tap aws/tap
brew install aws-sam-cli

After we have installed it, we have to make sure we have at least version 1.13.0 installed.

sam --version # SAM CLI, version 1.13.0

To update sam if you have it installed you can run brew upgrade aws-sam-cli.


Create a custom docker image

After the setup, we are going to build a custom python docker image.

We create an app.py file and paste the following code into it.

import json


def handler(event, context):
    body = {
        "message": "Go Serverless v1.0! Your function executed successfully!",
        "input": event
    }

    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }

    return response
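Before we containerize it, we can optionally sanity-check the handler with a plain Python call. This is only a sketch; the event dict is a made-up stand-in for a real Lambda event.

# Quick local sanity check of app.py, outside of Docker and Lambda.
from app import handler

result = handler({"payload": "hello world!"}, None)  # the context argument is unused here
print(result["statusCode"])  # 200
print(result["body"])        # JSON string echoing the event back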

To containerize our Lambda function, we create a Dockerfile in the same directory and copy the following content into it.

FROM public.ecr.aws/lambda/python:3.8

# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]

Additionally we can add a .dockerignore file to exclude files from your container image.

Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
events

To build our custom Docker image, we run:

docker build -t docker-lambda .

and then to test it we run

docker run -d -p 8080:8080 docker-lambda

Afterwards, in a separate terminal, we can then locally invoke the function using curl.

curl -XPOST "http://localhost:8080/2015-03-31/functions/function/invocations" -d '{"payload":"hello world!"}'
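If you prefer Python over curl, the same local invocation can be done with the requests library (an assumption on my side; it is not required for the tutorial):

import requests

# Invoke the Lambda Runtime Interface Emulator running inside the container.
url = "http://localhost:8080/2015-03-31/functions/function/invocations"
response = requests.post(url, json={"payload": "hello world!"})

print(response.status_code)  # 200 if the handler ran
print(response.json())       # the dict returned by app.handler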

Deploy a custom docker image to ECR

Since we now have a local Docker image, we can deploy it to ECR. Therefore we need to create an ECR repository with the name docker-lambda.

aws ecr create-repository --repository-name docker-lambda
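If you script your infrastructure in Python, the same repository can be created with boto3; this sketch assumes your default credentials and region are already configured.

import boto3

ecr = boto3.client("ecr", region_name="eu-central-1")

# Create the repository; the returned URI is what we later tag and push against.
repo = ecr.create_repository(repositoryName="docker-lambda")
print(repo["repository"]["repositoryUri"])
# e.g. 891511646143.dkr.ecr.eu-central-1.amazonaws.com/docker-lambda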

using AWS CLI V1.x

To be able to push our images, we need to log in to ECR. We execute the output ($()) of the command we retrieve from ecr get-login. (Yes, the $ is intended.)

$(aws ecr get-login --no-include-email --region eu-central-1)

using AWS CLI V2.x

aws_region=eu-central-1
aws_account_id=891511646143

aws ecr get-login-password \
    --region $aws_region \
| docker login \
    --username AWS \
    --password-stdin $aws_account_id.dkr.ecr.$aws_region.amazonaws.com

read more here.

Next we need to tag / rename our previously created image to an ECR format. The format for this is {AccountID}.dkr.ecr.{region}.amazonaws.com/{repository-name}.

docker tag docker-lambda $aws_account_id.dkr.ecr.$aws_region.amazonaws.com/docker-lambda
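To avoid typos in that URI, you could also derive the account id via STS and build the string in Python; this small sketch only prints the tag target, it does not tag or push anything.

import boto3

region = "eu-central-1"
repository = "docker-lambda"

# Look up the account id of the current credentials instead of hard-coding it.
account_id = boto3.client("sts").get_caller_identity()["Account"]

image_uri = f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}"
print(image_uri)  # use this as the target for `docker tag` and `docker push`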

To check if it worked, we can run docker images and should see the newly tagged image in the list.

Finally, we push the image to ECR Registry.

 docker push 891511646143.dkr.ecr.eu-central-1.amazonaws.com/docker-lambda

Deploy AWS Lambda function with a custom docker image

Now, we can create our template.yaml to define our Lambda function using our Docker image. In the template.yaml we include the configuration for our AWS Lambda function. I provide the complete template.yaml for this example, but we go through all the details we need for our Docker image and leave out all standard configurations. If you want to learn more about the sam template.yaml, you can read through the documentation here.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: serverless-aws-lambda-custom-docker

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  MyCustomDocker:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      FunctionName: MyCustomDocker
      ImageUri: 891511646143.dkr.ecr.eu-central-1.amazonaws.com/docker-lambda:latest
      PackageType: Image
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  MyCustomDockerApi:
    Description: 'API Gateway endpoint URL for Prod stage for Hello World function'
    Value: !Sub 'https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/'

To use a Docker image in our template.yaml we have to include the parameters ImageUri and PackageType in our AWS::Serverless::Function resource. The ImageUri, as the name suggests, is the URL to our Docker image. For an ECR image, the URL looks like this: 123456789.dkr.ecr.us-east-1.amazonaws.com/myimage:latest, and for a public Docker image it looks like namespace/image:tag or docker.io/namespace/image:tag.

PackageType defines the type we provide to our AWS Lambda function, in our case an Image.
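SAM takes care of wiring this up for us, but the same ImageUri/PackageType pair also exists in the plain Lambda API. For completeness, here is a hedged boto3 sketch of creating the function without SAM; the execution role ARN is a placeholder you would have to provide yourself.

import boto3

lambda_client = boto3.client("lambda", region_name="eu-central-1")

# Sketch: create the function directly from the ECR image, without SAM/CloudFormation.
lambda_client.create_function(
    FunctionName="MyCustomDocker",
    PackageType="Image",  # the deployment artifact is a container image
    Code={"ImageUri": "891511646143.dkr.ecr.eu-central-1.amazonaws.com/docker-lambda:latest"},
    Role="arn:aws:iam::891511646143:role/my-lambda-execution-role",  # placeholder role ARN
    Timeout=3,
)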

Afterwards, we can deploy our application again using sam deploy and that's it.

The guided deployment will walk us through all required parameters and then deploy our application. At the end, sam prints the stack outputs, including the API Gateway endpoint.

We can now take the API Gateway endpoint URL from the Outputs section and use any REST client to test it.
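For example, a small Python sketch using requests; the endpoint URL below is a placeholder, take the real one from the MyCustomDockerApi output of the stack.

import requests

# Placeholder URL: replace <api-id> with the value from the CloudFormation outputs.
endpoint = "https://<api-id>.execute-api.eu-central-1.amazonaws.com/Prod/hello/"

response = requests.get(endpoint)
print(response.status_code)        # 200
print(response.json()["message"])  # "Go Serverless v1.0! Your function executed successfully!"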

It worked. 🚀

We successfully deployed and created an AWS Lambda function with a custom docker image as runtime.


Conclusion

The release of the AWS Lambda Container Support enables much wider use of AWS Lambda and Serverless. It fixes many existing problems and gives us greater scope for the deployment of serverless applications.

Another area in which I see great potential is machine learning, as the custom runtime enables us to include larger machine learning models in our runtimes. The increase in configurable memory and vCPUs boosts this even more.

The future looks more than golden for AWS Lambda and Serverless.


You can find the GitHub repository with the complete code here.

Thanks for reading. If you have any questions, feel free to contact me or comment on this article. You can also connect with me on Twitter or LinkedIn.
