My journey into the world of AWS Lambda was a bumpy one. I was initially “forced” to use Node.js to get some simple functionality working; “forced” in the sense that I stuck with the status quo and followed a pattern someone else had already used to get something working.
I realised that, longer term, Node.js wasn’t for me, so I looked for other options. I am a long-time Ruby user, so I was pleased to discover that Python is similar in structure and wasn’t too difficult to pick up.
NB: At the time I created this process, Ruby was not natively supported on Lambda, which is why I learnt Python.
The problem wasn’t the new language; it was getting it working on AWS Lambda. I spent far too long building and uploading code artifacts, completely stumped as to why they weren’t working, constantly getting error messages saying it couldn’t find module X or Y.
The Amazon documentation was little to no help, and I finally stumbled across an obscure article which revealed the problem: the OS I was building on wasn’t compatible with the one that runs behind Lambda. Of course this should have been obvious, but it wasn’t.
I thought I should share my method in case someone else was struggling with a similar issue.
The crux of the solution is to get Docker (https://www.docker.com/products/docker-desktop) up and running and use a container to bundle the modules into a local directory. My current solution uses Python 3.6, but you could easily modify it for a different version.
In your project create a file called Dockerfile with the following content:
# Use an official centos
FROM centos:7
# Set the working directory to /working
WORKDIR /working
RUN yum -y update
RUN yum -y install yum-utils
RUN yum -y groupinstall development
RUN yum -y install https://centos7.iuscommunity.org/ius-release.rpm
RUN yum -y install python36u
RUN yum -y install python36u-pip
RUN yum -y install python36u-devel
# openssl-devel/libffi-devel are the CentOS equivalents of Debian's libssl-dev/libffi-dev
RUN yum -y install zip openssl-devel libffi-devel vim \
 && echo 'alias python=python3.6' >> ~/.bash_aliases \
 && echo 'alias pip=pip3.6' >> ~/.bash_aliases \
 && echo 'source ~/.bash_aliases' >> ~/.bashrc
ENTRYPOINT ["/bin/bash", "-c", "bin/bundle"]
CMD ["bash"]
We then want to build this container using the following:
docker build -t python_builder .
This creates a utility container in which we can run the bundling process for Python modules. It uses a centos:7 base image.
You will see in the Dockerfile that the ENTRYPOINT references a bin/bundle script. This is just the pip install command which tells Python which modules to download and build. Here is the script:
#!/bin/bash
rm -rf modules || true
pip install -r requirements.txt -t modules
This script relies on there being a requirements.txt file, an example looks like this:
aws_xray_sdk
boto3
datadog
doubles
flask
flask_ask
To tie it all together here is the script to execute the bundle and creation of the Lambda code artifact:
#!/bin/bash
echo "Generating python modules directory ..."
echo "----------------------------------------"
echo ""
docker run -v ${PWD}:/working -it python_builder
echo ""
echo "Create code artifact ..."
echo "------------------------"
echo ""
rm -f code.zip || true
zip -r code.zip . --exclude '.git/*' --exclude 'tests/*'
echo "Completed!"
echo ""
This script mounts the current working directory into the container at /working, runs the bundling of the Python modules into ./modules, and finally zips everything up into a file called code.zip.
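One subtlety worth noting: because the dependencies end up in a modules subdirectory rather than at the root of the zip, your handler code needs to put that directory on Python's import path before importing any bundled module. A minimal sketch (the MODULES_DIR name and file layout here are my own illustration, not part of the original scripts):

```python
import os
import sys

# Directory that bin/bundle populated with pip-installed dependencies,
# relative to the file this snippet lives in.
MODULES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "modules")

# Prepend it so the bundled packages win over anything on the default path.
if MODULES_DIR not in sys.path:
    sys.path.insert(0, MODULES_DIR)

# After this, imports such as `import flask` resolve from ./modules.
```

Putting these lines at the very top of the handler file keeps the rest of the code oblivious to where the dependencies physically live.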
The rest of the process is to create a Lambda function in the AWS console, select the language (Python 3.6), and choose to upload a file.
When creating a Lambda function it asks for a handler; this is just the name of the file and the method that Lambda uses to initiate the call.
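As a concrete illustration, here is a handler sketch (the file and function names are hypothetical; if saved as lambda_function.py, the handler string entered in the console would be lambda_function.lambda_handler):

```python
import json


def lambda_handler(event, context):
    # event: the JSON payload Lambda received; context: runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, %s!" % name}),
    }
```

Lambda calls this function once per invocation, passing the incoming event as a plain dict.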
I hope this helps you get a leg up so you can just go build!