Using layers makes the dependency reusable across functions and potentially easier to maintain and deploy.
The short version is:
- create a requirements.txt from pip freeze or similar that looks like this:
pandas==0.23.4
pytz==2018.7
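If the packages are already installed in a local virtual env, pip freeze can generate the pinned file. A minimal sketch, assuming pandas and pytz are the only packages you want in the layer:

pip freeze | grep -E '^(pandas|pytz)==' > requirements.txt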
- create a get_layer_packages.sh bash script that runs the install inside Docker and looks like this:
#!/bin/bash
export PKG_DIR="python"   # Lambda layers expect packages under a python/ directory at the zip root
rm -rf ${PKG_DIR} && mkdir -p ${PKG_DIR}
# build inside the lambci image so any compiled wheels match the Lambda Amazon Linux runtime;
# --no-deps skips transitive deps (numpy etc.), so supply those from another layer or drop the flag
docker run --rm -v "$PWD":/foo -w /foo lambci/lambda:build-python3.6 \
    pip install -r requirements.txt --no-deps -t ${PKG_DIR}
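As a quick sanity check before zipping, you can try an import from the freshly built python/ directory inside the same image. A sketch, assuming the build image puts python on the PATH; note that with --no-deps pandas itself won't import here unless numpy is also available, so this only exercises pytz:

docker run --rm -v "$PWD":/foo -w /foo -e PYTHONPATH=/foo/python \
    lambci/lambda:build-python3.6 \
    python -c "import pytz; print(pytz.__version__)"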
- make the script executable, run it in the terminal, then zip the result:
chmod +x get_layer_packages.sh
./get_layer_packages.sh
zip -r my-Python36-Pandas23.zip .
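From there you can upload the zip in the Lambda console, or publish it as a layer straight from the AWS CLI (the layer name here is just an example):

aws lambda publish-layer-version \
    --layer-name my-Python36-Pandas23 \
    --zip-file fileb://my-Python36-Pandas23.zip \
    --compatible-runtimes python3.6

The command returns a LayerVersionArn, which is what you attach to the function.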
I'm not especially experienced with Python and spent a decent chunk of time messing around with zipping up pandas and virtual envs, and I'd never really used Docker for anything IRL, but this process is actually far more accessible (and better documented) than the venv > zip > upload process I was using before.