FaaS, short for Function as a Service, is a cloud computing model that allows developers to build, run, and manage applications as functions without maintaining their own infrastructure. With this approach, developers can focus entirely on the business logic of their applications.
Most major public clouds offer FaaS solutions, such as Amazon's AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions. We strongly recommend using those platforms, as they have been designed and tested over the past several years.
For those interested in an open-source serverless solution that replicates much of what the public cloud providers offer, Kubeless is a good choice. To learn more about Kubeless, we recommend visiting its website and following the installation instructions for your operating system.
For this tutorial, we used a machine running Linux Mint 19.3 with Python 3.7. We installed both the minikube and kubectl binaries beforehand, following the tutorials on the Kubernetes website. It is important to mention that you need a container runtime such as Docker, or a virtual machine environment, to run Kubernetes (minikube) on Linux.
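The commands below are roughly what the official guides suggest for installing kubectl and minikube on Linux at the time of writing; check the Kubernetes and minikube documentation for the current download URLs:
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube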
Then start your minikube cluster:
$ minikube start
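If minikube does not pick a driver on its own, you can specify one explicitly (we assume the Docker driver here; the flag name may differ on older minikube versions), and minikube status confirms the cluster is up:
$ minikube start --driver=docker
$ minikube status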
After that comes the main part of this tutorial: deploying Kubeless on the Kubernetes cluster:
$ export RELEASE=$(curl -s https://api.github.com/repos/kubeless/kubeless/releases/latest | grep tag_name | cut -d '"' -f 4)
$ kubectl create ns kubeless
$ kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml
You can check that everything is running by listing either the pods or the deployments in the kubeless namespace:
$ kubectl get pods -n kubeless
$ kubectl get deployment -n kubeless
Check that the custom resource definitions from the .yaml file were correctly applied:
$ kubectl get customresourcedefinition
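If everything went well, the listing should include Kubeless entries along these lines (exact names may vary between releases):
cronjobtriggers.kubeless.io
functions.kubeless.io
httptriggers.kubeless.io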
Now you can create a simple Python function in a file named test.py to test with:
def hello(event, context):
    return 'Hello World'
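For reference, in the Kubeless Python runtime the event argument carries the incoming request and the context argument carries function metadata. A slightly richer handler, assuming the request payload is exposed as event['data'], could look like this sketch:
def hello(event, context):
    # event['data'] holds the payload sent with the call (empty if none)
    name = event['data'] or 'World'
    return 'Hello {}'.format(name)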
To deploy your function, simply run the following command:
$ kubeless function deploy hello --runtime python3.7 --from-file test.py --handler test.hello
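You can verify that the function was built and is ready with the Kubeless CLI:
$ kubeless function ls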
Testing your function is easy; simply run the following command:
$ kubeless function call hello
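If your handler makes use of the request payload (like the sketch shown earlier), you can also pass data with the call:
$ kubeless function call hello --data 'Kubeless'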
Or you can use curl:
$ curl -L localhost:8080/api/v1/namespaces/default/services/hello:http-function-port/proxy
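Note that this curl request goes through the Kubernetes API server, so kubectl proxy needs to be running in a separate terminal first (the port here is assumed to match the URL above):
$ kubectl proxy --port=8080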
Now you’ve got your function running.
If you need to add dependencies, list them in a requirements.txt file and append the following flag to the end of the deploy command:
--dependencies requirements.txt
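For example, assuming a requirements.txt file sitting next to test.py, the full deploy command would be:
$ kubeless function deploy hello --runtime python3.7 --from-file test.py --handler test.hello --dependencies requirements.txt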
To autoscale your function, you need to use kubectl rather than Kubeless, due to problems with autoscaling in Kubeless. In the example below, we autoscale the hello function to between 1 and 10 pods, with scaling triggered once average CPU utilization across the function's pods reaches 10%.
$ kubectl autoscale deployment hello --min=1 --max=10 --cpu-percent=10
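You can inspect the resulting horizontal pod autoscaler with:
$ kubectl get hpa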