VGS Engineering Blog

The latest updates from our developer community

Secure Compute Part 3: Serverless Functions with OpenFaaS

To simplify the experience of developing and packaging applications, we would like to make use of serverless functions. In this blog, we will demonstrate how we enhance this experience with OpenFaaS serverless functions, without compromising on security.

Custom Applications made simple with OpenFaaS#

OpenFaaS is a function-as-a-service framework for building and deploying serverless functions on your cluster. It is built on top of containers: it lets you focus on writing your application’s code, while it handles packaging your application into a container and the infrastructure that deploys and manages that container on your cluster. OpenFaaS integrates easily with container orchestration systems such as Docker Swarm and Kubernetes. Its simplicity makes it easy for cluster managers to deploy, and simplifies the development experience for users.

Let us see how easy it is by trying it out on a kind cluster!

First install the prerequisites if you do not have them.

# Install kind
$ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
$ chmod +x ./kind
$ sudo mv ./kind /usr/local/bin/kind

# Install arkade
$ curl -SLsf https://get.arkade.dev | sudo sh

# Install OpenFaaS CLI
$ curl -sL https://cli.openfaas.com | sudo sh

Log in to Docker with your credentials so that function images can be pushed to your Docker registry. Install and configure Docker if needed.

$ docker login
$ export DOCKER_USER=your_docker_username

Now let us create the cluster, deploy OpenFaaS and run our functions!

# Create cluster
$ kind create cluster

# Deploy OpenFaaS
$ arkade install openfaas --set faasnetesd.imagePullPolicy=IfNotPresent

# Wait for OpenFaaS to deploy
$ kubectl -n openfaas get pods --watch

# Forward gateway port
$ kubectl port-forward -n openfaas svc/gateway 8080:8080 > /dev/null 2>&1 &

# Login using the OpenFaaS CLI
$ OPENFAAS_PASS=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
$ echo -n $OPENFAAS_PASS | faas-cli login --username admin --password-stdin
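The kubectl pipeline above works because Kubernetes stores Secret values base64-encoded; the jsonpath expression extracts the `basic-auth-password` field, and `base64 --decode` recovers the plaintext. A minimal Python sketch of that decoding step (the encoded string here is a made-up example, not a real password):

```python
import base64

# Kubernetes Secret values are base64-encoded strings.
# "aGVsbG8=" is a made-up example value, not a real password.
encoded = "aGVsbG8="
password = base64.b64decode(encoded).decode("utf-8")
print(password)  # -> hello
```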

# Create new function
$ faas-cli new my-func --lang python3
$ cat << EOF | tee my-func/handler.py
def handle(req):
    print("Wow! That was easy.")
    return req
EOF
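The python3 template simply invokes `handle(req)` with the request body and uses the return value as the HTTP response, so the handler can be sanity-checked locally as a plain Python function before building an image (a quick sketch; no cluster required):

```python
# Local sanity check: the OpenFaaS python3 template calls handle(req)
# and returns its result as the HTTP response body.
def handle(req):
    print("Wow! That was easy.")
    return req

response = handle("hello")
print(response)  # -> hello
```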

# Configure function deployment with docker registry
$ sed -i -e "s;image: my-func:latest;image: $DOCKER_USER/my-func:latest;" my-func.yml

# Build & Deploy Function
$ faas-cli up -f my-func.yml

# Test function
$ curl http://127.0.0.1:8080/function/my-func -d "Hello"

# Cleanup
$ kind delete cluster

Since OpenFaaS functions are built on top of containers that run in Kubernetes pods, they should be able to run using the gVisor runtime. However, since the OpenFaaS deployment generates the pod configurations, we need a component in OpenFaaS that lets us specify a function’s container runtime environment. A recent OpenFaaS release introduced function Profiles, which enable us to specify the runtime class of a function at deployment time using annotations.
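Under the hood, a `runtimeClassName` refers to a Kubernetes RuntimeClass object that maps the class name to a handler configured on the nodes. A minimal sketch of such an object, assuming the containerd handler for gVisor is named `runsc` (as configured in Part 2):

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
```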

Putting it all together: Demo my Function#

Now, let us see how to deploy OpenFaaS functions with gVisor using the runsc runtime handler.

0. If you have not already, please check out [Part 2](link to part 2) of this blog series to create an EKS cluster and configure gVisor properly. You will also need Docker installed and configured with your credentials.

$ docker login
$ export DOCKER_USER=your_docker_username

1. Label the second node with app=openfaas; this is the node where the OpenFaaS deployment will run.

$ export openfaas_node_name=$(kubectl get nodes -o jsonpath='{.items[1].metadata.name}')
$ kubectl label node $openfaas_node_name app=openfaas

2. Install Arkade and the OpenFaaS CLI.

# Install arkade
$ curl -SLsf https://get.arkade.dev | sudo sh

# Install OpenFaaS CLI
$ curl -sL https://cli.openfaas.com | sudo sh

3. Deploy OpenFaaS, and wait until all pods are running successfully.

$ arkade install openfaas \
	--clusterrole \
	--set operator.create=true

# Wait for OpenFaaS to deploy
$ kubectl -n openfaas get pods --watch

4. To communicate with the OpenFaaS gateway directly from our machine, let us port-forward a local port to the gateway service.

$ kubectl port-forward -n openfaas svc/gateway 8080:8080 > /dev/null 2>&1 &

5. We will use the OpenFaaS CLI to interact with our cluster. To authenticate, we need to retrieve the generated password and log in.

$ OPENFAAS_PASS=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
$ echo -n $OPENFAAS_PASS | faas-cli login --username admin --password-stdin

6. Create a gVisor profile to apply to OpenFaaS functions.

$ cat << EOF | tee profile.yaml
kind: Profile
apiVersion: openfaas.com/v1
metadata:
  name: gvisor
  namespace: openfaas
spec:
  # Configuration values can be set as key-value properties
  runtimeClassName: gvisor
EOF

$ kubectl apply -f profile.yaml

7. Create a python3 function using the OpenFaaS CLI.

# Create a new function
$ faas-cli new demo-function --lang python3

$ cat << EOF | tee demo-function/handler.py
def handle(req):
    print("You requested \"{}\" from my distroless function".format(req))
    return req
EOF

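The handler’s `print` output does not go into the HTTP response; it ends up in the function’s logs once deployed (viewable with `faas-cli logs demo-function`). A small local sketch, using only the standard library, of what the deployed function will log:

```python
import io
from contextlib import redirect_stdout

# Local check: capture what the handler writes to stdout, which is
# what appears in the function's logs once deployed.
def handle(req):
    print('You requested "{}" from my distroless function'.format(req))
    return req

buf = io.StringIO()
with redirect_stdout(buf):
    handle("input")
print(buf.getvalue().strip())
```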
8. Configure the function deployment with the Docker registry and the gVisor profile through annotations.

# Point container to docker registry
$ sed -i -e "s;image: demo-function:latest;image: $DOCKER_USER/demo-function:latest;" demo-function.yml

# Append gVisor profile
$ cat << EOF | tee -a demo-function.yml
    annotations:
        com.openfaas.profile: gvisor
EOF

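After both edits, demo-function.yml should look roughly like this (the image field depends on your $DOCKER_USER; note that the profile annotation must be nested under an `annotations:` key on the function):

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  demo-function:
    lang: python3
    handler: ./demo-function
    image: your_docker_username/demo-function:latest
    annotations:
        com.openfaas.profile: gvisor
```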
9. Build, push, and deploy the function using the OpenFaaS CLI.

$ faas-cli up -f demo-function.yml

10. Finally, let us test the function by sending it a request.

$ curl http://127.0.0.1:8080/function/demo-function -d "input"

And there you go! You have just successfully deployed OpenFaaS, created a serverless function, and executed it using gVisor’s runsc runtime.

Next: Part 4 - Network Policies with Calico#

In this blog post we experienced the simplicity and enhanced usability of serverless functions, and how to enable them using the OpenFaaS deployment. In part 4 of this blog series, we will look into Kubernetes network policies and how to define networking rules that properly secure the network in our cluster!


Mohamad El Hajj
Mohamad joined VGS in the summer of 2020 as an engineering intern to explore different avenues for potential secure serverless compute platforms. His work spanned different technologies including gVisor, Firecracker, OpenFaaS, AWS Lambda, and AWS Fargate. This blog series demonstrates his findings on gVisor and OpenFaaS, part of a larger collaboration here at VGS in the area of confidential computing.