Building your own serverless functions with k3s and OpenFaaS on Raspberry Pi


In recent years, new programming paradigms have emerged, moving from monolithic architectures towards microservices and now serverless functions. As a result, less code needs to be deployed, and updating an application becomes easier and faster because only a part of it has to be built and deployed. When serverless functions are mentioned, AWS Lambda is never far away, as it's the most prominent player in the serverless world. Other cloud providers have similar offerings, for example Google Cloud Functions or Azure Functions. But by using them, we put ourselves into a vendor lock-in. This isn't necessarily a problem if your entire infrastructure is hosted on one of those platforms. If you want to stay independent of cloud providers, however, an open-source solution that can be deployed on your own infrastructure can be beneficial. Enter OpenFaaS, an open-source serverless framework that runs on Kubernetes or Docker Swarm.

In this blog post, we will focus on setting up a Raspberry Pi cluster, using Ansible for reproducible provisioning, k3s as a lightweight Kubernetes distribution, and OpenFaaS to run serverless functions.

A notable mention is the blog post Will it cluster? k3s on your Raspberry Pi by OpenFaaS founder Alex Ellis. It covers the manual installation of k3s and OpenFaaS on a Raspberry Pi cluster as well as the deployment of a microservice.

Building the Raspberry Pi cluster

For this cluster, we use 4 Raspberry Pis 3, a USB power supply, an ethernet switch, a USB ethernet adapter, and some cables:

4x Raspberry Pi 3
4x SD card (min. 8GB, 32GB for master)
1x TeckNet® 50W 10A 6-Port USB PSU
1x D-Link GO SW-5E Ethernet switch
5x 0,25m Ethernet cables
4x Anker 0,3m USB cable
1x Ugreen USB to ethernet adapter
1x USB power cable for ethernet switch
(1x 16×2 LCD display)

Plug everything together and the final result looks like this:

Raspberry Pi 3 cluster

If you want to print the case yourself, the printable STL files are available in the repository.

One note on the ethernet connection: connect all Raspberries to the ethernet switch and plug the USB ethernet adapter into the Pi that will be our master node. In this setup, all Raspberries are in their own network and are accessed via the master node. The advantage is that only one device is connected to the outside world, which makes the cluster portable: the internal IPs don't change when we connect it to another network. The external IP of the master node will also be displayed on the 16×2 LCD display, but more on that later. The architecture looks like this:

cluster architecture

Next, we have to prepare the SD cards. Download and install Raspbian Buster Lite on all SD cards and activate ssh by default by putting a file named `ssh` on the boot partition.

Provisioning the cluster with Ansible

In this part of the blog post, we will provision our cluster using the automation tool Ansible. This gives us a reproducible setup for our cluster in case we want to add a new node or reset it. Lots of useful information can be found on the Ansible homepage. For provisioning a Raspberry Pi cluster, Larry Smith Jr. has a repository with lots of helpful tasks.

Ansible allows you to describe your infrastructure configuration and provisioning in YAML files. We can define one or multiple hosts in groups, set configurations for a (sub)set, and run tasks on all or a part of the hosts. Those tasks are then executed locally or via ssh on a remote host. We can add conditionals to tasks so they only run when needed, and Ansible takes care of idempotency: a task is not executed twice if its result is already in place.
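As a minimal sketch of these concepts (the file name and tasks here are made up for illustration, they are not part of our actual setup), a playbook targeting one of our groups could look like this:

```yaml
# example.yml -- hypothetical playbook illustrating groups, conditionals,
# and idempotency
- hosts: k3s_rpi_worker          # run only on hosts in the worker group
  remote_user: pi
  become: True

  tasks:
    - name: Install git only on Raspbian Buster
      apt:
        name: git
      when: ansible_distribution_release == "buster"   # conditional execution

    - name: Create a marker file only once
      shell: "date > /home/pi/.provisioned"
      args:
        creates: /home/pi/.provisioned   # skipped if the file already exists
```

The `when` clause skips the task on non-matching hosts, and `creates` tells Ansible the task is already done if the file exists, so repeated runs don't repeat the work.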

Inventory and configuration files

First, we define our inventory. An inventory holds a list of the hosts we want to connect to and a range of variables for all, one, or a subset of hosts. Read more about inventories on the Ansible website. We create an ansible.cfg file in a new folder, which will hold all our Ansible files:

[defaults]
inventory = ./inventory/hosts.inv ①
host_key_checking = false ②

This way, we tell Ansible to use the inventory at ./inventory/hosts.inv ① and to disable host key checking ②. This is needed because using a jumphost doesn’t allow us to approve the key.

Note: A jumphost is a computer on a network used to access machines in a separate, otherwise unreachable network.
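If you want to shell into a worker by hand, the same jump can be configured in your local ~/.ssh/config. ProxyJump (OpenSSH 7.3+) is equivalent to the ProxyCommand Ansible uses; the IPs below match our inventory, the host alias is made up:

```
Host k3s-rpi2
    HostName 192.168.50.201
    User pi
    ProxyJump pi@192.168.0.58
```

With this in place, ssh k3s-rpi2 tunnels transparently through the master node.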

Next we build our hosts.inv file:

[k3s_rpi:children] ①
k3s_rpi_master
k3s_rpi_worker
 
[k3s_rpi_master] ②
k3s-rpi1 ansible_host=192.168.0.58
 
[k3s_rpi_worker] ③
k3s-rpi2 ansible_host=192.168.50.201
k3s-rpi3 ansible_host=192.168.50.202
k3s-rpi4 ansible_host=192.168.50.203

We first define a group called k3s_rpi which contains all nodes ①. The master node must be set to the external IP under which we can reach it from our host machine, in our case 192.168.0.58 ②. The workers get IPs in the 192.168.50.x range, which is used inside the cluster network ③. Because we cannot access our worker nodes directly, we have to configure a jumphost and set an ssh proxy command in inventory/group_vars/k3s_rpi_worker/all.yml. All variables in this folder are applied to hosts in the k3s_rpi_worker group:

ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q {{ rpi_username }}@{{ jumphost_ip }}"'

The variables rpi_username and jumphost_ip are defined in the variables file valid for all hosts inventory/group_vars/all/all.yml:

# Use python3 ①
ansible_python_interpreter: /usr/bin/python3 
 
# Defines jumphost IP address to use as bastion host to reach isolated hosts
jumphost_name: "{{ groups['k3s_rpi_master'][0] }}"
jumphost_ip: "{{ hostvars[jumphost_name].ansible_host }}"
 
# Defines IPTABLES rules to define on jumphost ②
jumphost_iptables_rules:
  - chain: POSTROUTING
    jump: MASQUERADE
    out_interface: "{{ external_iface }}"
    source: "{{ dhcp_scope_subnet }}.0/24"
    state: present
    table: nat
  - chain: FORWARD
    ctstate: RELATED,ESTABLISHED
    in_interface: "{{ external_iface }}"
    jump: ACCEPT
    out_interface: "{{ cluster_iface }}"
    state: present
    table: filter
  - chain: FORWARD
    in_interface: "{{ cluster_iface }}"
    jump: ACCEPT
    out_interface: "{{ external_iface }}"
    state: present
    table: filter
 
# Default raspberry pi login ③
rpi_username: pi
rpi_password: raspberry
 
# Defines the ansible user to use when connecting to devices
ansible_user: "{{ rpi_username }}"
 
# Defines the ansible password to use when connecting to devices
ansible_password: "{{ rpi_password }}"
 
# Defines DHCP scope subnet mask ④
dhcp_scope_netmask: 255.255.255.0
 
# Defines DHCP scope address ④
# Important: set the range to exactly the number of Pis in the cluster.
# It also has to match the hosts in the hosts.inv file!
dhcp_master: "{{ dhcp_scope_subnet }}.200"
dhcp_scope_start_range: "{{ dhcp_scope_subnet }}.201"
dhcp_scope_end_range: "{{ dhcp_scope_subnet }}.203"
 
# Defines dhcp scope subnet for isolated network ⑤ 
dhcp_scope_subnet: 192.168.50

master_node_ip: "{{ dhcp_master }}"

cluster_iface: eth0
external_iface: eth1

Here, all the major configuration of our cluster is defined, e.g. the Python interpreter ① that is used to install Python packages, iptables rules ②, the default Raspberry Pi login ③, and the network configuration for our internal network ④. The dhcp_scope_subnet ⑤ defines our subnet and therefore the IP addresses our Raspberry Pis will receive. Be careful: if you change this value, you have to change the hosts.inv file accordingly.

Okay, now that we have our basic configuration set, we can start provisioning our tasty Pis. 🙂 We define our tasks in playbooks. Each playbook has a set of tasks for a specific part of the cluster setup, which we will explore in the next sections.

Master node and network setup

The playbook network.yml contains all tasks needed to set up a DHCP server using dnsmasq, plus the iptables rules. Those are necessary for our workers to access the internet via the master node. We also configure the dhcpcd daemon to use a static IP on the eth0 interface, which is connected to the cluster network.

---
- hosts: k3s_rpi_master
  remote_user: pi
  become: True
  gather_facts: True

  tasks:
    - name: Install dnsmasq and iptables persistance ①
      apt:
        name: "{{ packages }}"
      vars:
        packages:
        - dnsmasq
        - iptables-persistent 
        - netfilter-persistent

    - name: Copy dnsmasq config ②
      template:
        src: "dnsmasq.conf.j2"
        dest: "/etc/dnsmasq.conf"
        owner: "root"
        group: "root"
        mode: 0644

    - name: Copy dhcpcd config ②
      template:
        src: "dhcpcd.conf.j2"
        dest: "/etc/dhcpcd.conf"
        owner: "root"
        group: "root"
        mode: 0644

    - name: restart dnsmasq
      service:
        name: dnsmasq
        state: restarted
      become: true

    - name: Configuring IPTables ③
      iptables:
        table: "{{ item['table']|default(omit) }}"
        chain: "{{ item['chain']|default(omit) }}"
        ctstate: "{{ item['ctstate']|default(omit) }}"
        source: "{{ item['source']|default(omit) }}"
        in_interface: "{{ item['in_interface']|default(omit) }}"
        out_interface: "{{ item['out_interface']|default(omit) }}"
        jump: "{{ item['jump']|default(omit) }}"
        state: "{{ item['state']|default(omit) }}"
      become: true
      register: _iptables_configured
      tags:
        - rpi-iptables
      with_items: "{{ jumphost_iptables_rules }}"

    - name: Save IPTables
      command: service netfilter-persistent save
      when: _iptables_configured['changed']

  post_tasks:
    - name: Reboot after cluster setup ④
      reboot:

We first install the necessary packages ①, fill the configuration templates and copy them to our master ②, configure and save the IPTables rules so our workers can access the internet via the master node ③, and finally, reboot the master to apply all configurations ④.

The configuration for dnsmasq is very simple and easy to understand: we just tell it which interface to run the DHCP server on (eth0) and the IP range for the clients. The dnsmasq documentation describes all available options.

interface={{ cluster_iface }}
dhcp-range={{ dhcp_scope_start_range }},{{ dhcp_scope_end_range }},12h

After executing this playbook with ansible-playbook playbooks/network.yml, all our nodes should have an internal IP in the 192.168.50.x range. Now we can start bootstrapping all nodes, installing necessary packages, and so on.

Bootstrap

In this section, we bootstrap our cluster by installing necessary packages, securing access to the nodes, setting hostnames, and updating the operating system, as well as enabling unattended upgrades.

First, we have to create a new ssh-keypair on the master node to be able to shell into the workers without a password:

- hosts: k3s_rpi_master
  remote_user: "{{ rpi_username }}"
  gather_facts: True

  tasks:
    - name: Set authorized key taken from file ①
      authorized_key:
        user: pi
        state: present
        key: "{{ lookup('file', '/home/amu/.ssh/id_rsa.pub') }}"

    - name: Generate RSA host key ②
      command: "ssh-keygen -q -t rsa -f /home/{{ rpi_username }}/.ssh/id_rsa -C \"\" -N \"\""
      args:
        creates: /home/{{ rpi_username }}/.ssh/id_rsa.pub

    - name: Get public key ③
      shell: "cat /home/{{ rpi_username }}/.ssh/id_rsa.pub"
      register: master_ssh_public_key

The first step adds the public key from our host machine to the master node ① so we can authenticate via ssh. If you haven’t generated one yet, you can do it via ssh-keygen. Next, we create a keypair on the master node ② if it doesn’t exist yet and store the public key in a host variable called master_ssh_public_key ③. Host variables are only directly accessible on the host they are registered on, but we can fetch them and add them to our workers:

- hosts: k3s_rpi_worker
  remote_user: pi
  become: True
  gather_facts: True

  tasks:
    - set_fact: ①
        k3s_master_host: "{{ groups['k3s_rpi_master'][0] }}"

    - set_fact: ②
        master_ssh_public_key: "{{ hostvars[k3s_master_host]['master_ssh_public_key'] }}"

    - name: Set authorized key taken from master ③
      authorized_key:
        user: pi
        state: present
        key: "{{ master_ssh_public_key.stdout }}"

First, we define a variable k3s_master_host which contains the hostname of our master node, k3s-rpi1 ①. Next, we get the public key from the host variable we previously registered and store it in a variable called master_ssh_public_key ②. Now we can access the stdout of the cat command from the previous part, which contains the public key, and use the authorized_key module to add it to the authorized keys on our worker nodes ③. This is also the part where host key verification would have failed when Ansible tries to connect to the workers, as we cannot interactively approve it.

For unattended upgrades we use the role jnv.unattended-upgrades which we install via ansible-galaxy install jnv.unattended-upgrades.

- hosts: all
  remote_user: "{{ rpi_username }}"
  become: True
  gather_facts: True

  roles:
  - role: jnv.unattended-upgrades ①
    unattended_origins_patterns: ②
      - 'origin=Raspbian,codename=${distro_codename},label=Raspbian'

We import the role ① and configure the pattern for the unattended updates service ②.

The following tasks are used for general configuration of the nodes and are executed before the role for enabling unattended upgrades:

  pre_tasks:
    - name: Change pi password ①
      user:
        name: pi
        password: "{{ lookup('password', '{{ playbook_dir }}/credentials/{{ inventory_hostname }}/pi.pass length=32 chars=ascii_letters,digits') }}"

    - name: Put pi into sudo group ②
      user:
        name: pi
        append: yes
        groups: sudo
  
    - name: Remove excessive privilege from pi ②
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: '^#?pi'
        line: '#pi ALL=(ALL) NOPASSWD:ALL'
        validate: 'visudo -cf %s'

    - name: Set hostname ③
      hostname:
        name: "{{ inventory_hostname }}"

    - name: Set timezone ③
      copy:
        content: "Europe/Berlin\n"
        dest: /etc/timezone
        owner: root
        group: root
        mode: 0644
        backup: yes

    - name: Add IP address of all hosts to all hosts ③
      template:
        src: "hosts.j2"
        dest: "/etc/hosts"
        owner: "root"
        group: "root"
        mode: 0644

    - name: Disable Password Authentication ④
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PasswordAuthentication'
        line: "PasswordAuthentication no"
        state: present
        backup: yes

    - name: Expand filesystem ⑤
      shell: "raspi-config --expand-rootfs >> .ansible/sd-expanded"
      args:
        creates: .ansible/sd-expanded

    - name: Update system ⑥
      apt:
        cache_valid_time: 3600
        update_cache: yes
        upgrade: safe

    - name: Install some base packages ⑦
      apt:
        name: "{{ packages }}"
      vars:
        packages:
        - vim
        - aptitude 
        - git

We start by changing the password of the pi user ①, remove some excessive privileges from the user ②, set the hostname, timezone, and hosts file ③, disable password authentication ④, expand the file system ⑤, update the system ⑥, and install some base packages ⑦ needed for Kubernetes. The changed passwords are stored under playbooks/credentials.

Lastly, we restart all nodes. Due to the nature of the cluster, we restart the workers first and the master afterwards. Otherwise, the Ansible playbook would fail because it cannot reach the workers while the master is rebooting:

- hosts: k3s_rpi_worker
  remote_user: "{{ rpi_username }}"
  gather_facts: True
  become: True

  tasks:
    - name: Reboot after bootstrap
      reboot:


- hosts: k3s_rpi_master
  remote_user: "{{ rpi_username }}"
  gather_facts: True
  become: True

  tasks:
    - name: Reboot after bootstrap
      reboot:

k3s and OpenFaaS

Our nodes are now set up and bootstrapped. We can install the k3s Kubernetes distribution, the Kubernetes dashboard, and OpenFaaS.

On the master node we install the k3s server and bind it to 0.0.0.0, so we can access it with kubectl from our local machine.

- hosts: k3s_rpi_master
  remote_user: pi
  become: True
  gather_facts: True

  tasks:
    - name: Install / upgrade k3s on master node ①
      shell: "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"server --bind-address 0.0.0.0\" sh -"

    - name: Get token from master ②
      shell: "cat /var/lib/rancher/k3s/server/node-token"
      register: k3s_node_token

Installing it is done with a simple curl and takes a minute or so ①. Now we have a running Kubernetes cluster with one node on our master node. 🙂 But we want to add our worker nodes too, so we save the node token needed for joining the cluster in a variable ②. Next, we install the k3s agent on the worker nodes and join it to the cluster:

- hosts: k3s_rpi_worker
  remote_user: pi
  become: True
  gather_facts: True

  tasks:
    - set_fact: ①
        k3s_master_host: "{{ groups['k3s_rpi_master'][0] }}"

    - set_fact: ②
        k3s_master_token: "{{ hostvars[k3s_master_host]['k3s_node_token'].stdout }}"

    - name: Install / upgrade k3s on worker nodes and connect to master ③
      shell: "curl -sfL https://get.k3s.io | K3S_URL=https://{{ master_node_ip }}:6443 K3S_TOKEN={{ k3s_master_token }} sh -"

We first get the hostname of our master node ① and retrieve the token from it ②. Installing and joining is also done with a single curl command ③: we pass the master IP and the token to the install script, and it takes care of installing the agent and joining the cluster. After a few minutes, we should see the nodes popping up in sudo k3s kubectl get nodes on the master node. We have our Kubernetes cluster running on our Raspberry Pis! 🙂

Kubernetes cluster running on Raspberry Pis

Now we want to deploy the Kubernetes dashboard to our cluster:

- hosts: k3s_rpi_master
  remote_user: pi
  become: True
  gather_facts: True

  tasks:
    - name: Make sure destination dir exists ①
      become: False
      file:
        path: /home/{{ rpi_username }}/kubedeployments
        state: directory

    - name: Copy dashboard admin file ②
      become: False
      copy:
        src: files/dashboard-admin.yaml 
        dest: /home/{{ rpi_username }}/kubedeployments/dashboard-admin.yaml
        owner: "{{ rpi_username }}"
        group: "{{ rpi_username }}"
        mode: '0644'

    - name: Apply dashboard admin ③
      shell: "k3s kubectl apply -f /home/{{ rpi_username }}/kubedeployments/dashboard-admin.yaml"

    - name: Install dashboard ④
      shell: "k3s kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml"

    - name: Get dashboard token ⑤
      shell: "k3s kubectl -n kube-system describe secret $(k3s kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token: | cut -d':' -f2 | xargs"
      register: dashboard_token

    - debug: ⑥
        msg: "{{ dashboard_token.stdout }}"

    - name: Save dashboard token to credentials/dashboard_token ⑦
      become: False
      local_action: copy content={{ dashboard_token.stdout }} dest={{ playbook_dir }}/credentials/dashboard_token

First, we create a folder kubedeployments on the master node ①. We copy the dashboard-admin.yaml file from playbooks/files/, which is needed to access the dashboard ②. Then we apply this file ③ as well as the dashboard resource ④. To access the dashboard, we have to get the token: we grep the secret from the cluster ⑤ and print it in Ansible ⑥. For later access, we also store it in the playbooks/credentials/dashboard_token file on the local machine ⑦.

To connect to the Kubernetes cluster from our local machine, we copy the generated Kubernetes config file from the master to our local machine ① and fix up the IP address of the master node ②. If we copy this file to ~/.kube/config, we can access the cluster with kubectl from our local machine. Run kubectl proxy and open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. We should be able to log in with the token retrieved previously and see our Kubernetes cluster.

    - name: Download kubernetes config ①
      fetch:
        src: /etc/rancher/k3s/k3s.yaml
        dest: "{{ playbook_dir }}/credentials/k3s.yaml"
        flat: yes

    - name: Set correct IP in downloaded kubernetes config ②
      become: False
      local_action: 
        module: lineinfile
        dest: "{{ playbook_dir }}/credentials/k3s.yaml"
        regexp: "^    server"
        line: "    server: https://{{ jumphost_ip }}:6443"

To install OpenFaaS, we have to clone the repository containing the charts and apply two resource files ①. The first one creates the namespaces in our cluster ②, the second one installs all services for the armhf architecture in our cluster ③.

    - name: Clone OpenFAAS kubernetes charts ①
      git:
        repo: https://github.com/openfaas/faas-netes.git
        dest: /home/{{ rpi_username }}/faas-netes

    - name: Install OpenFAAS
      shell: |
        k3s kubectl apply -f /home/{{ rpi_username }}/faas-netes/namespaces.yml ②
        k3s kubectl apply -f /home/{{ rpi_username }}/faas-netes/yaml_armhf ③

Opening http://<master_node_ip>:31112 in a browser should show the OpenFaaS dashboard:

OpenFaaS dashboard

We can now deploy our first function! OpenFaaS has an integrated function store, meaning that we can deploy pre-built functions to our cluster. Just click on Deploy New Function, select nodeinfo, and hit Deploy. This function returns some information about one of our nodes:

serverless function returning some information about one of our nodes

Now we can also start developing our own functions. We'll cover this in a later blog post; if you are curious, you can read the official documentation: https://docs.openfaas.com/cli/templates/
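Instead of the UI, functions can also be deployed with faas-cli and a stack file. A minimal sketch (the image tag is an assumption; check the function store for the current armhf build):

```yaml
# stack.yml -- deploy with: faas-cli deploy -f stack.yml
provider:
  name: openfaas
  gateway: http://192.168.0.58:31112    # external IP of our master node

functions:
  nodeinfo:
    image: functions/nodeinfo:latest-armhf   # assumed pre-built armhf image
```

Once deployed, the function can be invoked with curl http://192.168.0.58:31112/function/nodeinfo.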

LCD display monitoring

We also want to add an LCD display showing some information about our cluster, for example the external IP, how many k3s nodes are available, and how many functions are deployed in OpenFaaS. For this, we connect a 16×2 LCD with an i2c interface to our Raspberry Pi. It has four pins: we connect 5V to pin 2 or 4, GND to pin 6, and the i2c data lines SDA to pin 3 and SCL to pin 5. On pinout.xyz we can find a schematic of the pin layout of the Pi.

Now we have to enable i2c on our master node to be able to communicate with the display. We create a new playbook file lcd.yml:

- hosts: k3s_rpi_master
  remote_user: "{{ rpi_username }}"
  gather_facts: True
  become: True

  tasks:
    - name: Check if i2c is enabled ①
      shell: "raspi-config nonint get_i2c"
      register: i2c_disabled

    - name: Enable i2c ②
      shell: "raspi-config nonint do_i2c 0"
      when: i2c_disabled.stdout == "1"

    - name: Reboot after enabling i2c ③
      when: i2c_disabled.stdout == "1"
      reboot:

We first check whether i2c is already enabled ①. If not, we enable it ② and restart our Pi ③. To control the display, we use a small Python script, so we need some dependencies installed to access i2c and the k3s cluster:

    - name: Install python pip, smbus and i2c-tools 
      apt:
        name: "{{ packages }}"
      vars:
        packages:
        - python3-pip
        - python3-smbus
        - i2c-tools

    - name: Install kubernetes python package
      pip:
        name: kubernetes
        executable: pip3

In order to have access to our cluster, we have to set up the Kubernetes config. This is essentially the same as before, but this time we copy the file locally on the master:

    - name: Copy kube config and fix ip
      shell: "cp /etc/rancher/k3s/k3s.yaml /home/{{ rpi_username }}/.kube/config && chown {{ rpi_username }}:{{ rpi_username }} /home/{{ rpi_username }}/.kube/config && sed -i 's/[0-9]\\{1,3\\}\\.[0-9]\\{1,3\\}\\.[0-9]\\{1,3\\}\\.[0-9]\\{1,3\\}/127.0.0.1/g' /home/{{ rpi_username }}/.kube/config"

    - name: Create k3s_lcd directory
      become: False
      file:
        path: /home/{{ rpi_username }}/k3s_status_lcd
        state: directory

Lastly, we copy the script files ①, install the systemd service ② and a shutdown script ③, and enable and start the service ④:

    - name: Copy k3s_status_lcd files ①
      become: False
      copy:
        src: "{{ item }}"
        dest: /home/{{ rpi_username }}/k3s_status_lcd
        owner: "{{ rpi_username }}"
        group: "{{ rpi_username }}"
        mode: '0644'
      with_fileglob:
        - ../../k3s_status_lcd/*

    - name: Install k3s-status service ②
      template:
        src: "../../k3s_status_lcd/k3s-status.service.j2"
        dest: "/etc/systemd/system/k3s-status.service"
        owner: "root"
        group: "root"
        mode: 0644

    - name: Install k3s-shutdown script ③
      template:
        src: "../../k3s_status_lcd/k3s-lcd-shutdown.sh.j2"
        dest: "/lib/systemd/system-shutdown/k3s-lcd-shutdown.sh"
        owner: "root"
        group: "root"
        mode: 0744

    - name: Start k3s-status service ④
      systemd:
        state: restarted
        enabled: yes
        daemon_reload: yes
        name: k3s-status

We also have a shutdown script ③, placed in the /lib/systemd/system-shutdown/ folder, which systemd executes right before the Raspberry Pi turns off or reboots. This way, we know when it's safe to unplug the cluster. 🙂

The source files for the status LCD can be found here: https://github.com/amuttsch/rpi-k3s-openfaas/tree/master/k3s_status_lcd
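The core of the status script is just formatting the cluster information into the two 16-character rows of the display. A sketch of that part (the function name is made up; the real script additionally talks to the display via smbus and to k3s via the kubernetes client, which is omitted here):

```python
def format_status(ip, nodes, functions):
    """Return two 16-character lines for a 16x2 HD44780 display."""
    # truncate to 16 columns, then pad with spaces so old characters
    # on the display are overwritten
    line1 = "IP {}".format(ip)[:16].ljust(16)
    line2 = "Nodes:{} Fn:{}".format(nodes, functions)[:16].ljust(16)
    return line1, line2

line1, line2 = format_status("192.168.0.58", 4, 1)
```

Each returned line is then written to one row of the LCD over i2c.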

And this is how it looks when the Raspberry Pi boots:

Conclusion

In this blog post, we built the hardware for a cluster made of 4 Raspberry Pis and provisioned it with Ansible, setting up a k3s Kubernetes cluster that runs OpenFaaS for serverless functions. We also added a status LCD display showing the current status of our cluster and the functions running on it. If you don't want to execute all Ansible playbooks sequentially, you can run the deploy.yml playbook, which executes all previously mentioned playbooks in order. After waiting a few minutes, we have a fully configured Kubernetes cluster running OpenFaaS on our Raspberry Pis!
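The deploy.yml playbook mentioned above is essentially a chain of imports; a sketch of what it might look like (the individual playbook file names are assumptions based on the sections above):

```yaml
# deploy.yml -- runs all playbooks in order (file names are assumptions)
- import_playbook: network.yml
- import_playbook: bootstrap.yml
- import_playbook: k3s.yml
- import_playbook: lcd.yml
```

Running ansible-playbook playbooks/deploy.yml then provisions the whole cluster in one go.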

In the next post, we’ll dive deeper into OpenFaaS and how to develop and deploy custom functions on it.

Links for further information:
k3s – Lightweight Kubernetes
OpenFaaS – Serverless Functions, Made Simple
Will it cluster? k3s on your Raspberry Pi
Ansible – Raspberry Pi Kubernetes Cluster
GOTO 2018 • Serverless Beyond the Hype • Alex Ellis

Andreas Muttscheller

Backend software engineer at codecentric. Interested in concurrent and distributed programming, CI/CD and performance optimization.
