
Experimental analysis of the application of serverless computing to IoT platforms

Benedetti P. (Software); Femminella M. (Validation); Reali G. (Writing – Original Draft Preparation); Steenhaut K. (Writing – Review & Editing)
2021

Abstract

Serverless computing, especially as implemented through Function-as-a-Service (FaaS) platforms, has recently been gaining popularity as an application deployment model in which functions are automatically instantiated when called and scaled when needed. When the warm start deployment mode is used, the FaaS platform gives users the perception of constantly available resources. Conversely, when the cold start mode is used, the containers running the application's modules are automatically destroyed once the application has finished executing. The latter can lead to considerable resource and cost savings. In this paper, we explore the suitability of both modes for deploying Internet of Things (IoT) applications on a low-resource testbed comparable to an edge node. We discuss the implementation and experimental analysis of an IoT serverless platform that includes typical IoT service elements. A performance study in terms of resource consumption and latency is presented for both the warm and cold start deployment modes, implemented using OpenFaaS, a well-known open-source FaaS framework whose flexibility allows testing a cold start deployment with a precisely configured inactivity time. This experimental analysis allows us to evaluate the suitability of the two deployment modes under different operating conditions: exploiting the minimum inactivity time configurable in OpenFaaS, we find that the cold start mode can be convenient for saving the limited resources of edge nodes, but only if the data transmission period is significantly longer than the time needed to trigger container shutdown.
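The warm versus cold start trade-off summarized above can be illustrated with a simple latency probe against a FaaS gateway over HTTP. The sketch below is not taken from the paper's testbed; the gateway URL, function name, payload, and idle window are hypothetical placeholders, and in a real deployment the scale-to-zero inactivity time would be set in the platform's configuration rather than assumed in the client.

```python
import time
import requests

# Hypothetical OpenFaaS-style gateway endpoint and function name
# (placeholders, not the paper's actual testbed configuration).
GATEWAY_URL = "http://127.0.0.1:8080/function/iot-handler"
IDLE_WINDOW_S = 60  # assumed inactivity time after which the function scales to zero


def invoke_and_time(payload: bytes) -> float:
    """Invoke the function once and return the end-to-end latency in seconds."""
    start = time.perf_counter()
    response = requests.post(GATEWAY_URL, data=payload, timeout=30)
    response.raise_for_status()
    return time.perf_counter() - start


if __name__ == "__main__":
    payload = b'{"sensor_id": "s1", "value": 23.5}'

    # First call: if the function had been scaled to zero, this latency
    # includes container start-up (cold start) plus execution.
    cold_latency = invoke_and_time(payload)

    # Immediate second call: the container is still running (warm start).
    warm_latency = invoke_and_time(payload)

    # Wait longer than the assumed idle window so the platform can scale
    # the function back to zero before measuring another cold start.
    time.sleep(IDLE_WINDOW_S + 10)
    cold_again = invoke_and_time(payload)

    print(f"cold: {cold_latency:.3f}s, warm: {warm_latency:.3f}s, "
          f"cold again: {cold_again:.3f}s")
```

Repeating such measurements while varying the interval between invocations relative to the idle window is one way to observe the latency penalty of cold starts against the resource savings of scaling to zero.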
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11391/1482364
Citations
  • PMC: not available
  • Scopus: 30
  • Web of Science: 26