Welcome to this tutorial on building a Function as a Service (FaaS) system using Rust.
If you’ve been exploring cloud computing, you’ve likely come across FaaS platforms like AWS Lambda or Google Cloud Functions. These platforms allow developers to deploy single functions that are run in response to events without maintaining the underlying infrastructure.
In this article, we’ll be creating our own simple FaaS platform using Rust, a language known for its performance and safety guarantees. This tutorial aims to provide a clear and concise walkthrough, offering a practical way to understand FaaS and how to implement it.
Let’s dive in!
Understanding Function As A Service (FaaS)
Before diving into the nitty-gritty of our Rust implementation, it’s crucial to have a solid grasp of the FaaS concept. Let’s take a moment to break it down.
What is FaaS?
Function as a Service, commonly abbreviated as FaaS, represents a cloud computing service model that allows developers to execute and manage code in response to particular events without the complexity of building and maintaining the underlying infrastructure. Think of it as a way to run your back-end code without dealing with servers.
Key Concepts:
- Event-driven: FaaS functions are typically executed in response to events. These events could range from HTTP requests to changes in database entries or even file uploads.
- Stateless: Each function invocation is independent. This means the system does not retain any persistent local state between function executions. Any external state must be stored in a database or other external storage system.
- Short-lived: FaaS functions are designed to execute quickly. Long-running operations might be better suited for different architectures.
- Automatic Scaling: One of the most significant advantages of FaaS is its ability to scale automatically. As the number of requests increases, the platform handles them by launching more instances of the function, with no intervention from the developer.
- Billing Model: With FaaS platforms, you’re typically billed for the actual amount of resources consumed during the execution of your functions rather than pre-allocated server sizes or uptime. This can lead to cost savings since you only pay for what you use.
With the basics out of the way, let’s get our hands dirty and start building!
Essential Crates for Our FaaS Project
In our journey to implement a Function as a Service platform using Rust, we’ll be leveraging several powerful crates from the Rust ecosystem:
- hyper: A fast and efficient HTTP client and server framework. In our project, hyper forms the backbone of our server infrastructure, allowing us to handle incoming requests efficiently.
- tokio: An asynchronous runtime tailored for Rust, instrumental in managing I/O-bound tasks and other asynchronous operations. For our FaaS platform, tokio serves as our primary runtime, ensuring that file I/O and network requests are executed smoothly.
- reqwest: An elegant, high-level HTTP client built atop hyper. While hyper provides the building blocks for HTTP communication, reqwest offers a more ergonomic interface. In our client application, it streamlines the process of sending HTTP requests.
- libloading: This crate offers dynamic linking capabilities for Rust. The core functionality of our FaaS platform, dynamically loading and invoking functions from shared libraries, hinges on libloading.
- multipart: Tailored for parsing and generating multipart (notably multipart/form-data) requests and responses, multipart is crucial in our setup. It manages the file upload process, enabling users to submit their function libraries seamlessly.
- anyhow: Handling errors in Rust becomes a breeze with anyhow. In our application, it plays a pivotal role in managing and presenting errors in a way that's user-friendly.
Server Setup and Logic
Setup
Before diving into the code, ensure you have the latest Rust and Cargo installed. You can do this via rustup. Once ready, create a new project:
cargo new rust_faas_server
cd rust_faas_server
Next, add the required dependencies to your Cargo.toml:
[dependencies]
hyper = "0.14"
tokio = { version = "1", features = ["full"] }
reqwest = "0.11"
libloading = "0.7"
multipart = "0.17"
anyhow = "1.0"
Run cargo build to ensure everything is set up correctly.
Server Logic
The heart of our server is the hyper crate. We'll be crafting endpoints to upload and invoke functions.
1. Starting the Server
To kick things off, we’ll create a simple HTTP server using hyper. The server will listen for incoming requests, routing them to the appropriate handler based on their path.
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use std::convert::Infallible;
use std::net::SocketAddr;

#[tokio::main]
async fn main() {
    // Build a new service for every connection; each request is passed
    // to route_request.
    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, Infallible>(service_fn(route_request))
    });
    let addr = SocketAddr::from(([127, 0, 0, 1], 8080));
    let server = Server::bind(&addr).serve(make_svc);
    println!("Listening on http://{}", addr);
    server.await.expect("Server failed");
}
This code sets up a basic server that listens on http://127.0.0.1:8080/. All incoming requests are routed to the route_request function.

2. Routing the Requests
We aim to support two main operations: uploading a function and invoking it. We can determine which operation to perform based on the HTTP method and path:
async fn route_request(req: Request<Body>) -> Result<Response<Body>, Infallible> {
    match (req.method(), req.uri().path()) {
        (&hyper::Method::POST, "/upload") => handle_upload(req).await,
        (&hyper::Method::GET, path) if path.starts_with("/invoke/") => handle_invoke(req).await,
        // Unknown routes get a proper 404 rather than a 200 with "Not Found".
        _ => Ok(Response::builder()
            .status(hyper::StatusCode::NOT_FOUND)
            .body(Body::from("Not Found"))
            .unwrap()),
    }
}
The routing logic is straightforward: POST requests to /upload are directed to the upload handler, while GET requests to paths starting with /invoke/ are directed to the invoke handler.
3. Handling Uploads
The upload handler’s responsibility is to save the uploaded shared library and ensure it’s ready for invocation:
async fn handle_upload(req: Request<Body>) -> Result<Response<Body>, Infallible> {
    // CONTENT_TYPE comes from hyper::header::CONTENT_TYPE.
    let headers = req.headers().clone();
    if let Some(content_type) = headers.get(hyper::header::CONTENT_TYPE) {
        if let Ok(ct) = content_type.to_str() {
            if ct.starts_with("multipart/form-data") {
                // Extract the multipart boundary from the Content-Type header.
                let boundary = ct.split("boundary=").nth(1).unwrap_or_default().to_string();
                let body_bytes = hyper::body::to_bytes(req.into_body()).await.unwrap();
                // Rudimentary parsing: split on newlines and drop boundary and
                // empty lines. Real-world code should use a proper multipart parser.
                let parts = body_bytes
                    .split(|&b| b == b'\n')
                    .filter(|line| {
                        !line.starts_with(b"--") && !line.is_empty() && *line != boundary.as_bytes()
                    })
                    .collect::<Vec<_>>();
                // Assume the remaining slices alternate between part headers and content.
                for i in (0..parts.len()).step_by(2) {
                    if let (Some(part_headers), Some(content)) = (parts.get(i), parts.get(i + 1)) {
                        if part_headers.starts_with(b"Content-Disposition") {
                            // Extract the filename from the Content-Disposition header.
                            let needle = b"filename=\"";
                            if let Some(start) =
                                part_headers.windows(needle.len()).position(|w| w == needle)
                            {
                                let filename_start = start + needle.len();
                                let filename_end = part_headers[filename_start..]
                                    .iter()
                                    .position(|&b| b == b'"')
                                    .unwrap_or(0)
                                    + filename_start;
                                let filename = std::str::from_utf8(
                                    &part_headers[filename_start..filename_end],
                                )
                                .unwrap();
                                // NOTE: in real code, sanitize the filename to prevent
                                // path traversal (e.g. "../../evil.so").
                                tokio::fs::create_dir_all("./uploads").await.unwrap();
                                let file_path = format!("./uploads/{}", filename);
                                tokio::fs::write(&file_path, content).await.unwrap();
                            }
                        }
                    }
                }
                return Ok(Response::new(Body::from("File uploaded successfully")));
            }
        }
    }
    Ok(Response::new(Body::from("Invalid request")))
}
The parsing above is deliberately rudimentary; in practice you would use the multipart crate to process uploaded files robustly. Remember to perform security checks as well: loading untrusted shared libraries is a serious risk, since an uploaded library runs with the full privileges of the server process.
4. Handling Invocations
Once a function has been uploaded, it can be invoked using its name:
use hyper::StatusCode;
use libloading::{Library, Symbol};

// The signature every uploaded function must expose. Here we assume a
// C-ABI function taking no arguments and returning an i32.
type Func = unsafe extern "C" fn() -> i32;

async fn handle_invoke(req: Request<Body>) -> Result<Response<Body>, Infallible> {
    // Parse the path (/invoke/<name>) to get the function name.
    let path = req.uri().path();
    let parts: Vec<&str> = path.split('/').collect();
    if parts.len() < 3 || parts[2].is_empty() {
        return Ok(Response::builder()
            .status(StatusCode::BAD_REQUEST)
            .body(Body::from("Invalid path"))
            .unwrap());
    }
    let function_name = parts[2];
    // Construct the path to the uploaded library.
    // Assuming a Unix-like system; use .dylib on macOS or .dll on Windows.
    let lib_path = format!("uploads/{}.so", function_name);
    if !std::path::Path::new(&lib_path).exists() {
        return Ok(Response::builder()
            .status(StatusCode::NOT_FOUND)
            .body(Body::from("Library not found"))
            .unwrap());
    }
    // Loading and calling foreign code is inherently unsafe.
    unsafe {
        let lib = Library::new(&lib_path).expect("Failed to load library");
        // Look up the symbol with the same name as the library.
        let func: Symbol<Func> = lib.get(function_name.as_bytes()).expect("Failed to get symbol");
        let result = func();
        Ok(Response::new(Body::from(format!("Function returned: {}", result))))
    }
}
With the libloading crate, you can dynamically load functions from shared libraries and invoke them, returning the results to the client.
We’ve laid the foundation for our FaaS server using Rust. By leveraging crates like hyper, multipart, and libloading, we can handle the upload and invocation of functions, though a production system would need much stricter validation and sandboxing. In the next section, we'll focus on the client side, which will interact with our server, making it a complete FaaS ecosystem.
Building a FaaS CLI
Now that we have our FaaS server set up, we need a way to interact with it — a client tool to deploy and invoke functions. Rust offers excellent capabilities for CLI tool development thanks to its lightweight binaries and performance. In this section, we’ll walk through the development of a simple command-line interface (CLI) for deploying and invoking serverless functions on our FaaS platform.


