Learn Rust by building a RESTFul API with Actix

Hi there 👋

I am currently learning Rust, so I decided to build a sample CRUD project with Actix to understand the language better. Reading the Rust Book on its own sometimes feels dry, which is why I wrote this blog post.

Prerequisites

Simple API

In this tutorial, we will build an API that can create a new tweet from JSON data, show a tweet by id, delete a tweet by id, and list all tweets. We will have the following endpoints:

  • GET /tweets - List all tweets.
  • POST /tweets - Create a new tweet.
  • GET /tweets/:id - Get tweet detail by ID.
  • DELETE /tweets/:id - Delete a tweet with a given id.
  • PUT /tweets/:id - Edit a tweet.

Create a New Project

We create a new project with cargo new and then set up the Actix server:

cargo new rust-actix-crud

Dependencies

The Cargo.toml file lists your dependencies.

Cargo.toml
[package]
name = "rust-actix-crud"
version = "0.1.0"
edition = "2021"

[dependencies]
actix-web = "4"
chrono = { version = "0.4.19", features = ["serde"] }
diesel = { version = "1.4.4", features = ["postgres", "r2d2", "chrono"] }
dotenv = "0.15.0"
env_logger = "0.9.0"
serde = { version = "1.0.136", features = ["derive"] }
serde_json = "1.0"

  • actix-web - Web framework for Rust.
  • diesel - ORM and query builder for Rust.
  • dotenv - Loads environment variables from a .env file or the system.
  • env_logger - A logger that can be configured via environment variables.
  • serde - A framework for serializing/deserializing Rust data structures.
  • serde_json - JSON serialization support - to read raw JSON.
  • chrono - Date and time library, used here together with Diesel.
  • r2d2 - A generic connection pool for Rust.

Fetch all dependencies with Cargo:

cargo update

Hello Actix

If you go to actix.rs, you will see this sample Hello World app:

use actix_web::{get, web, App, HttpServer, Responder};

#[get("/hello/{name}")]
async fn greet(name: web::Path<String>) -> impl Responder {
    format!("Hello {name}!")
}

#[actix_web::main] // or #[tokio::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .route("/hello", web::get().to(|| async { "Hello World!" }))
            .service(greet)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}

We can start the server, and the app will respond at http://localhost:8080/hello/YOUR_NAME

cargo run

Now, create all routes in src/main.rs. We will update it later.

src/main.rs
use actix_web::{delete, get, post, put, web, App, HttpResponse, HttpServer, Responder};

#[get("/tweets")]
async fn tweet_index() -> impl Responder {
    HttpResponse::Ok().body("Tweet#index")
}

#[post("/tweets")]
async fn tweet_create() -> impl Responder {
    HttpResponse::Ok().body("Tweet#new")
}

#[get("/tweets/{id}")]
async fn tweet_show(id: web::Path<String>) -> impl Responder {
    HttpResponse::Ok().body(format!("Tweet#show {}", id))
}

#[put("/tweets/{id}")]
async fn tweet_update(id: web::Path<String>) -> impl Responder {
    HttpResponse::Ok().body(format!("Tweet#edit {}", id))
}

#[delete("/tweets/{id}")]
async fn tweet_destroy(id: web::Path<String>) -> impl Responder {
    HttpResponse::Ok().body(format!("Tweet#delete {}", id))
}

#[actix_web::main] // or #[tokio::main]
async fn main() -> std::io::Result<()> {
    env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));

    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(|| async { "Actix REST API" }))
            .service(tweet_index)
            .service(tweet_create)
            .service(tweet_show)
            .service(tweet_update)
            .service(tweet_destroy)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}

All routes are ready. Next, we move them into a separate module called handlers to keep our code well organized (I also drop the tweet_ prefix). Create src/handlers.rs:

src/handlers.rs
use actix_web::{delete, get, post, put, web, HttpResponse, Responder};

#[get("/tweets")]
async fn index() -> impl Responder {
  HttpResponse::Ok().body("Tweet#index")
}

#[post("/tweets")]
async fn create() -> impl Responder {
  HttpResponse::Ok().body("Tweet#new")
}

#[get("/tweets/{id}")]
async fn show(id: web::Path<String>) -> impl Responder {
  HttpResponse::Ok().body(format!("Tweet#show {}", id))
}

#[put("/tweets/{id}")]
async fn update(id: web::Path<String>) -> impl Responder {
  HttpResponse::Ok().body(format!("Tweet#edit {}", id))
}

#[delete("/tweets/{id}")]
async fn destroy(id: web::Path<String>) -> impl Responder {
  HttpResponse::Ok().body(format!("Tweet#delete {}", id))
}

and now update src/main.rs to declare and use the new module:

src/main.rs
use actix_web::{web, App, HttpServer};

mod handlers;

#[actix_web::main] // or #[tokio::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(|| async { "Actix REST API" }))
            .service(handlers::index)
            .service(handlers::create)
            .service(handlers::show)
            .service(handlers::update)
            .service(handlers::destroy)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}

Set Up PostgreSQL

Install and start PostgreSQL

brew install postgresql

# Start background service for pg
brew services start postgresql
# Or, if you don't want/need a background service you can just run:
/opt/homebrew/opt/postgresql/bin/postgres -D /opt/homebrew/var/postgres

Log in to Postgres and create a new user with a password and permission to create and access databases.

psql postgres

CREATE ROLE actix_tweet_user WITH LOGIN PASSWORD 'your_password';
ALTER ROLE actix_tweet_user CREATEDB;

Log out of the default user and log in as the new user.

\q
psql -d postgres -U actix_tweet_user

Create a new database named actix_tweet_db:

CREATE DATABASE actix_tweet_db;
\q

I use TablePlus as my PostgreSQL client on macOS.

Set Up Docker

If you prefer Docker, you can use this docker-compose.yml

docker-compose.yml
version: '3.8'

services:
  pg:
    image: postgres:14.2-alpine
    container_name: docker_pg
    restart: always
    environment:
      POSTGRES_USER: custom_user
      POSTGRES_PASSWORD: supersecret_password
    ports:
      - '15432:5432'
    volumes:
      - db:/var/lib/postgresql/data
volumes:
  db:
    driver: local

and then run this command:

docker-compose up -d

As you may have noticed, I map host port 15432 to avoid a port conflict if you already have Postgres installed locally.

Verify that you can connect to Postgres running in Docker:

psql postgresql://custom_user:supersecret_password@localhost:15432/postgres
# or
psql -U custom_user -W -h localhost -p 15432

Set Up Diesel

Install Diesel CLI

cargo install diesel_cli --no-default-features --features postgres

Now put the database connection string in a .env file, then run diesel setup:

DATABASE_URL=postgres://actix_tweet_user:your_password@localhost:5432/actix_tweet_db

diesel setup

This step creates our database if it doesn't already exist and sets up a migrations folder to manage our database schema.

Let's create a migration for the tweets table that stores our tweet details:

diesel migration generate create_tweets

You will see two new files, up.sql and down.sql, inside the migrations/ folder.

Then edit up.sql:

CREATE TABLE tweets (
  id SERIAL NOT NULL PRIMARY KEY,
  message VARCHAR(140) NOT NULL,
  created_at TIMESTAMP NOT NULL
);

and then run:

diesel migration run

This generates a new file, src/schema.rs:

src/schema.rs
table! {
    tweets (id) {
        id -> Int4,
        message -> Varchar,
        created_at -> Timestamp,
    }
}

and then update down.sql with the following code:

DROP TABLE tweets;

We can now roll back and re-apply our migration with:

diesel migration redo

You can look at the diesel.toml file to see how these paths are configured.

Connect with PostgreSQL

After creating the Diesel migrations and generating src/schema.rs, we add code to connect to PostgreSQL, using r2d2 for connection pooling.

Let us modify the src/main.rs file and add the following to the top:

src/main.rs
#[macro_use]
extern crate diesel;

use actix_web::{web, App, HttpServer};
use diesel::prelude::*;
use diesel::r2d2::{self, ConnectionManager};

// We define a custom type for connection pool to use later.
pub type DbPool = r2d2::Pool<ConnectionManager<PgConnection>>;

Inside the main() function, we create a connection pool using the DATABASE_URL from the .env file we created earlier.

src/main.rs
mod handlers;

#[actix_web::main] // or #[tokio::main]
async fn main() -> std::io::Result<()> {
    // Loading .env into environment variable.
    dotenv::dotenv().ok();

    // set up database connection pool
    let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL");
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    let pool: DbPool = r2d2::Pool::builder()
        .build(manager)
        .expect("Failed to create pool.");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .route("/", web::get().to(|| async { "Actix REST API" }))
            .service(handlers::index)
            .service(handlers::create)
            .service(handlers::show)
            .service(handlers::update)
            .service(handlers::destroy)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}

Now the server connects to PostgreSQL on startup. You can verify by running:

cargo run

Open a browser to test at http://localhost:8080/tweets/1

Next, I want to log some info whenever a client makes a request. Update the src/main.rs file with the following code:

src/main.rs
#[macro_use]
extern crate diesel;

use actix_web::{middleware, web, App, HttpServer};
use diesel::prelude::*;
use diesel::r2d2::{self, ConnectionManager};

pub type DbPool = r2d2::Pool<ConnectionManager<PgConnection>>;

mod handlers;

#[actix_web::main] // or #[tokio::main]
async fn main() -> std::io::Result<()> {
    // Loading .env into environment variable.
    dotenv::dotenv().ok();

    env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));

    // set up database connection pool
    let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL");
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    let pool: DbPool = r2d2::Pool::builder()
        .build(manager)
        .expect("Failed to create pool.");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .wrap(middleware::Logger::default())
            .route("/", web::get().to(|| async { "Actix REST API" }))
            .service(handlers::index)
            .service(handlers::create)
            .service(handlers::show)
            .service(handlers::update)
            .service(handlers::destroy)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}

Now, you can see log info on the console when you start a server:

[2022-03-11T19:19:42Z INFO  actix_server::builder] Starting 10 workers
[2022-03-11T19:19:42Z INFO  actix_server::server] Actix runtime found; starting in Actix runtime
[2022-03-11T19:19:45Z INFO  actix_web::middleware::logger] 127.0.0.1 "GET /tweets/43 HTTP/1.1" 200 13 "-" "Mozilla/5.0 (Macintosh;..." 0.000569

Define a model

Now, we need to define structs that match the schema. (Note: the struct field order must match the column order in src/schema.rs.)

Create a file src/models.rs:

src/models.rs
use serde::{Deserialize, Serialize};

use crate::schema::tweets;

#[derive(Debug, Serialize, Deserialize, Queryable)]
pub struct Tweet {
  pub id: i32,
  pub message: String,
  pub created_at: chrono::NaiveDateTime,
}

#[derive(Debug, Insertable)]
#[table_name = "tweets"]
pub struct NewTweet<'a> {
  pub message: &'a str,
  pub created_at: chrono::NaiveDateTime,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct TweetPayload {
  pub message: String,
}

We use a separate struct, NewTweet, for inserting into the database because the id column is auto-incremented.

  • Tweet - derives Queryable plus Serialize/Deserialize.
  • NewTweet - derives Insertable and references the table via table_name = "tweets".
  • TweetPayload - derives Serialize/Deserialize for the user payload.

TweetPayload represents the JSON payload from the client request, e.g. {"message": "data"}.

After we have schema.rs and models.rs, add both modules to src/main.rs:

src/main.rs
#[macro_use]
extern crate diesel;

use actix_web::{middleware, web, App, HttpServer};
use diesel::prelude::*;
use diesel::r2d2::{self, ConnectionManager};

// We define a custom type for connection pool to use later.
pub type DbPool = r2d2::Pool<ConnectionManager<PgConnection>>;

mod handlers;
mod models;
mod schema;

#[actix_web::main] // or #[tokio::main]
async fn main() -> std::io::Result<()> {
  //...
}

Create a new Tweet.

Time to create a new tweet. Open src/handlers.rs and add an add_a_tweet function:

src/handlers.rs
fn add_a_tweet(_message: &str, conn: &PgConnection) -> Result<Tweet, DbError> {
  use crate::schema::tweets::dsl::*;

  let new_tweet = NewTweet {
    message: _message,
    created_at: chrono::Local::now().naive_local(),
  };

  let res = diesel::insert_into(tweets)
    .values(&new_tweet)
    .get_result(conn)?;
  Ok(res)
}

Then update the create() function:

src/handlers.rs
use super::DbPool;

use actix_web::{delete, get, post, put, web, Error, HttpResponse, Responder};
use diesel::prelude::*;

use crate::models::{NewTweet, Tweet, TweetPayload};

type DbError = Box<dyn std::error::Error + Send + Sync>;

#[post("/tweets")]
async fn create(
  pool: web::Data<DbPool>,
  payload: web::Json<TweetPayload>,
) -> Result<HttpResponse, Error> {
  let tweet = web::block(move || {
    let conn = pool.get()?;
    add_a_tweet(&payload.message, &conn)
  })
  .await?
  .map_err(actix_web::error::ErrorInternalServerError)?;

  Ok(HttpResponse::Ok().json(tweet))
}

Test by adding a new tweet:

curl -d '{"message": "I tweet from curl"}' -H "Content-type: application/json" -X POST http://localhost:8080/tweets

# You will get the result:
{"id":1,"message":"I tweet from curl","created_at":"2022-03-12T03:41:11.704416"}

Query all tweets

Next, still in src/handlers.rs, create a function to query all tweets from the database:

src/handlers.rs
fn find_all(conn: &PgConnection) -> Result<Vec<Tweet>, DbError> {
  use crate::schema::tweets::dsl::*;

  let items = tweets.load::<Tweet>(conn)?;
  Ok(items)
}

Update the index() function for route GET /tweets:

src/handlers.rs
#[get("/tweets")]
async fn index(pool: web::Data<DbPool>) -> Result<HttpResponse, Error> {
  let tweets = web::block(move || {
    let conn = pool.get()?;
    find_all(&conn)
  })
  .await?
  .map_err(actix_web::error::ErrorInternalServerError)?;

  Ok(HttpResponse::Ok().json(tweets))
}

Test by calling the API with curl or opening http://localhost:8080/tweets in a browser:

curl http://localhost:8080/tweets

# result
[
  {"id":1,"message":"I tweet from postman.","created_at":"2022-03-12T03:37:57.565890"},
  {"id":2,"message":"I tweet from postman 2.","created_at":"2022-03-12T03:39:59.713083"},
  {"id":3,"message":"I tweet from curl","created_at":"2022-03-12T03:41:11.704416"}
]

Find a tweet by ID

Add a find_by_id function; .optional() turns a missing row into Ok(None) instead of an error:

src/handlers.rs
fn find_by_id(tweet_id: i32, conn: &PgConnection) -> Result<Option<Tweet>, DbError> {
  use crate::schema::tweets::dsl::*;

  let tweet = tweets
    .filter(id.eq(tweet_id))
    .first::<Tweet>(conn)
    .optional()?;

  Ok(tweet)
}

and then update the show() function for the route #[get("/tweets/{id}")]:

src/handlers.rs
#[get("/tweets/{id}")]
async fn show(id: web::Path<i32>, pool: web::Data<DbPool>) -> Result<HttpResponse, Error> {
  let tweet = web::block(move || {
    let conn = pool.get()?;
    find_by_id(id.into_inner(), &conn)
  })
  .await?
  .map_err(actix_web::error::ErrorInternalServerError)?;

  Ok(HttpResponse::Ok().json(tweet))
}

Open a browser to test: http://localhost:8080/tweets/1

curl http://localhost:8080/tweets/1
{"id":1,"message":"I tweet from postman.","created_at":"2022-03-12T03:37:57.565890"}

curl http://localhost:8080/tweets/2
{"id":2,"message":"I tweet from postman 2.","created_at":"2022-03-12T03:39:59.713083"}

Edit a tweet

Create an update_tweet function: use tweet_id to find the row we want to update, and then set the new message.

src/handlers.rs
fn update_tweet(tweet_id: i32, _message: String, conn: &PgConnection) -> Result<Tweet, DbError> {
  use crate::schema::tweets::dsl::*;

  let tweet = diesel::update(tweets.find(tweet_id))
    .set(message.eq(_message))
    .get_result::<Tweet>(conn)?;
  Ok(tweet)
}

Update the update() function for the route #[put("/tweets/{id}")]:

src/handlers.rs
#[put("/tweets/{id}")]
async fn update(
  id: web::Path<i32>,
  payload: web::Json<TweetPayload>,
  pool: web::Data<DbPool>,
) -> Result<HttpResponse, Error> {
  let tweet = web::block(move || {
    let conn = pool.get()?;
    update_tweet(id.into_inner(), payload.message.clone(), &conn)
  })
  .await?
  .map_err(actix_web::error::ErrorInternalServerError)?;

  Ok(HttpResponse::Ok().json(tweet))
}

Test it:

curl -d '{"message": "I tweet from curl (updated)"}' -H "Content-type: application/json" -X PUT http://localhost:8080/tweets/1

# result
{"id":1,"message":"I tweet from curl (updated)","created_at":"2022-03-12T03:37:57.565890"}

Delete a tweet

Finally, we add a delete_tweet function to delete a tweet:

src/handlers.rs
fn delete_tweet(tweet_id: i32, conn: &PgConnection) -> Result<usize, DbError> {
  use crate::schema::tweets::dsl::*;

  let count = diesel::delete(tweets.find(tweet_id)).execute(conn)?;
  Ok(count)
}

and then update the destroy() function for the route #[delete("/tweets/{id}")]:

src/handlers.rs
#[delete("/tweets/{id}")]
async fn destroy(id: web::Path<i32>, pool: web::Data<DbPool>) -> Result<HttpResponse, Error> {
  let result = web::block(move || {
    let conn = pool.get()?;
    delete_tweet(id.into_inner(), &conn)
  })
  .await?
  .map(|tweet| HttpResponse::Ok().json(tweet))
  .map_err(actix_web::error::ErrorInternalServerError)?;

  Ok(result)
}

Test it:

curl -X DELETE http://localhost:8080/tweets/2
# result
1

Compile and build an optimized release package:

cargo build --release

Deploy API to Heroku

Now that everything works locally, it's time to deploy the API to a server. We will use Heroku for hosting.

Create a New App

You can create a new app on Heroku in two ways:

  1. Using the Heroku CLI
  2. Using git & GitHub

Install the Heroku CLI

brew install heroku/brew/heroku

Then log in to Heroku using the CLI. It will open a browser window where you can log in.

heroku login

Create a new app using the CLI, or create one from the Heroku dashboard:

heroku create rust-actix-crud

Make sure your app name is unique (you can run heroku create with no arguments to generate a random name for you).

Deploy using Heroku CLI

heroku git:remote -a your-heroku-app-name
git push heroku master

But the build and deploy will fail, because Heroku doesn't support Rust out of the box.

So we use the Heroku buildpack for Rust.

Note: because of an issue with the emk/rust shortname, we link to GitHub directly - LINK

Set the buildpack on your Heroku app:

heroku buildpacks:set https://github.com/emk/heroku-buildpack-rust

Create a Procfile pointing to the release binary that we built:

web: ./target/release/rust-actix-crud

Add, commit, and push to deploy the application:

git add Procfile
git commit -m 'Setup buildpack'

git push heroku master

But the server fails again because we don't have PostgreSQL yet.

Add PostgreSQL Add-on

(Screenshots: adding the Heroku Postgres add-on from the dashboard.)

The Postgres add-on automatically sets the DATABASE_URL variable in Config Vars.

(Screenshot: Heroku Config Vars.)

Create a RustConfig file so Diesel migrations can run during a release on Heroku:

RUST_INSTALL_DIESEL=1
DIESEL_FLAGS="--no-default-features --features postgres"

then update the Procfile:

web: ./target/release/rust-actix-crud
release: ./target/release/diesel migration run

Finally, update the code to remove the hardcoded port: read PORT from the environment variable and bind to 0.0.0.0 instead of 127.0.0.1:

src/main.rs
#[actix_web::main] // or #[tokio::main]
async fn main() -> std::io::Result<()> {
    // Loading .env into environment variable.
    dotenv::dotenv().ok();

    env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));

    // set up database connection pool
    let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL");
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    let pool: DbPool = r2d2::Pool::builder()
        .build(manager)
        .expect("Failed to create pool.");

    let port = std::env::var("PORT").expect("$PORT is not set.");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .wrap(middleware::Logger::default())
            .route("/", web::get().to(|| async { "Actix REST API" }))
            .service(handlers::index)
            .service(handlers::create)
            .service(handlers::show)
            .service(handlers::update)
            .service(handlers::destroy)
    })
    .bind(("0.0.0.0", port.parse().unwrap()))?
    .run()
    .await
}

🎉🎉🎉 Congratulations on finishing this tutorial!

Lessons Learned

After finishing this app, even though it's a small project, I learned many things. Here are the things I learned and the issues I ran into while building it:

  • I need to learn more about std::result.
  • I hit a trait issue with Diesel's Timestamp until I used chrono::NaiveDateTime to fix it.
  • I learned that Diesel's Int4 is i32, not i64 (which caused an error); BigInt (Int8) is the one that maps to i64.
  • #[derive(Queryable)] requires the struct fields to match the columns in the SQL table.
  • I ran into import collisions with the Diesel schema, so I use it inside a function's scope instead.
  • I need to read about Actix - Errors to handle error cases.

I hope you enjoyed this tutorial and feel inspired to start your own learning journey with Rust.

You can view the Source Code or Demo on Heroku. Thanks for reading.


Happy Coding ❤️
