PDF Rust Web Programming Third Edition Early Access Maxwell Flitton
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK
ISBN: 978-1-83588-776-9
www.packt.com
Table of Contents
Rust Web Programming, Third Edition: A hands-on guide to
Rust for modern web development, with microservices and
nanoservices
1. 1 A Quick Introduction to Rust
1. Before you begin: Join our book community on Discord
2. Technical requirements
3. What is Rust?
1. Why is Rust revolutionary?
4. Reviewing data types and variables in Rust
1. Using strings in Rust
2. Using integers and floats
3. 8-bit integers
4. 16-bit integers
5. Introducing Floats
6. Storing data in arrays
7. Mapping data with enums
8. Storing data in vectors
9. Mapping data with HashMaps
10. Handling results and errors
5. Controlling variable ownership
1. Copying variables
2. Moving variables
3. Immutable borrowing of variables
4. Mutable borrowing of variables
5. Scopes
6. Running through lifetimes
6. Building Structs
7. Summary
8. Questions
9. Answers
10. Further Reading
2. 2 Useful Rust Patterns for Web Programming
1. Technical requirements
2. Verifying with traits
3. Metaprogramming with macros
4. Mapping Messages with Macros
5. Configuring our functions with traits
6. Checking Struct State with the Compiler
7. Summary
8. Questions
9. Answers
3. 3 Designing Your Web Application in Rust
1. Technical requirements
2. Managing a software project with Cargo
1. Basic Rust Compilation
2. Building with Cargo
3. Shipping crates with Cargo
4. Documenting with Cargo
5. Interacting with Cargo
3. Structuring code through Nanoservices
1. Building to-do structs
2. Managing structs with an API
3. Storing Tasks with our Data Access Layer
4. Creating a Task using our DAL
5. Summary
6. Questions
7. Answers
4. 4 Async Rust
1. Technical requirements
2. Understanding asynchronous programming
3. Understanding async and await
4. Implementing our own async task queue
5. Exploring high-level concepts of tokio
6. Implementing a HTTP server in Hyper
7. Summary
8. Questions
9. Answers
5. 5 Handling HTTP Requests
1. Technical requirements
2. Launching a basic web server
3. Connecting the core to the server
4. Refactoring our to-do items
5. Serving our to-do items
6. Handling errors in API endpoints
7. Summary
8. Questions
9. Answers
6. 6 Processing HTTP Requests
1. Technical requirements
2. Passing parameters via the URL
3. Passing data via POST body
4. Deleting resources using the DELETE method
5. Updating resources using the PUT method
6. Extracting data from HTTP request headers
7. Summary
8. Questions
9. Answers
7. 7 Displaying Content in the Browser
1. Technical requirements
2. Building out Development Setup
3. Serving frontend from Rust
4. Connecting backend API endpoints to the frontend
5. Creating React Components
6. Inserting Styles with CSS
7. Summary
8. Questions
9. Answers
8. 8 Injecting Rust in the Frontend with WASM
1. Technical requirements
2. Setting Up Our WASM Build
3. Loading WASM in the front-end
4. Loading WASM on the local machine
5. Building a WASM kernel
6. Building a WASM library
7. Building a WASM client
8. Summary
9. Questions
10. Answers
9. 9 Data Persistence with PostgreSQL
1. Technical requirements
2. Building our PostgreSQL database
1. Why we should use a proper database
2. Why use Docker?
3. How to use Docker to run a database
4. Running a database in Docker
5. Exploring routing and ports in Docker
6. Running docker in the background with bash scripts
3. Adding SQLX to our Data Access Layer
4. Defining our Database Transactions
5. Connecting our Transactions to the Core
6. Connecting our Transactions to the Server
7. Creating Our Database Migrations
8. Refactoring our Frontend
9. Summary
10. Questions
11. Answers
12. Appendix
10. 10 Managing user sessions
1. Technical requirements
2. Building our Auth Server
3. Data Access Layer
4. Core
5. Networking Layer
6. Defining Our User Data Model
7. Storing Passwords
8. Verifying Passwords
9. Creating Users
10. Defining our create user database transactions
11. Defining our core create API endpoint
12. Defining our networking create API endpoint
13. Refactoring our JWT
14. Restructuring our JWT
15. Creating a get key function
16. Encoding our token
17. Decoding our token
18. Building our Login API
19. Getting users via email in data access
20. Creating our Core Login API
21. Mounting our core login to our server
22. Interacting with our auth server
23. Adding Authentication to our frontend
24. Build a login API call
25. Adding tokens to our API calls
26. Build a login form component
27. Connect the login form to the app
28. Summary
29. Questions
30. Answers
31. Appendix
11. 11 Communicating Between Servers
1. Technical requirements
2. Getting users from auth with unique ID
3. Adding get by unique ID to dal
4. Adding get by unique ID to core
5. Adding get by unique ID to networking
6. Making auth accessible to other servers
7. Tethering users to to-do items
8. Linking our to-do items to users in the database
9. Adding user IDs to data access transactions
10. Adding user IDs to core functions
11. Adding user IDs to networking functions
12. Testing our server-to-server communication with bash
13. Summary
14. Questions
15. Answers
12. 12 Caching auth sessions
1. Technical requirements
2. What is caching
3. Setting up Redis
4. Building our Redis module
5. Defining the user session
6. Building the login process
7. Building the logout process
8. Building the update process
9. Building our Redis client
10. Building the login/logout client
11. Building the update client
12. Connecting our cache
13. Building our cache Kernel
14. Calling the Kernel from our to-do server
15. Calling the Kernel from our auth server
16. Summary
17. Questions
18. Answers
13. 13 Observability through logging
1. Technical requirements
2. What are RESTful services?
3. Building frontend code on command
4. What is logging?
5. Logging via the terminal
6. Defining a logger
7. Creating a logging middleware
8. Integrating our logger into our servers
9. Logging via a database
10. What is an actor?
11. Building our logging actor
12. Update logging functions
13. Configuring our logging database
14. Summary
Rust Web Programming, Third
Edition: A hands-on guide to Rust
for modern web development,
with microservices and
nanoservices
Welcome to Packt Early Access. We’re giving you an exclusive
preview of this book before it goes on sale. It can take many
months to write a book, but our authors have cutting-edge
information to share with you today. Early Access gives you an
insight into the latest developments by making chapter drafts
available. The chapters may be a little rough around the edges
right now, but our authors will update them over time.
You can dip in and out of this book or follow along from start to
finish; Early Access is designed to be flexible. We hope you
enjoy getting to know more about the process of writing a Packt
book.
https://packt.link/EarlyAccess/
What is Rust?
Why is Rust revolutionary?
Reviewing data types and variables in Rust
Controlling variable ownership
Building structs
Technical requirements
What is Rust?
Rust is a cutting-edge systems programming language that has
been making waves since Mozilla Research introduced it in
2010. With a focus on safety, concurrency, and performance,
Rust is a formidable alternative to traditional languages like C
and C++. Its most notable feature, the ownership system,
enforces rigorous memory safety rules at compile time. This
approach effectively eradicates common pitfalls like null
pointer dereferencing and buffer overflows, all without needing
a garbage collector.
There are other ways to exploit a program that does not have
correctly managed memory. On top of increased vulnerabilities,
it takes more code and time to solve a problem in a low-level
language. As a result of this, C++ web frameworks do not take
up a large share of web development. Instead, it usually makes
sense to go for high-level languages such as Python, Ruby, and
JavaScript. Using such languages will generally result in the
developer solving problems safely and quickly.
Memory safety
With more data processing, traffic, and complex tasks lifted into
the web stack, Rust, with its growing number of web
frameworks and libraries, has now become a viable choice for
web development. This has led to some truly amazing results in
the web space for Rust. In 2020, Shimul Chowdhury ran a series
of tests against servers with the same specs but different
languages and frameworks. The results can be seen in the
following figure (note that the Rust frameworks comprise Actix
Web and Rocket):
Figure 1.1 – Results of different frameworks and languages by
Shimul Chowdhury (found at
https://www.shimul.dev/en/blog/2020/benchmarking-flask-falcon-
actix-web-rocket-nestjs/)
Language      Energy (relative to C)
C             1.00
Rust          1.03
JavaScript    4.45
PHP           29.30
Ruby          69.91
Python        75.88
Table 1.1 – Energy Rating of Languages from AWS Report (found
at https://aws.amazon.com/blogs/opensource/sustainability-
with-rust/)
Rust was second only to C in energy consumption and startup
time. AWS are not the only fans of Rust. In 2024, the
White House recommended Rust over C and C++ for future
projects in the "Back to the Building Blocks: A Path Toward
Secure and Measurable Software" report. Finally, the
compatibility of Rust with other systems has created an inflection
point where we can integrate Rust. For instance, PostgreSQL
and Redis now support modules that can be written in Rust and
uploaded. Cloudflare has written a battle-tested load balancer in
Rust serving more than 40 million internet requests per second.
Key Points
Trade-off Balance
Rust balances speed, resource efficiency, development
speed, and safety without compromising any aspect.
Low-level Control
Web Development
Integration
Rust's compatibility with systems like PostgreSQL and
Redis, and its use in high-performance components like
Cloudflare's load balancer, demonstrate its versatile
integration capabilities.
Some people might hear about these quirks and wonder why
they should bother with the language at all. This is
understandable, but these quirks are why Rust is such a
paradigm-shifting language. Working with borrow checking and
wrestling with concepts such as lifetimes and references gives
us the high-level memory safety of a dynamic language such as
Python. However, we also get memory-safe access to low-level
resources such as those delivered by C and C++. This means that
we do not have to worry about dangling pointers, buffer
overflows, null pointers, segmentation faults, data races, and
other issues when coding in Rust. Issues such as null pointers
and data races can be hard to debug. The borrow checking rules
are a good trade-off: we must learn Rust's quirks to
get the speed and control of non-memory-safe languages, but
we do not get the headaches those languages
introduce.
If you have never visited the Rust playground before, you will
see the following layout once you are there:
fn main() {
println!("hello world");
}
If you were to code in Python, you would probably see this used
in a Flask application.
10 |     print(message);
   |           ^^^^^^^ doesn't have a size known at compile-time
   |
   = help: the trait `Sized` is not implemented for `str`
   = note: all function arguments must have a statically known size
fn print(message: String) {
    println!("{}", message);
}
fn main() {
    let message = String::from("hello world");
    print(message);
}
Considering that we know that String has a set size while our
string literal varies, we can deduce that the stack memory is
used for predictable memory sizes and is allocated ahead of
time when the program runs. The stack memory allocation
order is decided on compilation and optimized by the compiler.
Our heap memory is dynamic, and therefore memory is
allocated when it is needed.
Now that we know the basics of strings, we can use the different
ways in which they are created, as seen in the following code:
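In Rust, a minimal sketch of the common ways to create an
owned String from a literal (all three are equivalent here):

fn main() {
    // all three produce an owned String from a string literal
    let one = String::from("hello world");
    let two = "hello world".to_string();
    let three = "hello world".to_owned();
    println!("{} {} {}", one, two, three);
}

As an aside on performance, the Python snippet below contrasts
a slow concatenation loop with the faster join method, since
each concatenation allocates a new string: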
# slower method
data = ["one", "two", "three", "four"]
string = ""
for i in data:
    string += i
# faster method
"".join(data)
8-bit integers
16-bit integers
This would work. However, let us add our 16-bit integer with
our 8-bit integer using the following code:
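A minimal sketch of this (variable names assumed):

fn main() {
    let one: i8 = 10;
    let two: i16 = 500;
    // this line does not compile: i8 and i16 are different types
    // let outcome = one + two;
    // casting one side so the types match does compile
    let outcome = one as i16 + two;
    println!("{}", outcome);
}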
We will cover traits later in this chapter. For now, all we must
understand is that we cannot add the two different integers. If
they were both the same type, then we could.
Introducing Floats
One last point about integer sizes in Rust is that they are not
continuous. The supported sizes are shown in the following
table:
Bits    Calculation    Number of values
8       2^8            256
16      2^16           65536
32      2^32           4294967296
64      2^64           1.8446744e+19
fn main() {
let int_array: [i32; 3] = [1, 2, 3];
for i in int_array {
println!("{}", i);
}
println!("{}", int_array[1]);
}
1
2
3
2
In the preceding printout, we see that the loop works and that
we can access the second integer with square brackets.
Although the memory size of the array is fixed, we can still
change its elements. This is where mutability comes in.
fn main() {
let mut mutable_array: [i32; 3] = [1, 2, 0];
mutable_array[2] = 3;
println!("{:?}", mutable_array);
println!("{}", mutable_array.len());
}
In the preceding code, we can see that the last integer in our
array is updated to 3 . We then print out the full array and then
print out the length. You may have also noted that the first print
statement of the preceding code now employs {:?} . This calls
the Debug trait. If Debug is implemented for the thing that we
are trying to print, then the full representation of the thing we
are printing is then displayed in the console. You can also see
that we print out the result of the length of the array. Running
this code will give the following printout:
[1, 2, 3]
3
fn main() {
let slice_array: [i32; 100] = [0; 100];
println!("length: {}", slice_array.len());
println!("slice: {:?}", &slice_array[5 .. 8]);
}
length: 100
slice: [0, 0, 0]
enum SomeValue {
StringValue(String),
IntValue(i32)
}
In the preceding code, we can see that we wrap our strings and
integers in our enum. Now, looping through and getting it out is
going to be another task. For instance, there are things that we
can do to an integer that we cannot do to a string and vice
versa. Considering this, we are going to have to use a match
statement when looping through the array, as seen in the
following code:
fn main() {
    // an example array mixing both enum variants (contents assumed)
    let multi_array = vec![
        SomeValue::StringValue(String::from("one")),
        SomeValue::IntValue(2),
    ];
    for i in multi_array {
        match i {
            SomeValue::StringValue(data) => {
                println!("The string is: {}", data);
            },
            SomeValue::IntValue(data) => {
                println!("The int is: {}", data);
            }
        }
    }
}
In the preceding code, we can see that we use the vec! macro
to create the vector of strings. You may have noticed with
macros such as vec! and println! that we can vary the
number of inputs. We will cover macros later in the chapter.
Running the preceding code will result in the following
printout:
We can also create an empty vector with the new function from
the Vec struct with
let _empty_vector: Vec<&str> = Vec::new(); .
You may be wondering when to use vectors and when to use
arrays. Vectors are more flexible. You may be tempted to reach
for arrays for performance gains. At face value, this seems
logical as it is stored in the stack. Accessing the stack is going to
be quicker because the memory sizes can be computed at
compile time, making the allocation and deallocation simpler
compared to the heap. However, because it is on the stack it
cannot outlive the scope that it is allocated. Moving a vector
around would require moving a pointer around. However,
moving an array requires copying the whole array. Therefore,
copying fixed-size arrays is more expensive than moving a
vector. If you have a small amount of data that you only need in
a small scope and you know the size of the data, then reaching
for an array does make sense. However, if you're going to be
moving the data around, even if you know the size of the data,
using vectors is a better choice.
#[derive(Debug)]
enum CharacterValue {
Name(String),
Age(i32),
Items(Vec<String>)
}
use std::collections::HashMap;
fn main() {
    let mut profile: HashMap<&str, CharacterValue> = HashMap::new();
    profile.insert("name", CharacterValue::Name("Maxwell".to_string()));
    profile.insert("age", CharacterValue::Age(34));
    profile.insert("items", CharacterValue::Items(vec![
        "laptop".to_string(),
        "book".to_string(),
        "coat".to_string()
    ]));
    println!("{:?}", profile);
}
We can see that we have inserted all the data that we need.
Running this would give us the following printout:
{"items": Items(["laptop", "book", "coat"]), "age": Age(34),
"name": Name("Maxwell")}
match profile.get("name") {
    Some(value_data) => {
        match value_data {
            CharacterValue::Name(name) => {
                println!("the name is: {}", name);
            },
            _ => panic!("name should be a string")
        }
    },
    None => {
        println!("name is not present");
    }
}
In the preceding code, we can check to see if there is a name in
the keys. If there is not, then we just print out that it was not
present. If the name key is present, we then move on to our
second check, which prints out the name if it is
CharacterValue::Name . However, there is something wrong if
the name key is not housing CharacterValue::Name . So, we
add only one more arm to the match , which is _ . This is a
catch-all, matching anything else. We are not interested in
anything other than CharacterValue::Name . Therefore, the _
arm maps to a panic! macro, which essentially throws an error.
match profile.get("name").unwrap() {
CharacterValue::Name(name) => {
println!("the name is: {}", name);
},
_ => panic!("name should be a string")
}
Using the unwrap function is risky, and we should try to avoid
it as much as possible. We can avoid using the unwrap function
by handling our results and errors, which we cover in the next
section.
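The error_check function used in the main function below is not
defined in this section; a minimal sketch consistent with the
calls would be:

fn error_check(check: bool) -> Result<i8, String> {
    if check {
        Err("there is an error".to_string())
    } else {
        Ok(1)
    }
}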
fn main() {
println!("{:?}", error_check(false));
println!("{:?}", error_check(false).is_err());
println!("{:?}", error_check(true));
println!("{:?}", error_check(true).is_err());
}
Strings:
Type Casting:
Mutability:
Arrays:
Vectors:
Enums:
Result Type: Used for functions that can return an error.
Error Handling:
Knowing the rules is one thing but, to practically work with the
rules in Rust code, we need to understand copying, moving, and
borrowing in more detail.
Copying variables
In Figure 1.4, we can see that the path of One is still solid, which
denotes that it has not been interrupted and can be handled as
if the copy did not happen. Path Two is merely a copy, and it
can be utilized exactly as if it had been defined on its own.
Note that if the variable's type implements the Copy trait, then
the variable will automatically be copied without us having to
write any extra code, as seen in the following code:
let one: i8 = 10;
let two: i8 = one + 5;
println!("{}", one);
println!("{}", two);
10
15
Moving variables
fn print(value: String) {
println!("{}", value);
}
fn main() {
let one = "one".to_string();
print(one);
println!("{}", one);
}
We can see in Figure 1.6 that two borrows the value from one .
It must be noted that when one is borrowed from, one is
locked and cannot be accessed until the borrow is finished.
To perform a borrow operation, we prefix the variable with & .
This can be demonstrated with the following code:
fn print(value: &String) {
println!("{}", value);
}
fn main() {
let one = "one".to_string();
print(&one);
println!("{}", one);
}
one
one
print(&one, one);
----- ---- ^^^ move out of `one` occurs here
| |
| borrow of `one` occurs here
borrow later used by call
Error Prevention:
Copying:
Moving:
Immutable Borrowing:
Mutable Borrowing: Only one mutable borrow at a time is
allowed, enabling safe value alterations.
Dereference Operator:
Scopes
println!("{}", two);
^^^ not found in this scope
fn main() {
let one = &"one";
let two: &str;
{
println!("{}", one);
two = &"two";
}
println!("{}", one);
println!("{}", two);
}
In the preceding code, we can see that we do not use let when
assigning the value because we have already declared the
variable in the outer scope. Running the preceding code gives
us the following printout:
one
one
two
fn main() {
let one: &i8;
{
let two: i8 = 2;
one = &two;
} // -----------------------> two lifetime stops
println!("r: {}", one);
}
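The lifetime-annotated function discussed next is sketched here,
consistent with the description and the compiler error that
follows (the function name and body are assumptions):

fn get_highest<'a>(first: &'a i8, second: &'a i8) -> &'a i8 {
    if first > second {
        first
    } else {
        second
    }
}
fn main() {
    let one: i8 = 1;
    let outcome: &i8;
    {
        let two: i8 = 2;
        outcome = get_highest(&one, &two);
    }
    println!("{}", outcome);
}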
As we can see, the first and second lifetimes have the same
notation of 'a . They both must be present for the duration of
the function. Note that the function returns a reference to an i8
with the lifetime of 'a . If we were to try and use lifetime notation on
function parameters without a borrow, we would get some very
confusing errors. In short, it is not possible to use lifetime
notation without a borrow. This is because if we do not use a
borrow, the value passed into the function is moved into the
function. Therefore, its lifetime is the lifetime of the function.
This seems straightforward; however, when we run it, we get
the following error:
println!("{}", outcome);
               ^^^^^^^ use of possibly-uninitialized `outcome`
The error occurs because all the lifetimes of the parameters
passed into the function and the returned integer are all the
same. Therefore, the compiler does not know what could be
returned. As a result, two could be returned. If two is returned,
then the result of the function will not live long enough to be
printed. However, if one is returned, then it will. Therefore,
there is a possibility of not having a value to print after the
inner scope is executed. In a dynamic language, we would be
able to run code that runs the risk of referencing variables that
have not been initialized yet. However, with Rust, we can see
that if there is a possibility of an error like this, it will not
compile.
In the short term, it might seem like Rust takes longer to code,
but as the project progresses, this strictness will save a lot of
time by preventing silent bugs. To conclude our error, there
is no way of solving our problem with the exact function and
main layout that we have. We would either have to move our printing
of the outcome into the inner scope, or clone the integers and
pass them into the function.
Variable Ownership:
Scope Rules:
Function Scope:
Lifetime Mismatch:
Function Lifetimes:
We have now covered all that we need to know for now to write
productive code in Rust without the borrow checker getting in
our way. We now need to move on to creating bigger building
blocks for our programs so we can focus on tackling the
complex problems we want to solve with code. We will start this
with a versatile building block of programs, structs.
Building Structs
#[derive(Debug)]
struct Human<'a> {
name: &'a str,
age: i8,
current_thought: &'a str
}
In the preceding code, we can see that our string literal fields
have the same lifetime as the struct itself. We have also applied
the Debug trait to the Human struct, so we can print it out and
see everything. We can then create the Human struct and print
the struct out using the following code:
fn main() {
let developer = Human{
name: "Maxwell Flitton",
age: 34,
current_thought: "nothing"
};
println!("{:?}", developer);
println!("{}", developer.name);
}
We can see that our fields are what we expect. However, we can
change our string slice fields to strings to get rid of lifetime
parameters. We may also want to add another field where we
can reference another Human struct under a friend field.
However, we may also have no friends. We can account for this
by creating an enum that is either a friend or not and assigning
this to a friend field, as seen in the following code:
#[derive(Debug)]
enum Friend {
HUMAN(Human),
NIL
}
#[derive(Debug)]
struct Human {
name: String,
age: i8,
current_thought: String,
friend: Friend
}
However, this code will not compile, because the compiler
cannot size the Human struct: a Human contains a Friend, which
can in turn contain another Human, and so on forever. Wrapping
the Human in a Box stores it on the heap, so the enum has a
known size:
#[derive(Debug)]
enum Friend {
    HUMAN(Box<Human>),
    NIL
}
So, now our enum states whether the friend exists or not, and if
so, it has a memory address if we need to extract information
about this friend. We can achieve this with the following code:
fn main() {
    let another_developer = Human{
        name: "Caroline Morton".to_string(),
        age: 30,
        current_thought: "I need to code!!".to_string(),
        friend: Friend::NIL
    };
    let developer = Human{
        name: "Maxwell Flitton".to_string(),
        age: 34,
        current_thought: "nothing".to_string(),
        friend: Friend::HUMAN(Box::new(another_developer))
    };
    match &developer.friend {
        Friend::HUMAN(data) => {
            println!("{}", data.name);
        },
        Friend::NIL => {}
    }
}
#[derive(Debug)]
struct Human {
name: String,
age: i8,
current_thought: Option<String>,
friend: Friend
}
impl Human {
fn new(name: &str, age: i8) -> Human {
return Human{
name: name.to_string(),
age: age,
current_thought: None,
friend: Friend::NIL
}
}
}
Rust goes one better. It has traits, which we will explore in the
next chapter.
Key Points
Summary
With Rust, we have seen that there are some traps when
coming from a dynamic programming language background.
However, with a little bit of knowledge of referencing and basic
memory management, we can avoid common pitfalls and write
safe, performant code quickly that can handle errors.
Questions
Answers
Further Reading
https://packt.link/EarlyAccess/
Knowing the syntax and borrowing rules of Rust can get us
building programs. However, unlike dynamic programming
languages, Rust has a strict type system. If you do not know how
to get creative with traits, this can lead to you creating a lot of
excessive code to solve problems. In this chapter, we will cover
how to enforce certain parameter checks with traits, increasing
the flexibility of functions that accept structs.
We will also explore metaprogramming with macros to reduce
the amount of repetitive code we have to write. These macros
will also enable us to simplify our code and communicate
effectively to other developers what our code does. We will also utilize the
compiler to check the state of structs as they evolve, improving
the safety of the program.
Technical requirements
For detailed instructions, please refer to the file found here:
https://github.com/PacktPublishing/Rust-Web-Programming-
3E/tree/main/chapter011
We can see enums can empower structs so that they can handle
multiple types. This can also be translated for any type of
function or data structure. However, this can lead to a lot of
repetition. Take, for instance, a User struct. Users have a core
set of values, such as a username and password. However, they
could also have extra functionality based on roles. With users,
we must check roles before firing certain processes based on
the traits that the user has implemented. We can wrap up
structs with traits by creating a simple program that defines
users and their roles with the following steps:
struct AdminUser {
username: String,
password: String
}
struct User {
username: String,
password: String
}
We can see in the preceding code that the User and AdminUser
structs have the same fields. For this exercise, we just need two
different structs to demonstrate the effect traits have on them.
Now that our structs are defined, we can move on to our next
step, which is creating the traits.
1. The total traits that we have are create, edit, and delete. We
will be implementing these traits in our structs, using them to
assign permissions to our users. We can create these three
traits with the following code:
trait CanEdit {
fn edit(&self) {
println!("admin is editing");
}
}
trait CanCreate {
fn create(&self) {
println!("admin is creating");
}
}
trait CanDelete {
fn delete(&self) {
println!("admin is deleting");
}
}
We can see that the functions for the traits only take in self .
We cannot make any references to fields of self inside these
functions, as we do not know which structs will implement the
trait. However, we can override a trait's functions when we
implement the trait for a struct if needed.
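The implementations themselves are sketched below, consistent
with the printout shown later in this section:

impl CanCreate for AdminUser {}
impl CanEdit for AdminUser {}
impl CanDelete for AdminUser {}
// the standard user only gets edit rights, with an overridden message
impl CanEdit for User {
    fn edit(&self) {
        println!("A standard user is editing");
    }
}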
From our previous step, we can remember that all the functions
already worked for the admin by printing out that the admin is
doing the action. Therefore, we do not have to do anything for
the implementation of the traits for the admin. We can also see
that we can implement multiple traits for a single struct. This
adds a lot of flexibility. In our user implementation of the
CanEdit trait, we have overwritten the edit function so that
we can have the correct statement printed out.
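The functions that accept these traits would take a form like the
following sketch, using generic trait bounds:

fn create<T: CanCreate>(user: &T) {
    user.create();
}
fn edit<T: CanEdit>(user: &T) {
    user.edit();
}
fn delete<T: CanDelete>(user: &T) {
    user.delete();
}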
Now that we have defined the functions, we can use them in the
main function in the next step.
1. We can test to see if all the traits work with the following
code:
fn main() {
let admin = AdminUser{
username: "admin".to_string(),
password: "password".to_string()
};
let user = User{
username: "user".to_string(),
password: "password".to_string()
};
create(&admin);
edit(&admin);
edit(&user);
delete(&admin);
}
We can see that the functions that accept traits are used just like
any other function.
admin is creating
admin is editing
A standard user is editing
admin is deleting
Here, we are saying that the user must have the permission to
create and delete entries. This leads me onto my opinion that
traits are more powerful than object inheritance. To qualify this,
let us think about building a game. In one function, we deal
damage from one player to another. We could build a class that
is a player character. This class has the methods take and deal
damage. We then build out different types of characters such as
Orc, Elf, Human etc. that all inherit this class. This all seems
reasonable, and we start coding away. However, what about
buildings? Buildings could theoretically take damage, but they
are not player characters. Also, buildings by themselves cannot
really deal damage. We are now stuck rewriting our structure,
or writing new functions that accommodate buildings, with
if/else conditional logic deciding which function to call.
However, if we have a function where one parameter must
have the deal damage trait implemented, and the other
parameter must have the take damage trait implemented, the
participants of this function can come and go with little friction.
In my experience, developers who complain about Rust being a
rigid language are not utilizing traits. Because of traits, I have
found Rust to be more flexible than many object-oriented
languages.
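As a sketch of that game example, with hypothetical DealDamage
and TakeDamage traits:

trait DealDamage {
    fn damage_dealt(&self) -> u32;
}
trait TakeDamage {
    fn take_damage(&mut self, amount: u32);
}
// any attacker and target can participate, character or building,
// as long as they implement the right traits
fn attack<A: DealDamage, B: TakeDamage>(attacker: &A, target: &mut B) {
    target.take_damage(attacker.damage_dealt());
}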
macro_rules! capitalize {
($a: expr) => {
let mut v: Vec<char> = $a.chars().collect();
v[0] = v[0].to_uppercase().nth(0).unwrap();
$a = v.into_iter().collect();
}
}
fn main() {
let mut x = String::from("test");
capitalize!(x);
println!("{}", x);
}
| capitalize!(32);
| ---------------- in this macro invocation
|
= help: the trait `std::iter::FromIterator<char>` is not implemented for `i32`
#[derive(Debug)]
pub struct ContractOne {
input_data: String,
output_data: Option<String>
}
#[derive(Debug)]
pub struct ContractTwo {
input_data: String,
output_data: Option<String>
}
#[derive(Debug)]
pub enum ContractHandler {
ContractOne(ContractOne),
ContractTwo(ContractTwo),
}
#[macro_export]
macro_rules! register_contract_routes {
    (
        $handler_enum:ident,
        $fn_name:ident,
        $( $contract:ident => $handler_fn:path ),*
    ) => {
        . . .
    };
}
The signature is a little daunting, so we will focus on the hardest
expression. Once we understand that, everything else will fall
into place. To understand the line
$( $contract:ident => $handler_fn:path ),* , we must
break it down.
Inside our macro, we define our function, and loop through all
of our data contract and function mappings with the following
code:
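A sketch of that macro body, consistent with the description
below:

pub fn $fn_name(received_msg: $handler_enum) -> $handler_enum {
    match received_msg {
        $(
            $handler_enum::$contract(inner) => {
                let outcome = $handler_fn(inner);
                return $handler_enum::$contract(outcome);
            }
        )*
    }
}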
The $(...)* is the loop. We can see that we unwrap the data
contract in the match statement, pass the unwrapped contract
into the mapped function, and then wrap the response of that
mapped function into our enum again, and return it.
We can now call our macro with our handler enum, contracts,
and functions with the following code:
register_contract_routes!(
ContractHandler,
handle_contract,
ContractOne => handle_contract_one,
ContractTwo => handle_contract_two
);
fn main() {
    let contract_one = ContractOne {
        input_data: "Contract One".to_string(),
        output_data: None,
    };
    let outcome = handle_contract(
        ContractHandler::ContractOne(contract_one)
    );
    println!("{:?}", outcome);
}
Contract One
ContractOne(ContractOne {
input_data: "Contract One",
output_data: Some("Output Data")
})
#[derive(Debug)]
pub struct User {
name: String,
age: u32
}
We then define a trait that lays out the signature of getting users
with the following code:
trait GetUsers {
fn get_users() -> Vec<User>;
fn get_user_by_name(name: &str) -> Option<User> {
let users = Self::get_users();
for user in users {
if user.name == name {
return Some(user);
}
}
None
}
}
This is not an optimal implementation, as in normal database
queries, we would perform the filter in the database and return
the filtered data. However, for our example, this is easier to
implement. We must note that the reference to Self in the
get_user_by_name function is capitalized, meaning that we are
referring to the struct implementing the trait, as opposed to an
instance of that struct. Therefore, we do not need to create an
instance of our struct to call the get_user_by_name function.
We can then implement our trait for a database engine like the
example code below:
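For instance, a hypothetical Postgres-backed engine could be
sketched as follows (the struct name and hard-coded data are
assumptions for illustration):

pub struct PostgresEngine;

impl GetUsers for PostgresEngine {
    fn get_users() -> Vec<User> {
        // a real implementation would query the database here
        vec![User { name: "John".to_string(), age: 40 }]
    }
}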
#[derive(Debug)]
pub struct GetUserContract {
pub name: String,
pub users: Option<User>
}
#[derive(Debug)]
pub enum ContractHandler {
ContractOne(ContractOne),
ContractTwo(ContractTwo),
GetUserContract(GetUserContract)
}
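We also need a handler function that is generic over any
GetUsers implementor; a minimal sketch:

pub fn handle_get_user_by_name<T: GetUsers>(
    mut contract: GetUserContract
) -> GetUserContract {
    contract.users = T::get_user_by_name(&contract.name);
    contract
}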
register_contract_routes!(
ContractHandler,
handle_contract,
ContractOne => handle_contract_one,
ContractTwo => handle_contract_two,
    GetUserContract => handle_get_user_by_name::<PostgresEngine>
);
Here, we can see that we have slotted our Postgres handler into
the mapping. We can test to see if this works with the code
below:
fn main() {
. . .
let get_user_contract = GetUserContract {
name: "John".to_string(),
users: None
};
let outcome = handle_contract(
ContractHandler::GetUserContract(
get_user_contract
)
);
println!("{:?}", outcome);
}
This gives us a lot of power. We can slot any struct into that
handle function if the struct has implemented the GetUsers
trait. This struct could be an HTTP request to another server, a file
handle, an in-memory store, or another database. We will
exploit this approach again throughout the book, as over
multiple chapters, we will build out a data access layer that can
support multiple different storage engines.
use std::marker::PhantomData;
struct InProgress;
struct Committed;
struct RolledBack;
struct Transaction<State> {
id: u32,
state: PhantomData<State>,
}
fn process_in_progress_transaction(tx: &Transaction<InProgress>) {
    println!(
        "Processing transaction {} which is in progress",
        tx.id
    );
}
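The constructor and the state transition are not shown above; a
minimal sketch would be:

impl Transaction<InProgress> {
    fn new(id: u32) -> Self {
        Transaction { id, state: PhantomData }
    }
    // consuming self returns a transaction in the Committed state
    fn commit(self) -> Transaction<Committed> {
        Transaction { id: self.id, state: PhantomData }
    }
}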
fn main() {
let tx = Transaction::<InProgress>::new(1);
let tx = tx.commit();
process_in_progress_transaction(&tx);
}
process_in_progress_transaction(&tx);
------------------------------- ^^^
expected `&Transaction<InProgress>`,
found `&Transaction<Committed>`
User sessions: when the user has logged out, the state changes.
Or the state could be a user role, so only certain parts of the
codebase are available to user structs that have a certain role.
Resource Allocation: Here we could ensure that resources are
used effectively. We could have states such as Available ,
InUse , or Released .
We could keep going, but this list gives us the picture of how
the type-state pattern can be implemented and what problems it
can solve.
Summary
Questions
Answers
https://packt.link/EarlyAccess/
We previously explored the syntax of Rust, enabling us to tackle
memory management quirks and build data structures.
However, just knowing syntax and memory management is not
enough to effectively build fully working programs. As any
experienced engineer will tell you, structuring code across
multiple files and directories is an important aspect of building
software.
Technical requirements
https://github.com/PacktPublishing/Rust-Web-Programming-
3E/tree/main/chapter02
rustc hello_world.rs
.\hello_world.exe
./hello_world
We can see that there is only one Rust file, and this is the
main.rs file that is housed in the src directory. If you open
the main.rs file, you will see that this is the same as the file
that we made in the previous section. It is an entry point with
the default code printing out hello world to the console. The
dependencies and metadata for our project are defined in the
Cargo.toml file.
cargo run
When you do this, you will see the project compile and run with
the following printout:
Now that we have got our basic builds done, we can start to use
Cargo to utilize third-party crates.
[dependencies]
rand = "0.8.5"
Now that we've defined our dependency, we can use the rand
crate to build a random number generator.
// src/main.rs
use rand::prelude::*;
fn generate_float(generator: &mut ThreadRng) -> f64 {
let placeholder: f64 = generator.gen();
return placeholder * 10.0
}
fn main() {
let mut rng: ThreadRng = rand::thread_rng();
let random_number = generate_float(&mut rng);
println!("{}", random_number);
}
Now that our code is built, we can run our program with the
cargo run command. While Cargo is compiling, it pulls code
from the rand crate and compiles that into the binary. We can
also note that there is now a Cargo.lock file. As we know,
Cargo.toml is for us to describe our own dependencies, while
Cargo.lock is generated by Cargo and we should not edit it
ourselves, as it contains exact information about our
dependencies.
Speed and safety are not the only benefits of picking a language
such as Rust to develop in. Over the years, the software
engineering community keeps learning and growing. Simple
things such as good documentation can make or break a
project. To demonstrate this, we can define Markdown language
within the Rust file with the following code:
// src/main.rs
/// This function generates a float number using a number
/// generator passed into the function.
///
/// # Arguments
/// * generator (&mut ThreadRng): the random number
/// generator to generate the random number
///
/// # Returns
/// (f64): random number between 0 -> 10
fn generate_float(generator: &mut ThreadRng) -> f64 {
let placeholder: f64 = generator.gen();
return placeholder * 10.0
}
├── Cargo.toml
└── to_do
└── core
├── Cargo.toml
└── src
└── main.rs
Here, we have defined our to_do service. Inside the to_do
service, we have a core module. The core module is where
we run our core logic, which handles the creation of to-do
items. Later, we will build out the data access module and
networking modules for the to_do service.
# File: ./to_do/core/Cargo.toml
[package]
name = "core"
version = "0.1.0"
edition = "2021"
[dependencies]
# File: ./Cargo.toml
[workspace]
resolver = "2"
members = [
"to_do/core"
]
Now, let us get some basic interaction with our system where
we can pass in commands to our program using the command
line. To enable our program to have some flexibility depending
on the context, we need to be able to pass parameters into our
program and keep track of the parameters in which the
program is running. We can do this using the std (standard
library) identifier with the code below in our file:
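A minimal sketch of this:

// File: to_do/core/src/main.rs
use std::env;

fn main() {
    // collect all command-line arguments into a vector
    let args: Vec<String> = env::args().collect();
    println!("{:?}", args);
}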
Here, we can see that our args vector has the arguments that
we passed in. This is not surprising as many other languages
also accept arguments passed into the program via the
command line. We must note as well that the path to the binary
is also included.
# File: to_do/core/Cargo.toml
[dependencies]
clap = { version = "4.5.2", features = ["derive"] }
At the time of writing this, the clap crate requires the rustc
1.74 minimum. If you need to update your Rust version, run
the rustup update command.
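The Args struct below is a sketch consistent with the fields used
in the main function, using clap's derive API:

// File: to_do/core/src/main.rs
use clap::Parser;

#[derive(Parser, Debug)]
#[command(version, about, long_about = None)]
struct Args {
    #[arg(short, long)]
    first_name: String,
    #[arg(short, long)]
    last_name: String,
    #[arg(short, long)]
    age: i32,
}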
fn main() {
let args = Args::parse();
println!("{:?}", args.first_name);
println!("{:?}", args.last_name);
println!("{:?}", args.age);
}
Now that we have a working example of how to pass command-
line arguments, we can interact with our application to see how
it displays by running the following command:
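An invocation consistent with the printout below (flag names
assumed from the Args sketch above) would be:

cargo run -- --first-name maxwell --last-name flitton --age 34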
"maxwell"
"flitton"
34
We can see that the parsing works as we have two strings and
an integer. The reason why crates such as clap are useful is
that they are essentially self-documenting. Developers can look
at the code and know what arguments are being accepted and
view the metadata around them. Users can get help on the
inputs by merely passing in the help parameter. This approach
reduces the risk of the documentation becoming outdated as it
is embedded in the code that executes it. If you accept
command-line arguments, it is advised that you use a crate such
as clap for this purpose.
Right now, we only have two structs for to-do items: ones that
are waiting to be done and others that are already done.
However, we might want to introduce other categories. For
instance, we could add a backlog category, or an on-hold task for
tasks that have been started but for one reason or another are
blocked.
├── Cargo.toml
└── to_do
└── core
├── Cargo.toml
└── src
├── api
│ └── mod.rs
├── enums.rs
├── main.rs
└── structs
├── base.rs
├── done.rs
├── mod.rs
└── pending.rs
Here we can see that we have defined two modules, api and
structs in the core of the to_do service. You might note that
there is a mod.rs in the api and structs directories. The
mod.rs enables us to declare files in the module. For instance,
we can declare the files in the structs module with the
following code:
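A sketch of these declarations, matching the file tree above:

// File: to_do/core/src/structs/mod.rs
pub mod base;
pub mod done;
pub mod pending;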
This will work in the code when it comes to defining the status
of the task. However, if we want to write to a file or database, we
are going to have to build a method to enable our enum to be
represented in a string format. We can do this by implementing
the Display trait for TaskStatus . First, we must import the
format module to implement the Display trait with the
following code:
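A sketch of that implementation, consistent with the printout
below:

// File: to_do/core/src/enums.rs
use std::fmt;

pub enum TaskStatus {
    DONE,
    PENDING
}

impl fmt::Display for TaskStatus {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            TaskStatus::DONE => write!(f, "DONE"),
            TaskStatus::PENDING => write!(f, "PENDING"),
        }
    }
}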
println!("{}", TaskStatus::DONE);
println!("{}", TaskStatus::PENDING);
let outcome = TaskStatus::DONE.to_string();
println!("{}", outcome);
DONE
PENDING
DONE
Now that we have our Base struct, we can build our Pending
and Done structs. This is when we use composition to utilize
our Base struct in our
/to_do/ core/src/structs/pending.rs file with the following
code:
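The struct definition is sketched below, assuming a Base struct
with title and status fields:

// File: to_do/core/src/structs/pending.rs
use super::base::Base;
use crate::enums::TaskStatus;

pub struct Pending {
    pub super_struct: Base,
}

impl Pending {
    pub fn new(input_title: &str) -> Self {
        let base = Base {
            title: input_title.to_string(),
            status: TaskStatus::PENDING,
        };
        Pending { super_struct: base }
    }
}

The done.rs file follows the same shape, with TaskStatus::DONE.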
We can see that there is not much difference from the Pending
struct definition apart from the TaskStatus enum having a
DONE status. At this point it might seem excessive to write two
separate structs. Right now, we are in the discovery phase. If the
functionality of our structs increases in the future, our structs
are decoupled, meaning we can update the struct functionality
without any pain. However, if the complexity does not explode,
we could investigate refactoring the structs into one struct.
We must also note that this approach is not the only
legitimate way. Some developers like to start in just one file
and branch out when the complexity increases. I personally do
not like this approach as I have seen code get highly coupled
before the decision is made to break it out, making the refactor
harder. No matter what approach you take, if you keep track of
the complexity, keep the code decoupled, and can refactor
when needed, you are all good.
shopping
DONE
laundry
PENDING
├── Cargo.toml
└── to_do
└── core
├── Cargo.toml
└── src
├── api
│ ├── basic_actions
│ │ ├── create.rs
│ │ └── mod.rs
│ └── mod.rs
Our api module is already defined in our main.rs file. So, all
we need is the following declarations to have our create in the
api module plugged in:
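A sketch of those declarations:

// File: to_do/core/src/api/mod.rs
pub mod basic_actions;

// File: to_do/core/src/api/basic_actions/mod.rs
pub mod create;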
Here, we can see that the code needed to interact with our
create API is greatly simplified. Running the code gives us the
following printout:
Pending: washing
Now that we have our API working, we need to build out a basic
data storage system to store our to-do tasks.
└── to_do
├── core
│ ├── Cargo.toml
│ └── src
│ . . .
└── dal
├── Cargo.toml
└── src
├── json_file.rs
└── lib.rs
# File: ./Cargo.toml
[workspace]
resolver = "2"
members = [
"to_do/core",
"to_do/dal"
]
Before we write any dal code, we need to define the
Cargo.toml for the dal library which takes the following
form:
# File: to_do/dal/Cargo.toml
[package]
name = "dal"
version = "0.1.0"
edition = "2021"
[features]
json-file = ["serde_json", "serde"]
[dependencies]
serde_json = { version="1.0.114", optional = true }
serde = { version="1.0.197", optional = true }
We now need to declare our JSON file code in our lib.rs file
with the code below:
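A sketch of that declaration, gating the module behind the
json-file feature defined above:

// File: to_do/dal/src/lib.rs
#[cfg(feature = "json-file")]
pub mod json_file;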
# File: to_do/core/Cargo.toml
[dependencies]
dal = { path = "../dal", features = ["json-file"] }
If we run a build, we can see that our entire system compiles
with no problems, meaning that we can now access our JSON
file storage functions within our core module! Now that
everything is plugged in, we can build our functions that
interact with our JSON file for storage.
// File: to_do/dal/src/json_file.rs
// imports assumed by the functions in this file
use std::collections::HashMap;
use std::env;
use std::fs::{File, OpenOptions};
use std::io::{Read, Write};
use serde::de::DeserializeOwned;
use serde::Serialize;

fn get_handle() -> Result<File, String> {
    let file_path = env::var("JSON_STORE_PATH").unwrap_or(
        "./tasks.json".to_string()
    );
    let file = OpenOptions::new()
        .read(true)
        .write(true)
        .create(true)
        .open(&file_path)
        .map_err(
            |e| format!("Error opening file: {}", e)
        )?;
    Ok(file)
}
Here, we can see that we look for an environment variable
called JSON_STORE_PATH . If this variable is not defined, we then
default to the path being ./tasks.json .
We can now move onto the most basic function which is merely
reading the file and returning all the results. This function is
defined using the code below:
// File: to_do/dal/src/json_file.rs
pub fn get_all<T: DeserializeOwned>()
-> Result<HashMap<String, T>, String> {
let mut file = get_handle()?;
let mut contents = String::new();
file.read_to_string(&mut contents).map_err(
|e| format!("Error reading file: {}", e)
)?;
    let tasks: HashMap<String, T> = serde_json::from_str(
        &contents
    ).map_err(|e| format!("Error parsing JSON: {}", e))?;
Ok(tasks)
}
We also want to save all the tasks that we have to the file. This is
where we pass in the hashmap into the function and write it to
the file with the following code to save all our to-do items:
// File: to_do/dal/src/json_file.rs
pub fn save_all<T: Serialize>(tasks: &HashMap<String, T>)
-> Result<(), String> {
let mut file = get_handle()?;
    let json = serde_json::to_string_pretty(tasks).map_err(
|e| format!("Error serializing JSON: {}", e)
)?;
file.write_all(json.as_bytes()).map_err(
|e| format!("Error writing file: {}", e)
)?;
Ok(())
}
We can see that the save function has the same use of generics
that the get function has. We now have everything we need.
However, our application will be doing a lot of operations on a
single task. We can define these operations below, so we do not
have to repeat them anywhere else in the application. These
functions are slightly repetitive with some variance. It would be
a good idea to try and complete these functions yourself. If you
do, then hopefully they look like the functions below:
// File: to_do/dal/src/json_file.rs
pub fn get_one<T: DeserializeOwned + Clone>(id: &str)
-> Result<T, String> {
let tasks = get_all::<T>()?;
match tasks.get(id) {
Some(t) => Ok(t.clone()),
        None => Err(format!("Task with id {} not found", id))
}
}
pub fn save_one<T>(id: &str, task: &T)
-> Result<(), String>
where
T: Serialize + DeserializeOwned + Clone,
{
let mut tasks = get_all::<T>().unwrap_or_else(
|_| HashMap::new()
);
tasks.insert(id.to_string(), task.clone());
save_all(&tasks)
}
pub fn delete_one<T>(id: &str) -> Result<(), String>
where
T: Serialize + DeserializeOwned + Clone,
{
let mut tasks = get_all::<T>().unwrap_or(
HashMap::new()
);
tasks.remove(id);
save_all(&tasks)
}
Our data access layer is now fully defined. We can move onto
utilizing the data access layer in the core.
# File: to_do/core/Cargo.toml
[dependencies]
dal = { path = "../dal", features = ["json-file"] }
serde = { version = "1.0.197", features = ["derive"]
clap = { version = "4.5.4", features = ["derive"] }
We can now start writing code. We now need to enable our
TaskStatus to deserialize and serialize so we can write the task
status of the to-do item to the JSON file. We also want our status
to construct from a string.
We can then apply the serde traits with the code below:
// File: to_do/core/src/enums.rs
#[derive(Serialize, Deserialize, Debug, Clone)]
pub enum TaskStatus {
DONE,
PENDING
}
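For constructing the status from a string, a sketch consistent
with how it is called in the main function later:

// File: to_do/core/src/enums.rs
impl TaskStatus {
    pub fn from_string(status: &str) -> Result<TaskStatus, String> {
        match status.to_uppercase().as_str() {
            "DONE" => Ok(TaskStatus::DONE),
            "PENDING" => Ok(TaskStatus::PENDING),
            _ => Err(format!("{} is not a valid status", status))
        }
    }
}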
// File: to_do/core/src/api/basic_actions/create.rs
use dal::json_file::save_one;
With this data access function, our create API function now
has the following form:
// File: to_do/core/src/api/basic_actions/create.rs
pub fn create(title: &str, status: TaskStatus)
-> Result<ItemTypes, String> {
let _ = save_one(&title.to_string(), &status)?;
match &status {
TaskStatus::PENDING => {
            Ok(ItemTypes::Pending(Pending::new(&title)))
},
TaskStatus::DONE => {
Ok(ItemTypes::Done(Done::new(&title)))
},
}
}
// File: to_do/core/src/main.rs
#[derive(Parser, Debug)]
#[command(version, about, long_about = None)]
struct Args {
#[arg(short, long)]
title: String,
#[arg(short, long)]
status: String,
}
fn main() -> Result<(), String> {
let args = Args::parse();
let status_enum = TaskStatus::from_string(
&args.status
)?;
let to_do_item = create(
&args.title,
status_enum
)?;
println!("{}", to_do_item);
Ok(())
}
Because our main function returns the same Result
signature, the TaskStatus::from_string and create
functions can use the ? operator. Now we can test to see if our
system works with the following terminal commands:
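Invocations consistent with the JSON printout below (exact
values assumed) would be:

cargo run -- --title washing --status done
cargo run -- --title coding --status pending
cat tasks.json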
{
"washing": "DONE",
"coding": "PENDING"
}
Summary
Questions
Answers
https://packt.link/EarlyAccess/
We are so close to building Rust code that handles HTTP
requests. However, before we do that, we should really
understand async programming in Rust, as the average web
framework utilizes async code. However, it must be noted that
you do not need to understand async code fully to code web
servers in Rust. I have met plenty of web programmers building
adequate servers who do not know how async code works
under the hood. Feel free to skip this chapter if you are
stretched for time or do not want to understand async Rust code
at a deeper level. If you are unsure, I would highly advise that
you complete this chapter as understanding how async code
works will give you a stronger ability to debug issues and avoid
pitfalls. It will also enable you to utilize async Rust fully,
diversifying the solutions you can offer in web programming.
https://help.ubidots.com/en/articles/2165289-learn-how-to-
install-run-curl-on-windows-macosx-linux
Understanding asynchronous programming
number 1 is running
number 2 is running
number 3 is running
time elapsed 6.0109845s
result 6
number 1 is running
number 3 is running
number 2 is running
time elapsed 2.002991041s
result 6
We already know what a Box is; however, the dyn Any + Send
seems new. dyn is a keyword that we use to indicate what type
of trait is being used. Any + Send are two traits that must be
implemented. The Any trait is for dynamic typing, meaning that
the data type can be anything. The Send trait means that it is
safe to be moved from one thread to another. Now that
understand this, we could handle the results of threads by
merely matching the Result outcome, and then downcasting
the error into a String to get the error. There is nothing stopping
you from logging failures of threads or spinning up new threads
based on the outcomes of previous threads. Thus, we can see
how powerful the Result struct is. There is more we can do
with threads such as give them names or pass data between
them with channels. However, the focus of this book is web
programming, not an entire book on async Rust.
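As a brief sketch of handling a thread result this way (the panic
message is an assumption):

use std::thread;

fn main() {
    let handle = thread::spawn(|| -> i32 {
        panic!("{}", "the thread failed");
    });
    // join returns Result<i32, Box<dyn Any + Send>>
    if let Err(payload) = handle.join() {
        // formatted panics carry a String payload we can downcast
        if let Some(message) = payload.downcast_ref::<String>() {
            println!("thread panicked with: {}", message);
        }
    }
}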
[dependencies]
tokio = { version = "1.36.0", features = ["full"] }
With the preceding crate installed, we can import what we need
in our main.rs using the following code:
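The imports and the async function are sketched below,
consistent with the printouts that follow:

use std::{thread, time};

async fn do_something(number: i8) -> i8 {
    println!("number {} is running", number);
    let two_seconds = time::Duration::new(2, 0);
    thread::sleep(two_seconds);
    2
}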
We can run our future and time it in the main function with the
following code:
#[tokio::main(worker_threads = 1)]
async fn main() {
let now = time::Instant::now();
let future_one = do_something(1);
let outcome = future_one.await;
println!("time elapsed {:?}", now.elapsed());
println!("Here is the outcome: {}", outcome);
}
number 1 is running
time elapsed 2.00018789s
Here is the outcome: 2
#[tokio::main(worker_threads = 1)]
async fn main() {
let now = time::Instant::now();
let future_one = do_something(1);
let two_seconds = time::Duration::new(2, 0);
thread::sleep(two_seconds);
let outcome = future_one.await;
println!("time elapsed {:?}", now.elapsed());
println!("Here is the outcome: {}", outcome);
}
number 1 is running
time elapsed 4.000269667s
Here is the outcome: 2
Thus, we can see that our future does not execute until we
apply an executor using await .
We can send our async task to the executor straight away and
then wait on it later with the code below:
#[tokio::main(worker_threads = 1)]
async fn main() {
let now = time::Instant::now();
let future_one = tokio::spawn(do_something(1));
let two_seconds = time::Duration::new(2, 0);
thread::sleep(two_seconds);
let outcome = future_one.await.unwrap();
println!("time elapsed {:?}", now.elapsed());
println!("Here is the outcome: {}", outcome);
}
number 1 is running
time elapsed 2.005152292s
Here is the outcome: 2
Here we can see that our time elapsed has halved! This is
because our tokio::spawn sends the task to the tokio worker
thread to be executed while the main thread processes the sleep
function in the main function. However, if we increase the
number of tasks spawning by one with the code below:
[dependencies]
async-task = "4.7.0"
futures-lite = "2.2.0"
once_cell = "1.19.0"
flume = "0.11.0"
We are now ready to move onto our first step: creating an async
task queue.
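A minimal sketch of such a queue using the crates above:
spawned runnables are sent over a flume channel to a single
worker thread that polls them.

use std::{future::Future, panic::catch_unwind, thread};
use async_task::{Runnable, Task};
use flume::Sender;
use once_cell::sync::Lazy;

static QUEUE: Lazy<Sender<Runnable>> = Lazy::new(|| {
    let (tx, rx) = flume::unbounded::<Runnable>();
    // one worker thread runs tasks as they become runnable
    thread::spawn(move || {
        while let Ok(runnable) = rx.recv() {
            let _ = catch_unwind(|| runnable.run());
        }
    });
    tx
});

fn spawn_task<F, T>(future: F) -> Task<T>
where
    F: Future<Output = T> + Send + 'static,
    T: Send + 'static,
{
    // scheduling a task means pushing its runnable onto the queue
    let schedule = |runnable| QUEUE.send(runnable).unwrap();
    let (runnable, task) = async_task::spawn(future, schedule);
    runnable.schedule();
    task
}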
struct AsyncSleep {
start_time: Instant,
duration: Duration,
}
impl AsyncSleep {
fn new(duration: Duration) -> Self {
Self {
start_time: Instant::now(),
duration,
}
}
}
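The Future implementation described below takes a form like
this sketch:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

impl Future for AsyncSleep {
    type Output = bool;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>)
        -> Poll<Self::Output> {
        if self.start_time.elapsed() >= self.duration {
            Poll::Ready(true)
        } else {
            // ask to be polled again so the executor re-checks the time
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}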
Here we can see that we calculate the elapsed time. If the time
has elapsed, we then return a ready. If not, we return a pending.
Calculating the elapsed time means that our sleeping future will
temporarily block the executor to see if the time has elapsed. If
the time has not elapsed, then the executor moves onto the
next future to poll and will come back later to check if the time
has elapsed again by polling it again. This means that we can
process thousands of sleep functions on our single thread. With
this in mind, we can now move onto running our async code in
our main function.
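The async function itself is a sketch here (names and messages
assumed), sleeping twice for a total of five seconds:

async fn sleeping(label: u8) {
    println!("sleeping {} is starting", label);
    AsyncSleep::new(Duration::from_secs(3)).await;
    println!("sleeping {} is progressing", label);
    AsyncSleep::new(Duration::from_secs(2)).await;
    println!("sleeping {} is done", label);
}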
Here, we can see that our async function merely sleeps twice
and prints out simple progress statements as the async function
progresses.
We can test our async code in our main function with the code
below:
fn main() {
let handle_one = spawn_task(sleeping(1));
let handle_two = spawn_task(sleeping(2));
let handle_three = spawn_task(sleeping(3));
println!("before the sleep");
std::thread::sleep(Duration::from_secs(5));
println!("before the block");
future::block_on(handle_one);
future::block_on(handle_two);
future::block_on(handle_three);
}
Here, we can see that we spawn three async tasks that are all
going to sleep for 5 seconds each. We then block the main
thread. This is to test that our async sleeps are truly async. If
they are not, then we will not get through all our sleep
functions before the main sleep is finished.
Here, we can see that all our sleep functions execute before the
sleep in the main thread has finished, this means that our
system is truly async! However, if this was a bit of a headache
for you, do not worry, in Rust web programming, you will not be
asked to manually implement async code for your server. Most
of the async functionality has been built for you, but we do need
to have a grasp of what's going under the hood when we are
calling these async implementations.
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let addr = env::args()
.nth(1)
        .unwrap_or_else(|| "127.0.0.1:8080".to_string());
let listener = TcpListener::bind(&addr).await?;
println!("Listening on: {}", addr);
loop {
// Asynchronously wait for an inbound socket.
let (mut socket, _) = listener.accept().await
tokio::spawn(async move {
// process the TCP request
});
}
}
Here, we can see that the TCP listener is defined in the main
thread. We also run a continuous loop in the main thread
listening for new incoming TCP requests. When we get a new
TCP request, we spawn a new async task to process that request,
and then go back to listening for more incoming TCP requests.
This means that our main thread is not held up processing
requests. Instead, our main thread can spend all the time
listening for requests.
[dependencies]
hyper = { version = "1.2.0", features = ["full"] }
tokio = { version = "1.36.0", features = ["full"] }
http-body-util = "0.1.1"
hyper-util = { version = "0.1.3", features = ["full"] }
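A sketch of a request handler consistent with the description
below, using hyper 1.x types (the printed fields and message are
assumptions):

use std::convert::Infallible;
use http_body_util::Full;
use hyper::body::{Bytes, Incoming};
use hyper::{Request, Response};

async fn handle(req: Request<Incoming>)
    -> Result<Response<Full<Bytes>>, Infallible> {
    // print the request data before responding
    println!("headers: {:?}", req.headers());
    println!("body: {:?}", req.body());
    Ok(Response::new(Full::new(Bytes::from("Hello, World!"))))
}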
Here, we can see that we just print out the header of the request, and the body. After printing out the request data, we merely return a simple hello world message. Our handle function would be a good place to perform routing to other async functions, but for our example, we are just going to return the simple message no matter what endpoint or method you use. We can see that the error type is Infallible. This type has no possible values, so the error can never actually happen; it is essentially a placeholder to satisfy the Result signature. This makes sense in our case, as you can see that we have no unwraps to access the data we are printing. Throughout the book we will build our own custom error handling, but for this simple example, Infallible is useful to us.
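The handle function itself is elided in this excerpt, but a minimal sketch of such a handler, assuming hyper 1.x with the http-body-util crate, could take the following form (the exact request printing in the book may differ):
use std::convert::Infallible;
use http_body_util::Full;
use hyper::body::Bytes;
use hyper::{Request, Response};

async fn handle(
    req: Request<hyper::body::Incoming>
) -> Result<Response<Full<Bytes>>, Infallible> {
    // print out the request data before responding
    println!("headers: {:?}", req.headers());
    // always return the same simple message, regardless of endpoint or method
    Ok(Response::new(Full::new(Bytes::from("Hello, World!"))))
}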
For our main function, we run our server with the code below:
#[tokio::main]
async fn main()
    -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    let listener = TcpListener::bind(addr).await?;
    loop {
        let (stream, _) = listener.accept().await?;
        let io = TokioIo::new(stream);
        tokio::task::spawn(async move {
            . . .
        });
    }
}
let io = TokioIo::new(stream);
The TokioIo struct is from the hyper-util crate. This TokioIo struct is essentially a wrapper that implements the Tokio IO traits over the stream. Now that we have adapted the Tokio stream of bytes from the request, we can work with the hyper crate. Inside our spawned task closure, we handle the stream of bytes with the following code:
use hyper::server::conn::http2;
use hyper::rt::Executor;
use std::future::Future;
#[derive(Clone)]
struct TokioExecutor;
impl<F> Executor<F> for TokioExecutor
where
F: Future + Send + 'static,
F::Output: Send + 'static,
{
fn execute(&self, future: F) {
tokio::spawn(future);
}
}
We can now swap our HTTP 1 builder out for our HTTP 2
builder with the code below:
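The code itself is elided in this excerpt, but it would run along the lines of the sketch below, where the HTTP/2 builder is handed our TokioExecutor because HTTP/2 multiplexes streams over a single connection:
use hyper::service::service_fn;

// previously this was http1::Builder::new().serve_connection(io, ...)
if let Err(err) = http2::Builder::new(TokioExecutor)
    .serve_connection(io, service_fn(handle))
    .await
{
    println!("Error serving connection: {:?}", err);
}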
Summary
Questions
Answers
After the detour of understanding async on a deeper level, we are now going back to the project that we were working on in Chapter 3. So far, we have structured our to-do module in a flexible, scalable, and re-usable manner. However, this can only get us so far in terms of web programming. We want our to-do module to reach multiple people quickly without the user having to install Rust on their own computers. We can do this with a web framework. In this chapter, we will build on the core logic of our to-do items and connect this core logic to a server. By the end of this chapter, you will be able to build a server that has a data access layer, a core layer, and a networking layer. You will also get some exposure to handling errors across all our cargo workspaces, and to refactoring some of our code as the requirements for our server become more defined over time.
Technical requirements
You can find the full source code that will be used in this
chapter here:
├── Cargo.toml
└── to_do
├── core
│ ├── . . .
├── dal
│ ├── . . .
└── networking
└── actix_server
├── Cargo.toml
└── src
└── main.rs
With this new layout, our Cargo.toml at the root of our project
should have the following workspaces defined:
# File: ./Cargo.toml
[workspace]
resolver = "2"
members = [
"to_do/core",
"to_do/dal",
"to_do/networking/actix_server"
]
# File: ./to_do/networking/actix_server/Cargo.toml
[package]
name = "actix_server"
version = "0.1.0"
edition = "2021"
[dependencies]
tokio = { version = "1.36.0", features = ["full"] }
actix-web = "4.5.1"
And we can now write our server main.rs file. First, we need
to use the following structs and traits with the code below:
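The import block is elided in this excerpt; a plausible set, inferred from the code that follows, is below:
// File: ./to_do/networking/actix_server/src/main.rs
use actix_web::{web, App, HttpRequest, HttpServer, Responder};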
We will see how these structs and traits are used as and when
we use them. Recall from the previous chapter, servers typically
handle incoming requests by passing these requests as async
tasks into the async runtime to be handled. It should not be a
surprise that our server API endpoint is an async task defined
by the following code:
// File: ./to_do/networking/actix_server/src/main.rs
async fn greet(req: HttpRequest) -> impl Responder {
let name = req.match_info().get("name").unwrap_or
format!("Hello {}!", name)
}
With the preceding code, we can see that our API endpoint
receives the HTTP request and returns anything that has
implemented the Responder trait. We then extract the name
from the endpoint of the URL or return a "World" if the name is
not in the URL endpoint. Our view then returns a string. To see
what we can automatically return as a response, we can check
the Responder trait in the Actix web docs and scroll down to
the Implementations on Foreign Types section as seen in
figure 4.1.
Figure 4.1 – Implementations of Foreign types [Source: Actix web
(2024) ( https://docs.rs/actix-
web/latest/actix_web/trait.Responder.html#foreign-impls )]
// File: ./to_do/networking/actix_server/src/main.rs
#[tokio::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(|| {
App::new()
.route("/", web::get().to(greet))
.route("/{name}", web::get().to(greet))
.route("/say/hello", web::get().to(||
async { "Hello Again!" }))
})
.workers(4)
.bind("127.0.0.1:8080")?
.run()
.await
}
Visiting the following URLs in the browser gives us the following responses:
http://127.0.0.1:8080/ returns: Hello World!
http://127.0.0.1:8080/maxwell returns: Hello maxwell!
http://127.0.0.1:8080/say/hello returns: Hello Again!
And here we have it, we have a server running! But our server is isolated. In the next section, we are going to connect our server to the core module. However, before we do that, we need to restructure our core module so that it has the following layout:
└── to_do
└── core
└── src
├── api
│ ├── basic_actions
│ │ ├── create.rs
│ │ ├── delete.rs
│ │ ├── get.rs
│ │ ├── mod.rs
│ │ └── update.rs
│ └── mod.rs
├── lib.rs
└── . . .
We can see that we are adding a file for each basic action that we expect to perform on a to-do task. It also must be noted that we added a src/lib.rs file. We can delete the src/main.rs file if we want to, as our core module will now be used by other cargo workspaces such as the networking layer. The networking layer will interact with the core module through the src/lib.rs file. The module file in the basic actions module now takes the following form:
//! File: to_do/core/src/api/basic_actions/mod.rs
pub mod create;
pub mod get;
pub mod delete;
pub mod update;
└── to_do
└── networking
└── actix_server
├── . . .
└── src
├── api
│ ├── basic_actions
│ │ ├── create.rs
│ │ ├── delete.rs
│ │ ├── get.rs
│ │ ├── mod.rs
│ │ └── update.rs
│ └── mod.rs
└── main.rs
In the api module file we expose the basic actions with the code
below:
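The code is elided in this excerpt, but at this stage the api module file only needs to expose the basic actions module, along the lines of this minimal sketch:
//! File: to_do/networking/actix_server/src/api/mod.rs
pub mod basic_actions;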
And all our code is stitched up and ready to talk to each other.
The only thing left to do is declare our core module in the
Cargo.toml file with the following code:
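The declaration is elided in this excerpt; a minimal sketch, with the path assuming the layout shown above, would be:
# File: ./to_do/networking/actix_server/Cargo.toml
[dependencies]
. . .
core = { path = "../../core" }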
// File: to_do/core/src/structs.rs
impl fmt::Display for ToDoItem {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self.status {
TaskStatus::PENDING => write!(
f, "Pending: {}",
self.title
),
TaskStatus::DONE => write!(
f, "Done: {}",
self.title
),
}
}
}
{
"coding": {
"title": "coding",
"status": "DONE"
},
"washing": {
"title": "washing",
"status": "PENDING"
}
}
Here, we can see that we can still access the task by the title as the key, but the value now contains both the title and the status of the task. We did not change any of the code in the data access layer. Instead, we are seeing how our to-do item serializes. We are now ready to return all our to-do items to the browser.
Serving our to-do items
We are now at the stage of serving all our items to the browser.
However, before we touch any of the server code, we need to
add another struct from our core module. We need a container
that houses two lists of items for the pending and done items
which takes the following form:
// File: to_do/core/src/structs.rs
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct AllToDOItems {
pub pending: Vec<ToDoItem>,
pub done: Vec<ToDoItem>
}
You might recall that we load the data from the JSON file in
hashmap form, meaning that we are going to need a
from_hashmap function for our AllToDoItems struct with the
code below:
// File: to_do/core/src/structs.rs
impl AllToDOItems {
    pub fn from_hashmap(all_items: HashMap<String, ToDoItem>)
-> AllToDOItems {
let mut pending = Vec::new();
let mut done = Vec::new();
for (_, item) in all_items {
match item.status {
                TaskStatus::PENDING => pending.push(item),
TaskStatus::DONE => done.push(item)
}
}
AllToDOItems {
pending,
done
}
}
}
// File: to_do/core/src/api/basic_actions/get.rs
use dal::json_file::get_all as get_all_handle;
use crate::structs::{
ToDoItem,
AllToDOItems
};
pub async fn get_all() -> Result<AllToDOItems, String> {
Ok(AllToDOItems::from_hashmap(
get_all_handle::<ToDoItem>()?
))
}
Wow, there is not much code there. However, we can see what
type of data is being loaded as the value of the hashmap with
the get_all_handle::<ToDoItem>() . We exploit the ?
operator to reduce the need for a match statement, and then
directly feed the data into the from_hashmap function.
Here, our core API function is fusing the logic of the core
module, and the data access layer. The data from our core API
function is then passed to the Actix API function which can then
return the data to the browser as seen in figure 4.2.
Figure 4.2 – Pathway for getting all to-do items
We can now build our Actix API function with the code below:
//! File: to_do/networking/actix_server
//! /src/api/basic_actions/get.rs
use core::api::basic_actions::get::get_all as get_all_core;
use actix_web::HttpResponse;
pub async fn get_all() -> HttpResponse {
let all_items = match get_all_core().await {
Ok(items) => items,
        Err(e) => return HttpResponse::InternalServerError().json(e)
};
HttpResponse::Ok().json(all_items)
}
The match statement does bloat the code a little bit, but we will
handle this in the next section. We can see that if there is any
error, we return it with an internal server error response code.
We now must connect our get_all async function to our
server. We should keep our API module endpoint definitions
isolated as we want to be able to version control the different
API modules. We can define the get all URI with the following
code:
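The factory code is elided in this excerpt; a hedged sketch, with the scope and path assumed from the URLs used later in the book, takes the following form (the other action modules are declared the same way as get):
//! File: to_do/networking/actix_server/src/api/basic_actions/mod.rs
pub mod get;

use actix_web::web::{ServiceConfig, get, scope};

// collects all the basic action endpoints under a versioned scope
pub fn basic_actions_factory(app: &mut ServiceConfig) {
    app.service(
        scope("/api/v1")
            .route("get/all", get().to(get::get_all))
    );
}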
Now that we have defined our endpoint factory for our basic
actions, we need one last factory that collects all the other
factories and calls them with the code below:
//! File: to_do/networking/actix_server/src/api/mod.rs
pub mod basic_actions;
use actix_web::web::ServiceConfig;
pub fn views_factory(app: &mut ServiceConfig) {
basic_actions::basic_actions_factory(app);
}
With this, we only need to configure our server with our views
factory with the following code:
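The code is elided in this excerpt; a minimal sketch of the wiring, reusing the main function from earlier, is below:
//! File: to_do/networking/actix_server/src/main.rs
mod api;

use actix_web::{App, HttpServer};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        // the factory registers all our API endpoints on the app
        App::new().configure(api::views_factory)
    })
    .workers(4)
    .bind("127.0.0.1:8080")?
    .run()
    .await
}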
If we run our server and hit our endpoint in the browser, we get
the result seen in figure 4.3.
Figure 4.3 – Return data from get all API endpoint
And there we have it; our server is now returning the to-do items in a sorted fashion. However, remember that we were relying on a match statement in our web API function. We can reduce the need for this by defining our own error handling so we can exploit the ? operator in our API function. We have nearly wrapped up this chapter. We just need to handle those errors for the web server API function.
[workspace]
resolver = "2"
members = [
"glue",
"nanoservices/to_do/core",
"nanoservices/to_do/dal",
"nanoservices/to_do/networking/actix_server"
]
And our glue module should have the file layout below:
└── glue
├── Cargo.toml
└── src
├── errors.rs
└── lib.rs
We can now define the Cargo.toml file of the glue module with
the following code:
# File: glue/Cargo.toml
[dependencies]
actix-web = { version = "4.5.1", optional = true }
serde = { version = "1.0.197", features = ["derive"]
thiserror = "1.0.58"
[features]
actix = ["actix-web"]
// File: glue/src/errors.rs
use serde::{Serialize, Deserialize};
use thiserror::Error;

#[derive(Error, Debug, Serialize, Deserialize, PartialEq)]
pub enum NanoServiceErrorStatus {
#[error("Requested resource was not found")]
NotFound,
#[error("You are forbidden to access requested re
Forbidden,
#[error("Unknown Internal Error")]
Unknown,
#[error("Bad Request")]
BadRequest,
#[error("Conflict")]
Conflict,
#[error("Unauthorized")]
Unauthorized
}
We can now define our nanoservice error with the code below:
// File: glue/src/errors.rs
#[derive(Serialize, Deserialize, Debug, Error)]
pub struct NanoServiceError {
pub message: String,
pub status: NanoServiceErrorStatus
}
impl NanoServiceError {
pub fn new(message: String, status: NanoServiceEr
-> NanoServiceError {
NanoServiceError {
message,
status
}
}
}
We now need to implement a response trait for our nanoservice
error, but before we can do this, we must implement the
Display trait for our nanoservice error with the following
code:
// File: glue/src/errors.rs
impl fmt::Display for NanoServiceError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.message)
}
}
// File: glue/src/errors.rs
#[cfg(feature = "actix")]
impl ResponseError for NanoServiceError {
fn status_code(&self) -> StatusCode {
. . .
}
fn error_response(&self) -> HttpResponse {
. . .
}
}
For our status code function, we merely need to map our error status to the Actix web status code with the following code:
// File: glue/src/errors.rs
fn status_code(&self) -> StatusCode {
match self.status {
NanoServiceErrorStatus::NotFound =>
StatusCode::NOT_FOUND,
NanoServiceErrorStatus::Forbidden =>
StatusCode::FORBIDDEN,
NanoServiceErrorStatus::Unknown =>
StatusCode::INTERNAL_SERVER_ERROR,
NanoServiceErrorStatus::BadRequest =>
StatusCode::BAD_REQUEST,
NanoServiceErrorStatus::Conflict =>
StatusCode::CONFLICT,
NanoServiceErrorStatus::Unauthorized =>
StatusCode::UNAUTHORIZED
}
}
// File: glue/src/errors.rs
fn error_response(&self) -> HttpResponse {
let status_code = self.status_code();
    HttpResponse::build(status_code).json(self.message.clone())
}
// File: glue/src/errors.rs
#[macro_export]
macro_rules! safe_eject {
($e:expr, $err_status:expr) => {
$e.map_err(|x| NanoServiceError::new(
x.to_string(),
$err_status)
)
};
    ($e:expr, $err_status:expr, $message_context:expr) => {
$e.map_err(|x| NanoServiceError::new(
format!("{}: {}", $message_context, x
$err_status
)
)
};
}
Here, we can see that if we pass in an expression and the error
status, we will map the error to our nanoservice error. If we also
pass in a message context into the macro, we can also add the
context of where the error is happening. This will save us from
repeating ourselves.
# File: nanoservices/to_do/dal/Cargo.toml
[dependencies]
. . .
glue = { path = "../../../glue"}
// File: nanoservices/to_do/dal/src/json_file.rs
. . .
use glue::errors::{
NanoServiceError,
NanoServiceErrorStatus
};
use glue::safe_eject;
We can now replace all our String error return types with our
nanoservices error. For instance, our handle function takes the
following form:
// File: nanoservices/to_do/dal/src/json_file.rs
fn get_handle() -> Result<File, NanoServiceError> {
let file_path = env::var("JSON_STORE_PATH")
.unwrap_or("./tasks.json".to_stri
let file = safe_eject!(OpenOptions::new()
.read(true)
.write(true)
.create(true)
.open(&file_path),
NanoServiceErrorStatus::Unknown,
"Error reading JSON file"
)?;
Ok(file)
}
And our get all function is now defined with the code below:
// File: nanoservices/to_do/dal/src/json_file.rs
pub fn get_all<T: DeserializeOwned>()
-> Result<HashMap<String, T>, NanoServiceError> {
let mut file = get_handle()?;
let mut contents = String::new();
safe_eject!(
file.read_to_string(&mut contents),
NanoServiceErrorStatus::Unknown,
"Error reading JSON file to get all tasks"
)?;
let tasks: HashMap<String, T> = safe_eject!(
serde_json::from_str(&contents),
NanoServiceErrorStatus::Unknown,
"Error parsing JSON file"
)?;
Ok(tasks)
}
We can see how our safe_eject! macro lifts out the repetitive code, so we just focus on defining the expression to be evaluated, the error status, and the optional context message of where the error is happening, which will be presented alongside the error message. Our safe_eject! macro is mainly used for third-party results, as we aim for all our own functions to return the nanoservice error. We could go through all the examples, but this would needlessly bloat the chapter. Luckily, the Rust compiler will warn you when an error type does not match, as we are using the ? operator throughout the codebase. In your core layer, you will also need to add the glue package and convert all results to return a nanoservice error.
Once the core is filled out with our nanoservice errors, we can
move onto our Actix server. Unlike the other two layers we will
have the actix feature for our glue package with the following
Cargo.toml file code:
# File: nanoservices/to_do/networking/actix_server/Cargo.toml
glue = { path = "../../../../glue", features = ["actix"] }
With our glue package our create function file for our API
endpoint takes the form below:
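The file is elided in this excerpt, but the pattern is the same for every endpoint. As a hedged sketch, here is the get endpoint from earlier rewritten with the ? operator now that NanoServiceError implements ResponseError; the create endpoint follows the same shape:
//! File: nanoservices/to_do/networking/actix_server
//! /src/api/basic_actions/get.rs
use core::api::basic_actions::get::get_all as get_all_core;
use glue::errors::NanoServiceError;
use actix_web::HttpResponse;

// the ? operator replaces the match statement because our error
// converts directly into an HTTP response
pub async fn get_all() -> Result<HttpResponse, NanoServiceError> {
    Ok(HttpResponse::Ok().json(get_all_core().await?))
}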
Summary
Questions
Answers
Up to this point, we have utilized the Actix web framework to
serve basic views. However, this can only get us so far when it
comes to extracting data from the request and passing data
back to the user. In this chapter, we will fuse code from Chapter
3, Designing Your Web Application in Rust, and Chapter 5,
Handling HTTP Requests, to build server views that process to-
do items. We will then explore JSON serialization for
extracting data and returning it to make our views more user
friendly. We also extract data from the header with middleware
before it hits the view. We will explore the concepts around
data serialization and extracting data from requests by building
out the create, edit, and delete to-do items endpoints for our to-
do application.
Once you have finished this chapter, you will be able to build a basic Rust server that can send and receive data via the URL, via the body using JSON, and via the header of the HTTP request. This is essentially a fully functioning API Rust server, just without a proper database for data storage, authentication of users, or displaying of content in the browser. However, these concepts are covered in the next three chapters. You are on the home stretch to having a fully working Rust server up and running. Let's get started!
Technical requirements
You can find the full source code that will be used in this
chapter here:
Passing parameters via the URL is the easiest way to pass data to
the server. We can explore this concept by passing the name of
a to-do item and returning that single to-do item to the client.
We can also see that we use the remove function to extract the item. This means that we do not have to clone the item that we are extracting, and we do not need to worry about the item being removed from the HashMap, because we are not going to write that HashMap back to the file.
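The core function is elided in this excerpt; a hedged sketch matching the description above (the function name and error message are assumptions) is the following:
//! File: nanoservices/to_do/core/src/api/basic_actions/get.rs
use dal::json_file::get_all as get_all_handle;
use crate::structs::ToDoItem;
use glue::errors::{NanoServiceError, NanoServiceErrorStatus};

// remove() hands us the owned item without cloning, and we never
// write the mutated map back to the file
pub async fn get_by_name(name: &str) -> Result<ToDoItem, NanoServiceError> {
    let mut all_items = get_all_handle::<ToDoItem>()?;
    all_items.remove(name).ok_or_else(|| NanoServiceError::new(
        format!("to-do item {} not found", name),
        NanoServiceErrorStatus::NotFound,
    ))
}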
On the networking side, we use the status and the HTTP request: our incoming HTTP request needs to be accepted, and the name parameter must be extracted from the URL before the core function is called.
Now, if we have that to-do item in our JSON file and we run our
server and then visit our individual get URL in the browser, we
will get a response like the following:
Figure 5.1 – Our response from our individual GET endpoint
Figure 5.2 – Our not found response from our individual GET
endpoint
And here we have it, our get individual to-do item is now fully
working.
Even though we can get our to-do items, we cannot yet do much
with them. We must start altering the state of our to-do items.
Our next step is building a create to-do item via post and JSON
data.
While passing parameters via the URL is simple, there are a few
problems with the URL approach. URLs can be cached by the
browser and the user might end up selecting an autocompleted
URL with data in the URL. This is good if we are using the URL to
pass parameters to get data like visiting your profile on a social
media application, however, we do not want to accidentally
select a cached URL that alters data in the application.
Passing data in the URL is also limited to simple data types. For instance, if we wanted to pass a list of HashMaps to the server, it would be hard to pass such a data structure through the URL without some other form of serialization. This is where POST requests and JSON bodies come in. We can pass JSON data via the HTTP request body.
We can now wrap the core create function in our server. First,
we need to use the following:
With these structs and traits, we can define our create API web endpoint with the code below:
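The endpoint code is elided in this excerpt; a hedged sketch matching the description below (the import paths and the create_core signature are assumptions) looks like this:
//! File: nanoservices/to_do/networking/actix_server
//! /src/api/basic_actions/create.rs
use actix_web::{web::Json, HttpResponse};
use core::api::basic_actions::{
    create::create as create_core,
    get::get_all as get_all_core,
};
use core::structs::ToDoItem;
use glue::errors::NanoServiceError;

// the Json extractor deserializes the request body into a ToDoItem
// before this function is even called
pub async fn create(body: Json<ToDoItem>)
    -> Result<HttpResponse, NanoServiceError> {
    create_core(body.into_inner()).await?;
    Ok(HttpResponse::Created().json(get_all_core().await?))
}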
Wow that is compact! But there is a lot going on here. The Json
struct implements traits that extract data from the body of the
HTTP request before the create function is called. If the HTTP
request body can be serialized into a ToDoItem struct, then this
is done, and we have a ToDoItem struct passed into the create
function from the HTTP request body. If a ToDoItem struct
cannot be constructed from the JSON body of the HTTP request,
then a bad request response is returned to the client with the
serialization error message.
The Json<ToDoItem> works because we have implemented the
Deserialize trait for the ToDoItem struct. We then insert our
item into the JSON file we are currently using as storage with
the create_core function. Finally, we get all the data from the
JSON file with the get_all_core function and return the items
in a JSON body. Again, this is not optimal, but it will work for
now.
We now need to plug our create API function into our web
server. We start by importing the post function with the
following code:
//! File: nanoservices/to_do/networking/actix_server
//! /src/api/basic_actions/mod.rs
. . .
use actix_web::web::{ServiceConfig, get, scope, post}
. . .
And then we define our view in our factory with the code below:
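The factory code is elided here, but following the sketch from the previous chapter, the create route would be added along these lines:
//! File: nanoservices/to_do/networking/actix_server
//! /src/api/basic_actions/mod.rs
pub fn basic_actions_factory(app: &mut ServiceConfig) {
    app.service(
        scope("/api/v1")
            .route("get/all", get().to(get::get_all))
            .route("create", post().to(create::create))
    );
}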
We can now run our server with our new create endpoint.
While our server is running, we can test our endpoint with the
following CURL command in our terminal:
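The exact command is elided in this excerpt, but assuming the server address from earlier, it would be along these lines:
curl -X POST http://127.0.0.1:8080/api/v1/create \
     -H "Content-Type: application/json" \
     -d '{"title": "writing", "status": "PENDING"}'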
After the CURL command has been run, we then get the
following response:
{
"pending":[
{"title":"writing","status":"PENDING"},
{"title":"washing","status":"PENDING"}
],
"done":[
{"title":"coding","status":"DONE"}
]
}
Meanwhile, the JSON file that we use for storage now contains the following:
{
"coding": {
"title": "coding",
"status": "DONE"
},
"writing": {
"title": "writing",
"status": "PENDING"
},
"washing": {
"title": "washing",
"status": "PENDING"
}
}
We can also test our serialization of the JSON body with the
following command:
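The command is elided here, but a plausible test, assuming the same endpoint, is to send a body that is missing the status field so that the Json extractor cannot build a ToDoItem and returns a bad request:
curl -X POST http://127.0.0.1:8080/api/v1/create \
     -H "Content-Type: application/json" \
     -d '{"title": "writing"}'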
And there we have it, we can see that our POST create HTTP
endpoint is now working. So, we can now get our items and add
new ones. However, what if we create an item by accident and
need to delete it? This is where DELETE methods come in.
Deleting resources using the DELETE method
We can now define our core delete function with the code
below:
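The function is elided in this excerpt; a hedged sketch is below, where save_all_handle is a hypothetical DAL function for writing the map back to the JSON file:
//! File: nanoservices/to_do/core/src/api/basic_actions/delete.rs
use dal::json_file::{
    get_all as get_all_handle,
    save_all as save_all_handle, // hypothetical DAL save function
};
use crate::structs::ToDoItem;
use glue::errors::{NanoServiceError, NanoServiceErrorStatus};

pub async fn delete(name: &str) -> Result<(), NanoServiceError> {
    let mut all_items = get_all_handle::<ToDoItem>()?;
    // error if the item we are asked to delete does not exist
    if all_items.remove(name).is_none() {
        return Err(NanoServiceError::new(
            format!("to-do item {} not found", name),
            NanoServiceErrorStatus::NotFound,
        ));
    }
    save_all_handle(&all_items)?;
    Ok(())
}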
We can then wrap the core delete function into our networking
layer. The approach should run along the same lines as the get
individual item API endpoint, as we are extracting the name
parameter out of the URL. Now would be a good time to try and
build the delete endpoint function for the server yourself.
Thanks to all the effort that we put into our error handling, we know that there is something wrong with the parsing of the JSON file. Our safe_eject! macro has a string that adds the context to the error. So, if we go to our nanoservices/to_do/dal/src/json_file.rs file and search for "Error parsing JSON file", we will see that the error is occurring when we try to deserialize the data loaded from the JSON file. If we inspect our JSON file, we should have something like the following:
{
"coding": {
"title": "coding",
"status": "DONE"
},
"washing": {
"title": "washing",
"status": "PENDING"
}
} "washing": {
"title": "washing",
"status": "PENDING"
}
}
{
"pending":[
{"title":"washing","status":"PENDING"}
],
"done":[
{"title":"coding","status":"DONE"}
]
}
In this section, we got to see how useful the context of the error was as our error bubbled up to the HTTP response: our system does not panic, and although the server has not crashed and therefore gives us no stack trace, the error context still tells us exactly where to look.
Deleting the writing to-do item is all well, but we also need to be
able to edit the status of the to-do item. In our next section we
will achieve this with PUT methods.
Here, we can see that we get all the items, and check to see if
the item we are trying to update is in the state. If it's not, we
return an error. If it is, we then update the item with the new
status and save it.
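The code is elided in this excerpt; a hedged sketch matching that description, reusing the hypothetical save_all_handle from the delete sketch, is the following:
//! File: nanoservices/to_do/core/src/api/basic_actions/update.rs
use dal::json_file::{
    get_all as get_all_handle,
    save_all as save_all_handle, // hypothetical DAL save function
};
use crate::structs::ToDoItem;
use glue::errors::{NanoServiceError, NanoServiceErrorStatus};

pub async fn update(item: ToDoItem) -> Result<(), NanoServiceError> {
    let mut all_items = get_all_handle::<ToDoItem>()?;
    // check that the item we are trying to update is in the state
    if !all_items.contains_key(&item.title) {
        return Err(NanoServiceError::new(
            format!("to-do item {} not found", item.title),
            NanoServiceErrorStatus::NotFound,
        ));
    }
    all_items.insert(item.title.clone(), item);
    save_all_handle(&all_items)?;
    Ok(())
}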
And define the API endpoint view in the API factory with the
following code:
We can then run our server, and perform the CURL terminal
command below:
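The command is elided in this excerpt, but given the printout that follows, it would be along the lines of:
curl -X PUT http://127.0.0.1:8080/api/v1/update \
     -H "Content-Type: application/json" \
     -d '{"title": "washing", "status": "DONE"}'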
{
"pending":[
],
"done":[
{"title":"coding","status":"DONE"},
{"title":"washing","status":"DONE"}
]
}
This printout shows that our update API endpoint works. And
this is it, we have all our endpoints to handle to-do items. But
what about extracting data from headers? While we do not
need to extract anything from HTTP request headers to handle
our to-do items, we will need to extract data from headers for
things such as authentication, and extracting data from headers
does come under the concept of processing HTTP requests.
We can see that all our imports are reliant on the actix feature
being enabled. This is because the token is merely a string, and
all the imports are needed to enable the header extraction of
that string for an actix web server.
Now that we have extracted the data from the header, we can
then convert the data to a string and return it with the code
below:
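The code is elided in this excerpt; a hedged sketch of the whole extraction (the struct name and file location are assumptions) is below:
// File: glue/src/token.rs
use std::future::{ready, Ready};

#[cfg(feature = "actix")]
use actix_web::{dev::Payload, FromRequest, HttpRequest};

use crate::errors::{NanoServiceError, NanoServiceErrorStatus};

pub struct HeaderToken {
    pub message: String,
}

#[cfg(feature = "actix")]
impl FromRequest for HeaderToken {
    type Error = NanoServiceError;
    type Future = Ready<Result<HeaderToken, NanoServiceError>>;

    fn from_request(req: &HttpRequest, _: &mut Payload) -> Self::Future {
        // pull the raw header value out, convert it to a string, and wrap it
        let result = req.headers()
            .get("token")
            .ok_or_else(|| NanoServiceError::new(
                "token not in header".to_string(),
                NanoServiceErrorStatus::Unauthorized,
            ))
            .and_then(|value| value.to_str().map_err(|_| NanoServiceError::new(
                "token could not be parsed".to_string(),
                NanoServiceErrorStatus::Unauthorized,
            )))
            .map(|token| HeaderToken { message: token.to_string() });
        ready(result)
    }
}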
And our token is now ready to extract data from the header in
the middleware.
We can now update our create API endpoint with the code
below:
And our response from the CURL gives us the JSON below:
{
"pending":[
{"title":"writing","status":"PENDING"}
],
"done":[
{"title":"washing","status":"DONE"},
{"title":"coding","status":"DONE"}
]
}
And we can see that our create endpoint still works, now with the token being extracted from the header.
Summary
Questions
Answers
We are now at the stage where we can build a web application
that can manage a range of HTTP requests with different
methods and data. This is useful, especially if we are building a
server for microservices. However, we might also want non-
programmers to interact with our application to use it. To
enable non-programmers to use our application, we must
create a graphical user interface. However, it must be noted that
this chapter does not contain much Rust. This is because other
languages exist to render a graphical user interface. We will
mainly use HTML, JavaScript, and CSS. These tools are mature
and widely used for frontend web development. Whilst I
personally love Rust (otherwise I wouldn't be writing a book on
it), we must use the right tool for the right job. At the point of
writing this book, we can build a frontend application in Rust
using the Yew framework. However, being able to fuse more
mature tools into our Rust technical stack is a more valuable
skill.
By the end of this chapter, you will know how frontend assets are served, and you will be able to utilize this knowledge to get Rust code to serve JS frontend applications. You will also be able to build a React application with different components and insert CSS into that application so that users can interact with our application. To get the frontend talking to the backend, you will understand how to make API calls from the frontend to the backend.
Technical requirements
You can find the full source code that will be used in this
chapter here:
https://esbuild.github.io/getting-started/
─── ingress
└── frontend
├── esbuild.js
├── ts.config.json
├── package.json
├── public
│ └── index.html
└── src
└── index.tsx
// File: ingress/frontend/esbuild.js
const esbuild = require('esbuild');
const cssModulesPlugin = require('esbuild-css-modules-plugin');
With these imports, we then define the build with the code
below:
// File: ingress/frontend/esbuild.js
esbuild.build({
plugins: [cssModulesPlugin()],
entryPoints: ['src/index.tsx'],
bundle: true,
outfile: 'public/bundle.js',
format: 'esm',
define: {
'process.env.NODE_ENV': '"production"',
},
minify: true,
sourcemap: true,
loader: {
'.js': 'jsx',
'.tsx': 'tsx',
'.ts': 'ts',
'.wasm': 'binary',
'.css': 'css'
},
}).catch(() => process.exit(1));
plugins: Here we add our CSS plugin, but we can also add
other plugins if we want them.
entryPoints: Defines the entry point for the build. So, if any
files are linked to the public/bundle.js file via imports in
the code, these files will be included in the build.
bundle: Instructs esbuild to bundle all dependencies into a
single output file. This makes it easier for us to deploy.
outfile: Where the bundle will be put once the build has
finished.
format: Sets the module format of the output bundle to the ES
module standard for JavaScript.
define: replaces instances of process.env.NODE_ENV to
"production" in the code for optimizations.
minify: Enables minification to cut out unnecessary symbols
and whitespace making the bundle file smaller.
sourcemap: Maps the bundled code back to source files
which is handy for debugging.
loader: Specifies the types of files that we load into the
builder.
// File: ingress/frontend/package.json
{
"name": "es_react",
"version": "1.0.0",
"scripts": {
"build": "node esbuild.js",
"serve": "serve public"
},
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0"
},
"devDependencies": {
"esbuild": "^0.19.11",
"esbuild-css-modules-plugin": "^3.1.0",
"serve": "^14.2.1"
}
}
{
"compilerOptions": {
"target": "es5",
"lib": ["dom", "dom.iterable", "esnext"],
"allowJs": true,
"skipLibCheck": true,
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"strict": true,
"forceConsistentCasingInFileNames": true,
"module": "CommonJS",
"moduleResolution": "node",
"resolveJsonModule": true,
"isolatedModules": true,
"noEmit": true,
"jsx": "react-jsx"
},
"include": ["src"]
}
// File: ingress/frontend/src/index.tsx
import React from 'react';
import ReactDOM from "react-dom/client";
const App = () => {
return (
<div>
<h1>Hello World</h1>
</div>
);
};
const root = ReactDOM.createRoot(document.getElementById("root"));
root.render(<App />);
// File: ingress/frontend/serve.json
{
"rewrites": [
{ "source": "**", "destination": "./public/inde
]
}
This means that we route all our requests to the server to the
index.html file. Therefore, all requests are going to load the
HTML file which will then download the bundle JavaScript file
which will then insert the React app into the "root" element of
the index.html file.
We can now run our server. First, we must install the node
modules for the build and serving of our application with the
following command:
npm install
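We can then build the bundle and serve it with the scripts defined in our package.json, presumably along these lines:
npm run build
npm run serve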
Our logs in our console will show a printout along the same
lines as the following:
Here, we can see that our server gets the index.html file, and
then gets the bundle.js file. This makes sense as our
index.html file loads the bundle.js file. We can also inspect
the network panel in our browser, and this will give us the same
log as seen in Figure 6.2.
Figure 6.2 – Network logs of serving our react app
We can see that the HTML document is served, and then the JavaScript script is served. And here we have it! Our development environment is ready to build and serve our React application.
We could have just used the create react app tool to create a React application and serve it. However, there is a reason why you're working through a book as opposed to speed-reading a tutorial online. Here we are taking the time to go through how our stack works and how the parts interact with each other. This gives us more flexibility and the ability to build a system when there are multiple moving parts. For instance, we could reconfigure our serve.json file to have the following configuration:
{
"rewrites": [
{
"source": "/test",
"destination": "./public/test.html"
},
{
"source": "**",
"destination": "./public/index.html"
}
]
}
ingress
├── Cargo.toml
├── frontend
│ ├── esbuild.js
│ ├── package.json
│ ├── serve.json
│ ├── public
│ │ ├── bundle.js
│ │ ├── bundle.js.map
│ │ └── index.html
│ └── src
│ └── index.tsx
└── src
└── main.rs
By the end of the book, our ingress will accept all incoming requests and either serve frontend assets or route to backend endpoints. All our nanoservices will essentially be compiled into the ingress workspace. In this chapter, we will only focus on the serving of frontend assets. However, by Chapter 11, Building RESTful Services, our entire system will be compiled into the ingress.
# File: ingress/Cargo.toml
[package]
name = "ingress"
version = "0.1.0"
edition = "2021"
[dependencies]
rust-embed = "8.3.0"
mime_guess = "2.0.4"
actix-web = "4.5.1"
tokio = {
version = "1.35.0",
features = ["macros", "rt-multi-thread"]
}
We are now at the stage where we can write our main.rs file in
our ingress workspace to define our server. Before we write any
server code, we must import the following structs and traits:
// File: ingress/src/main.rs
use std::path::Path;
use actix_web::{
    web, App, HttpServer, Responder, HttpResponse, HttpRequest
};
use rust_embed::RustEmbed;
// File: ingress/src/main.rs
async fn index() -> HttpResponse {
HttpResponse::Ok().content_type("text/html")
.body(include_str!("../index.ht
}
The include_str! macro embeds the contents of the file as a
string so once the Rust code is compiled, we do not need to
move the index.html file with the Rust binary, we only need
the compiled Rust binary to serve the HTML. Next, we embed
the ingress/frontend/public directory with the code below:
// File: ingress/src/main.rs
#[derive(RustEmbed)]
#[folder = "./frontend/public"]
struct FrontendAssets;
// File: ingress/src/main.rs
fn serve_frontend_asset(path: String) -> HttpResponse {
    let file = match Path::new(&path).file_name() {
        Some(file) => file.to_str().unwrap(),
        None => return HttpResponse::BadRequest()
                           .body("404 Not Found")
    };
    match FrontendAssets::get(file) {
        Some(content) => HttpResponse::Ok()
            .content_type(mime_guess::from_path(&file)
                .first_or_octet_stream().as_ref())
            .append_header(
                ("Cache-Control", "public, max-age=60")
            )
            .body(content.data),
        None => HttpResponse::NotFound().body("404 Not Found")
    }
}
// File: ingress/src/main.rs
async fn catch_all(req: HttpRequest) -> impl Responde
. . .
index().await
}
If the request has hit the catch_all function, it means that the request did not match any of the registered backend endpoints. So, for the first check, if the request has /api/ in the URL, we return a not found response, because such a request was clearly intended for a backend endpoint. Our first check is defined with the code below:
// File: ingress/src/main.rs (function = catch_all)
if req.path().contains("/api/") {
return HttpResponse::NotFound().finish()
}
For our next check, we can serve the frontend asset if there is a
/frontend/public path in the request with the following code:
Our next check is to inspect the file type with the code below:
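Both checks are elided in this excerpt; a hedged sketch of them (the exact matching rules in the book may differ) is below:
// File: ingress/src/main.rs (function = catch_all)
// serve embedded assets when the path points into the public directory
if req.path().contains("/frontend/public") {
    return serve_frontend_asset(req.path().to_string());
}
// if the path ends in a file extension such as .js or .css,
// treat it as a request for a frontend asset as well
if Path::new(req.path()).extension().is_some() {
    return serve_frontend_asset(req.path().to_string());
}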
And finally, if the request passes all these checks, we just serve the index.html file with the index().await return, as shown in the initial signature of the catch_all function.
We are finally at the point of defining our server that has our
catch_all function as the default service with the following
code:
// File: ingress/src/main.rs
#[tokio::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(|| {
App::new()
            .default_service(web::route().to(catch_all))
})
.bind("0.0.0.0:8001")?
.run()
.await
}
Figure 6.3 – Our Basic React App view from our Rust server
# File: ingress/Cargo.toml
[dependencies]
. . .
actix-cors = "0.7.0"
to_do_server = {
path = "../nanoservices/to_do/networking/actix_se
package = "actix_server"
}
We are going to use actix-cors to enable requests from other locations. Right now, this will not be an issue, as our frontend will be making requests from localhost on the same computer as the server. However, when we deploy our application, the React application will be running on a user's computer, and their requests will come from their computer, not the computer that our server is running on. We are also compiling our to-do nanoservice into our ingress binary. However, it must be noted that we assign the alias to_do_server to our actix_server package. This is because we might have another nanoservice whose networking layer is also called actix_server, and we do not want clashes.
// File: ingress/src/main.rs
. . .
use to_do_server::api::views_factory as to_do_views_factory;
use actix_cors::Cors;
We can now attach the views factory and CORs to our server
with the code below:
// File: ingress/src/main.rs
#[tokio::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(|| {
        let cors = Cors::default().allow_any_origin()
                                  .allow_any_method()
                                  .allow_any_header();
App::new()
.configure(to_do_views_factory)
.wrap(cors)
.default_service(web::route()
.to(catch_all))
})
.bind("0.0.0.0:8001")?
.run()
.await
}
And this is it, our server is now ready to serve our to-do
endpoints to anywhere in the world if we had our server
running on a public server that others can make requests to.
However, there are a couple of moving parts. For instance, you must build the frontend before embedding it. If you make a change in the frontend, there is a risk that you rebuild your server without rebuilding the frontend. This can lead to time wasted trying to figure out the simple mistake as to why your changes are not showing. We can automate this with the following bash script for running a server in the scripts directory of the ingress:
#!/usr/bin/env bash
# File: ingress/scripts/run_server.sh
# navigate to directory
SCRIPTPATH="$( cd "$(dirname "$0")" ; pwd -P )"
cd $SCRIPTPATH
cd ..
cd frontend
npm install
npm run build
cd ..
cargo clean
cargo run
We are now less likely to miss a step in our server build process
saving us from confusion. We can now move onto the frontend
which has the following new files and directories:
├── . . .
├── src
│ ├── api
│ │ ├── create.ts
│ │ ├── delete.ts
│ │ ├── get.ts
│ │ ├── update.ts
│ │ ├── url.ts
│ │ └── utils.ts
│ ├── index.tsx
│ └── interfaces
│ └── toDoItems.ts
. . .
// File: ingress/frontend/package.json
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0",
"axios": "^1.6.8"
}
// File: ingress/frontend/src/interfaces/toDoItems.ts
export enum TaskStatus {
PENDING = 'PENDING',
DONE = 'DONE'
}
export interface ToDoItem {
title: string;
status: TaskStatus;
}
export interface ToDoItems {
pending: ToDoItem[];
done: ToDoItem[];
}
These interfaces are merely the structure of the data that we are
passing between the server and the frontend. With these
interfaces, we will be able to create items, update them, delete
items, and get all items.
There is one more interface that we must construct, and this is the URL. We could define our URLs as and when we need them; however, this makes them harder to maintain. If we have all our routes defined in one place, we can update the mechanisms around constructing the URL, and we can keep track of all our URL endpoints in one place. Seeing as our URL will be used when making API calls, we can define our URL interface in the api/url.ts file with the following outline:
// File: ingress/frontend/src/api/url.ts
export class Url {
baseUrl: string;
create: string;
getAll: string;
update: string;
constructor() {
. . .
}
static getBaseUrl(): string {
. . .
}
deleteUrl(name: string): string {
return `${this.baseUrl}api/v1/delete/${name}`
}
}
Here we can see that our deleteUrl method is slightly different from the other endpoints, and this is because the name is in the URL. All the other URLs are defined in the constructor with the code below:
// File: ingress/frontend/src/api/url.ts
constructor() {
this.baseUrl = Url.getBaseUrl();
this.create = `${this.baseUrl}api/v1/create`;
this.getAll = `${this.baseUrl}api/v1/get/all`;
this.update = `${this.baseUrl}api/v1/update`;
}
// File: ingress/frontend/src/api/url.ts
static getBaseUrl(): string {
let url = window.location.href;
if (url.includes("http://localhost:3000/")) {
return "http://0.0.0.0:8001/";
}
return window.location.href;
}
We can now build the API utils that handle these interfaces. The utils file abstracts some repetitive code that all the other API calls will use so we do not have to repeat ourselves. Before we write our generic API call functions, we need to import the following:
// File: ingress/frontend/src/api/utils.ts
import axios, {AxiosResponse} from "axios";
// File: ingress/frontend/src/api/utils.ts
async function handleRequest<T, X>(
promise: Promise<AxiosResponse<X>>,
expectedResponse: number) {
let response: AxiosResponse<X>;
. . .
}
Here we can see that our general understanding of async
programming is helping us out. In this handle function, we
accept a promise of a generic request. In our Typescript code,
we can handle our promises just like we would handle our
futures in async Rust code. Inside the function, we await our
API call promise, and handle the outcome with the following
code:
// File: ingress/frontend/src/api/utils.ts
let response: AxiosResponse<X>;
try {
response = await promise;
} catch (error) {
return {
status: 500,
error: 'Network or other error occurred'
};
}
// File: ingress/frontend/src/api/utils.ts
export async function postCall<T, X>(
url: string, body: T,
expectedResponse: number) {
let response = axios.post<X | string>(
url,
body,
{
headers: {
'Content-Type': 'application/json',
'token': "jwt"
},
validateStatus: () => true
});
return handleRequest(response, expectedResponse);
}
// File: ingress/frontend/src/api/utils.ts
export async function getCall<X>(
url: string,
expectedResponse: number) {
let response = axios.get<X | string>(
url,
{
headers: {
'Content-Type': 'application/json',
'token': "jwt"
},
validateStatus: () => true
});
return handleRequest(response, expectedResponse);
}
And with this, our utils for API calls are fully defined, and we can use these utils to define our GET API call, whose function takes the form below:
// File: ingress/frontend/src/api/get.ts
import { ToDoItems } from "../interfaces/toDoItems";
import {getCall} from "./utils";
import { Url } from "./url";
export default async function getAll() {
let response = await getCall<ToDoItems>(
new Url().getAll,
200
);
return response;
}
And that is it! We can see how implementing new API calls is going to be easy. However, before we implement more endpoints, we must check that our API calls work. To test this, we add our API GET call to our app.
// File: ingress/frontend/src/index.tsx
import React, { useState } from 'react';
import ReactDOM from "react-dom/client";
import getAll from './api/get';
import {ToDoItems} from "./interfaces/toDoItems";
// File: ingress/frontend/src/index.tsx
const App = () => {
const [data, setData] = useState(null);
React.useEffect(() => {
. . .
}, []);
    return (
        <div>
            {data ? (
                <div>Data loaded: {JSON.stringify(data)}</div>
            ) : (
                <div>Loading...</div>
            )}
        </div>
    );
};
Here we have the useState which handles the state for the
App component. Our state is going to be the to-do items. We
also render "Loading…" if our state is not loaded for the App
component. If we have loaded the data, we render the data
instead. Our React.useEffect fires once the App component
has been loaded. In the React.useEffect , we make the GET
call with the following code:
// File: ingress/frontend/src/index.tsx
React.useEffect(() => {
const fetchData = async () => {
const response = await getAll<ToDoItems>();
setData(response.data);
}
fetchData();
}, []);
And there we have it! We can see that our API call is working,
and we can access our data from the server.
We now only need to build the create, delete, and update API calls. At this stage, it is a good time to try to build these functions yourself, as there is a bit of repetition with some slight variance. This is a good time to test and cement what you know about creating API calls.
// File: ingress/frontend/src/api/utils.ts
export async function deleteCall<X>(
url: string,
expectedResponse: number) {
let response = axios.delete<X | string>(
url,
{
headers: {
'Content-Type': 'application/json',
'token': "jwt"
},
validateStatus: () => true
});
return handleRequest(response, expectedResponse);
}
In the same file, our PUT function is built with the code below:
// File: ingress/frontend/src/api/utils.ts
export async function putCall<T, X>(
url: string, body: T,
expectedResponse: number) {
let response = axios.put<X | string>(
url,
body,
{
headers: {
'Content-Type': 'application/json',
'token': "jwt"
},
validateStatus: () => true
});
return handleRequest(response, expectedResponse);
}
// File: ingress/frontend/src/api/create.ts
import { ToDoItem, ToDoItems, TaskStatus }
from "../interfaces/toDoItems";
import { postCall } from "./utils";
import { Url } from "./url";
export async function createToDoItemCall(title: string) {
const toDoItem: ToDoItem = {
title: title,
status: TaskStatus.PENDING
};
return postCall<ToDoItem, ToDoItems>(
new Url().create,
toDoItem,
201
);
}
// File: ingress/frontend/src/api/update.ts
import { ToDoItem, ToDoItems, TaskStatus }
from "../interfaces/toDoItems";
import { putCall } from "./utils";
import { Url } from "./url";
export async function updateToDoItemCall(
name: string, status: TaskStatus) {
const toDoItem: ToDoItem = {
title: name,
status: status
};
return putCall<ToDoItem, ToDoItems>(
new Url().update,
toDoItem,
200
);
}
And now we have defined all the API endpoints that we need.
However, we need to interact with these API calls. We must
create some to-do item and form components so we can call
these API functions with data.
// File: ingress/frontend/src/components/CreateItemForm.tsx
import React, { useState } from 'react';
import { createToDoItemCall } from "../api/create";
interface CreateToDoItemProps {
passBackResponse: (response: any) => void;
}
Here we can see that we initially define our state, which is just a string, as all we are doing is keeping track of the name of the to-do item that we are creating. We are assuming that all to-do items being created will be pending and not complete. We then define the function that updates the state of the to-do item title every time there is a change in the input HTML element where the user is inputting the title. This means that every time the user changes the input contents for the title of the to-do item, the entire React component has access to that data, and we can alter the component however we want. We then define our API call and the tsx that the component renders.
For the create API call, our function takes the form below:
// File: ingress/frontend/src/components/CreateItemForm.tsx
const createItem = async () => {
await createToDoItemCall(title).then(response =>
setTitle("");
if (response.data) {
passBackResponse(response.data);
} else if (response.error) {
console.log(response);
console.log(
                `Error ${response.status}: ${response.error}`
);
}
});
};
Here we can see that we pass the title state into the API call, and
then reset the state of the title. We then execute the function
that was passed in via props if the API call was successful or
print out the error if the API call was unsuccessful.
For our render statement, we have the following code:
// File: ingress/frontend/src/components/CreateItemForm.tsx
return (
<div className="inputContainer">
<input type="text" id="name"
placeholder="create to do item"
value={title}
onChange={handleTitleChange}/>
<button className="actionButton"
id="create-button"
onClick={createItem}>Create</button>
</div>
);
Here we render the title state in the input value so the user can
see the state of the title. We then bind our listener to that input
and bind the API call function to our button. We have
referenced some CSS class names in the tsx. Do not worry, even
though we have not created them, the frontend will not crash if
we do not have the CSS classes. We will define our CSS classes in
the next section of this chapter. However, it is easier to
reference the CSS classes now when we are building the React
components to save us bloating the chapter by going back and
referencing the CSS classes in the components later.
Our form is now ready to import and use, but before we add this
form into our main application component, we should define
our to-do item component in our ToDoItem.tsx file. First, we
need the following imports and interface:
// File: ingress/frontend/src/components/ToDoItem.tsx
import React, {useEffect, useState} from 'react';
import {updateToDoItemCall} from "../api/update";
import { deleteToDoItemCall } from "../api/delete";
import {TaskStatus} from "../interfaces/toDoItems";
interface ToDoItemProps {
title: string;
status: string;
id: number;
passBackResponse: (response: any) => void;
}
Here we can see that we are passing in the title, status, and ID of
the to-do item. This makes sense as we want to render the item,
and handle operations on the to-do item in the backend via
making API calls in the to-do item component. With this
interface in mind, our to-do item component takes the form
below:
// File: ingress/frontend/src/components/ToDoItem.tsx
export const ToDoItem: React.FC<ToDoItemProps> = (
{ title, status, id, passBackResponse }) => {
    const [itemTitle, setTitle] = useState<string>(title);
const [button, setButton] = useState<string>('');
useEffect(() => {
        const processStatus = (status: string): string => {
            return status === "PENDING" ? "edit" : "delete";
};
setButton(processStatus(status));
}, [status]);
    const sendRequest = async (): Promise<void> => {
. . .
};
return (
. . .
);
}
Now that our state is defined, we can move on to the API call for editing the item. Here, we must check the type of button and make either the update or the delete API call depending on that button type. At this stage, it is a good time to try to build this API call yourself. There is some conditional logic that you must consider, and if you get this right, then you are truly comfortable making API calls to our backend. Remember to look at the to-do item component for a starting point.
// File: ingress/frontend/src/components/ToDoItem.tsx
const sendRequest = async (): Promise<void> => {
if (button === "edit") {
. . .
} else {
. . .
}
};
If our button is an edit button, we have the API call below:
// File: ingress/frontend/src/components/ToDoItem.tsx
await updateToDoItemCall(
itemTitle,
TaskStatus.DONE
).then(
response => {
if (response.data) {
passBackResponse(response.data);
}
else if (response.error) {
console.log(response);
}
}
)
If our button is a delete button, we make the delete API call below:
// File: ingress/frontend/src/components/ToDoItem.tsx
await deleteToDoItemCall(itemTitle).then(
response => {
if (response.data) {
passBackResponse(response.data);
}
else if (response.error) {
console.log(response);
}
}
)
And now our to-do item components will be able to make API calls to the backend by themselves when their button is clicked! Finally, we must render the to-do item with the following return statement:
// File: ingress/frontend/src/components/ToDoItem.tsx
return (
<div className="itemContainer" id={id}>
<p>{itemTitle}</p>
<button className="actionButton"
onClick={sendRequest}>
{button}
</button>
</div>
);
Here we can see that we merely render the button type and
bind our API call to the button on click.
And here we have it, the components are finished, and all we
need to do is place them in our application. In our index file for
our application, we now have the following imports:
// File: ingress/frontend/src/index.tsx
import React, { useState } from 'react';
import ReactDOM from "react-dom/client";
import getAll from './api/get';
import {ToDoItems} from "./interfaces/toDoItems";
import { ToDoItem } from "./components/ToDoItem";
import { CreateToDoItem } from './components/CreateItemForm';
// File: ingress/frontend/src/index.tsx
const App = () => {
const [data, setData] = useState(null);
function reRenderItems(items: ToDoItems) {
setData(items);
}
React.useEffect(() => {
const fetchData = async () => {
            const response = await getAll<ToDoItems>();
setData(response.data);
}
fetchData();
}, []);
if (!data) {
return <div>Loading...</div>;
}
else {
return (
. . .
);
}
};
You will be able to input a title of a new to-do item in the input,
and all buttons will work. However, as we can see in Figure 6.6,
there is no styling. We will now be moving onto inserting styles
with CSS.
With our components in place, we can now define the general style for our application with the following CSS:
/* File: ingress/frontend/src/App.css */
.App {
background-color: #92a8d1;
font-family: Arial, Helvetica, sans-serif;
height: 100vh;
}
We have now defined the general style for our body, but what about different screen sizes? For instance, if we were going to access our application on our phone, it should have different dimensions. We can see this in the following figure:
Figure 6.7 – Difference in margins between a phone and desktop
monitor
We can see in Figure 6.7 the ratio of the margin to the space that
is filled up by the to-do items list changes. With a phone there is
not much screen space so most of the screen needs to be taken
up by the to-do item; otherwise, we would not be able to read it.
However, if we are using a widescreen desktop monitor, we no longer need most of the screen for the to-do items. In fact, if the ratio was the same, the to-do items would be so stretched along the x-axis that they would be hard to read and frankly would not look good. This is where media queries come in. We can have
different style conditions based on attributes like the width and
height of the window. We will start with the phone
specification. So, if the width of the screen is up to 500 pixels, in
our CSS file we will define the following CSS configuration for
our body:
/* File: ingress/frontend/src/App.css */
@media(max-width: 500px) {
.App {
padding: 1px;
display: grid;
grid-template-columns: 1fr;
}
}
Here we can see that the padding around the edge of the page
and each element is just one pixel. We also have a grid display.
This is where we can define columns and rows. However, we do
not use it to its full extent. We just have one column. This
means that our to-do items will take up most of the screen like
in the phone depiction of Figure 6.7. Even though we are not
using a grid in this context, I have kept it in so you can see the
relationship this has to the other configurations for larger
screens. If our screen gets a little bigger, we then split our page
into three different vertical columns; however, the ratio of the
width of the middle column to that of the columns on either
side is 5:1. This is because our screen still is not very big, and we
want our items to still take up most of the screen. We can adjust
for this by adding another media query with different
parameters:
/* File: ingress/frontend/src/App.css */
@media(min-width: 501px) and (max-width: 550px) {
.App {
padding: 1px;
display: grid;
grid-template-columns: 1fr 5fr 1fr;
}
.mainContainer {
grid-column-start: 2;
}
}
We can also see that for our mainContainer CSS class, where we house our to-do items, we overwrite the grid-column-start property. If we did not, then the mainContainer would be squeezed into the left margin at 1fr width. Instead, we start and finish in the middle at 5fr. We could make our mainContainer span across multiple columns with the grid-column-end property.
If our screen gets larger, we then want to adjust the ratios even more, as we do not want our items' width to get out of control. To achieve this, we define a 3 to 1 ratio for the middle column versus the two side columns, and then a 1 to 1 ratio when the screen width goes above 1000px:
/* File: ingress/frontend/src/App.css */
@media(min-width: 551px) and (max-width: 1000px) {
.App {
padding: 1px;
display: grid;
grid-template-columns: 1fr 3fr 1fr;
}
.mainContainer {
grid-column-start: 2;
}
}
@media(min-width: 1001px) {
.App {
padding: 1px;
display: grid;
grid-template-columns: 1fr 1fr 1fr;
}
.mainContainer {
grid-column-start: 2;
}
}
Now that we have defined our general CSS for our entire
application, we can move onto our item container. Our item has
a different background color giving us the following definition:
/* File: ingress/frontend/src/App.css */
.itemContainer {
background: #034f84;
display: flex;
align-items: stretch;
justify-content: space-between;
margin: 0.3rem;
}
We can see that this class has a margin of 0.3rem. We use rem because we want the margin to scale relative to the font size of the root element. The align-items property ensures that all the children in the container, including the buttons, stretch to fill the container height. The flex display ensures that all items grow and shrink relative to each other and can be displayed next to each other. This will come in handy, as we want our action button for the to-do item to sit next to the title. We also want our item to slightly change color if our cursor hovers over it:
/* File: ingress/frontend/src/App.css */
.itemContainer:hover {
background: #034f99;
}
/* File: ingress/frontend/src/App.css */
.itemContainer p {
color: white;
display: inline-block;
margin: 0.5rem;
margin-right: 0.4rem;
margin-left: 0.4rem;
}
With our item title styled, the only item styling left is the action
button, which is either edit or delete. This action button is going
to float to the right with a different background color so we can
know where to click. To do this, we define our button style with
a class which is outlined in the following code:
/* File: ingress/frontend/src/App.css */
.actionButton {
display: inline-block;
float: right;
background: #f7786b;
border: none;
padding: 0.5rem;
padding-left: 2rem;
padding-right: 2rem;
color: white;
align-self: stretch;
}
/* File: ingress/frontend/src/App.css */
.actionButton:hover {
background: #f7686b;
color: black;
}
Now that we have covered all the concepts, we only need to define the styles for the input container, which can be done with the following code:
/* File: ingress/frontend/src/App.css */
.inputContainer {
background: #034f84;
display: flex;
align-items: stretch;
justify-content: space-between;
margin: 0.3rem;
margin-top: 2rem;
}
.inputContainer input {
display: inline-block;
margin: 0.4rem;
border: 2px solid transparent;
background: #ffffff;
color: #034f84;
}
While this defines the styling for our input, we want the user to know that they have selected the input so they can type. We can change the CSS of the input when it is clicked using focus, as seen below:
/* File: ingress/frontend/src/App.css */
.inputContainer input:focus {
outline: none;
border-color: #f7786b;
box-shadow: 0 0 5px #f7786b;
}
And finally, we define the CSS for the header with the following
code:
/* File: ingress/frontend/src/App.css */
.header {
background: #034f84;
margin-bottom: 0.3rem;
}
.header p {
color: white;
display: inline-block;
margin: 0.5rem;
margin-right: 0.4rem;
margin-left: 0.4rem;
}
And this is it, our system is now ready to run. If we run our
server, we should get the same view as seen in Figure 6.8.
Figure 6.8 – Our finished Application
Summary
Questions
Answers
https://packt.link/EarlyAccess/
You have probably heard about WASM. However, at the time of writing this book, the WASM ecosystem is still in an early, rapid development phase where APIs get outdated quickly, and groups trying out a new approach cease to exist but leave remnants of their approach throughout the internet. This can lead to frustration as you burn hours trying to figure out which API does what. However, the promise of compiling once and running anywhere, including the browser, is a clear advantage, so it makes sense to understand and get comfortable with WASM.
Technical requirements
The code for this chapter can be found at:
https://github.com/PacktPublishing/Rust-Web-Programming-3E/tree/main/chapter07
You will also need wasm-pack, which can be installed via the following link:
https://rustwasm.github.io/wasm-pack/installer/
Finally, you will need the WebAssembly Binary Toolkit (wabt), which provides the wasm2wat command we use later in this chapter:
https://github.com/WebAssembly/wabt
# ingress/frontend/rust-interface/Cargo.toml
. . .
[lib]
crate-type = ["cdylib"]
[dependencies]
wasm-bindgen = "0.2.92"
We can also see that the crate type of our Rust workspace is cdylib. This stands for a C-compatible dynamic library. A dynamic library is linked to when a program is compiled, and the compiled program then loads the dynamic library at runtime. There are pros and cons to this. For instance, you can reuse dynamic libraries in multiple programs, and these dynamic libraries are easier to update as they are not part of the build. Not being part of the build also reduces compile times, as we do not need to recompile the dynamic library when we are compiling the main code. Dynamic libraries will also reduce your main program's binary size. However, this is not free, as the compiler cannot optimize the code in the dynamic library with respect to the code of your main program. There is also additional overhead, as the program needs to load the dynamic library, and deployment can be trickier due to more moving parts.
For most of your web development projects you will not need to build dynamic libraries, as it is not worth the extra headache of linker errors and handling multiple moving parts. However, if you end up working on a massive codebase, then dynamic libraries become essential, especially if there are interfaces from other languages such as WASM. For instance, at the time of writing this book I am currently doing research in the King's College London Bioengineering department in surgical robotics. We need to interface with GPUs, so interacting with C++ dynamic libraries for these GPUs is essential; there is no getting around this. A lot of hardware support is mainly written as C and C++ dynamic libraries.
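As a minimal illustration (not from the book; the function name is hypothetical), exporting a C-compatible symbol from a cdylib crate looks like the following sketch:
// A hypothetical function exported from a cdylib. #[no_mangle] keeps
// the symbol name stable so other programs can locate it when they
// load the library at runtime.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}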
We need to compile our Rust binary to a WASM format and move it into our public directory. This will enable us to serve our WASM binary when needed. We could do this via the terminal, but there are a couple of commands pointing to specific directories. This is easy to get wrong, and we also do not want to bother other developers with how to build the WASM package, so we will just put our commands in the scripts section of our package.json with the following code:
// FILE: ingress/frontend/package.json
"wasm": "cd rust-interface && sudo wasm-pack build --target web
    && cp pkg/rust_interface_bg.wasm
    ../public/rust_interface_bg.wasm",
We will also want to trace our WASM binary to make sure that we are exporting the functions that we expect. We can do this with the wasm2wat command from the WebAssembly Binary Toolkit we installed in the technical requirements section, using the following scripts entry:
// FILE: ingress/frontend/package.json
"wasm-trace": "cd public &&
wasm2wat rust_interface_bg.wasm
> rust_interface_bg.wat"
// FILE: ingress/frontend/rust-interface/src/lib.rs
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn rust_generate_button_text(status: String) -> String {
    match status.to_uppercase().as_str() {
        "PENDING" => "edit".to_string(),
        "DONE" => "delete".to_string(),
        _ => "an error has occurred".to_string(),
    }
}
Running the wasm-trace command gives us the following exports in the generated .wat file:
// File: ingress/frontend/public/rust_interface_bg.wat
(export "memory" (memory 0))
(export "update_status" (func 9))
(export "__wbindgen_add_to_stack_pointer" (func 35))
(export "__wbindgen_malloc" (func 19))
(export "__wbindgen_realloc" (func 23))
(export "__wbindgen_free" (func 27))
Here, we will use the init function to get the WASM binary. The init function handles the request to the server for the WASM binary. It also handles the initialization of the WASM module, which means loading the WASM module into memory and importing the functions needed for the WASM library to run. We can see that we are loading the rust_generate_button_text function from a JavaScript file. If we look at the JavaScript file in the pkg directory of the WASM build and search for the rust_generate_button_text function, we get the following code. For now, this is just the outline of the function, but we will cover each section.
// File: ingress/frontend/rust-interface/pkg/rust_interface.js
let WASM_VECTOR_LEN = 0;
. . .
export function rust_generate_button_text(status) {
let deferred2_0;
let deferred2_1;
try {
. . .
} finally {
. . .
}
}
The first section moves the WASM stack pointer to make room for the return values:
// File: ingress/frontend/rust-interface/pkg/rust_interface.js
try {
    const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);
. . .
}
The next section writes our input string into the WASM memory, giving us a pointer and a length:
// File: ingress/frontend/rust-interface/pkg/rust_interface.js
const ptr0 = passStringToWasm0(
status,
wasm.__wbindgen_malloc,
wasm.__wbindgen_realloc
);
const len0 = WASM_VECTOR_LEN;
The function can then execute our function and get the result of
our function with the following code:
// File: ingress/frontend/rust-interface/pkg/rust_interface.js
wasm.rust_generate_button_text(retptr, ptr0, len0);
var r0 = getInt32Memory0()[retptr / 4 + 0];
var r1 = getInt32Memory0()[retptr / 4 + 1];
deferred2_0 = r0;
deferred2_1 = r1;
return getStringFromWasm0(r0, r1);
Here we can see that we pass in the pointer for our string buffer in memory, and the length of the buffer. This enables our WASM function to extract the string data from the memory and use it in the way that we intended. Our WASM function then puts the return string into the memory. The pointers for the return string are then extracted, and these are used to get the string from WASM. To get the string from WASM, you access the bytes in the memory using the pointer and the length of the buffer, and then deserialize those bytes to a string. Finally, the function deallocates the memory, as we no longer need it, with the code below:
// File: ingress/frontend/rust-interface/pkg/rust_interface.js
finally {
wasm.__wbindgen_add_to_stack_pointer(16);
wasm.__wbindgen_free(deferred2_0, deferred2_1, 1)
}
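For intuition, here is a minimal Rust-side sketch (not from the book) of how a pointer-and-length pair is turned back into a string, assuming the bytes are valid UTF-8 and the buffer is still alive:
// A sketch: rebuild a string from a raw pointer and a length. The
// caller must guarantee that ptr and len describe a live buffer.
unsafe fn string_from_parts(ptr: *const u8, len: usize) -> String {
    let bytes = std::slice::from_raw_parts(ptr, len);
    String::from_utf8_lossy(bytes).to_string()
}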
Moving back to our index.tsx, we still house our to-do items in data, but we declare that the WASM is ready with wasmReady, and we house the Rust function that is compiled to WASM with RustGenerateButtonText. We can then load our WASM binary and set the WASM ready flag with the code below:
// File: ingress/frontend/src/index.tsx
React.useEffect(() => {
init().then(() => {
        setRustGenerateButtonText(() => rust_generate_button_text);
setWasmReady(true);
}).catch(e => console.error(
"Error initializing WASM: ", e
));
}, []);
The fetching of the WASM binary might take some time. We do
not want to run the risk of getting the items from the server
before we get the WASM function because we need the WASM
function to create the text for each to-do item. Therefore, we
must refactor our loading of the to-do items with the following
code:
// File: ingress/frontend/src/index.tsx
React.useEffect(() => {
const fetchData = async () => {
if (wasmReady) {
            const response = await getAll<ToDoItems>(. . .);
setData(response.data);
}
};
if (wasmReady) {
fetchData();
}
}, [wasmReady]);
Here we can see that the useEffect will only fire when wasmReady changes, based on the dependency declared by [wasmReady]. We do not know the future, and wasmReady might change back to false later; therefore, even when wasmReady changes, we check that wasmReady is true before making calls to get the to-do items from the server. With the WASM function loaded, we can now use it to generate the button text when rendering our items:
// File: ingress/frontend/src/index.tsx
<h1>Pending Items</h1>
<div>
{data.pending.map((item, index) => (
<><ToDoItem key={item.title + item.status}
title={item.title}
buttonMessage={
                    RustGenerateButtonText(item.status)
}
id={item.id}
                passBackResponse={reRenderItems}/>
</>
))}
</div>
<h1>Done Items</h1>
<div>
{data.done.map((item, index) => (
<><ToDoItem key={item.title + item.status}
title={item.title}
buttonMessage={
                    RustGenerateButtonText(item.status)
}
id={item.id}
                passBackResponse={reRenderItems}/>
</>
))}
</div>
Our ToDoItem component then accepts the button message as a prop, as seen in the following code:
// File: ingress/frontend/src/components/ToDoItem.tsx
interface ToDoItemProps {
title: string;
id: number;
passBackResponse: (response: any) => void;
buttonMessage: string;
}
export const ToDoItem: React.FC<ToDoItemProps> = (
{ title, id, passBackResponse, buttonMessage }) =
    const sendRequest = async (): Promise<void> => {
// The send request code is the same
. . .
};
return (
        <div className="itemContainer" id={String(id)}>
<p>{title}</p>
            <button
                className="actionButton"
                onClick={sendRequest}>{buttonMessage}
            </button>
        </div>
);
}
In Figure 7.2 we can see that the bundles are loaded, then the WASM is loaded, and then finally the to-do items are loaded. The fact that our buttons are rendering proves that our WASM function works as we intended it to.
And there we have it, we now have WASM loading and running
in the frontend of our application! We can execute Rust in the
browser!
Our new workspace for interacting with WASM directly has the following structure:
├── kernel
│ ├── Cargo.toml
│ └── src
│ └── lib.rs
├── wasm-client
│ ├── Cargo.toml
│ └── src
│ └── main.rs
└── wasm-lib
├── Cargo.toml
├── scripts
│ └── build.sh
└── src
└── lib.rs
Seeing as both the WASM lib and WASM client rely on the kernel, we will start with the kernel.
We could just build the same structs in both the WASM client and library, and these would technically work, as we are going to serialize our structs before sending them over the WASM boundary. However, maintaining consistency when there are multiple duplicates of a struct is harder, and if both client and lib end up being compiled into the same Rust program later, the compiler will recognize that the structs have the same name but are defined in different places. This will result in us manually converting the struct from the client to the lib struct to satisfy the compiler. Considering that it is harder to maintain consistency and that the two different structs will not satisfy the compiler, it makes sense to have a single source of truth for the data struct: a kernel with its own workspace. Because the kernel is in its own workspace, we can then compile it into any program that needs to communicate with another that has also compiled the kernel, as seen in Figure 7.3.
Figure 7.3 – Kernel enables easy scaling of programs
communicating.
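To make the duplication problem concrete, here is a minimal sketch (not from the book; the module names are illustrative) of two identically named structs that the compiler treats as distinct types, forcing a manual conversion:
mod client {
    pub struct SomeDataStruct { pub name: String }
}
mod lib_side {
    pub struct SomeDataStruct { pub name: String }
}

// Same name, different paths: these are two distinct types, so we
// must write the conversion ourselves.
impl From<client::SomeDataStruct> for lib_side::SomeDataStruct {
    fn from(c: client::SomeDataStruct) -> Self {
        lib_side::SomeDataStruct { name: c.name }
    }
}

fn main() {
    let c = client::SomeDataStruct { name: "task".to_string() };
    let l: lib_side::SomeDataStruct = c.into();
    println!("{}", l.name);
}
With the kernel as the single source of truth, both sides import the same type and no conversion is needed. Our kernel only needs serde as a dependency: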
// File: kernel/Cargo.toml
[dependencies]
serde = { version = "1.0.201", features = ["derive"] }
Now with our dependency defined, we can build out the basic
data struct with the code below:
// File: kernel/src/lib.rs
use serde::{Serialize, Deserialize};
#[derive(Serialize, Deserialize, Debug)]
pub struct SomeDataStruct {
pub name: String,
pub names: Vec<String>,
}
We also define a struct with a C-compatible memory layout for returning the pointer and length of a result buffer over the WASM boundary:
// File: kernel/src/lib.rs
#[repr(C)]
pub struct ResultPointer {
pub ptr: i32,
pub len: i32
}
// File: wasm-lib/Cargo.toml
[dependencies]
bincode = "1.3.3"
kernel = { path = "../kernel" }
serde = { version = "1.0.201", features = ["derive"] }
[lib]
crate-type = ["cdylib"]
With our kernel defined, we can build our WASM library, starting with an allocation function that the host can call to reserve memory inside the WASM instance:
// File: wasm-lib/src/lib.rs
use std::alloc::Layout;
// these kernel types are used by the entry point below
use kernel::{SomeDataStruct, ResultPointer};

#[no_mangle]
pub unsafe extern "C" fn ns_malloc(
    size: u32, alignment: u32
) -> *mut u8 {
    let layout = Layout::from_size_align_unchecked(
        size as usize, alignment as usize
    );
    std::alloc::alloc(layout)
}
We also need a matching free function:
// File: wasm-lib/src/lib.rs
#[no_mangle]
pub unsafe extern "C" fn ns_free(
    ptr: *mut u8, size: u32, alignment: u32
) {
    let layout = Layout::from_size_align_unchecked(
        size as usize, alignment as usize
    );
    std::alloc::dealloc(ptr, layout);
}
Here we can see that we get the layout again, but we apply a
deallocation as opposed to an allocation.
Next, we define the entry point of our WASM library with the following signature:
// File: wasm-lib/src/lib.rs
#[no_mangle]
pub extern "C" fn entry_point(
    ptr: *const u8, len: usize
) -> *const ResultPointer {
    . . .
}
Here we accept the pointer and length, and then return the
ResultPointer so the client code can try and extract the result.
Inside our entry point function, we get the bytes from the
memory, deserialize the bytes to our SomeDataStruct struct,
and then alter the deserialized struct with the following code:
// File: wasm-lib/src/lib.rs
let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
let mut data_struct: SomeDataStruct = bincode::deserialize(
    bytes
).unwrap();
data_struct.names.push("new name".to_string());
That's it, that's the logic of our WASM program done, so we now
must serialize the data, and get the length and pointer to the
serialized data with the code below:
// File: wasm-lib/src/lib.rs
let serialized_data = bincode::serialize(
&data_struct
).unwrap();
let len = serialized_data.len();
let out_ptr = serialized_data.leak().as_ptr();
You can see that we are just getting the pointer from the serialized struct. Here, the memory is deliberately not cleaned up automatically; if it were, foreign function interfaces would be hard to implement. We can just return the pointer, and then manually clean up the memory later when we need to. Now that we have our pointer and the length of the memory buffer, we can return the pointer and length with the following code:
// File: wasm-lib/src/lib.rs
let result = Box::new(ResultPointer{
ptr: out_ptr as i32,
len: len as i32
});
Box::into_raw(result) as *const ResultPointer
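To see the leak-and-reclaim pattern in isolation, here is a minimal sketch (not from the book) of leaking a vector to obtain a stable pointer and later rebuilding it so the allocator can free the memory:
fn main() {
    // Leaking gives a stable pointer we can hand over an FFI boundary.
    let data: Vec<u8> = vec![1, 2, 3];
    let len = data.len();
    let ptr = data.leak().as_ptr();
    // ... (ptr, len) would cross the boundary here ...
    // Rebuilding the Vec hands the allocation back to Rust's allocator.
    // This assumes the capacity equals the length, as it does here.
    let reclaimed = unsafe {
        Vec::from_raw_parts(ptr as *mut u8, len, len)
    };
    drop(reclaimed); // the leaked memory is freed here
}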
Our library is nearly done. All the Rust code is finished, but we now must build a bash script that builds the WASM binary and copies it to where we want it to be. Our bash script takes the following form:
# File: wasm-lib/scripts/build.sh
#!/bin/bash
# navigate to directory
SCRIPTPATH="$( cd "$(dirname "$0")" ; pwd -P )"
cd $SCRIPTPATH
cd ..
cargo build --release --target wasm32-wasi
cp target/wasm32-wasi/release/wasm_lib.wasm ./wasm_lib.wasm
wasm2wat ./wasm_lib.wasm > ./wasm_lib.wat
This script takes a similar form to our build bash script in our
frontend. We are now ready to build the client.
// File: wasm-client/Cargo.toml
[dependencies]
serde = { version = "1.0.197", features = ["derive"] }
tokio = { version = "1.37.0", features = ["full"] }
bincode = "1.3.3"
kernel = { path = "../kernel" }
wasmtime-wasi = { version = "20.0.0", features = ["preview1"] }
wasmtime = "20.0.0"
// File: wasm-client/src/main.rs
use wasmtime::{Result, Engine, Linker, Module, Store, Config};
use wasmtime_wasi::preview1::{self, WasiP1Ctx};
use wasmtime_wasi::WasiCtxBuilder;
use std::mem::size_of;
use std::slice::from_raw_parts;
use kernel::{
SomeDataStruct,
ResultPointer
};
#[tokio::main]
async fn main() -> Result<()> {
. . .
}
// File: wasm-client/src/main.rs
let mut config = Config::new();
config.async_support(true);
let engine = Engine::new(&config).unwrap();
let module = Module::from_file(
&engine, "../wasm-lib/wasm_lib.wasm"
).unwrap();
Now that we have loaded our WASM library into our engine, we can link in the preview1 WASI functions and build our WASI context with the code below:
// File: wasm-client/src/main.rs
let mut linker: Linker<WasiP1Ctx> = Linker::new(&engine);
preview1::add_to_linker_async(&mut linker, |t| t).unwrap();
let pre = linker.instantiate_pre(&module)?;
let wasi_ctx = WasiCtxBuilder::new()
.inherit_stdio()
.inherit_env()
.build_p1();
Now that we have the wasmtime runtime built with our config, we can create a memory store and create an instance of our WASM module with the following code:
// File: wasm-client/src/main.rs
let mut store = Store::new(&engine, wasi_ctx);
let instance = pre.instantiate_async(&mut store).await.unwrap();
With the instance created, we are now at the stage where we serialize our data struct with the code below:
// File: wasm-client/src/main.rs
let data_struct = SomeDataStruct {
    names: vec!["name1".to_string(), "name2".to_string()],
name: "name3".to_string()
};
let serialized = bincode::serialize(&data_struct).unwrap();
With our serialized struct, we can assign our struct bytes to our
WASM instance memory with the following code:
// File: wasm-client/src/main.rs
// allocate the memory for the input data
let malloc = instance.get_typed_func::<(i32, i32), i32>(
&mut store, "ns_malloc"
).unwrap();
let input_data_ptr = malloc.call_async(
&mut store, (serialized.len() as i32, 0)
).await.unwrap();
// write the contract to the memory
let memory = instance.get_memory(
&mut store, "memory"
).unwrap();
memory.write(
&mut store, input_data_ptr as usize, &serialized
).unwrap();
Our serialized struct is now in the memory of the WASM instance. We can now get our entry point function from the WASM module, and then call the entry point function to get the pointer to the result with the code below:
// File: wasm-client/src/main.rs
let entry_point = instance.get_typed_func::<(i32, i32), i32>(
&mut store, "entry_point"
).unwrap();
let ret = entry_point.call_async(
    &mut store, (input_data_ptr, serialized.len() as i32)
).await.unwrap();
We can now use the pointer to read the bytes from the memory.
First, we get the result pointer from the memory with the
following code:
// File: wasm-client/src/main.rs
let mut result_buffer = Vec::with_capacity(
size_of::<ResultPointer>()
);
for _ in 0..size_of::<ResultPointer>() {
result_buffer.push(0);
}
memory.read(
&mut store, ret as usize, &mut result_buffer
).unwrap();
let result_struct = unsafe {
    &from_raw_parts::<ResultPointer>(
        result_buffer.as_ptr() as *const ResultPointer, 1
    )[0]
};
Now that we have our pointer, we use this result pointer to read
the struct we want from memory with the code below:
// File: wasm-client/src/main.rs
let mut output_buffer: Vec<u8> = Vec::with_capacity(
result_struct.len as usize
);
output_buffer.resize(result_struct.len as usize, 0);
memory.read(
    &mut store, result_struct.ptr as usize, &mut output_buffer
).unwrap();
let output = bincode::deserialize::<SomeDataStruct>(
&output_buffer
).unwrap();
println!("Output contract: {:?}", output);
Finally, we free all the memory that we allocated: the input buffer, the result buffer, and the ResultPointer struct itself, with the code below:
// File: wasm-client/src/main.rs
let free = instance.get_typed_func::<(i32, i32, i32), ()>(
    &mut store, "ns_free"
).unwrap();
free.call_async(
    &mut store, (input_data_ptr, serialized.len() as i32, 0)
).await.unwrap();
free.call_async(
    &mut store, (result_struct.ptr, result_struct.len, 0)
).await.unwrap();
free.call_async(
    &mut store, (ret, size_of::<ResultPointer>() as i32, 0)
).await.unwrap();
Ok(())
When we run our client, we get the following printout:
Running `target/debug/wasm-client`
Output contract: SomeDataStruct {
name: "name3",
names: ["name1", "name2", "new name"]
}
Here, we can see that our struct was passed through to the WASM module, altered, and then passed back to the Rust host to be displayed. We have now managed to directly interact with WASM without any helper crates creating the interfaces in the WASM library. As the features and support for WASM increase, you will be able to interact with these updates and feel confident to run whatever the Bytecode Alliance throws your way.
Summary
In this chapter we got to grips with packaging WASM for the frontend and interacting with WASM using Rust. While we did not build anything substantial in WASM, we focused on building foundational knowledge of how WASM is interacted with. At the time of writing this book, the APIs and features of WASM are rapidly changing. If we built a feature-rich WASM program using the current APIs, this book would age quickly, and you would have to Google the new APIs to figure out what you need to change to get your system running. Keeping an eye on WASM is a good idea, as the ability to compile once and run anywhere, including the browser, is a strong advantage to have.
Questions
Answers
https://packt.link/EarlyAccess/
By this point in the book, the frontend for our application has
been defined, and our app is working at face value. However,
we know that our app is reading and writing from a JSON file.
In this chapter, we get rid of our JSON file and introduce a
PostgreSQL database to store our data. We do this by setting up
a database development environment using Docker. We then
build data models in Rust to interact with the database,
refactoring our app so that the create, edit, and delete endpoints
interact with the database instead of the JSON file. Finally, we
exploit Rust traits so any database handle that has implemented
our database transaction traits can be swapped into the
application with minimal effort.
Technical requirements
Right now, all we do is read the whole data file, alter an item in the whole dataset, and write the whole dataset to the JSON file. This is not efficient and will not scale well. It also inhibits us from linking these to-do items to another data model, like a user. Plus, right now we can only search using the status. If we used a SQL database with a user table that is linked to a to-do item table, we would be able to filter to-do items based on the user, status, or title, or even a combination thereof. When it comes to running our database, we are going to use Docker. So why should we use Docker?
docker container ls -a
docker image ls
version: "3.7"
services:
postgres:
container_name: 'to-do-postgres'
image: 'postgres:11.2'
restart: always
ports:
- '5432:5432'
environment:
- 'POSTGRES_USER=username'
- 'POSTGRES_DB=to_do'
- 'POSTGRES_PASSWORD=password'
In the preceding code, at the top of the file, we have defined the
version. Older versions such as 2 or 1 have different styles in
which the file is laid out. The different versions also support
different arguments. At the time of writing this book, version 3
is the latest version. The following URL covers the changes
between each docker-compose version:
https://docs.docker.com/compose/compose-file/compose-versioning/
If it helps to see the structure, the same docker-compose file maps to the following JSON:
{
"version": "3.7",
"services": {
"postgres": {
"container_name": "to-do-postgres",
"image": "postgres:11.2",
"restart": "always",
"ports": [
"5432:5432"
],
"environment": [
"POSTGRES_USER=username",
"POSTGRES_DB=to_do",
"POSTGRES_PASSWORD=password"
]
}
}
}
We can then spin up our container with the following command:
docker-compose up
As you can see, the date and time will vary. However, what we are told here is that our database is ready to accept connections. Yes, it is really that easy, which is why Docker adoption has been unstoppable. Pressing Ctrl + C will stop our docker-compose, thus shutting down our postgres container.
We can inspect the stopped container by listing all containers with the following command:
docker container ls -a
In the preceding output, we can see that all the parameters are
there. The ports, however, are empty because we stopped our
service.
If we want to run multiple databases, we can add a second service mapped to a different outside port, as seen in the following docker-compose file:
version: "3.7"
services:
postgres:
container_name: 'to-do-postgres'
image: 'postgres:11.2'
restart: always
ports:
- '5432:5432'
environment:
- 'POSTGRES_USER=username'
- 'POSTGRES_DB=to_do'
- 'POSTGRES_PASSWORD=password'
postgres_two:
container_name: 'to-do-postgres_two'
image: 'postgres:11.2'
restart: always
ports:
- '5433:5432'
environment:
- 'POSTGRES_USER=username'
- 'POSTGRES_DB=to_do'
- 'POSTGRES_PASSWORD=password'
We can list our images with the following command:
docker image ls
In the preceding output, we can see that our image has been
pulled from the postgres repository. We also have a
unique/random ID for the image, and we also have a date for
when that image was created.
We can also run our docker-compose in the background with the following command:
docker-compose up -d
The preceding command just tells us which containers have
been spun up with the following output:
We can see our status when we list our containers with the
following output:
In the previous output, the other tags are the same, but we can also see that the STATUS tag tells us how long the container has been running, and which port it is occupying. Whilst our docker-compose is running in the background, it does not mean we cannot see what is going on. We can access the logs of the container at any time by calling the logs command and referencing the ID of the container with the following command:
docker logs <CONTAINER ID>
When we are finished, we can stop our containers with the following command:
docker-compose stop
Alternatively, we can run the down command:
docker-compose down
The down command will also stop our containers. However, the
down command will delete the container. If our database
container is deleted, we will also lose all our data.
To avoid using the database before it is ready, we can write a wait_for_database.sh bash script that waits until the database accepts connections:
#!/bin/bash
cd ..
docker-compose up -d
until pg_isready -h localhost -p 5432 -U username
do
echo "Waiting for postgres"
sleep 2;
done
echo "docker is now running"
docker-compose down
❯ sh wait_for_database.sh
[+] Running 0/0
⠋ Network web_app_default Creating 0.2s
⠿ Container to-do-postgres Started 1.5s
localhost:5432 - no response
Waiting for postgres
localhost:5432 - no response
Waiting for postgres
localhost:5432 - accepting connections
docker is now running
[+] Running 1/1
⠿ Container to-do-postgres Removed 1.2s
⠿ Network web_app_default Removed
Our data access layer now has the following structure:
├── Cargo.toml
└── src
├── connections
│ ├── mod.rs
│ └── sqlx_postgres.rs
├── json_file.rs
├── lib.rs
└── to_do_items
├── descriptors.rs
├── enums.rs
├── mod.rs
├── schema.rs
└── transactions
├── create.rs
├── delete.rs
├── get.rs
├── mod.rs
└── update.rs
Here, we can see the outline of the data access layer. To support multiple storage engines, we define the following features in our Cargo.toml:
# file: nanoservices/to_do/dal/Cargo.toml
[features]
json-file = ["serde_json"]
sqlx-postgres = ["sqlx", "once_cell"]
[dependencies]
serde = { version = "1.0.197", features = ["derive"] }
glue = { path = "../../../glue"}
# for json-file
serde_json ={ version="1.0.114", optional = true }
# for sqlx-postgres
sqlx = {
version = "0.7.4",
features = ["postgres", "json"],
optional = true
}
once_cell = { version = "1.19.0", optional = true }
Here we can see that our data access layer is aiming to support
both SQLX and standard JSON file storage engines depending on
the feature that is selected. We can also see that we are selective
with what crates we use. We do not want to be compiling SQLX
if we are just using the json-file feature.
// file: nanoservices/to_do/dal/src/lib.rs
pub mod to_do_items;
pub mod connections;
#[cfg(feature = "json-file")]
pub mod json_file;
// file: nanoservices/to_do/dal/src/connections/mod.rs
#[cfg(feature = "sqlx-postgres")]
pub mod sqlx_postgres;
// file: nanoservices/to_do/dal/src/to_do_items/schema.rs
use std::fmt;
use glue::errors::NanoServiceError;
use serde::{Serialize, Deserialize};
use super::enums::TaskStatus;
use std::collections::HashMap;
With these imports, we can build out our to-do item structs. However, we need two separate structs. In a Postgres database, we want to be able to assign an ID to each row of the database table. However, when we are inserting the to-do item into the database, we will not know the ID; therefore, we need a new to-do item struct that does not have an ID, which takes the form below:
// file: nanoservices/to_do/dal/src/to_do_items/schema.rs
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct NewToDoItem {
pub title: String,
pub status: TaskStatus
}
Once we use the NewToDoItem to store the new to-do item into
the database, we can make queries to the database to get the to-
do item which is defined below:
// file: nanoservices/to_do/dal/src/to_do_items/schema.rs
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]
#[cfg_attr(feature = "sqlx-postgres", derive(sqlx::FromRow))]
pub struct ToDoItem {
pub id: i32,
pub title: String,
pub status: String
}
Here, we can see that we derive the FromRow trait if our sqlx-postgres feature is activated, as we need the FromRow trait to use the ToDoItem as the return type for the query.
We then define the descriptor structs that will implement our storage traits for each engine:
// file: nanoservices/to_do/dal/src/to_do_items/descriptors.rs
pub struct SqlxPostGresDescriptor;
pub struct JsonFileDescriptor;
And make our code for the to-do items public with the following
code:
// file: nanoservices/to_do/dal/src/to_do_items/mod.rs
pub mod schema;
pub mod enums;
pub mod descriptors;
pub mod transactions;
And with this, we can move onto defining our database
transactions.
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/mod.rs
pub mod create;
pub mod delete;
pub mod get;
pub mod update;
Now, that is a lot of imports, but we can see that, gated behind the features, we are importing enough to support both our JSON file and SQLX Postgres storage engines.
Before we write any logic for either our file or Postgres engine,
we must define our transaction signature with the trait below:
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/create.rs
pub trait SaveOne {
    fn save_one(item: NewToDoItem) ->
        impl Future<Output = Result<ToDoItem, NanoServiceError>>;
}
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/create.rs
#[cfg(feature = "sqlx-postgres")]
impl SaveOne for SqlxPostGresDescriptor {
    fn save_one(item: NewToDoItem) ->
        impl Future<Output = Result<ToDoItem, NanoServiceError>>
    {
        sqlx_postgres_save_one(item)
    }
}
#[cfg(feature = "json-file")]
impl SaveOne for JsonFileDescriptor {
    fn save_one(item: NewToDoItem) ->
        impl Future<Output = Result<ToDoItem, NanoServiceError>>
    {
        json_file_save_one(item)
    }
}
Here, we can see that we merely pass the item into an async function without awaiting it. When we call the trait function, the future is constructed and returned, so the caller can await the future.
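This pattern can be seen in isolation with the following minimal sketch (not from the book; the function names are illustrative):
use std::future::Future;

async fn double(x: i32) -> i32 { x * 2 }

// A plain function can return the future produced by an async
// function without awaiting it; nothing runs until the caller awaits.
fn make_future(x: i32) -> impl Future<Output = i32> {
    double(x)
}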
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/create.rs
#[cfg(feature = "sqlx-postgres")]
async fn sqlx_postgres_save_one(item: NewToDoItem)
-> Result<ToDoItem, NanoServiceError> {
let item = sqlx::query_as::<_, ToDoItem>("
INSERT INTO to_do_items (title, status)
VALUES ($1, $2)
RETURNING *"
).bind(item.title)
.bind(item.status.to_string())
    .fetch_one(&*SQLX_POSTGRES_POOL).await.map_err(|e| {
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
Ok(item)
}
Here we can see that we write an SQL query that inserts a new
row into the table to_do_items . We will cover the creation of
the to_do_items table in the migrations section. Once the SQL
query is defined, we bind the fields of the input item to the
query.
The ; indicates that the query has finished and that a new SQL query is about to run. The next SQL statement in the string, which we claimed was the title of the to-do item, executes an SQL query that drops the entire table, wiping all our data. When working with SQL queries, it is important to use the sanitization that the SQL library provides to protect against SQL injections. SQLX functions like bind pass the value to the database as a query parameter instead of splicing it into the SQL string, so the value is never executed as SQL, protecting us from SQL injections.
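As a hypothetical illustration of the attack the box describes (this input is not from the book), a malicious title could look like the following; passed through bind, it is stored as a harmless string instead of being executed:
fn main() {
    // Spliced raw into the SQL string, the quote and semicolon would
    // terminate the INSERT, and the DROP TABLE statement would run
    // next, wiping the table. Bound as a parameter, it is just data.
    let malicious_title = "bad'); DROP TABLE to_do_items; --";
    println!("{}", malicious_title);
}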
For our JSON file, we are not going to be using IDs, but we must
have the same signature as the trait, therefore, our JSON file
function takes the following form:
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/create.rs
#[cfg(feature = "json-file")]
async fn json_file_save_one(item: NewToDoItem)
-> Result<ToDoItem, NanoServiceError> {
    let mut tasks = get_all::<ToDoItem>().unwrap_or_else(|_|
HashMap::new()
);
let to_do_item = ToDoItem {
id: 1,
title: item.title,
status: item.status.to_string()
};
tasks.insert(
to_do_item.title.to_string(),
to_do_item.clone()
);
let _ = save_all(&tasks)?;
Ok(to_do_item)
}
Here, we can see that we merely assign an ID of one and leave it at that. We could build our own ID system where we increase the ID by one every time we insert a row, but this would excessively bloat the chapter for a storage engine that we are moving away from. We do not need the ID of the to-do item to carry out the tasks, so it makes sense to keep the JSON file API functional but warn other developers to switch over to the Postgres database as soon as possible.
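If we did want such an ID system, a minimal sketch (not the book's code; the item type here is a stand-in for ToDoItem) could scan the existing items and take the maximum ID plus one:
use std::collections::HashMap;

// A stand-in for the ToDoItem struct.
struct Item { id: i32 }

// Scan the stored items and produce the next free ID.
fn next_id(tasks: &HashMap<String, Item>) -> i32 {
    tasks.values().map(|item| item.id).max().unwrap_or(0) + 1
}
With our create transactions done, we can move on to our delete transactions, starting with the imports: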
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/delete.rs
use crate::to_do_items::schema::ToDoItem;
use glue::errors::NanoServiceError;
use std::future::Future;
#[cfg(feature = "json-file")]
use super::super::descriptors::JsonFileDescriptor;
#[cfg(feature = "json-file")]
use crate::json_file::{get_all, save_all};
#[cfg(feature = "json-file")]
use std::collections::HashMap;
#[cfg(feature = "sqlx-postgres")]
use crate::connections::sqlx_postgres::SQLX_POSTGRES_POOL;
#[cfg(feature = "sqlx-postgres")]
use super::super::descriptors::SqlxPostGresDescriptor;
#[cfg(any(feature = "json-file", feature = "sqlx-postgres"))]
use glue::errors::NanoServiceErrorStatus;
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/delete.rs
pub trait DeleteOne {
    fn delete_one(title: String) ->
        impl Future<Output = Result<ToDoItem, NanoServiceError>>;
}
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/delete.rs
#[cfg(feature = "sqlx-postgres")]
impl DeleteOne for SqlxPostGresDescriptor {
    fn delete_one(title: String) ->
        impl Future<Output = Result<ToDoItem, NanoServiceError>> {
        sqlx_postgres_delete_one(title)
    }
}
#[cfg(feature = "json-file")]
impl DeleteOne for JsonFileDescriptor {
    fn delete_one(title: String) ->
        impl Future<Output = Result<ToDoItem, NanoServiceError>> {
        json_file_delete_one(title)
    }
}
When it comes to our Postgres query, we use the title passed into the delete function, as seen in the following code:
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/delete.rs
#[cfg(feature = "sqlx-postgres")]
async fn sqlx_postgres_delete_one(title: String) ->
Result<ToDoItem, NanoServiceError> {
    let item = sqlx::query_as::<_, ToDoItem>("
        DELETE FROM to_do_items
        WHERE title = $1
        RETURNING *"
    ).bind(title)
    .fetch_one(&*SQLX_POSTGRES_POOL).await.map_err(|e| {
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
Ok(item)
}
As for our JSON file to-do items, we use the title of the to-do item
to delete the item with the code below:
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/delete.rs
#[cfg(feature = "json-file")]
async fn json_file_delete_one(title: String) ->
Result<ToDoItem, NanoServiceError> {
    let mut tasks = get_all::<ToDoItem>().unwrap_or_else(|_|
HashMap::new()
);
let to_do_item = tasks.remove(
&title
).ok_or_else(|| {
NanoServiceError::new(
"Item not found".to_string(),
NanoServiceErrorStatus::NotFound
)
})?;
let _ = save_all(&tasks)?;
Ok(to_do_item)
}
# nanoservices/to_do/core/Cargo.toml
[dependencies]
dal = { path = "../dal" }
serde = { version = "1.0.197", features = ["derive"] }
glue = { path = "../../../glue"}
We can now redefine our create endpoint with the code below:
// nanoservices/to_do/core/src/api/basic_actions/create.rs
use dal::to_do_items::schema::{NewToDoItem, ToDoItem};
use dal::to_do_items::transactions::create::SaveOne;
use glue::errors::NanoServiceError;
pub async fn create<T: SaveOne>(item: NewToDoItem)
-> Result<ToDoItem, NanoServiceError> {
let created_item = T::save_one(item).await?;
Ok(created_item)
}
// nanoservices/to_do/core/src/api/basic_actions/delete.rs
use dal::to_do_items::transactions::delete::DeleteOne;
use glue::errors::NanoServiceError;
pub async fn delete<T: DeleteOne>(id: &str)
-> Result<(), NanoServiceError> {
let _ = T::delete_one(id.to_string()).await?;
Ok(())
}
Here we can see that we still accept the title and return nothing, and our trait performs the action on the storage. Here, we can see that our interface is getting more complicated but not really doing much apart from calling a function. This is because a to-do application is simply storing items in a database. The to-do application was chosen to prevent excessive bloat so we can just focus on concepts. However, the core of another, more complicated application would be making calls on different storage engines, checking outcomes, performing calculations, and firing off processes such as statistics for dashboards, or emails. Remember, thanks to traits, the core is now the IO-agnostic center where your business logic is coded. Your core could be slotted into a desktop app with frameworks like Tauri, or be just a binary on a computer that uses stdio to have data piped in and out of it. At the time of writing this, I am an Honorary Researcher in the bioengineering department at King's College London. The projects are in the surgical robotics department, and using IO-agnostic cores is essential. For instance, one operating theatre might have terrible signal due to lead-lined walls that protect against radiation from scanning equipment; therefore, we must interact with a cable. A lab at a central hospital might have access to GPUs, so interacting with those is very beneficial. However, we also need to consider that not every lab or operating theatre will have access to GPUs.
// nanoservices/to_do/core/src/api/basic_actions/get.rs
use dal::to_do_items::schema::AllToDOItems;
use dal::to_do_items::transactions::get::GetAll;
use glue::errors::NanoServiceError;
pub async fn get_all<T: GetAll>()
-> Result<AllToDOItems, NanoServiceError> {
let all_items = T::get_all().await?;
AllToDOItems::from_vec(all_items)
}
Here we can see that we have removed the single get interface. This is because we were not using it in our application. The general rule is that if we are not using code, we should delete it. This cleans up the code and reduces the amount of code that we are maintaining. However, like all rules, there are exceptions. Going back to my surgical robotics work, it would be short-sighted of me to delete the GPU interface just because a particular lab does not have access to a GPU.
// nanoservices/to_do/core/src/api/basic_actions/update.rs
use dal::to_do_items::schema::ToDoItem;
use glue::errors::NanoServiceError;
use dal::to_do_items::transactions::update::UpdateOne;
pub async fn update<T: UpdateOne>(item: ToDoItem)
-> Result<(), NanoServiceError> {
let _ = T::update_one(item).await?;
Ok(())
}
# nanoservices/to_do/networking/actix_server/Cargo.toml
[dependencies]
tokio = { version = "1.36.0", features = ["full"] }
actix-web = "4.5.1"
core = { path = "../../core" }
dal = { path = "../../dal", features = ["sqlx-postgres"] }
glue = { path = "../../../../glue", features = ["actix"] }
Here we can see that we have activated the sqlx-postgres
feature otherwise our handle will not have implemented all the
required traits to be used in our endpoints.
For our create endpoint, we first need to import the traits with
the code below:
// nanoservices/to_do/networking/actix_server/src
// /api/basic_actions/create.rs
use dal::to_do_items::schema::NewToDoItem;
use dal::to_do_items::transactions::{
create::SaveOne,
get::GetAll
};
Here we can see that we need two traits, because with every API
call, we like to return the updated state to the frontend. With
these two traits, we define the create function with the
following code:
// nanoservices/to_do/networking/actix_server/src
// /api/basic_actions/create.rs
pub async fn create<T: SaveOne + GetAll>(
    token: HeaderToken,
    body: Json<NewToDoItem>
) -> Result<HttpResponse, NanoServiceError> {
    let _ = create_core::<T>(body.into_inner()).await?;
    Ok(HttpResponse::Created().json(get_all_core::<T>().await?))
}
This works because our Postgres handle has implemented all the traits for the storage; therefore, both traits are satisfied. Here is where we can see even more flexibility. If, for instance, we cached the state in a datastore like Redis and updated that cache with every transaction, then we could split up the trait requirements to <T: SaveOne, X: GetAll>. This means we could pass in two Postgres handles, but pass in different ones if we need to. We could even go as far as to implement a HTTP request for one of the traits to call another service. As long as the signature of the trait is respected, the create endpoint will accept and utilize it.
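A minimal sketch of that split, assuming the dal and glue imports used throughout this chapter (the function name is illustrative), could look like the following:
use dal::to_do_items::schema::{NewToDoItem, ToDoItem};
use dal::to_do_items::transactions::{create::SaveOne, get::GetAll};
use glue::errors::NanoServiceError;

// T backs the write and X backs the read, so each can be satisfied
// by a different storage handle if we ever need them to be.
async fn create_and_list<T: SaveOne, X: GetAll>(
    item: NewToDoItem
) -> Result<Vec<ToDoItem>, NanoServiceError> {
    let _ = T::save_one(item).await?;
    X::get_all().await
}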
For our delete endpoint our updated code takes the form below:
// nanoservices/to_do/networking/actix_server/src
// src/api/basic_actions/delete.rs
use dal::to_do_items::transactions::{
delete::DeleteOne,
get::GetAll
};
. . .
pub async fn delete_by_name<T: DeleteOne + GetAll>(req: HttpRequest)
    -> Result<HttpResponse, NanoServiceError> {
match req.match_info().get("name") {
Some(name) => {
delete_core::<T>(name).await?;
},
None => {
return Err(
NanoServiceError::new(
"Name not provided".to_string(),
                    NanoServiceErrorStatus::BadRequest
)
)
}
};
    Ok(HttpResponse::Ok().json(get_all_core::<T>().await?))
}
// nanoservices/to_do/networking/actix_server/src
// /api/basic_actions/get.rs
use dal::to_do_items::transactions::get::GetAll;
. . .
pub async fn get_all<T: GetAll>()
-> Result<HttpResponse, NanoServiceError> {
    Ok(HttpResponse::Ok().json(get_all_core::<T>().await?))
}
// nanoservices/to_do/networking/actix_server/src
// /api/basic_actions/update.rs
use dal::to_do_items::transactions::{
update::UpdateOne,
get::GetAll
};
. . .
pub async fn update<T: UpdateOne + GetAll>(body: Json<ToDoItem>)
-> Result<HttpResponse, NanoServiceError> {
    let _ = update_core::<T>(body.into_inner()).await?;
    Ok(HttpResponse::Ok().json(get_all_core::<T>().await?))
}
Here, we can stop and admire the beauty that this approach and the Rust programming language give us. It is verbose and succinct at the same time. Let's say two months from now you come back to this endpoint to see what the steps are. Because Rust is verbose, and this function just focuses on the management and flow of the HTTP request and response, we can instantly see that the ToDoItem is the JSON body of the request. We can also see that this endpoint is performing a save one and a get all operation with a storage engine. We can then see that the body is passed into a core function to update the item, and then the result of get all is returned. You can see that this scales easily. If you have more core functions, you will be able to easily see how the HTTP request navigates through them. It also gives you the option to easily refactor the high level, swapping out and moving core functions like Lego blocks. When you get to big systems with complex endpoints, you will thank your past self for taking this approach.
// nanoservices/to_do/networking/actix_server/src
// /api/basic_actions/mod.rs
. . .
use dal::to_do_items::descriptors::SqlxPostGresDescriptor;
pub fn basic_actions_factory(app: &mut ServiceConfig) {
app.service(
scope("/api/v1")
.route("get/all", get().to(
get::get_all::<SqlxPostGresDescriptor>)
)
.route("create", post().to(
create::create::<SqlxPostGresDescriptor>)
)
.route("delete/{name}", delete().to(
                delete::delete_by_name::<SqlxPostGresDescriptor>)
)
.route("update", put().to(
update::update::<SqlxPostGresDescriptor>)
)
);
}
To connect to our database, we define the following database URL in a .env file for the SQLX client to pick up:
DATABASE_URL=postgres://username:password@localhost/to_do
When running, the SQLX client will detect the .env file and
export the content as environment variables to connect to the
database. Now that we have the environment variables to
connect to the database, we can create our first migration with
the command below:
sqlx migrate add initial-setup
This gives us the following migrations directory:
├── migrations
│   └── 20240523084625_initial-setup.sql
When we have multiple SQL scripts, the SQLX client will run them in ascending order of their timestamp prefixes. For our setup script, we must create our to-do items table with the following code:
-- nanoservices/to_do/dal/migrations/
-- 20240523084625_initial-setup.sql
CREATE TABLE to_do_items (
id SERIAL PRIMARY KEY,
title VARCHAR(255) UNIQUE NOT NULL,
status VARCHAR(7) NOT NULL
);
// file: nanoservices/to_do/dal/src/migrations.rs
use crate::connections::sqlx_postgres::SQLX_POSTGRES_POOL;
pub async fn run_migrations() {
println!("Migrating to-do database...");
    let mut migrations = sqlx::migrate!("./migrations");
migrations.ignore_missing = true;
let result = migrations.run(&*SQLX_POSTGRES_POOL)
.await.unwrap();
println!(
"to-do database migrations completed: {:?}",
result
);
}
// file: nanoservices/to_do/dal/src/lib.rs
pub mod to_do_items;
pub mod connections;
#[cfg(feature = "sqlx-postgres")]
pub mod migrations;
#[cfg(feature = "json-file")]
pub mod json_file;
// nanoservices/to_do/networking/actix_server/src/main.rs
use actix_web::{App, HttpServer};
mod api;
use dal::migrations::run_migrations;
#[tokio::main]
async fn main() -> std::io::Result<()> {
run_migrations().await;
HttpServer::new(|| {
App::new().configure(api::views_factory)
})
.workers(4)
.bind("127.0.0.1:8080")?
.run()
.await
}
All our Rust code is now done. However, you might recall that we altered the schema for updating the to-do item. We now accept an ID, so we must update the schema in the frontend code before testing our integration.
The create API call for our frontend will slightly change the
interface, enabling the sending of the ID when updating the to-
do item from pending to done. This means altering the
ToDoItem interface, and providing a NewToDoItem interface
with the following code:
// ingress/frontend/src/interfaces/toDoItems.ts
export interface NewToDoItem {
title: string;
status: TaskStatus;
}
export interface ToDoItem {
id: number;
title: string;
status: TaskStatus;
}
This means that our create API call now takes the following
form:
// ingress/frontend/src/api/create.ts
import { NewToDoItem, ToDoItems, TaskStatus }
from "../interfaces/toDoItems";
import { postCall } from "./utils";
import { Url } from "./url";
export async function createToDoItemCall(title: string) {
const toDoItem: NewToDoItem = {
title: title,
status: TaskStatus.PENDING
};
return postCall<NewToDoItem, ToDoItems>(
new Url().create, toDoItem, 201
);
}
And our update call now needs to pass in the ID with the
following code:
// ingress/frontend/src/api/update.ts
import { ToDoItem, ToDoItems, TaskStatus }
from "../interfaces/toDoItems";
import { putCall } from "./utils";
import { Url } from "./url";
export async function updateToDoItemCall(
name: string, status: TaskStatus, id: number
) {
const toDoItem: ToDoItem = {
title: name,
status: status,
id: id
};
return putCall<ToDoItem, ToDoItems>(
new Url().update, toDoItem, 200
);
}
# ingress/Cargo.toml
[dependencies]
. . .
to-do-dal = {
path = "../nanoservices/to_do/dal",
package = "dal",
features = ["sqlx-postgres"]
}
Here we can see that we point to the real package name, which is "dal", but we might have other packages called "dal" from other nanoservices, so we name the dependency "to-do-dal". When we reference the dal in our code, we use the crate name to_do_dal, which will prevent clashes in the future.
We can now move onto importing the dal in our main.rs file
with the following code:
// ingress/src/main.rs
. . .
use to_do_dal::migrations::run_migrations as run_todo_migrations;
. . .
And then we run our migrations in the main function with the
code below:
// ingress/src/main.rs
. . .
#[tokio::main]
async fn main() -> std::io::Result<()> {
run_todo_migrations().await;
HttpServer::new(|| {
. . .
})
.bind("0.0.0.0:8001")?
.run()
.await
}
// ingress/.env
TO_DO_DB_URL=postgres://username:password@localhost/to_do
And there we have it! We have integrated Postgres into our app.
You can run it, and as long as your database is running, the
migrations will run, and you can interact with your application
in the browser.
Summary
Questions
Appendix
The remaining transaction files for our data access layer are listed below. First, our get transactions take the following form:
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/get.rs
use crate::to_do_items::schema::ToDoItem;
use glue::errors::NanoServiceError;
use std::future::Future;
#[cfg(feature = "json-file")]
use super::super::descriptors::JsonFileDescriptor;
#[cfg(feature = "json-file")]
use crate::json_file::get_all;
#[cfg(feature = "json-file")]
use std::collections::HashMap;
#[cfg(feature = "sqlx-postgres")]
use crate::connections::sqlx_postgres::SQLX_POSTGRES_POOL;
#[cfg(feature = "sqlx-postgres")]
use super::super::descriptors::SqlxPostGresDescriptor;
#[cfg(feature = "sqlx-postgres")]
use glue::errors::NanoServiceErrorStatus;
pub trait GetAll {
    fn get_all() ->
        impl Future<Output = Result<Vec<ToDoItem>, NanoServiceError>>;
}
#[cfg(feature = "sqlx-postgres")]
impl GetAll for SqlxPostGresDescriptor {
    fn get_all() ->
        impl Future<Output = Result<Vec<ToDoItem>, NanoServiceError>> {
        sqlx_postgres_get_all()
    }
}
#[cfg(feature = "json-file")]
impl GetAll for JsonFileDescriptor {
    fn get_all() ->
        impl Future<Output = Result<Vec<ToDoItem>, NanoServiceError>> {
        json_file_get_all()
    }
}
#[cfg(feature = "sqlx-postgres")]
async fn sqlx_postgres_get_all() ->
Result<Vec<ToDoItem>, NanoServiceError> {
let items = sqlx::query_as::<_, ToDoItem>("
SELECT * FROM to_do_items"
    ).fetch_all(&*SQLX_POSTGRES_POOL).await.map_err(|e| {
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
Ok(items)
}
#[cfg(feature = "json-file")]
async fn json_file_get_all() ->
Result<Vec<ToDoItem>, NanoServiceError> {
    let tasks = get_all::<ToDoItem>().unwrap_or_else(|_|
HashMap::new()
);
let items = tasks.values().cloned().collect();
Ok(items)
}
And our update transactions take the following form:
// file: nanoservices/to_do/dal/src/to_do_items/
// transactions/update.rs
use crate::to_do_items::schema::ToDoItem;
use glue::errors::NanoServiceError;
use std::future::Future;
#[cfg(feature = "json-file")]
use super::super::descriptors::JsonFileDescriptor;
#[cfg(feature = "json-file")]
use crate::json_file::{get_all, save_all};
#[cfg(feature = "json-file")]
use std::collections::HashMap;
#[cfg(feature = "sqlx-postgres")]
use crate::connections::sqlx_postgres::SQLX_POSTGRES_POOL;
#[cfg(feature = "sqlx-postgres")]
use super::super::descriptors::SqlxPostGresDescriptor;
#[cfg(any(feature = "json-file", feature = "sqlx-postgres"))]
use glue::errors::NanoServiceErrorStatus;
pub trait UpdateOne {
    fn update_one(item: ToDoItem) ->
        impl Future<Output = Result<ToDoItem, NanoServiceError>>;
}
#[cfg(feature = "sqlx-postgres")]
impl UpdateOne for SqlxPostGresDescriptor {
    fn update_one(item: ToDoItem) ->
        impl Future<Output = Result<ToDoItem, NanoServiceError>> {
        sqlx_postgres_update_one(item)
    }
}
#[cfg(feature = "json-file")]
impl UpdateOne for JsonFileDescriptor {
    fn update_one(item: ToDoItem) ->
        impl Future<Output = Result<ToDoItem, NanoServiceError>> {
        json_file_update_one(item)
    }
}
#[cfg(feature = "sqlx-postgres")]
async fn sqlx_postgres_update_one(item: ToDoItem) ->
Result<ToDoItem, NanoServiceError> {
let item = sqlx::query_as::<_, ToDoItem>("
UPDATE to_do_items
SET title = $1, status = $2
WHERE id = $3
RETURNING *"
).bind(item.title)
.bind(item.status.to_string())
.bind(item.id)
    .fetch_one(&*SQLX_POSTGRES_POOL).await.map_err(|e| {
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
Ok(item)
}
#[cfg(feature = "json-file")]
async fn json_file_update_one(item: ToDoItem) ->
Result<ToDoItem, NanoServiceError> {
    let mut tasks = get_all::<ToDoItem>().unwrap_or_else(|_|
HashMap::new()
);
if !tasks.contains_key(&item.title) {
return Err(NanoServiceError::new(
            format!("Item with name {} not found", item.title),
NanoServiceErrorStatus::NotFound
));
}
tasks.insert(item.title.clone(), item.clone());
let _ = save_all(&tasks)?;
Ok(item)
}
10 Managing user sessions
Before you begin: Join our book community on Discord
https://packt.link/EarlyAccess/
We now have our to-do server working. However, there is no authentication. Anybody can access the application and alter the to-do list. As we know, the internet just does not work like this. We must authenticate our users before we can allow them to alter to-do items. In this chapter, we are going to build an authentication server and integrate it into our system so we can authenticate users before allowing them to access the to-do items. In this chapter, we will cover the following:
Technical requirements
Create users
Login
Logout
For our auth service modules, we will start with the data access
layer.
dal
├── Cargo.toml
├── migrations
│ └── 20240523088625_initial-setup.sql
└── src
├── connections
│ ├── mod.rs
│ └── sqlx_postgres.rs
├── lib.rs
├── migrations.rs
└── users
├── descriptors.rs
├── mod.rs
├── schema.rs
└── transactions
├── create.rs
├── get.rs
└── mod.rs
# nanoservices/auth/dal/Cargo.toml
[package]
name = "auth-dal"
version = "0.1.0"
edition = "2021"
[dependencies]
serde = { version = "1.0.197", features = ["derive"] }
glue = { path = "../../../glue"}
sqlx = {
version = "0.7.4",
features = ["postgres", "json", "runtime-tokio"],
optional = false
}
once_cell = { version = "1.19.0", optional = false }
Core
Our core has the following file structure:
core
├── Cargo.toml
└── src
├── api
│ ├── auth
│ │ ├── login.rs
│ │ ├── logout.rs
│ │ └── mod.rs
│ ├── mod.rs
│ └── users
│ ├── create.rs
│ └── mod.rs
└── lib.rs
Our core Cargo.toml takes the following form:
# nanoservices/auth/core/Cargo.toml
[package]
name = "auth-core"
version = "0.1.0"
edition = "2021"
[dependencies]
auth-dal = { path = "../dal" }
serde = { version = "1.0.197", features = ["derive"] }
glue = { path = "../../../glue"}
Networking Layer
Our networking layer has the following file structure:
networking
└── actix_server
├── Cargo.toml
└── src
├── api
│ ├── auth
│ │ ├── login.rs
│ │ ├── logout.rs
│ │ └── mod.rs
│ ├── mod.rs
│ └── users
│ ├── create.rs
│ └── mod.rs
└── main.rs
Again, we link all the files together and define the following API endpoint factories. For the users factory, we have the definition below:
// nanoservices/auth/networking/actix_server/
// src/api/users/mod.rs
pub mod create;
use actix_web::web::ServiceConfig;
pub fn users_factory(app: &mut ServiceConfig) {
}
For the auth factory we have the following code:
// nanoservices/auth/networking/actix_server/
// src/api/auth/mod.rs
pub mod login;
pub mod logout;
use actix_web::web::ServiceConfig;
pub fn auth_factory(app: &mut ServiceConfig) {
}
We then tie both factories together with the following code:
// nanoservices/auth/networking/actix_server/
// src/api/mod.rs
pub mod auth;
pub mod users;
use actix_web::web::ServiceConfig;
pub fn views_factory(app: &mut ServiceConfig) {
users::users_factory(app);
auth::auth_factory(app);
}
With our endpoint factories defined, our main file takes the
following form:
// nanoservices/auth/networking/actix_server/
// src/main.rs
use actix_web::{App, HttpServer};
mod api;
use auth_dal::migrations::run_migrations;
#[tokio::main]
async fn main() -> std::io::Result<()> {
run_migrations().await;
HttpServer::new(|| {
App::new().configure(api::views_factory)
})
.workers(4)
.bind("127.0.0.1:8081")?
.run()
.await
}
Our networking Cargo.toml is defined below:
# nanoservices/auth/networking/actix_server/Cargo.toml
[package]
name = "auth_actix_server"
version = "0.1.0"
edition = "2021"
[dependencies]
tokio = { version = "1.36.0", features = ["full"] }
actix-web = "4.5.1"
auth-core = { path = "../../core" }
auth-dal = { path = "../../dal" }
glue = { path = "../../../../glue", features = ["actix"] }
And here we have it, our auth server is now ready to develop
on. We can start by building out the user data model.
With our approach in mind, we can define the new user struct
with the following code:
// nanoservices/auth/dal/src/users/schema.rs
use serde::{Serialize, Deserialize};
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct NewUser {
pub email: String,
pub password: String,
pub unique_id: String
}
Now that our new user struct is done, we can now define our
user struct with the code below:
// nanoservices/auth/dal/src/users/schema.rs
#[derive(Deserialize, Serialize, Debug,
Clone, PartialEq, sqlx::FromRow)]
pub struct User {
pub id: i32,
pub email: String,
pub password: String,
pub unique_id: String
}
We also define a trimmed user struct that we can safely return, as it does not contain the password, with the code below:
// nanoservices/auth/dal/src/users/schema.rs
#[derive(Deserialize, Serialize, Debug,
Clone, PartialEq)]
pub struct TrimmedUser {
pub id: i32,
pub email: String,
pub unique_id: String
}
impl From<User> for TrimmedUser {
fn from(user: User) -> Self {
TrimmedUser {
id: user.id,
email: user.email,
unique_id: user.unique_id
}
}
}
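As a quick usage sketch (the values are illustrative, assuming the structs above are in scope), the conversion is then driven by into:
fn demo() {
    let user = User {
        id: 1,
        email: "user@example.com".to_string(),
        password: "<hashed password>".to_string(),
        unique_id: "some-uuid".to_string()
    };
    let trimmed: TrimmedUser = user.into(); // the password is dropped
    println!("{:?}", trimmed);
}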
With our structs, we only need to define the SQL script for the
database migrations with the code below:
-- nanoservices/auth/dal/migrations/*_initial_setup.sql
CREATE TABLE users (
id SERIAL PRIMARY KEY,
email VARCHAR NOT NULL UNIQUE,
password VARCHAR NOT NULL,
unique_id VARCHAR NOT NULL UNIQUE
);
Here, we can see that we ensure that the email is unique. This is
because the email is the main way we communicate with the
user. For instance, if the user wants to reset their password, we
cannot have the reset link sent to two or more email addresses.
We can see that the unique ID is also unique as we would want
to use the unique ID for verification, password resets, and
identifying login sessions.
Storing Passwords
# nanoservices/auth/dal/Cargo.toml
argon2 = { version = "0.5.3", features = ["password-hash"] }
uuid = { version = "1.8.0", features = ["serde", "v4"] }
rand = "0.8.5"
// nanoservices/auth/dal/src/users/schema.rs
use glue::errors::{NanoServiceError, NanoServiceErrorStatus};
use argon2::{
Argon2,
PasswordHasher,
PasswordVerifier,
password_hash::{
SaltString,
PasswordHash
}
};
You might be wondering why we are building the password hashing logic in our schema.rs in the data access layer. Should we not build our password hashing in the core of the auth service? This question does make sense, as handling passwords is an authentication issue. Like most design decisions, there is not a clear right answer. I have chosen to put the password hashing logic in the schema.rs because we are hashing the password for safe storage. In my personal experience, if there is a requirement for storage, then I usually put it in the storage module. Another developer can look at the schema of the users and not only see the struct, but also the processes needed to store the struct. It can be messy to rely on hashing dependencies elsewhere to safely store the user struct.
// nanoservices/auth/dal/src/users/schema.rs
impl NewUser {
pub fn new(email: String, password: String)
-> Result<NewUser, NanoServiceError> {
. . .
}
}
Inside our constructor, we create a unique ID, and hashed
password with the following code:
// nanoservices/auth/dal/src/users/schema.rs
// Fn => NewUser::new
let unique_id = uuid::Uuid::new_v4().to_string();
let salt = SaltString::generate(&mut rand::thread_rng());
let argon2_hasher = Argon2::default();
let hash = argon2_hasher.hash_password(
password.as_bytes(),
&salt
).map_err(|e|{
NanoServiceError::new(
format!("Failed to hash password: {}", e),
NanoServiceErrorStatus::Unknown
)
})?.to_string();
Ok(NewUser {
email,
password: hash,
unique_id
})
We can see that we salt and hash the password. Before we can
explain salting, we must explain what hashing is.
Hashing is the process of converting a given input (in this case,
a password) into a fixed-size string of characters, which
typically appears as a seemingly random sequence of
characters. This transformation is performed by a hash
function. The key properties of a cryptographic hash function
are that it is deterministic, that it is one-way (the password
cannot feasibly be recovered from the hash), and that it is
collision resistant (it is infeasible to find two inputs that
produce the same hash). Salting adds a random value to each
password before hashing, so that two identical passwords
produce different hashes and precomputed lookup tables become
useless.
Verifying Passwords
// nanoservices/auth/dal/src/users/schema.rs
let argon2_hasher = Argon2::default();
let parsed_hash = PasswordHash::new(
&self.password
).map_err(|e|{
NanoServiceError::new(
format!("Failed to parse password hash: {}",
NanoServiceErrorStatus::Unknown
)
})?;
let is_valid = argon2_hasher.verify_password(
password.as_bytes(),
&parsed_hash
).is_ok();
Ok(is_valid)
Here we can see that we pass our stored password into a
PasswordHash struct. The parsing into the PasswordHash struct
can result in an error because there is a chance that the
password that we are passing in is not hashed in the right
format. For instance, the start of the hashed string should be the
name of the algorithm used to generate the hash. Once we have
the parsed hash, we then check whether the password passed
into the function is valid. If the password matches, we
return a true, otherwise a false.
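The excerpt above is the body of a method on the User struct. The enclosing signature is not printed here, but given that our login code later calls user.verify_password(password)?, a minimal sketch of the wrapper, an assumption consistent with that call, is:
// nanoservices/auth/dal/src/users/schema.rs
// Sketch of the enclosing method; the body is the excerpt above.
impl User {
    pub fn verify_password(&self, password: String)
        -> Result<bool, NanoServiceError> {
        . . .
    }
}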
Creating Users
// nanoservices/auth/dal/src/users/transactions/create.rs
use crate::users::schema::{NewUser, User};
use glue::errors::{NanoServiceError, NanoServiceErrorStatus};
use std::future::Future;
use crate::connections::sqlx_postgres::SQLX_POSTGRES_POOL;
use super::super::descriptors::SqlxPostGresDescriptor;
We then must define our trait that is the template for inserting a
user with the code below:
// nanoservices/auth/dal/src/users/transactions/create.rs
pub trait SaveOne {
fn save_one(user: NewUser)
-> impl Future<Output = Result<
User, NanoServiceError>
> + Send;
}
// nanoservices/auth/dal/src/users/transactions/create.rs
impl SaveOne for SqlxPostGresDescriptor {
fn save_one(user: NewUser)
-> impl Future<Output = Result<
User, NanoServiceError>
> + Send {
sqlx_postgres_save_one(user)
}
}
// nanoservices/auth/dal/src/users/transactions/create.rs
async fn sqlx_postgres_save_one(user: NewUser)
-> Result<User, NanoServiceError> {
let user = sqlx::query_as::<_, User>("
INSERT INTO users (email, password, unique_id)
VALUES ($1, $2, $3)
RETURNING *"
)
.bind(user.email)
.bind(user.password.to_string())
.bind(user.unique_id)
.fetch_one(&*SQLX_POSTGRES_POOL).await.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
Ok(user)
}
For our create endpoint in our core, we only need the following
code:
// nanoservices/auth/core/src/api/users/create.rs
use auth_dal::users::schema::{NewUser, User};
use auth_dal::users::transactions::create::SaveOne;
use glue::errors::NanoServiceError;
use serde::Deserialize;
#[derive(Deserialize)]
pub struct CreateUser {
pub email: String,
pub password: String
}
pub async fn create<T: SaveOne>(data: CreateUser)
-> Result<User, NanoServiceError> {
let user = NewUser::new(data.email, data.password)?;
let created_item = T::save_one(user).await?;
Ok(created_item)
}
// nanoservices/auth/networking/actix_server/
// src/api/users/mod.rs
pub mod create;
use auth_dal::users::descriptors::SqlxPostGresDescriptor;
use actix_web::web::{ServiceConfig, scope, post};
pub fn users_factory(app: &mut ServiceConfig) {
app.service(
scope("/api/v1/users")
.route("create", post().to(
create::create::<SqlxPostGresDescriptor>)
)
);
}
# glue/Cargo.toml
jsonwebtoken = "9.3.0"
// glue/src/token.rs
use crate::errors::{
NanoServiceError,
NanoServiceErrorStatus
};
use serde::{Serialize, Deserialize};
use jsonwebtoken::{
decode,
encode,
Algorithm,
DecodingKey,
EncodingKey,
Header,
Validation
};
Our JWT now needs to house the unique ID of the user and have
encoding/decoding functions, and we must update the FromRequest
trait implementation. The main definition of our JWT struct has
the following outline:
// glue/src/token.rs
#[derive(Debug, Serialize, Deserialize)]
pub struct HeaderToken {
pub unique_id: String
}
impl HeaderToken {
pub fn get_key() -> Result<String, NanoServiceError> {
. . .
}
pub fn encode(self) -> Result<String, NanoServiceError> {
. . .
}
pub fn decode(token: &str) -> Result<Self, NanoServiceError> {
. . .
}
}
// glue/src/token.rs
#[cfg(feature = "actix")]
impl FromRequest for HeaderToken {
type Error = NanoServiceError;
type Future = Ready<Result<HeaderToken, NanoServiceError>>;
fn from_request(req: &HttpRequest, _: &mut Payload)
-> Self::Future {
. . .
}
}
For our JWT, we are going to extract the secret key from the
environment variables with the code below:
// glue/src/token.rs
pub fn get_key() -> Result<String, NanoServiceError> {
std::env::var("JWT_SECRET").map_err(|e| {
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unauthorized
)
})
}
With our key, we can now encode our token with the following code:
// glue/src/token.rs
pub fn encode(self) -> Result<String, NanoServiceError> {
    let key_str = Self::get_key()?;
    let key = EncodingKey::from_secret(key_str.as_ref());
    return match encode(&Header::default(), &self, &key) {
Ok(token) => Ok(token),
Err(error) => Err(
NanoServiceError::new(
error.to_string(),
NanoServiceErrorStatus::Unauthorized
)
)
};
}
Here we can see that we initially get the key for encoding our
token. We then parse this into an EncodingKey struct which
can then be used to encode our HeaderToken struct into a
token.
With our encoded token, the only thing left is to decode our
encoded token into a HeaderToken struct.
// glue/src/token.rs
pub fn decode(token: &str) -> Result<Self, NanoServiceError> {
    let key_str = Self::get_key()?;
    let key = DecodingKey::from_secret(key_str.as_ref());
    let mut validation = Validation::new(Algorithm::HS256);
    validation.required_spec_claims.remove("exp");
    match decode::<Self>(token, &key, &validation) {
        Ok(token_data) => return Ok(token_data.claims),
Err(error) => return Err(
NanoServiceError::new(
error.to_string(),
NanoServiceErrorStatus::Unauthorized
)
)
};
}
Here we can see that we get the key from the environment
variable again. We then create a decoding key, with the HS256
algorithm. From this, we can deduce that the default encoding
in the previous section was the HS256 algorithm. We then
remove the "exp" from the claims. This means that we are
removing the expiry time from the expected data in the
encoded token.
Removing the "exp" from the token means that the token does
not expire. If the token did expire, or the "exp" field was not in
the token but we did not remove the "exp" from the expected
claims, then the decoding of the token would result in an error.
Removing the "exp" does make the handling of our application
a lot easier as we do not have to refresh tokens because the
tokens do not expire. However, it is also not as secure. For
instance, if someone else got hold of a token, then they can
make API on the behalf of the compromised user without any
limitations.
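If we did want expiring tokens, the change would be small. The struct below is a hypothetical variant, not code from this project; jsonwebtoken checks an exp claim automatically on decode when "exp" is left in the required spec claims:
// A hypothetical expiring variant of the token; `exp` is not part
// of this project's HeaderToken.
#[derive(Debug, Serialize, Deserialize)]
pub struct ExpiringHeaderToken {
    pub unique_id: String,
    // seconds since the Unix epoch; validated on decode when "exp"
    // remains in Validation::required_spec_claims
    pub exp: usize
}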
// nanoservices/auth/dal/src/users/transactions/get.rs
use crate::users::schema::User;
use glue::errors::{NanoServiceError, NanoServiceErrorStatus};
use std::future::Future;
use super::super::descriptors::SqlxPostGresDescriptor;
use crate::connections::sqlx_postgres::SQLX_POSTGRES_POOL;
// nanoservices/auth/dal/src/users/transactions/get.rs
pub trait GetByEmail {
fn get_by_email(email: String)
-> impl Future<Output = Result<
User, NanoServiceError
>> + Send;
}
And implement this trait for our Postgres descriptor with the
following code:
// nanoservices/auth/dal/src/users/transactions/get.rs
impl GetByEmail for SqlxPostGresDescriptor {
fn get_by_email(email: String)
-> impl Future<Output = Result<
User, NanoServiceError
>> + Send {
sqlx_postgres_get_by_email(email)
}
}
// nanoservices/auth/dal/src/users/transactions/get.rs
async fn sqlx_postgres_get_by_email(email: String)
-> Result<User, NanoServiceError> {
let item = sqlx::query_as::<_, User>("
SELECT * FROM users WHERE email = $1"
).bind(email)
.fetch_optional(&*SQLX_POSTGRES_POOL).await.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
match item {
None => Err(NanoServiceError::new(
"User not found".to_string(),
NanoServiceErrorStatus::NotFound
)),
Some(item) => Ok(item)
}
}
And our Postgres descriptor can get users via the email. We
must note that we use fetch_optional to check if the
result is None. This is because we want to differentiate whether
the user cannot be found or whether the error is a result of
something else.
Now our data access is working, we can move onto the core API
for logging in a user. For our core login API, we import the
following:
// nanoservices/auth/core/src/api/auth/login.rs
use auth_dal::users::transactions::get::GetByEmail;
use glue::errors::{NanoServiceError, NanoServiceErrorStatus};
use glue::token::HeaderToken;
And then our login API function is defined with the code below:
// nanoservices/auth/core/src/api/auth/login.rs
pub async fn login<T: GetByEmail>(
email: String,
password: String
) -> Result<String, NanoServiceError> {
let user = T::get_by_email(email).await?;
let outcome = user.verify_password(password)?;
if outcome {
Ok(HeaderToken{
unique_id: user.unique_id
}.encode()?)
} else {
Err(NanoServiceError::new(
"Invalid password".to_string(),
NanoServiceErrorStatus::Unauthorized
))
}
}
The only thing left to do for our login is mount our core login
function to our server.
// nanoservices/auth/networking/actix_server/
// src/extract_auth.rs
use actix_web::HttpRequest;
use glue::errors::{NanoServiceError, NanoServiceErrorStatus};
use base64::{Engine, engine::general_purpose};
#[derive(Debug)]
pub struct Credentials {
pub email: String,
pub password: String,
}
pub async fn extract_credentials(req: HttpRequest)
-> Result<Credentials, NanoServiceError> {
. . .
}
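The first step inside extract_credentials, reading the Authorization header into the encoded variable used below, is not printed in the excerpts that follow. A minimal sketch of that step, an assumption consistent with the rest of the function, might look like this:
// nanoservices/auth/networking/actix_server/
// src/extract_auth.rs
// A sketch of the elided first step: pull the Authorization header
// out of the request as the `encoded` string used below.
let encoded = req.headers()
    .get("Authorization")
    .ok_or_else(|| NanoServiceError::new(
        "No Authorization header".to_string(),
        NanoServiceErrorStatus::Unauthorized
    ))?
    .to_str()
    .map_err(|e| NanoServiceError::new(
        e.to_string(),
        NanoServiceErrorStatus::Unauthorized
    ))?
    .to_string();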
Basic auth credentials start with the "Basic " string. If that string
is not present, we must conclude that it is not basic auth as
shown with the following code:
// nanoservices/auth/networking/actix_server/
// src/extract_auth.rs
if !encoded.starts_with("Basic ") {
return Err(
NanoServiceError::new(
"Invalid credentials".to_string(),
NanoServiceErrorStatus::Unauthorized
)
)
}
// nanoservices/auth/networking/actix_server/
// src/extract_auth.rs
let base64_credentials = &encoded[6..];
let decoded = general_purpose::STANDARD.decode(
base64_credentials
).map_err(|e|{
NanoServiceError::new(e.to_string(),
NanoServiceErrorStatus::Unauthorized)
})?;
let credentials = String::from_utf8(
decoded
).map_err(|e|{
NanoServiceError::new(e.to_string(),
NanoServiceErrorStatus::Unauthorized)}
)?;
// nanoservices/auth/networking/actix_server/
// src/extract_auth.rs
let parts: Vec<&str> = credentials.splitn(2, ':').collect();
if parts.len() == 2 {
let email = parts[0];
let password = parts[1];
return Ok(Credentials {
email: email.to_string(),
password: password.to_string(),
});
}
else {
return Err(
NanoServiceError::new("Invalid credentials".t
NanoServiceErrorStatus::Unauthorized)
)
}
And finally, we mount our login API endpoint to our server with
the following code:
// nanoservices/auth/networking/actix_server/
// src/api/auth/mod.rs
pub mod login;
pub mod logout;
use actix_web::web::{ServiceConfig, get, scope};
use auth_dal::users::descriptors::SqlxPostGresDescriptor;
pub fn auth_factory(app: &mut ServiceConfig) {
app.service(
scope("/api/v1/auth")
.route("login", get().to(
login::login::<SqlxPostGresDescriptor>)
)
);
}
Our auth server can create users and log those users in. We
might as well plug our auth server into our ingress server and
make some API calls.
First, we must declare our auth server and the auth data access
layer in the ingress Cargo.toml file with the following code:
# ingress/Cargo.toml
auth_server = {
path = "../nanoservices/auth/networking/actix_ser
package = "auth_actix_server"
}
auth-dal = {
path = "../nanoservices/auth/dal",
package = "auth-dal"
}
// ingress/src/main.rs
use auth_server::api::views_factory as auth_views_factory;
use auth_dal::migrations::run_migrations as run_auth_migrations;
We can now run our migrations and attach our auth endpoints
to our ingress server. Once we have done this, our main
function looks like the following:
// ingress/src/main.rs
#[tokio::main]
async fn main() -> std::io::Result<()> {
run_todo_migrations().await;
run_auth_migrations().await;
HttpServer::new(|| {
let cors = Cors::default().allow_any_origin()
    .allow_any_method()
    .allow_any_header();
App::new()
.configure(auth_views_factory)
.configure(to_do_views_factory)
.wrap(cors)
.default_service(web::route().to(catch_all))
})
.bind("0.0.0.0:8001")?
.run()
.await
}
// ingress/.env
TO_DO_DB_URL=postgres://username:password@localhost/t
AUTH_DB_URL=postgres://username:password@localhost/to
JWT_SECRET=secret
Now we can run our server. Both migrations will run, and we
can then test our create user API endpoint with the CURL
command below:
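A command along the following lines creates the user, assuming the ingress server above is listening on port 8001:
curl -X POST http://127.0.0.1:8001/api/v1/users/create \
-H "Content-Type: application/json" \
-d '{
    "email": "[email protected]",
    "password": "password"
}'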
Nothing should happen in the terminal, but now our user has
been created. Now we can login using the following command:
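Assuming the same ingress address, a login command consistent with our basic auth endpoint is:
curl -u [email protected]:password \
-X GET http://127.0.0.1:8001/api/v1/auth/login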
"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfaWQ
And here we have it, our creation of the user and login was
successful because we now have an auth token.
Now is the time to lock down all our to-do item calls. If we
inspect our create to-do item API endpoint we have the
following signature:
// nanoservices/to_do/networking/actix_server/
// src/api/basic_actions/create.rs
. . .
use glue::{
errors::NanoServiceError,
token::HeaderToken
};
. . .
pub async fn create<T: SaveOne + GetAll>(
token: HeaderToken, body: Json<NewToDoItem>)
-> Result<HttpResponse, NanoServiceError> {
. . .
}
Seeing as all our steps rely on the user being able to login
successfully, we can start with building our login API call.
// ingress/frontend/src/api/url.ts
export class Url {
baseUrl: string;
create: string;
getAll: string;
update: string;
login: string;
constructor() {
this.baseUrl = Url.getBaseUrl();
this.create = `${this.baseUrl}api/v1/create`;
this.getAll = `${this.baseUrl}api/v1/get/all`;
this.update = `${this.baseUrl}api/v1/update`;
this.login = `${this.baseUrl}api/v1/auth/login`;
}
. . .
}
Now that our Url class supports login, we can define the signature
of the login API call with the following code:
// ingress/frontend/src/api/login.ts
import axios from 'axios';
import { Url } from "./url";
export const login = async (
email: string,
password: string
): Promise<string> => {
. . .
};
Here, we take in an email and password, and return the auth
token. Inside our login, we base64-encode the email and password
with the code below:
// ingress/frontend/src/api/login.ts
const authToken = btoa(`${email}:${password}`);
// ingress/frontend/src/api/login.ts
try {
    const response = await axios({
        method: 'get',
        url: new Url().login,
        headers: {
            'Authorization': `Basic ${authToken}`,
            'Content-Type': 'application/json'
        },
    });
    return response.data;
} catch (error) {
    // surface the failure to the caller
    return Promise.reject(error);
}
With our login API call done, and API calls fresh in our minds, it
makes sense to now add tokens to our API calls.
The change is that the token in the header goes from a hardcoded
string to localStorage.getItem('token'). This means
that we can get the auth token from the local storage of the
browser and insert it into the header of the API call. We make
this change in the following functions in the utils.ts
file (a sketch of one of them follows the list):
deleteCall
putCall
getCall
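As an illustration, here is a sketch of what one of these helpers might look like after the change; the exact bodies are not printed in this chapter, and the token header name is an assumption that mirrors what our backend reads:
// ingress/frontend/src/utils.ts
// A sketch of one helper; deleteCall and putCall follow the same shape.
import axios from 'axios';

export async function getCall<T>(url: string): Promise<T> {
    const response = await axios({
        method: 'get',
        url: url,
        headers: {
            'token': localStorage.getItem('token'),
            'Content-Type': 'application/json'
        },
    });
    return response.data;
}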
And now all our API calls have auth tokens. This also means
that we can use the local storage to check for a token to work
out if we are logged in or not, and we can also store our token in
the local storage once our login API call is successful.
Now that we have our token mechanism figured out, and our
login API call is defined, we're ready to make our login form
component.
// ingress/frontend/src/components/LoginForm.tsx
import React from 'react';
import '../Login.css';
import { login } from '../api/login';
interface LoginFormProps {
setToken: (token: string) => void;
}
With the preceding imports and interface, we can define our
login form component's signature with the code below:
// ingress/frontend/src/components/LoginForm.tsx
export const LoginForm: React.FC<LoginFormProps> = (
{ setToken }
) => {
const [email, setEmail] = React.useState<string>("");
const [password, setPassword] = React.useState<string>("");
const submitLogin = () => {
. . .
};
const handlePasswordChange = (
e: React.ChangeEvent<HTMLInputElement>
) => {
setPassword(e.target.value);
};
const handleUsernameChange = (
e: React.ChangeEvent<HTMLInputElement>
) => {
setEmail(e.target.value);
};
return (
. . .
);
};
With this outline, we can update our email and password state
from the form inputs and fire the submitLogin function when the
button on the form is clicked with the following code:
// ingress/frontend/src/components/LoginForm.tsx
return (
<div className="login">
<h1 className="login-title">Login</h1>
<input
type="text"
className="login-input"
placeholder="Email"
autoFocus
onChange={handleUsernameChange}
value={email}
/>
<input
type="password"
className="login-input"
placeholder="Password"
onChange={handlePasswordChange}
value={password}
/>
<button className="login-button"
id="login-button"
onClick={submitLogin}>Lets Go</button>
</div>
);
You may have noticed the CSS classes. To avoid bloating this
chapter, the CSS has been put in the appendix.
// ingress/frontend/src/components/LoginForm.tsx
const submitLogin = () => {
login(email, password).then(
(response) => {
setToken(response);
}
).catch((error) => {
console.error(error);
});
};
// ingress/frontend/src/index.tsx
import { LoginForm } from "./components/LoginForm";
// ingress/frontend/src/index.tsx
const App = () => {
. . .
const [loggedin, setLoggedin] = useState<boolean>(
localStorage.getItem('token') !== null
);
function setToken(token: string) {
localStorage.setItem('token', token);
setLoggedin(true);
}
. . .
React.useEffect(() => {
const fetchData = async () => {
if (wasmReady && loggedin) {
. . .
}
};
if (wasmReady && loggedin) {
fetchData();
}
}, [wasmReady, loggedin]);
if (localStorage.getItem('token') === null) {
return <LoginForm setToken={setToken} />;
}
if (!data) {
. . .
}
else {
return (
. . .
);
}
};
Here we can see that we ensure that the WASM is ready and the
user is logged in before trying to get the to-do items. We also
add the login state as a dependency for loading the to-do items,
because if the login status changes, we want to trigger getting
the to-do items. If we did not have that as a dependency,
then after logging in, we would be stuck on a loading screen
because getting the to-do items would not be triggered
after the change of the login status.
And now our auth system is working; if we try to access
our application, we get the login form displayed in figure
9.2.
Figure 9.2 – Our login form
And there we have it, our login is successful. Our to-do items
are still global, meaning that any user that has logged in can
access the same items as any other user. In the next chapter we
will scope out the user sessions more.
Summary
Here we are really seeing how our system can scale. These
servers can slot in and out of our system. Our authentication
system is clearly defined, and there is nothing stopping you from
taking your authentication system and slotting it into another
system on another project. Authentication can explode in
complexity. I often find myself creating roles, permissions,
teams, and email processes to verify that the user's email is
legitimate. Starting a server specifically just for authentication
can seem a little excessive; however, you will be shocked at how
quickly the complexity grows. Most people starting projects
underestimate the complexity of managing users.
Questions
Answers
Appendix
body {
background: #2d343d;
}
.login {
margin: 20px auto;
width: 300px;
padding: 30px 25px;
background: white;
border: 1px solid #c4c4c4;
border-radius: 25px;
}
h1.login-title {
margin: -28px -25px 25px;
padding: 15px 25px;
line-height: 30px;
font-size: 25px;
font-weight: 300;
color: #ADADAD;
text-align:center;
background: #f7f7f7;
border-radius: 25px 25px 0px 0px;
}
.login-input {
width: 285px;
height: 50px;
margin-bottom: 25px;
padding-left:10px;
font-size: 15px;
background: #fff;
border: 1px solid #ccc;
border-radius: 4px;
}
.login-input:focus {
border-color:#6e8095;
outline: none;
}
.login-button {
width: 100%;
height: 50px;
padding: 0;
font-size: 20px;
color: #fff;
text-align: center;
background: #f0776c;
border: 0;
border-radius: 5px;
cursor: pointer;
outline:0;
}
.login-lost
{
text-align:center;
margin-bottom:0px;
}
.login-lost a
{
color:#666;
text-decoration:none;
font-size:13px;
}
.loggedInTitle {
font-family: "Helvetica Neue";
color: white;
}
11 Communicating Between
Servers
Before you begin: Join our book community on Discord
https://packt.link/EarlyAccess/
At this point in the book, we have two servers, the
authentication server and the to-do server; however, they are not
talking to each other yet. In microservices, we must be able to
get our servers sending messages between each other. For our
system, we must get our to-do server making requests to our
authentication server. This request checks that the user is valid
before we perform a database transaction on to-do items, as
these items are related to the user ID passed in the request
requesting the database transaction. To achieve this, the
chapter will cover the following:
By the end of this chapter, you will be able to get servers talking
to each other, either directly in memory due to one server
compiling another server into it, or by HTTP requests,
depending on the feature compilation. You will also be able to
Technical requirements
// nanoservices/auth/dal/src/users/transactions/get.rs
impl GetByUniqueId for SqlxPostGresDescriptor {
    fn get_by_unique_id(id: String)
    -> impl Future<Output = Result<User, NanoServiceError>> + Send {
sqlx_postgres_get_by_unique_id(id)
}
}
// nanoservices/auth/dal/src/users/transactions/get.rs
async fn sqlx_postgres_get_by_unique_id(id: String)
-> Result<User, NanoServiceError> {
let item = sqlx::query_as::<_, User>("
SELECT * FROM users WHERE unique_id = $1"
).bind(id)
.fetch_optional(&*SQLX_POSTGRES_POOL).await.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
match item {
None => Err(NanoServiceError::new(
"User not found".to_string(),
NanoServiceErrorStatus::NotFound
)),
Some(item) => Ok(item)
}
}
// nanoservices/auth/dal/src/users/transactions/mod.rs
pub mod get;
pub mod create;
And with our new API on our data access, we can move onto
adding our get to the core.
Adding get by unique ID to core
Our core can now support the getting of the user using the
unique ID with the following code:
// nanoservices/auth/core/src/api/users/get.rs
use auth_dal::users::schema::TrimmedUser;
use auth_dal::users::transactions::get::GetByUniqueId;
use glue::errors::NanoServiceError;
pub async fn get_by_unique_id<T: GetByUniqueId>(id: String)
-> Result<TrimmedUser, NanoServiceError> {
let user = T::get_by_unique_id(id).await?;
let trimmed_user: TrimmedUser = user.into();
Ok(trimmed_user)
}
// nanoservices/auth/core/src/api/users/mod.rs
pub mod create;
pub mod get;
// nanoservices/auth/networking/actix_server/src/api/users/get.rs
use auth_dal::users::transactions::get::GetByUniqueId;
use auth_core::api::users::get::get_by_unique_id
    as get_by_unique_id_core;
use glue::errors::NanoServiceError;
use glue::token::HeaderToken;
use actix_web::HttpResponse;
pub async fn get_by_unique_id<T: GetByUniqueId>(token: HeaderToken)
-> Result<HttpResponse, NanoServiceError> {
let user = get_by_unique_id_core::<T>(token.unique_id).await?;
Ok(HttpResponse::Ok().json(user))
}
And now our auth server can return a user by the unique ID
either by making a HTTP request or calling the core function.
We can now move onto building the kernel of the auth server
to make it easier for other servers to interact with the auth
server either via the core or HTTP networking layer.
. . .
└── nanoservices
├── auth
│ ├── core
│ │ ├── . . .
│ ├── dal
│ │ ├── . . .
│ ├── kernel
│ │ ├── Cargo.toml
│ │ └── src
│ │ ├── api
│ │ │ ├── mod.rs
│ │ │ └── users
│ │ │ ├── get.rs
│ │ │ └── mod.rs
│ │ └── lib.rs
│ └── networking
│ └── . . .
. . .
Here we can see that the API structure is the same as our core
and networking layers to maintain consistency. The kernel should
enable the user to make requests to the auth server either via
HTTP or by calling the core function. To enable this, we need two
different features, http and core-postgres, giving us the
following Cargo.toml file outline:
# nanoservices/auth/kernel/Cargo.toml
[package]
name = "auth-kernel"
version = "0.1.0"
edition = "2021"
[dependencies]
auth-core = { path = "../core", optional = true }
auth-dal = { path = "../dal" }
reqwest = { version = "0.12.5", optional = true, features = ["json"] }
glue = { path = "../../../glue" }
[features]
http = ["reqwest"]
core-postgres = ["auth-core"]
// nanoservices/auth/kernel/src/api/users/get.rs
#[cfg(any(feature = "auth-core", feature = "reqwest"))]
mod common_imports {
pub use auth_dal::users::schema::TrimmedUser;
pub use glue::errors::NanoServiceError;
}
#[cfg(feature = "auth-core")]
mod core_imports {
pub use auth_core::api::users::get::get_by_unique_id
    as get_by_unique_id_core;
pub use auth_dal::users::descriptors::SqlxPostGresDescriptor;
}
#[cfg(feature = "reqwest")]
mod reqwest_imports {
pub use reqwest::Client;
pub use glue::errors::NanoServiceErrorStatus;
pub use glue::token::HeaderToken;
}
#[cfg(any(feature = "auth-core", feature = "reqwest"))]
use common_imports::*;
#[cfg(feature = "auth-core")]
use core_imports::*;
#[cfg(feature = "reqwest")]
use reqwest_imports::*;
// nanoservices/auth/kernel/src/api/users/get.rs
#[cfg(any(feature = "auth-core", feature = "reqwest"))]
pub async fn get_user_by_unique_id(id: String)
-> Result<TrimmedUser, NanoServiceError> {
#[cfg(feature = "auth-core")]
let user: TrimmedUser = get_by_unique_id_core::<
SqlxPostGresDescriptor
>(id).await?.into();
#[cfg(feature = "reqwest")]
let user: TrimmedUser = get_user_by_unique_id_api_call(id).await?;
return Ok(user)
}
Here, we can see that if the core feature is enabled, we call
the core function, and if the reqwest feature is enabled, we call
a not-yet-defined function that makes the API call over HTTP. We
can also see that we are utilizing the into function to convert
the response of both functions into the TrimmedUser struct,
because TrimmedUser implements the From<User> trait.
Now that our interface is built, we can define the signature
of the HTTP call with the code below:
// nanoservices/auth/kernel/src/api/users/get.rs
#[cfg(feature = "reqwest")]
async fn get_user_by_unique_id_api_call(id: String)
-> Result<TrimmedUser, NanoServiceError> {
. . .
}
Inside this function, we define the URL to the auth server with
the following code:
// nanoservices/auth/kernel/src/api/users/get.rs
let url = std::env::var("AUTH_API_URL").map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::BadRequest
)
})?;
let full_url = format!("{}/api/v1/users/get", url);
We can then build an encoded token, pass that into the header
of a request, and send that request with the code below:
// nanoservices/auth/kernel/src/api/users/get.rs
let header_token = HeaderToken {
unique_id: id
}.encode()?;
let client = Client::new();
let response = client
.get(&full_url)
.header("token", header_token)
.send()
.await
.map_err(|e| {
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::BadRequest
)
})?;
And finally, we can handle the response and return the user
with the code below:
// nanoservices/auth/kernel/src/api/users/get.rs
if response.status().is_success() {
let trimmed_user = response
.json::<TrimmedUser>()
.await
.map_err(|e| NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::BadRequest
))?;
return Ok(trimmed_user)
} else {
return Err(NanoServiceError::new(
format!("Failed to get user: {}", response.st
NanoServiceErrorStatus::BadRequest,
))
}
# nanoservices/to_do/core/Cargo.toml
. . .
auth-kernel = { path = "../../auth/kernel" }
[features]
memory = ["auth-kernel/core-postgres"]
http = ["auth-kernel/http"]
And with this, our to-do server can get users by compiling the
auth server and calling an API function or performing a HTTP
request to the auth server.
We have maintained our flexibility with our deployment;
however, our to-do server does not need to use user data right
now. To make our user data useful to our to-do server, we must
tether our to-do items to our user ID so users can see and edit
their own items.
Right now, all our to-do items are globally accessible to any user
that logs in. To silo our to-do items, we must carry out the
following steps:
Figure 11.1 – Foreign key association between our user and items
This means that if we delete a user, then all the items associated
with that user would also get deleted. We could also perform a
join query with the SQL below:
SELECT
    users.id AS user_id,
    users.email,
    to_do_items.id AS todo_id,
    to_do_items.title,
    to_do_items.status
FROM
    users
JOIN
    to_do_items ON users.id = to_do_items.user_id;
This means that each row would contain elements from the
item, and the user associated with that item.
However, our items are on the to-do server, and our users are
on our auth server. Due to our setup, they could be on the same
database, but we must also accommodate the possibility of
the items and users being on separate databases. Like
everything in software engineering, there are always trade-offs.
Microservices enable you to break a massive system into
components and have individual teams work on these
individual services. You can also use different languages and
deployment pipelines, which, whilst being more complex, can make
individual releases smoother. However, none of this is for free.
Microservices add complexity. To accommodate the possibility of
two different databases, we have a third table where rows consist
of user IDs and item IDs, as seen in figure 11.2.
Figure 11.2 – A separate database table for logging items
associations with users
Now that we know how we are going to link our users with
items, we can create a new migration for our to-do service. To
automate our migration creation, we can create a bash script
housing the following code:
#!/usr/bin/env bash
# nanoservices/to_do/scripts/add_migration.sh
# Check if argument is provided
if [ $# -eq 0 ]; then
echo "Usage: $0 <migration-name>"
exit 1
fi
# navigate to directory
SCRIPTPATH="$( cd "$(dirname "$0")" ; pwd -P )"
cd $SCRIPTPATH/../dal/migrations
# create SQL script name
current_timestamp=$(date +'%Y%m%d%H%M%S')
description="$1"
script_name="${current_timestamp}_${description}.sql"
touch $script_name
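Running the script with a description argument produces an empty, timestamped migration file, for example:
# Example usage; the timestamp prefix reflects the current time.
sh nanoservices/to_do/scripts/add_migration.sh adding-user-connection
# creates: dal/migrations/20240628025045_adding-user-connection.sql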
-- nanoservices/to_do/dal/migrations/
-- 20240628025045_adding-user-connection.sql
CREATE TABLE user_connections (
user_id INTEGER NOT NULL,
to_do_id INTEGER NOT NULL,
PRIMARY KEY (user_id, to_do_id)
);
Now that our database can link items to users, we can move
onto adding this linking table to our schema in our data access
layer with the following code:
// nanoservices/to_do/dal/src/to_do_items/schema.rs
#[derive(Serialize, Deserialize, Debug, Clone)]
#[cfg_attr(feature = "sqlx-postgres", derive(sqlx::FromRow))]
pub struct UserConnection {
pub user_id: i32,
pub to_do_id: i32
}
For our transactions, we can start with the create with the
following code:
// nanoservices/to_do/dal/src/to_do_items/transactions/create.rs
pub type SaveOneResponse = Result<ToDoItem, NanoServiceError>;
pub trait SaveOne {
fn save_one(item: NewToDoItem, user_id: i32)
-> impl Future<Output = SaveOneResponse> + Send;
}
#[cfg(feature = "sqlx-postgres")]
impl SaveOne for SqlxPostGresDescriptor {
fn save_one(item: NewToDoItem, user_id: i32)
-> impl Future<Output = SaveOneResponse> + Send {
sqlx_postgres_save_one(item, user_id)
}
}
#[cfg(feature = "json-file")]
impl SaveOne for JsonFileDescriptor {
fn save_one(item: NewToDoItem, user_id: i32)
-> impl Future<Output = SaveOneResponse> + Send {
json_file_save_one(item, user_id)
}
}
Here, we can see that we merely pass the user ID into the
functions. For our sqlx_postgres_save_one function, we now
have two database transactions. We create the item, and then
insert the ID of that item into the user_connections table with
the code below:
// nanoservices/to_do/dal/src/to_do_items/transactions/create.rs
#[cfg(feature = "sqlx-postgres")]
async fn sqlx_postgres_save_one(item: NewToDoItem, user_id: i32)
-> SaveOneResponse {
let item = sqlx::query_as::<_, ToDoItem>("
INSERT INTO to_do_items (title, status)
VALUES ($1, $2)
RETURNING *"
).bind(item.title)
.bind(item.status.to_string())
.fetch_one(&*SQLX_POSTGRES_POOL).await.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
let _ = sqlx::query("
INSERT INTO user_connections (user_id, to_do_id)
VALUES ($1, $2)"
).bind(user_id)
.bind(item.id)
.execute(&*SQLX_POSTGRES_POOL).await.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
Ok(item)
}
And finally, our JSON file save function takes the following
form:
// nanoservices/to_do/dal/src/to_do_items/transactions/create.rs
#[cfg(feature = "json-file")]
async fn json_file_save_one(item: NewToDoItem, user_id: i32)
    -> SaveOneResponse {
    let mut tasks = get_all::<ToDoItem>().unwrap_or_else(|_|
        HashMap::new()
    );
let to_do_item = ToDoItem {
id: 1,
title: item.title,
status: item.status.to_string()
};
tasks.insert(
to_do_item.title.to_string() +
":" +
&user_id.to_string(),
to_do_item.clone()
);
let _ = save_all(&tasks)?;
Ok(to_do_item)
}
Here, we can see that we have created a key out of the title and
user ID with the : delimiter. This means we can still have a
single hashmap for a JSON file, but still separate the items
depending on user ID.
// nanoservices/to_do/dal/src/to_do_items/transactions/delete.rs
pub type DeleteOneResponse = Result<ToDoItem, NanoServiceError>;
pub trait DeleteOne {
fn delete_one(title: String, user_id: i32)
-> impl Future<Output = DeleteOneResponse> + Send
}
#[cfg(feature = "sqlx-postgres")]
impl DeleteOne for SqlxPostGresDescriptor {
fn delete_one(title: String, user_id: i32)
-> impl Future<Output = DeleteOneResponse> + Send
sqlx_postgres_delete_one(title, user_id)
}
}
#[cfg(feature = "json-file")]
impl DeleteOne for JsonFileDescriptor {
fn delete_one(title: String, user_id: i32)
-> impl Future<Output = DeleteOneResponse> + Send
json_file_delete_one(title, user_id)
}
}
Your Postgres delete function should look like the code below:
// nanoservices/to_do/dal/src/to_do_items/transactions/delete.rs
#[cfg(feature = "sqlx-postgres")]
async fn sqlx_postgres_delete_one(title: String, user_id: i32)
-> DeleteOneResponse {
let item = sqlx::query_as::<_, ToDoItem>("
DELETE FROM to_do_items
WHERE title = $1
RETURNING *"
).bind(title)
.fetch_one(&*SQLX_POSTGRES_POOL).await.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
let _ = sqlx::query("
DELETE FROM user_connections
WHERE user_id = $1 AND to_do_id = $2"
).bind(user_id)
.bind(item.id)
.execute(&*SQLX_POSTGRES_POOL).await.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
Ok(item)
}
And your delete function for the JSON file should be like the
following code:
// nanoservices/to_do/dal/src/to_do_items/transactions/delete.rs
#[cfg(feature = "json-file")]
async fn json_file_delete_one(title: String, user_id: i32)
    -> DeleteOneResponse {
    let mut tasks = get_all::<ToDoItem>().unwrap_or_else(
        |_| HashMap::new()
    );
    let to_do_item = tasks.remove(
        &(title + ":" + &user_id.to_string())
    ).ok_or_else(|| {
        NanoServiceError::new(
            "Item not found".to_string(),
            NanoServiceErrorStatus::NotFound
        )
    })?;
    let _ = save_all(&tasks)?;
    Ok(to_do_item)
}
We can now move onto our get transaction with the
implementations below:
// nanoservices/to_do/dal/src/to_do_items/transactions/get.rs
pub type GetAllResponse = Result<Vec<ToDoItem>, NanoServiceError>;
pub trait GetAll {
fn get_all(user_id: i32)
-> impl Future<Output = GetAllResponse> + Send;
}
#[cfg(feature = "sqlx-postgres")]
impl GetAll for SqlxPostGresDescriptor {
fn get_all(user_id: i32)
-> impl Future<Output = GetAllResponse> + Send {
sqlx_postgres_get_all(user_id)
}
}
#[cfg(feature = "json-file")]
impl GetAll for JsonFileDescriptor {
fn get_all(user_id: i32)
-> impl Future<Output = GetAllResponse> + Send {
json_file_get_all(user_id)
}
}
For our Postgres get function, we get all the item IDs that are
associated with the user, and then query the items table for rows
whose IDs are in that list. We can see this query unfold with the
following code:
// nanoservices/to_do/dal/src/to_do_items/transactions/get.rs
#[cfg(feature = "sqlx-postgres")]
async fn sqlx_postgres_get_all(user_id: i32) -> GetAllResponse {
let items = sqlx::query_as::<_, ToDoItem>("
SELECT * FROM to_do_items WHERE id IN (
SELECT to_do_id
FROM user_connections WHERE user_id = $1
)")
.bind(user_id)
.fetch_all(&*SQLX_POSTGRES_POOL).await.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
Ok(items)
}
For our JSON file implementation, we load the file, get the key of
each item, split the key with our delimiter to get the user ID,
and push the item to a vector if the user ID extracted from the
key matches the user ID passed into the function with the code
below:
// nanoservices/to_do/dal/src/to_do_items/transactions/get.rs
#[cfg(feature = "json-file")]
async fn json_file_get_all(user_id: i32) -> GetAllResponse {
    let tasks = get_all::<ToDoItem>()
        .unwrap_or_else(|_| HashMap::new());
    let mut filtered_items: Vec<ToDoItem> = Vec::new();
    for (key, item) in tasks {
        // the key takes the form "<title>:<user_id>"
        let item_user_id = key.split(':')
            .nth(1)
            .and_then(|id| id.parse::<i32>().ok());
        if item_user_id == Some(user_id) {
            filtered_items.push(item);
        }
    }
    Ok(filtered_items)
}
// nanoservices/to_do/dal/src/to_do_items/transactions/update.rs
pub type UpdateOneResponse = Result<ToDoItem, NanoServiceError>;
pub trait UpdateOne {
fn update_one(item: ToDoItem, user_id: i32)
-> impl Future<Output = UpdateOneResponse> + Send
}
#[cfg(feature = "sqlx-postgres")]
impl UpdateOne for SqlxPostGresDescriptor {
fn update_one(item: ToDoItem, _user_id: i32)
-> impl Future<Output = UpdateOneResponse> + Send
sqlx_postgres_update_one(item)
}
}
#[cfg(feature = "json-file")]
impl UpdateOne for JsonFileDescriptor {
fn update_one(item: ToDoItem, user_id: i32)
-> impl Future<Output = UpdateOneResponse> + Send
json_file_update_one(item, user_id)
}
}
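The bodies of these update implementations are elided here. As an illustration, a minimal sketch of the JSON-file variant, assuming the same "<title>:<user_id>" key convention used by the other functions, could look like the following:
// nanoservices/to_do/dal/src/to_do_items/transactions/update.rs
// A sketch only; the real implementation may differ.
#[cfg(feature = "json-file")]
async fn json_file_update_one(item: ToDoItem, user_id: i32)
    -> UpdateOneResponse {
    let mut tasks = get_all::<ToDoItem>().unwrap_or_else(
        |_| HashMap::new()
    );
    let key = item.title.to_string() + ":" + &user_id.to_string();
    if !tasks.contains_key(&key) {
        return Err(NanoServiceError::new(
            "Item not found".to_string(),
            NanoServiceErrorStatus::NotFound
        ));
    }
    tasks.insert(key, item.clone());
    let _ = save_all(&tasks)?;
    Ok(item)
}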
And this is it, the data access layer is now fully tethered to the
user ID for items. We now must pass the user ID into the core
functions.
// nanoservices/to_do/core/src/api/basic_actions/create.rs
pub async fn create<T: SaveOne>(item: NewToDoItem, user_id: i32)
    -> Result<ToDoItem, NanoServiceError> {
    let created_item = T::save_one(item, user_id).await?;
Ok(created_item)
}
// nanoservices/to_do/core/src/api/basic_actions/delete.rs
pub async fn delete<T: DeleteOne>(id: &str, user_id: i32)
    -> Result<(), NanoServiceError> {
    let _ = T::delete_one(id.to_string(), user_id).await?;
Ok(())
}
// nanoservices/to_do/core/src/api/basic_actions/get.rs
pub async fn get_all<T: GetAll>(user_id: i32)
-> Result<AllToDOItems, NanoServiceError> {
let all_items = T::get_all(user_id).await?;
AllToDOItems::from_vec(all_items)
}
// nanoservices/to_do/core/src/api/basic_actions/update.rs
pub async fn update<T: UpdateOne>(item: ToDoItem, user_id: i32)
-> Result<(), NanoServiceError> {
let _ = T::update_one(item, user_id).await?;
Ok(())
}
And our core is now tethered, now we must move onto adding
the user ID to the networking layer.
For our networking, we must get the user ID from the auth
server via the kernel. Therefore, we must declare our auth
kernel with the code below:
# nanoservices/to_do/networking/actix_server/Cargo.toml
. . .
auth-kernel = { path = "../../../auth/kernel" }
[features]
auth-http = ["auth-kernel/http"]
auth-core-postgres = ["auth-kernel/core-postgres"]
default = ["auth-core-postgres"]
// nanoservices/to_do/networking/actix_server/src
// /api/basic_actions/create.rs
. . .
use auth_kernel::api::users::get::get_user_by_unique_id;
pub async fn create<T: SaveOne + GetAll>(
token: HeaderToken,
body: Json<NewToDoItem>
) -> Result<HttpResponse, NanoServiceError> {
    let user = get_user_by_unique_id(token.unique_id).await?;
    let _ = create_core::<T>(body.into_inner(), user.id).await?;
Ok(HttpResponse::Created().json(get_all_core::<T>
user.id
).await?))
}
Here we can see that we get the user unique ID from the token
and get the user from the auth kernel. We can apply this
approach to all the other endpoints defined below:
// nanoservices/to_do/networking/actix_server/src
// /api/basic_actions/get.rs
. . .
use auth_kernel::api::users::get::get_user_by_unique_id;
pub async fn get_all<T: GetAll>(token: HeaderToken)
-> Result<HttpResponse, NanoServiceError> {
let user = get_user_by_unique_id(
token.unique_id
).await?;
Ok(HttpResponse::Ok().json(get_all_core::<T>(
user.id
).await?))
}
For update:
// nanoservices/to_do/networking/actix_server/src
// /api/basic_actions/update.rs
. . .
use auth_kernel::api::users::get::get_user_by_unique_id;
pub async fn update<T: UpdateOne + GetAll>(
token: HeaderToken,
body: Json<ToDoItem>
) -> Result<HttpResponse, NanoServiceError> {
    let user = get_user_by_unique_id(token.unique_id).await?;
    let _ = update_core::<T>(body.into_inner(), user.id).await?;
Ok(HttpResponse::Ok().json(get_all_core::<T>(
user.id
).await?))
}
For delete:
// nanoservices/to_do/networking/actix_server/src
// /api/basic_actions/delete.rs
. . .
use auth_kernel::api::users::get::get_user_by_unique_id;
pub async fn delete_by_name<T: DeleteOne + GetAll>(
token: HeaderToken,
req: HttpRequest
) -> Result<HttpResponse, NanoServiceError> {
let user = get_user_by_unique_id(
token.unique_id
).await?;
match req.match_info().get("name") {
Some(name) => {
delete_core::<T>(name, user.id).await?;
},
None => {
. . .
}
};
Ok(HttpResponse::Ok().json(get_all_core::<T>(
user.id
).await?))
}
And with this, our networking layer for our to-do server is now
tethered with our user ID. This means that once logged in, a
user can only see and perform operations that are tethered to
the user.
Our system now has the to-do server talking to the auth server.
If we were to run our ingress server, it would work as intended,
and you would only see the to-do items in the frontend that
correspond to the user logged in. However, this is because the
to-do server calls the authentication server inside the same
binary, as all the servers and the frontend are compiled into one
binary. We know the ingress server will work because it
compiles. However, what about when both of our servers are
running independently on different ports? Even if everything
compiles, the HTTP request from the to-do server to the auth
server could be faulty. We must test our HTTP request.
To run our test, we first must create the following .env file:
# nanoservices/to_do/.env
TO_DO_DB_URL=postgres://username:password@localhost/t
AUTH_DB_URL=postgres://username:password@localhost/to
AUTH_API_URL=http://127.0.0.1:8081
JWT_SECRET=secret
#!/usr/bin/env bash
# nanoservices/to_do/scripts/test_server_com.sh
SCRIPTPATH="$( cd "$(dirname "$0")" ; pwd -P )"
cd $SCRIPTPATH
cd ../../../
This boilerplate code ensures that the script's working directory
is the base of our system. Now that we are
in the root directory of our system, we can build our database
and then run it in the background with the code below:
# nanoservices/to_do/scripts/test_server_com.sh
docker-compose build
docker-compose up -d
sleep 1
export $(cat ./nanoservices/to_do/.env | xargs)
Because the server builds might take a while, we want
to build our servers before running them with the following
code:
# nanoservices/to_do/scripts/test_server_com.sh
cargo build \
--manifest-path ./nanoservices/to_do/networking/actix_server/Cargo.toml \
--features auth-http \
--release \
--no-default-features
cargo build \
--manifest-path ./nanoservices/auth/networking/actix_server/Cargo.toml \
--release
We now can run our servers in the background with the code
below:
# nanoservices/to_do/scripts/test_server_com.sh
cargo run \
--manifest-path ./nanoservices/to_do/networking/actix_server/Cargo.toml \
--features auth-http \
--release --no-default-features &
TO_DO_PID=$!
cargo run \
--manifest-path ./nanoservices/auth/networking/actix_server/Cargo.toml \
--release &
AUTH_PID=$!
The & means that the command will be run in the background.
We can also see that we get the process IDs with the
TO_DO_PID=$! and AUTH_PID=$! straight after their respective
commands. We will reference these process IDs to kill the
servers once we have finished with them.
Now our servers are running, and our database is also running.
We can now create a user and login to assign the login result to
the token variable with the following code:
# nanoservices/to_do/scripts/test_server_com.sh
sleep 2
curl -X POST http://127.0.0.1:8081/api/v1/users/create \
-H "Content-Type: application/json" \
-d '{
"email": "[email protected]",
"password": "password"
}'
token=$(curl \
-u [email protected]:password \
-X GET http://127.0.0.1:8081/api/v1/auth/login)
token=$(echo "$token" | tr -d '\r\n' | sed 's/^"//' | sed 's/"$//')
The final line strips the token of surrounding quotation marks
and newline characters. We can now insert that token into the
header of our create HTTP request with the code below:
create HTTP request with the code below:
# nanoservices/to_do/scripts/test_server_com.sh
response=$(curl -X POST http://127.0.0.1:8080/api/v1/create \
-H "Content-Type: application/json" \
-H "token: $token" \
-d '{
"title": "code",
"status": "PENDING"
}')
sleep 1
echo $response
sleep 2
And finally, we kill the servers, and tear down the database
container with the following code:
# nanoservices/to_do/scripts/test_server_com.sh
kill $TO_DO_PID
kill $AUTH_PID
docker-compose down
. . .
Migrating auth database...
auth database migrations completed: ()
Migrating to-do database...
to-do database migrations completed: ()
. . .
{"pending":[{"id":1,"title":"code","status":"PENDING"
[+] Running 2/1
✔ Container to-do-postgres Removed
✔ Network tethering-users-to-items_default Removed
Here we have it, our system can support two servers that talk to
each other via HTTP.
Summary
Questions
1. At a high level how did we make the user accessible via the
unique ID?
2. How do we make the core API function available to other
servers directly by compiling the core function into another
server?
3. How do we expose the HTTP endpoint of one server to
another?
4. There is a risk that two different servers will be running
transactions on two different databases. How can we tether
data from one service to another considering that we must
accommodate the possibility of two different databases?
5. What is the risk of your database solution?
Answers
https://packt.link/EarlyAccess/
While our authentication sessions work, we might as well get
more control over them, and explore the concept of caching at
the same time. We will do this by embedding our own Rust code
directly in a Redis database, so we can perform a range of
checks and updates in one call to the cache as opposed to
making multiple calls to a database. Caching is a great tool to
add to your belt, enabling you to reduce the time and resources
used to serve data. In this chapter we cover the following:
What caching is
Setting up Redis
Technical requirements
https://github.com/PacktPublishing/Rust-Web-Programming-
3E/tree/main/chapter11
What is caching
Cold storage: This is where the storage becomes very cheap and
reliable, but there is a delay in accessing the data. For instance,
at home you would consider cold storage to be storing your
data on an optical disk or external hard drive and removing it
from the computer. It will take more time to load it as you must
get the storage device from its place and insert the storage
device into your computer so your computer can access it.
However, your storage device is not getting the daily wear and
tear of running inside your computer. Cold storage is the best
choice for data that is accessed infrequently. Cloud environments
offer a more automated version of cold storage, where the
access to the data takes a while, and some cloud providers
might charge per read. However, the long-term storage of
untouched data in cloud cold storage services is very cheap.
This is why old photos on a social media app might take longer
to load, as these old photos might be stored in cold storage.
Setting up Redis
└── nanoservices
├── auth
├── to_do
└── user-session-cache
├── cache-client
│ ├── Cargo.toml
│ └── src
│ └── lib.rs
└── cache-module
├── Cargo.toml
├── Dockerfile
└── src
├── lib.rs
├── processes
│ ├── login.rs
│ ├── logout.rs
│ ├── mod.rs
│ └── update.rs
└── user_session.rs
# nanoservices/user-session-cache/cache-module/Dockerfile
FROM rust:latest as build
ENV PKG_CONFIG_ALLOW_CROSS=1
RUN apt-get update
RUN apt-get install libclang-dev -y
We then set the work directory inside of the image, copy all our
Rust code for the Redis module into the image, and build the
Rust module with the code below:
# nanoservices/user-session-cache/cache-module/Dockerfile
WORKDIR /app
COPY . .
RUN cargo build --release
# docker-compose.yml
. . .
cache:
container_name: 'to-do-redis'
build: './nanoservices/user-session-cache/cache-module'
restart: always
ports:
- '6379:6379'
If we try and run our docker compose now, we will just get an
error. Before we run our cache, we must build our Redis
module.
Building our Redis module
# nanoservices/user-session-cache/cache-module/Cargo.toml
[package]
name = "cache-module"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
redis-module = "2.0.7"
chrono = "0.4.24"
bincode = "1.3.3"
serde = { version = "1.0.203", features = ["derive"] }
The cdylib stands for "C dynamic library." This type of library
is intended to be used from languages other than Rust, such as
C, C++, or even Python. As Redis is written in C, our library will
have to be a C dynamic library.
Here, we can see that we have three commands, and the 1s for
each command denote that the first key, last key, and key step
are all set to 1. This means that all the commands work with a
single key. The "write fast deny-oom" flags mean that the
command writes to the keyspace, runs quickly, and will be denied
if the Redis cache runs out of memory.
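The registration that this paragraph refers to lives in lib.rs. The redis-module crate declares commands through its redis_module! macro; the outline below is a sketch under assumptions (the module and command names here are placeholders, and the exact macro fields depend on the crate version, so check the crate documentation):
// nanoservices/user-session-cache/cache-module/src/lib.rs
// A sketch only: command and module names are assumptions.
use redis_module::redis_module;
mod processes;
mod user_session;

redis_module! {
    name: "user_session_cache",
    version: 1,
    allocator: (
        redis_module::alloc::RedisAlloc,
        redis_module::alloc::RedisAlloc
    ),
    data_types: [],
    commands: [
        ["login.set", processes::login::login,
            "write fast deny-oom", 1, 1, 1],
        ["logout.set", processes::logout::logout,
            "write fast deny-oom", 1, 1, 1],
        ["update.set", processes::update::update,
            "write fast deny-oom", 1, 1, 1],
    ],
}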
// nanoservices/user-session-cache/cache-module/src/user_session.rs
use redis_module::{
    Context, RedisString, RedisError, RedisResult, RedisValue
};
use chrono::{DateTime, Utc, NaiveDateTime};
With this, our user session struct has the following definition:
// nanoservices/user-session-cache/cache-module/src/user_session.rs
pub struct UserSession {
pub user_id: String,
pub key: String,
pub session_datetime: DateTime<Utc>,
}
// nanoservices/user-session-cache/cache-module/src/user_session.rs
impl UserSession {
pub fn from_id(user_id: String)
-> UserSession {
. . .
}
pub fn check_timeout(&mut self, ctx: &Context)
-> RedisResult {
. . .
}
pub fn update_last_interacted(&self, ctx: &Context)
-> RedisResult {
. . .
}
pub fn get_counter(&self, ctx: &Context)
-> RedisResult {
. . .
}
}
// nanoservices/user-session-cache/cache-module/src/user_session.rs
pub fn from_id(user_id: String) -> UserSession {
UserSession {
user_id: user_id.clone(),
key: format!("user_session_{}", user_id),
session_datetime: Utc::now(),
}
}
// nanoservices/user-session-cache/cache-module/src/user_session.rs
let key_string = RedisString::create(None, self.key.clone());
let key = ctx.open_key_writable(&key_string);
// nanoservices/user-session-cache/cache-module/src/user_session.rs
let last_interacted_string = match key.hash_get("last_interacted")? {
Some(v) => {
match NaiveDateTime::parse_from_str(
&v.to_string(), "%Y-%m-%d %H:%M:%S"
) {
Ok(v) => v,
Err(e) => {
println!("Could not parse date: {:?}"
return Err(RedisError::Str("Could not
}
}
},
None => return Err(
RedisError::Str("Last interacted field does n
)
};
We can see that if we cannot find the field or fail to parse the
datetime, we return appropriate errors.
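The computation of time_diff and the cutoff itself sits between these two excerpts; a plausible bridge, assuming a cutoff measured in minutes, is:
// nanoservices/user-session-cache/cache-module/src/user_session.rs
// A sketch of the elided step; the cutoff value is a placeholder.
let timeout_mins: i64 = 30;
let time_diff = Utc::now()
    .naive_utc()
    .signed_duration_since(last_interacted_string)
    .num_minutes();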
If the time elapsed is larger than the cutoff, we can then
delete the entry and return a message that the session has
timed out with the code below:
// nanoservices/user-session-cache/cache-module/src/user_session.rs
if time_diff > timeout_mins.into() {
    match key.delete(){
        Ok(_) => {},
        Err(_) => return Err(
            RedisError::Str("Could not delete key")
        )
    };
    return Ok(RedisValue::SimpleStringStatic("TIMEOUT"));
}
We have now passed the timeout check; finally, we check the
counter. The counter is a way of forcing a refresh of the
token. For instance, if a user loves our app and is constantly
using it 24 hours a day for a week, the session would never time
out and the user would be using the same JWT for a week. So, we
get the counter, increase the counter by one, and return a
refresh message if the counter has exceeded a cutoff. If
we pass the counter check, we merely return an OK message to
tell the user that the auth check is all good with the following
code:
// nanoservices/user-session-cache/cache-module/src/user_session.rs
let mut counter = match self.get_counter(ctx)? {
    RedisValue::Integer(v) => v,
    _ => return Err(RedisError::Str("Could not get counter"))
};
counter += 1;
key.hash_set("counter", ctx.create_string(counter.to_string()));
if counter > 20 {
    return Ok(RedisValue::SimpleStringStatic("REFRESH"));
}
Ok(RedisValue::SimpleStringStatic("OK"))
Our most complex function is now done. We must now move
onto our update_last_interacted function, where we update the
"last_interacted" field. This is a good opportunity to try
building the function yourself. If you attempted to write the
function yourself, hopefully it looks like the code below:
// nanoservices/user-session-cache/cache-module/src/user_session.rs
pub fn update_last_interacted(&self, ctx: &Context) -> RedisResult {
    let key_string = RedisString::create(None, self.key.clone());
    let key = ctx.open_key_writable(&key_string);
    let formatted_date_string = self.session_datetime.format(
        "%Y-%m-%d %H:%M:%S"
    ).to_string();
    let last_interacted_string = RedisString::create(
        None, formatted_date_string
    );
    key.hash_set("last_interacted", ctx.create_string(
        last_interacted_string)
    );
    Ok(RedisValue::SimpleStringStatic("OK"))
}
// nanoservices/user-session-cache/cache-module/src/user_session.rs
pub fn get_counter(&self, ctx: &Context) -> RedisResult {
    let key_string = RedisString::create(None, self.key.clone());
    let key = ctx.open_key_writable(&key_string);
    match key.hash_get("counter")? {
        Some(v) => {
            let v = v.to_string().parse::<i64>().unwrap();
            Ok(RedisValue::Integer(v))
        },
        None => Err(RedisError::Str(
            "Counter field does not exist"
            )
        )
    }
}
And our session struct is now complete. We can now move onto
building our processes, starting with our login.
For our login process, we must have the Redis context and
arguments passed in via the Redis command and return an OK
message if everything is good. Considering the steps, the login
process has the following outline:
// nanoservices/user-session-cache/cache-module/src/
// processes/login.rs
use redis_module::{
Context, NextArg, RedisError, RedisResult, RedisString, RedisValue
};
use crate::user_session::UserSession;
pub fn login(ctx: &Context, args: Vec<RedisString>) -> RedisResult {
. . .
Ok(RedisValue::SimpleStringStatic("OK"))
}
// nanoservices/user-session-cache/cache-module/src/
// processes/login.rs
let user_session = UserSession::from_id(user_id);
user_session.update_last_interacted(ctx)?;
let key_string = RedisString::create(None, user_session.key.clone());
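The argument parsing and the writes that seed the session hash are elided above; a sketch of what they could look like, assuming the login command also receives the permanent user ID that the update process later reads back, is:
// nanoservices/user-session-cache/cache-module/src/
// processes/login.rs
// A sketch only: the parsing precedes the excerpt above, and the
// hash writes follow the creation of key_string.
if args.len() < 3 {
    return Err(RedisError::WrongArity);
}
let mut args = args.into_iter().skip(1);
let user_id = args.next_arg()?.to_string();
let perm_user_id = args.next_arg()?.to_string();
. . .
let key = ctx.open_key_writable(&key_string);
key.hash_set("perm_user_id", ctx.create_string(perm_user_id));
key.hash_set("counter", ctx.create_string("0"));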
And with this, our login process is done. We can now move onto
our logout process.
// nanoservices/user-session-cache/cache-module/src/
// processes/logout.rs
use redis_module::{
Context, NextArg, RedisError,
RedisResult, RedisString, RedisValue
};
use crate::user_session::UserSession;
pub fn logout(ctx: &Context, args: Vec<RedisString>)
-> RedisResult {
. . .
Ok(RedisValue::SimpleStringStatic("OK"))
}
Inside our logout function, we process the arguments with the
code below:
// nanoservices/user-session-cache/cache-module/src/
// processes/logout.rs
if args.len() < 2 {
return Err(RedisError::WrongArity);
}
let mut args = args.into_iter().skip(1);
let user_id = args.next_arg()?.to_string();
We then construct the key and session with the following code:
// nanoservices/user-session-cache/cache-module/src/
// processes/logout.rs
let user_session = UserSession::from_id(user_id);
let key_string = RedisString::create(None, user_session.key.clone());
let key = ctx.open_key_writable(&key_string);
And finally, we delete the session from the key value store with
the code below:
// nanoservices/user-session-cache/cache-module/src/
// processes/logout.rs
if key.is_empty() {
return Ok(RedisValue::SimpleStringStatic("NOT_FOU
}
match key.delete() {
Ok(_) => {},
Err(_) => return Err(RedisError::Str("Could not d
};
With our logout process done, we can now move on to our update process, which has the following outline:
// nanoservices/user-session-cache/cache-module/src/
// processes/update.rs
use redis_module::{
Context, NextArg, RedisError, RedisResult,
RedisString, RedisValue
};
use crate::user_session::UserSession;
pub fn update(ctx: &Context, args: Vec<RedisString>)
-> RedisResult {
. . .
}
Inside our update function, we process the arguments with the
code below:
// nanoservices/user-session-cache/cache-module/src/
// processes/update.rs
if args.len() < 2 {
return Err(RedisError::WrongArity);
}
let mut args = args.into_iter().skip(1);
let user_id = args.next_arg()?.to_string();
We then construct the session and open the key, returning a not-found response if the key is empty:
// nanoservices/user-session-cache/cache-module/src/
// processes/update.rs
let mut user_session = UserSession::from_id(user_id);
let key_string = RedisString::create(None, user_session.key.as_str());
let key = ctx.open_key_writable(&key_string);
if key.is_empty() {
    return Ok(RedisValue::SimpleStringStatic("NOT_FOUND"));
}
}
And finally, we check the timeout and handle the outcome with
the code below:
// nanoservices/user-session-cache/cache-module/src/
// processes/update.rs
match &user_session.check_timeout(ctx)? {
    RedisValue::SimpleStringStatic("TIMEOUT") => {
        return Ok(RedisValue::SimpleStringStatic("TIMEOUT"));
    },
    RedisValue::SimpleStringStatic("REFRESH") => {
        user_session.update_last_interacted(ctx)?;
        return Ok(RedisValue::SimpleStringStatic("REFRESH"));
    },
    RedisValue::SimpleStringStatic("OK") => {
        user_session.update_last_interacted(ctx)?;
        let perm_user_id = match key.hash_get("perm_user_id")? {
            Some(perm_user_id) => perm_user_id,
            None => {
                return Err(RedisError::Str(
                    "Could not get perm_user_id"
                ));
            }
        };
        return Ok(RedisValue::SimpleString(perm_user_id.to_string()));
    },
    _ => {
        return Err(RedisError::Str("Could not check timeout"));
    }
};
We can see that we still update the "last_interacted" field even if a refresh is returned. This is because we want to decouple the refresh mechanism from the timeout mechanism. If developers want to keep serving JWT tokens when they need to be refreshed, they can do so. If they do not want the session to ever time out, they can set the timeout to a year or so. With the update now completed, we can say that our caching module is complete, and we can move on to building our client.
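Before we leave the module, remember that a compiled Redis module must be loaded into the Redis server before its commands exist. A minimal sketch of doing this from the command line, assuming our module builds as a cdylib with the library name below (your library name and target path may differ):
redis-server --loadmodule ./target/release/libcache_module.so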
We do not know what the future will hold for our system. When developing web systems, our requirements will change as the problem evolves. Therefore, we have no way of knowing which servers will need to access the cache, so it makes sense to build a client that is accessible to any Rust server that needs it. For our client, we only need to make a connection to Redis in an async manner and return appropriate errors if needed. With these requirements in mind, the Cargo.toml file for our cache client takes the following form:
// nanoservices/user-session-cache/cache-client/Cargo
[package]
name = "cache-client"
version = "0.1.0"
edition = "2021"
[dependencies]
redis = { version = "0.25.4", features = ["tokio-comp"] }
tokio = { version = "1.38.0", features = ["full"] }
glue = { path = "../../../glue"}
Inside our client's library file, we start with the following imports:
// nanoservices/user-session-cache/cache-client/src/l
use std::error::Error;
use redis::aio::{ConnectionLike, MultiplexedConnection};
use redis::Value;
use glue::errors::{NanoServiceError, NanoServiceErrorStatus};
// nanoservices/user-session-cache/cache-client/src/l
async fn get_connection(address: &str)
    -> Result<MultiplexedConnection, NanoServiceError> {
    let client = redis::Client::open(address).map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
    let con = client.get_multiplexed_async_connection()
.await
.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
Ok(con)
}
We can then handle our strings from the response of the Redis
cache with the code below:
// nanoservices/user-session-cache/cache-client/src/
fn unpack_result_string(result: Value)
-> Result<String, NanoServiceError> {
match result {
Value::Status(s) => Ok(s),
_ => Err(NanoServiceError::new(
"Error converting the result into a strin
NanoServiceErrorStatus::Unknown
))
}
}
Our login function has the following outline:
// nanoservices/user-session-cache/cache-client/src/l
pub async fn login(
address: &str,
user_id: &str,
timeout_mins: usize,
perm_user_id: i32
) -> Result<(), NanoServiceError> {
. . .
Ok(())
}
Inside our login function, we get the connection and send the
request with the code below:
// nanoservices/user-session-cache/cache-client/src/l
let mut con = get_connection(address).await?;
let result = con
.req_packed_command(
&redis::cmd("login.set")
.arg(user_id)
.arg(timeout_mins)
.arg(perm_user_id.to_string())
.clone(),
)
.await.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
match result {
Value::Okay => {
return Ok(());
},
_ => {
return Err(NanoServiceError::new(
format!("{:?}", result),
NanoServiceErrorStatus::Unknown
));
}
}
We then match the result, returning an error if we do not get a Value::Okay. With this, we can now log in to our Redis cache.
For our logout function, the approach is the same as the login function, with the following code:
// nanoservices/user-session-cache/cache-client/src/l
pub async fn logout(address: &str, user_id: &str)
-> Result<String, Box<dyn Error>> {
    let mut con = get_connection(address).await?;
let result = con
.req_packed_command(
&redis::cmd("logout.set")
.arg(user_id)
.clone(),
)
.await.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
    let result_string = unpack_result_string(result)?;
Ok(result_string)
}
For our update function, we define an enum for the session status and the function outline below:
// nanoservices/user-session-cache/cache-client/src/l
#[derive(Debug)]
pub enum UserSessionStatus {
Ok(i32),
Refresh
}
pub async fn update(address: &str, user_id: &str)
-> Result<UserSessionStatus, NanoServiceError> {
    let mut con = get_connection(address).await?;
. . .
}
Inside our update function, we send the update command with the code below:
// nanoservices/user-session-cache/cache-client/src/l
let result = con
.req_packed_command(
&redis::cmd("update.set")
.arg(user_id)
.clone(),
)
.await.map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
We then unpack the result string and handle the session statuses with the following code:
// nanoservices/user-session-cache/cache-client/src/l
let result_string = unpack_result_string(result)?;
match result_string.as_str() {
"TIMEOUT" => {
return Err(NanoServiceError::new(
"Session has timed out".to_string(),
NanoServiceErrorStatus::Unauthorized
));
},
"NOT_FOUND" => {
return Err(NanoServiceError::new(
"Session not found".to_string(),
NanoServiceErrorStatus::Unauthorized
));
},
"REFRESH" => {
return Ok(UserSessionStatus::Refresh)
},
_ => {}
}
Finally, if none of the status strings matched, the result must be the permanent user ID, which we parse with the code below:
// nanoservices/user-session-cache/cache-client/src/l
let perm_user_id = match result_string.parse::<i32>() {
    Ok(perm_user_id) => perm_user_id,
    Err(_) => {
        return Err(NanoServiceError::new(
            "Error converting the result into an i32".to_string(),
NanoServiceErrorStatus::Unknown
));
}
};
Ok(UserSessionStatus::Ok(perm_user_id))
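Before wiring this into our servers, here is a quick sketch of how a caller might exercise the client; the address and IDs below are illustrative only:
// Illustrative sketch: log a session in, then check it with update
use cache_client::{login, update, UserSessionStatus};
use glue::errors::NanoServiceError;

async fn demo_session_flow() -> Result<(), NanoServiceError> {
    let address = "redis://127.0.0.1:6379";
    // create a session with a 20-minute timeout for permanent user 1
    login(address, "unique-id-123", 20, 1).await?;
    // a valid session hands back the permanent user ID
    match update(address, "unique-id-123").await? {
        UserSessionStatus::Ok(perm_id) => println!("session valid for user {}", perm_id),
        UserSessionStatus::Refresh => println!("token needs refreshing"),
    }
    Ok(())
}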
And with this, our Redis client is now ready. Finally, we can use our Redis cache by calling our client in our servers.
To use the cache client in our auth kernel, we add a user_session module, giving our auth nanoservice the following structure:
└── nanoservices
├── auth
│ ├── kernel
│ │ ├── Cargo.toml
│ │ └── src
│ │ ├── api
│ │ │ ├── . . .
│ │ ├── lib.rs
│ │ └── user_session
│ │ ├── descriptors.rs
│ │ ├── mod.rs
│ │ ├── schema.rs
│ │ └── transactions
│ │ ├── get.rs
│ │ └── mod.rs
First, we add the cache client to our kernel's dependencies:
# nanoservices/auth/kernel/Cargo.toml
[dependencies]
. . .
cache-client = { path = "../../user-session-cache/cache-client" }
// nanoservices/auth/kernel/src/lib.rs
pub mod api;
#[cfg(any(feature = "auth-core", feature = "reqwest"))]
pub mod user_session;
// nanoservices/auth/kernel/src/user_session/mod.rs
pub mod transactions;
pub mod descriptors;
pub mod schema;
// nanoservices/auth/kernel/src/user_session/descript
pub struct RedisSessionDescriptor;
The schema.rs file:
// nanoservices/auth/kernel/src/user_session/schema.r
pub struct UserSession {
pub user_id: i32
}
// nanoservices/auth/kernel/src/user_session/transact
pub mod get;
And finally, the setup code for our get trait takes the following
form:
// nanoservices/auth/kernel/src/user_session/transact
use std::future::Future;
use crate::user_session::schema::UserSession;
use glue::errors::{NanoServiceError, NanoServiceErrorStatus};
use cache_client::{update, UserSessionStatus, login};
use crate::api::users::get::get_user_by_unique_id;
use crate::user_session::descriptors::RedisSessionDescriptor;
pub trait GetUserSession {
fn get_user_session(unique_id: String)
        -> impl Future<Output = Result<UserSession, NanoServiceError>>;
}
We then implement this trait for our Redis descriptor with the code below:
// nanoservices/auth/kernel/src/user_session/transact
impl GetUserSession for RedisSessionDescriptor {
    fn get_user_session(unique_id: String)
        -> impl Future<Output = Result<UserSession, NanoServiceError>> {
        get_session_redis(unique_id)
    }
}
// nanoservices/auth/kernel/src/user_session/transact
pub async fn get_session_redis(unique_id: String)
-> Result<UserSession, NanoServiceError> {
. . .
}
The get_session_redis function is going to be fired for every
authorized request that we process. First, we get the URL of the
Redis server and call the cache update function, with the
following code:
// nanoservices/auth/kernel/src/user_session/transact
let address = std::env::var("CACHE_API_URL").map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::BadRequest
)
})?;
let user_id = update(&address, &unique_id).await?;
For this book, the handling of the return from the update cache function is carried out by the code below:
// nanoservices/auth/kernel/src/user_session/transact
match user_id {
    UserSessionStatus::Ok(id) => Ok(UserSession { user_id: id }),
    UserSessionStatus::Refresh => {
        let user = get_user_by_unique_id(
            unique_id.clone()
        ).await?;
        let _ = login(&address, &unique_id, 20, user.id).await?;
        // assumed step: run update again so the refreshed session
        // hands back the permanent user ID
        let user_id = update(&address, &unique_id).await?;
        match user_id {
            UserSessionStatus::Ok(id) => Ok(UserSession {
                user_id: id
            }),
            _ => Err(NanoServiceError::new(
                "Failed to update user session".to_string(),
                NanoServiceErrorStatus::Unknown)
            )
        }
    }
}
Our kernel is now complete. We can now move on to calling the kernel in our server.
// nanoservices/to_do/networking/actix_server/
// src/api/basic_actions/create.rs
. . .
use auth_kernel::user_session::transactions::get::GetUserSession;
pub async fn create<T, X>(
token: HeaderToken,
body: Json<NewToDoItem>
) -> Result<HttpResponse, NanoServiceError>
where
T: SaveOne + GetAll,
X: GetUserSession
{
let session = X::get_user_session(
token.unique_id
).await?;
let _ = create_core::<T>(
body.into_inner(),
session.user_id
).await?;
Ok(HttpResponse::Created().json(
get_all_core::<T>(session.user_id).await?
))
}
It should not be a surprise that all the other actions in the API
for the to-do server follow suit. The delete view is redefined
with the code below:
// nanoservices/to_do/networking/actix_server/
// src/api/basic_actions/delete.rs
. . .
use auth_kernel::user_session::transactions::get::GetUserSession;
pub async fn delete_by_name<T, X>(
token: HeaderToken,
req: HttpRequest
) -> Result<HttpResponse, NanoServiceError>
where
T: DeleteOne + GetAll,
X: GetUserSession
{
    let session = X::get_user_session(token.unique_id).await?;
. . .
Ok(HttpResponse::Ok().json(
get_all_core::<T>(
session.user_id
).await?
))
}
The get view is redefined with the code below:
// nanoservices/to_do/networking/actix_server/
// src/api/basic_actions/get.rs
. . .
use auth_kernel::user_session::transactions::get::GetUserSession;
pub async fn get_all<T, X>(token: HeaderToken)
-> Result<HttpResponse, NanoServiceError>
where
T: GetAll,
X: GetUserSession
{
    let session = X::get_user_session(token.unique_id).await?;
Ok(HttpResponse::Ok().json(
get_all_core::<T>(session.user_id).await?)
)
}
And the update view follows with the code below:
// nanoservices/to_do/networking/actix_server/
// src/api/basic_actions/update.rs
. . .
use auth_kernel::user_session::transactions::get::GetUserSession;
pub async fn update<T, X>(
token: HeaderToken,
body: Json<ToDoItem>
) -> Result<HttpResponse, NanoServiceError>
where
T: UpdateOne + GetAll,
X: GetUserSession
{
    let session = X::get_user_session(token.unique_id).await?;
let _ = update_core::<T>(
body.into_inner(), session.user_id
).await?;
Ok(HttpResponse::Ok().json(
get_all_core::<T>(session.user_id).await?
))
}
Finally, we slot our Redis descriptor into the views factory with the following code:
// nanoservices/to_do/networking/actix_server/
// src/api/basic_actions/mod.rs
. . .
use auth_kernel::user_session::descriptors::RedisSessionDescriptor;
pub fn basic_actions_factory(app: &mut ServiceConfig) {
app.service(
scope("/api/v1")
.route("get/all", get().to(
get::get_all::<
SqlxPostGresDescriptor,
RedisSessionDescriptor
>)
)
.route("create", post().to(
create::create::<
SqlxPostGresDescriptor,
RedisSessionDescriptor
>)
)
.route("delete/{name}", delete().to(
delete::delete_by_name::<
SqlxPostGresDescriptor,
RedisSessionDescriptor
>)
)
.route("update", put().to(
update::update::<
SqlxPostGresDescriptor,
RedisSessionDescriptor
>)
)
);
}
This is now done, and we can sit back and appreciate what is going on here. Although there are a lot of minor changes in several files, we have introduced a cache that checks the user session. Because of the way our code is structured, our changes easily slot in; we do not have to rip out and change huge chunks of code. Our complexity is also contained: we can just look at the networking workspace and see how the HTTP request is handled throughout the life cycle of a request, because the core logic is all abstracted away. If you wanted to use a file, another database, or just the memory of the server for the cache, you could implement another descriptor and slot that descriptor in easily. If we wanted to do this, we could do it with features, as seen with the code below:
#[cfg(feature = "cache-postgres")]
use auth_kernel::user_session::
    descriptors::PostgresSessionDescriptor as CacheDescriptor;
#[cfg(feature = "cache-redis")]
use auth_kernel::user_session::
    descriptors::RedisSessionDescriptor as CacheDescriptor;
. . .
get::get_all::<
SqlxPostGresDescriptor,
CacheDescriptor
>)
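Note that PostgresSessionDescriptor here is hypothetical, as we have only built the Redis descriptor. For this pattern to compile, the feature flags would also need to be declared in the networking crate's Cargo.toml, along the lines of the sketch below (the names are assumptions):
# Hypothetical feature declarations for swapping cache descriptors
[features]
default = ["cache-redis"]
cache-redis = []
cache-postgres = []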
To call our kernel, we must now add the core feature because our cache interface relies on the get-user function in the core. This means that our auth Cargo.toml must be updated with the following:
# nanoservices/auth/networking/actix_server/Cargo.tom
. . .
[dependencies]
. . .
auth-kernel = {
path = "../../kernel",
features = ["auth-core"],
default-features = false
}
We now need a login transaction in the kernel, which starts with the following imports:
// nanoservices/auth/kernel/src/user_session/transact
use std::future::Future;
use glue::errors::NanoServiceError;
use crate::user_session::descriptors::RedisSessionDescriptor;
use cache_client::login as cache_login;
We then define our trait for logging in a user session with the
code below:
// nanoservices/auth/kernel/src/user_session/transact
pub trait LoginUserSession {
fn login_user_session(
address: &str,
user_id: &str,
timeout_mins: usize,
perm_user_id: i32
)
    -> impl Future<Output = Result<(), NanoServiceError>>;
}
Finally, we implement this trait for the Redis descriptor that just
calls the login function from the Redis interface with the
following code:
// nanoservices/auth/kernel/src/user_session/transact
impl LoginUserSession for RedisSessionDescriptor {
fn login_user_session(
address: &str,
user_id: &str,
timeout_mins: usize,
perm_user_id: i32
)
    -> impl Future<Output = Result<(), NanoServiceError>> {
cache_login(
address,
user_id,
timeout_mins,
perm_user_id
)
}
}
We can then use this trait in our login endpoint with the code below:
// nanoservices/auth/networking/actix_server/src/
// api/auth/login.rs
. . .
use auth_kernel::user_session::transactions::login::
LoginUserSession;
// nanoservices/auth/networking/actix_server/src/
// api/auth/login.rs
pub async fn login<T, X>(req: actix_web::HttpRequest)
-> Result<HttpResponse, NanoServiceError>
where
T: GetByEmail,
X: LoginUserSession
{
. . .
Ok(HttpResponse::Ok().json(token))
}
Inside our login function, we extract the credentials, perform the core login, and get the user with the code below:
// nanoservices/auth/networking/actix_server/src/
// api/auth/login.rs
let credentials = extract_credentials(req).await?;
let token = core_login::<T>(
credentials.email.clone(),
credentials.password
).await?;
let user = T::get_by_email(credentials.email).await?;
We then get our Redis URL and log in the user session with the code below:
// nanoservices/auth/networking/actix_server/src/
// api/auth/login.rs
let url = std::env::var("CACHE_API_URL").map_err(|e|{
NanoServiceError::new(
e.to_string(),
NanoServiceErrorStatus::Unknown
)
})?;
let _ = X::login_user_session(
&url,
&user.unique_id,
20,
user.id
).await?;
Finally, we add the cache URL to our .env files, as seen below:
# nanoservices/to_do/.env
TO_DO_DB_URL=postgres://username:password@localhost/t
AUTH_DB_URL=postgres://username:password@localhost/to
AUTH_API_URL=http://127.0.0.1:8081
JWT_SECRET=secret
CACHE_API_URL=redis://127.0.0.1:6379
Summary
Questions
Answers
We now have a working to-do application that can either run as a single binary or run as multiple servers. However, we do not actually know what is going on inside our system. Let us say that our system makes a request from one server to another. How do we know that this request was made, and what the response was? We don’t. We can try to work out what happened from the error message returned to the frontend, but this might not be clear. We also might not want to expose intricate details of the error to the frontend. To remedy this, we can produce logs of these requests and how the request travels through the system. This also gives us the power to inspect the steps that lead up to the error. Logging also enables us to keep an eye on the health of the system. In this chapter, we will cover the following:
What logging is
Logging via the terminal
What an actor is
Sending logs to an Elasticsearch database
By the end of this chapter, you will be able to log what is going on in your program, including all of the requests and response codes, by implementing middleware for our logger. You will also be able to create background tasks to which our program can send logs for forwarding to the database, taking pressure off our main program. Finally, we will perform queries on our Elasticsearch database to look for particular logs.
Technical requirements
https://github.com/PacktPublishing/Rust-Web-Programming-
3E/tree/main/chapter13
// nanoservices/auth/networking/actix_server/
// src/auth/logout.rs
use actix_web::HttpResponse;
pub async fn logout() -> HttpResponse {
HttpResponse::Ok()
.content_type("text/html; charset=utf-8")
.body(
"<html>\
<script>\
localStorage.removeItem('token'); \
window.location.replace(
document.location.origin);\
</script>\
</html>"
)
}
Here, we can see that we are removing the user token from localStorage and then refreshing the window to be rerouted to the login form. We now must add our logout API endpoint to our factory with the code below:
// nanoservices/auth/networking/actix_server/
// src/auth/mod.rs
. . .
pub mod logout;
. . .
pub fn auth_factory(app: &mut ServiceConfig) {
app.service(
scope("/api/v1/auth")
. . .
.route("logout", get().to(
logout::logout)
)
);
}
This is the easiest way to force a logout as the frontend browser
does not have access to the JWT, and therefore cannot make
any more authenticated HTTP requests.
While our running code in the browser does work, it will leave a dangling entry in our cache. If we want to remove the user session from the cache, we will have to make an HTTP request with the user ID. At this stage in the book, you should be able to add this API endpoint yourself; covering its addition here would merely bloat the chapter. If you choose not to add this endpoint, your to-do application will work fine for the rest of the book, but the memory consumption of the cache will grow over time.
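For reference, here is a minimal sketch of what such an endpoint could look like, assuming we reuse the cache client's logout function and our existing glue types; the handler name and route parameter are illustrative, not part of the book's code:
// Illustrative sketch: an endpoint that removes a user session from the cache
use actix_web::{web, HttpResponse};
use glue::errors::{NanoServiceError, NanoServiceErrorStatus};

pub async fn logout_session(user_id: web::Path<String>)
    -> Result<HttpResponse, NanoServiceError> {
    let address = std::env::var("CACHE_API_URL").map_err(|e| {
        NanoServiceError::new(e.to_string(), NanoServiceErrorStatus::Unknown)
    })?;
    // cache_client::logout returns Result<String, Box<dyn Error>>,
    // so we map the error into our glue error type
    let _ = cache_client::logout(&address, &user_id.into_inner())
        .await
        .map_err(|e| NanoServiceError::new(
            e.to_string(), NanoServiceErrorStatus::Unknown
        ))?;
    Ok(HttpResponse::Ok().finish())
}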
What is logging?
So far, our application does not log anything. This does not
directly affect the running of the app. However, there are some
advantages to logging. Logging enables us to debug our
applications. Right now, as we are developing locally, it may not
seem like logging is really needed. However, in a production
environment, there are many reasons why an application can
fail, including Docker container orchestration issues. Logs that
note what processes have happened can help us to pinpoint an
error. We can also use logging to see when edge cases and
errors arise for us to monitor the general health of our
application. When it comes to logging, there are four types of
logs that we can build: informational, debug, warning, and error logs.
In the worst case, there will be a delay. With the error type, we
will not be able to make the database call as the server was
interrupted by an error before the order was even entered in
the database. Considering this, it is clear why error logging is
highly critical, as the user needs to be informed that there is a
problem and their transaction did not go through, prompting
them to try again later.
The second issue is that logs are not considered secure. They get
copied and sent to other developers in a crisis and they can be
plugged into other pipelines and websites, such as Bugsnag, to
monitor logs. Considering the nature of logs, it is not good
practice to have any identifiable information in a log.
Now we are all juiced up knowing that logs rock, we can start
our logging journey by building a basic logger that writes to the
terminal.
Logging via the terminal
Our glue crate now takes the following structure:
├── Cargo.toml
└── src
├── errors.rs
├── lib.rs
├── logger
│ ├── logger.rs
│ ├── mod.rs
│ └── network_wrappers
│ ├── actix_web.rs
│ └── mod.rs
└── token.rs
To get our terminal logging working, we must complete the following steps:
Define a logger
Create a logging middleware
Defining a logger
First, we add our logging dependencies to the glue crate:
# glue/Cargo.toml
. . .
[dependencies]
. . .
tracing = "0.1.4"
tracing-subscriber = "0.3.18"
futures-util = "0.3.30"
When we print to the terminal with println!, we lock the standard output and write to it. This makes sense, as printing to the terminal would not be very useful if half of one message got printed alongside half of another message. However, this lock mechanism results in a reduction in performance. For instance, it is good practice to log every request that is sent to the server. If we log our requests using println!, even if we have four threads processing requests, each thread would have to wait its turn to acquire the lock, funneling all threads through a single bottleneck. To stop this from happening, we create a global logger that accepts all logs from all threads. This logger will also remain live for the entire duration of the program.
// glue/src/logger/logger.rs
use tracing::Level;
use tracing_subscriber::FmtSubscriber;
pub fn init_logger() {
let subscriber = FmtSubscriber::builder()
.with_max_level(Level::INFO)
.finish();
tracing::subscriber::set_global_default(subscribe
.expect("Failed to set up logger");
}
We then define a helper function for each log level:
// glue/src/logger/logger.rs
pub fn log_info(message: &str) {
tracing::info!("{}", message);
}
pub fn log_warn(message: &str) {
tracing::warn!("{}", message);
}
pub fn log_error(message: &str) {
tracing::error!("{}", message);
}
pub fn log_debug(message: &str) {
tracing::debug!("{}", message);
}
pub fn log_trace(message: &str) {
tracing::trace!("{}", message);
}
And with this, we can now initialize our logger and produce log
messages. However, what about logging every HTTP request
sent to the server? Sure, we could write log_info for every API
endpoint, but this would be a pain to write and maintain.
Instead, we can build some middleware to log every HTTP
request for us.
// glue/src/logger/network_wrappers/actix_web.rs
// imports needed by the middleware below (assumed from the actix-web 4 API)
use actix_web::dev::{
    forward_ready, Service, ServiceRequest, ServiceResponse, Transform
};
use actix_web::Error;
use futures_util::future::{ok, LocalBoxFuture, Ready};

pub struct ActixLogger;
impl<S, B> Transform<S, ServiceRequest> for ActixLogg
where
S: Service<
ServiceRequest,
Response = ServiceResponse<B>,
Error = Error
> + 'static,
S::Future: 'static,
{
type Response = ServiceResponse<B>;
type Error = Error;
type InitError = ();
type Transform = LoggingMiddleware<S>;
type Future = Ready<Result<
Self::Transform, Self::InitError
>>;
fn new_transform(&self, service: S) -> Self::Futu
ok(LoggingMiddleware { service })
}
}
// glue/src/logger/network_wrappers/actix_web.rs
// LoggingMiddleware wraps the inner service; its definition is implied
// by the Transform implementation above
pub struct LoggingMiddleware<S> {
    service: S,
}
impl<S, B> Service<ServiceRequest> for LoggingMiddleware<S>
where
    S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error> + 'static,
    S::Future: 'static,
{
    type Response = ServiceResponse<B>;
    type Error = Error;
    type Future = LocalBoxFuture<'static, Result<Self::Response, Self::Error>>;
    forward_ready!(service);
    fn call(&self, req: ServiceRequest) -> Self::Future {
        let fut = self.service.call(req);
        Box::pin(async move {
            let res = fut.await?;
            // log the method, URI, and status of every response
            let req_info = format!(
                "{} {} {}",
                res.request().method(),
                res.request().uri(),
                res.status().as_str()
            );
            tracing::info!("Request: {}", req_info);
            Ok(res)
        })
    }
}
// glue/src/logger/network_wrappers/mod.rs
#[cfg(feature = "actix")]
pub mod actix_web;
// glue/src/logger/mod.rs
pub mod network_wrappers;
pub mod logger;
// glue/src/lib.rs
pub mod errors;
pub mod token;
pub mod logger;
All our servers follow the same template: we wrap the ActixLogger in our server definition. Our servers should have a layout like the following:
. . .
use glue::logger::{
logger::init_logger,
network_wrappers::actix_web::ActixLogger
};
use actix_cors::Cors;
#[tokio::main]
async fn main() -> std::io::Result<()> {
init_logger();
run_migrations().await;
HttpServer::new(|| {
        let cors = Cors::default().allow_any_origin()
            .allow_any_method()
            .allow_any_header();
App::new().wrap(ActixLogger)
.wrap(cors)
.configure(api::views_factory)
})
.workers(4)
.bind("127.0.0.1:8081")?
.run()
.await
}
When we run our servers and interact with the application, we get logs like the following printout:
2024-07-19T23:36:02.320369Z INFO
glue::logger::network_wrappers::actix_web:
Request: POST /api/v1/users/create 201
2024-07-19T23:36:02.767376Z INFO
glue::logger::network_wrappers::actix_web:
Request: GET /api/v1/auth/login 200
2024-07-19T23:36:22.298595Z INFO
glue::logger::network_wrappers::actix_web:
Request: GET /api/v1/auth/login 200
2024-07-19T23:36:22.310567Z INFO
glue::logger::network_wrappers::actix_web:
Request: GET /api/v1/get/all 200
2024-07-19T23:36:38.671171Z INFO
glue::logger::network_wrappers::actix_web:
Request: POST /api/v1/create 201
Learning this approach will also give you the skill set to offload other tasks from the request process, if needed, using the actor approach. Before we set off building the logging mechanism, however, we must define what an actor is.
What is an actor?
Actors really shine when you have a long-running process that you want to keep sending messages to. This can allow you to take pressure off the task you are currently performing. It can also keep resources allocated to one actor, as opposed to needing to reallocate those resources. For instance, when we make a network connection, there is usually a handshake consisting of messages back and forth to establish that connection. Having an actor maintain that connection and accept messages to send over that connection can reduce the number of handshakes you need to make. Another example is opening a file. When you open a file, the operating system needs to check if the file is there and if the reader has permissions, and it must acquire a lock for that file. Instead of opening a file to perform one transaction and then closing it, we can have an actor maintain the handle of that file and write to it the messages that were sent to the actor. Another advantage to remember is that channels can also act as queues. Messages can build up in a channel, with the actor consuming them at the actor's own pace, as long as the computer has enough memory to keep the messages in memory and the channel has enough allocated capacity.
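To make the file example concrete, below is a minimal sketch of a file-handle actor using a tokio channel; the file path and message type are illustrative, not part of our system:
// Illustrative sketch: an actor that holds a file handle open for its lifetime
use tokio::fs::OpenOptions;
use tokio::io::AsyncWriteExt;
use tokio::sync::mpsc;

async fn file_actor(mut rx: mpsc::Receiver<String>) {
    // the actor opens the file once and keeps the handle
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("output.log")
        .await
        .expect("could not open file");
    // messages queue up in the channel; the actor drains them at its own pace
    while let Some(line) = rx.recv().await {
        file.write_all(line.as_bytes()).await.expect("write failed");
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel::<String>(100);
    tokio::spawn(file_actor(rx));
    tx.send("hello from an actor\n".to_string()).await.unwrap();
}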
Now that we know what actors are and how we are going to use
them, we can start building our remote logging system by
building our actor.
We then define our elastic logger actor model with the code
below:
// glue/src/logger/mod.rs
pub mod network_wrappers;
pub mod logger;
#[cfg(feature = "elastic-logger")]
pub mod elastic_actor;
// glue/src/logger/elastic_actor.rs
use tokio::sync::mpsc;
use tokio::sync::mpsc::{Receiver, Sender};
use serde_json::json;
use reqwest::{Client, Body};
use chrono::Utc;
use once_cell::sync::Lazy;
use serde::Serialize;
// glue/src/logger/elastic_actor.rs
#[derive(Debug, Serialize)]
struct LogMessage {
level: String,
message: String,
}
With this message, we can then build our send_log function
with the following code:
// glue/src/logger/elastic_actor.rs
pub async fn send_log(level: &str, message: &str) {
    static LOG_CHANNEL: Lazy<Sender<LogMessage>> = Lazy::new(|| {
let (tx, rx) = mpsc::channel(100);
tokio::spawn(async move {
elastic_actor(rx).await;
});
tx
});
LOG_CHANNEL.send(LogMessage {
level: level.to_string(),
message: message.to_string(),
}).await.unwrap();
}
For our actor, we define the outline with the code below:
// glue/src/logger/elastic_actor.rs
async fn elastic_actor(mut rx: Receiver<LogMessage>) {
let elastic_url = std::env::var(
"ELASTICSEARCH_URL"
).unwrap();
let client = Client::new();
while let Some(log) = rx.recv().await {
. . .
}
}
Here, we can see that we get the URL to the database, establish an HTTP client, and then run an infinite loop where we cycle through one iteration every time a message is received from the channel. Once we get the message from the channel, we create a JSON body and send it via HTTP to the Elasticsearch database with the following code:
// glue/src/logger/elastic_actor.rs
let body = json!({
"level": log.level,
"message": log.message,
"timestamp": Utc::now().to_rfc3339()
});
let body = Body::from(serde_json::to_string(&body)
.unwrap());
match client.post(&elastic_url)
.header("Content-Type", "application/json")
.header("Accept", "application/json")
.body(body)
.send()
.await
{
    Ok(_) => {},
    Err(e) => {
        eprintln!(
            "Failed to send log to Elasticsearch: {}",
            e
        );
    }
}
}
For our logging functions, we still need to log the message to the terminal so logs can still be recovered if there is a problem with our database. The sending of the message to the database should only be triggered if the feature is enabled. Considering this, our init_logger function remains the same, giving us the following outline:
// glue/src/logger/logger.rs
. . .
#[cfg(feature = "elastic-logger")]
use super::elastic_actor::send_log;
pub fn init_logger() {
. . .
}
// glue/src/logger/logger.rs
pub async fn log_info(message: &str) {
tracing::info!("{}", message);
#[cfg(feature = "elastic-logger")]
send_log("INFO", message).await;
}
pub async fn log_warn(message: &str) {
tracing::warn!("{}", message);
#[cfg(feature = "elastic-logger")]
send_log("WARN", message).await;
}
pub async fn log_error(message: &str) {
tracing::error!("{}", message);
#[cfg(feature = "elastic-logger")]
send_log("ERROR", message).await;
}
pub async fn log_debug(message: &str) {
tracing::debug!("{}", message);
#[cfg(feature = "elastic-logger")]
send_log("DEBUG", message).await;
}
pub async fn log_trace(message: &str) {
tracing::trace!("{}", message);
#[cfg(feature = "elastic-logger")]
send_log("TRACE", message).await;
}
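Note that these logging helpers are now async, so every call site must be updated to await them; for instance, a call inside an async handler would look like the line below:
log_info("processing create request").await;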
And now that our system is fully ready to send logs to our database, we must configure the database itself.
# docker-compose.yml
. . .
elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch
container_name: elasticsearch
environment:
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
ports:
- "9200:9200"
- "9300:9300"
We also add the Elasticsearch URL to our .env files:
TO_DO_DB_URL=postgres://username:password@localhost/t
AUTH_DB_URL=postgres://username:password@localhost/to
AUTH_API_URL=http://127.0.0.1:8081
JWT_SECRET=secret
CACHE_API_URL=redis://127.0.0.1:6379
ELASTICSEARCH_URL=http://localhost:9200/logs/_doc
And this is it: our system is now able to start sending logs to our database.
If we run our servers, and then create a user and log in using our ingress/scripts/create_login.sh script, we can make a curl request to our database with the command below:
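A match query like the one below searches the logs index for INFO-level entries; the flags and endpoint assume the Elasticsearch defaults we configured above:
curl -X GET "http://localhost:9200/logs/_search?pretty" \
  -H 'Content-Type: application/json' \
  -d '{"query": {"match": {"level": "INFO"}}}'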
Here, we are making a query for all logs with a level of INFO .
Our query gives us the following results:
{
"took" : 39,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 0.18232156,
"hits" : [
{
"_index" : "logs",
"_type" : "_doc",
"_id" : "NbbV15AB4EMJAczNwoWr",
"_score" : 0.18232156,
"_source" : {
"level" : "INFO",
"message" : "Request: POST /api/v1/users/cr
"timestamp" : "2024-07-22T00:27:08.721360+0
}
},
{
"_index" : "logs",
"_type" : "_doc",
"_id" : "NrbV15AB4EMJAczNw4Xx",
"_score" : 0.18232156,
"_source" : {
"level" : "INFO",
"message" : "Request: GET /api/v1/auth/logi
"timestamp" : "2024-07-22T00:27:09.167054+0
}
}
]
}
}
Here, we can see that both of our logs are in the database. We can also see that each hit has a score: the higher the score, the more relevant the log is to the search. The response also contains a max_score, and the scores on our logs are the same as the maximum score because our level: "INFO" matches exactly. We could be more granular and produce even more tags on our logs. For instance, we could also put in a tag for the service and filter by this if we want, as sketched below. As your system gets more complex, logging to a database can be a lifesaver.
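For example, if we added a hypothetical service field to our LogMessage struct, a bool query could filter on both level and service:
{
  "query": {
    "bool": {
      "must": [
        { "match": { "level": "INFO" } },
        { "match": { "service": "auth" } }
      ]
    }
  }
}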
Summary