Introduction
Hello, and welcome to RS118! This is a short series of lectures and projects designed to introduce beginners to programming in Rust. There is a basic level of programming knowledge assumed throughout.
This tutorial leans heavily on, and is adapted from, The Rust Programming Language, more commonly known as The Book. Book chapters and links are referenced throughout, and it is recommended you read the corresponding chapters in full, as these notes are only a brief summary. Other resources:
- Rust by Example - If you're looking for an example of how to do something, or how it works, look here
- The Reference - This is a complete* reference for the Rust language. It is quite detailed and technical, but very thorough.
- Rustlings - Lots of little exercises and examples to demonstrate specific concepts
The code snippets in this book can be run using the play button in the top right corner:
#![allow(unused)] fn main() { println!("Hello, Ferris!"); }
Some can also be edited, to allow you to play around with the concepts being presented. Try to fix the error in the code below:
fn main() { let message: &str = "Edit me!" println!("Your message says: {message}"); }
I encourage you to play around with the snippets to help get a better understanding of how the compiler works.
The source for this book is available on GitHub, and contributions/corrections/suggestions/additions are welcome.
The full playlist of talks accompanying these notes can be found here on YouTube.
RS118 is kindly supported by an event grant from The Rust Foundation. They do a lot of really important stuff for the language, so go show them some love.
This book and all the code associated with RS118 is distributed under the terms of the MIT licence, Copyright 2022 Joey Harrison & The University of Warwick Computing Society. If you use anything from this book or the associated GitHub repos, please give credit.
Part 1: Hello, Ferris!
Installing Rust (1.1)
We have a few bits we'll need:

- `rustup`, for managing versions of Rust and the other tools below
- `cargo`, Rust's build tool and package manager. 99% of Rust projects use cargo.
- `rustc`, the Rust compiler itself
- `rust-analyzer`, the Rust language server

This is all made easy with `rustup`, which can be installed by running the command below. If you're on a system other than DCS, go to https://rustup.rs for installation instructions.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Follow the on-screen instructions, and this will install Rust's default stable toolchain to `$HOME/.cargo`. Cargo can use a large amount of space, so if you are concerned about filling up your home directory (e.g. exceeding your disk quota), you can set `RUSTUP_HOME` and `CARGO_HOME` to somewhere else, e.g. `/var/tmp/rustup` (this will not persist, but it won't fill your quota).
You'll need to install `rust-analyzer` separately. If you're using VS Code, the command

code --install-extension rust-lang.rust-analyzer

will install `rust-analyzer` for you. See the `rust-analyzer` user manual for instructions for other editors.
Hello World (1.2 & 1.3)
Before we write any code, we need a new Cargo project, or "crate".
cargo new hello_world
Open your new Cargo project, and in the hello_world folder, you should see:

- `Cargo.toml`, Cargo's config file, in TOML (Tom’s Obvious, Minimal Language) format.
- `src/main.rs` - the `src` directory is where all source code should live, and `main.rs` is the top-level source file in the crate. It's where the `main()` function should live.
- A `.git` folder - Cargo automatically `init`s a git repo for you. It also adds a default `.gitignore`.
Cargo has multiple commands that facilitate building and running Rust projects:

- `cargo run` will build and run your code, using the `main` function as the entry point
- `cargo build` will just compile the crate
- `cargo check` will check your crate for errors
- `cargo fmt` will format your code using rustfmt
- `cargo clippy` will lint your code using clippy
See The Book, and also The Cargo Book, for more info about Cargo and the Cargo.toml
file.
Open main.rs
in VS Code and, oh look, you don't even need to write hello world: Cargo did it for you. You can delete it and write it out again, if you really want.
fn main() { println!("hello world"); }
- The `fn` keyword is used to declare functions
- `main` is the name of the function
- Parentheses `()` are where the parameter list goes, and braces `{}` are used to declare blocks
- The `println!()` macro is used for command line output (more on macros/functions later)
Variables (3.1)
Variables in Rust are declared, or bound, using the let
keyword:
#![allow(unused)] fn main() { let x = 6; println!("The value of x is: {}", x); }
Variables are immutable by default, meaning their value cannot be changed once they have been bound. This is A Good Thing™ because immutability means less potential for errors. Prefer to leave variables as immutable, unless you absolutely have to, in which case mutable variables can be declared mut
:
fn main() { let mut x = 5; println!("The value of x is: {}", x); x = 6; println!("The value of x is: {}", x); }
We'll talk about type annotations and inference later.
Types
Basic Types (3.2)
Rust has the following numeric types:
| Bit Width | Signed | Unsigned |
|---|---|---|
| 8 | `i8` | `u8` |
| 16 | `i16` | `u16` |
| 32 | `i32` | `u32` |
| 64 | `i64` | `u64` |
| 128 | `i128` | `u128` |
| architecture-defined | `isize` | `usize` |
(usize/isize
are the pointer width of the architecture you're compiling for. You should use them for anything representing size/length/indices of data structures, such as arrays.)
We also have floats, `f32` and `f64`, which are IEEE 754 floating point types (equivalent to `float` and `double` in many languages).
Booleans are of type `bool` and are either `true` or `false`.

Characters are of type `char` and are written using single quotes:
#![allow(unused)] fn main() { let c = 'c'; let z = 'Z'; let heart_eyed_cat = '😻'; }
char
s in Rust are four bytes in size, as a char is meant to represent a single Unicode scalar value, meaning it can do much more than just ASCII.
Primitive Compound Types (3.2)
Rust has tuples, which are used extensively. Tuples can be heterogeneous (mixing element types) and of any length.
#![allow(unused)] fn main() { let tup = (1, 2); let trip: (char, i32, f64) = ('A', -2, 100.01); }
Tuples are accessed using dot syntax, or destructured by pattern matching (more on this later):
#![allow(unused)] fn main() { let tup = (1, 2); let trip: (char, i32, f64) = ('A', -2, 100.01); let one = tup.0 + tup.1 + trip.1; let (x, y) = tup; let (c, i, _) = trip; }
Note the _
binding, which is used in pattern matching to discard a value:
fn main() { let _ = 1; print!("Can you use _? {}", _); }
Arrays in Rust have a fixed length and are allocated on the stack. They are indexed using brackets `[]`, and have a type signature `[type; length]`. The length of an array is part of its type in Rust.
```rust
fn main() {
    let a = [1, 2, 3];
    let first = a[0];
    let second = a[1];

    let b: [f32; 2] = [-2.0, 4.1];

    // you can also use shorthand for creating arrays of the same element!
    let zeroes = [0; 10]; // ten zeroes!
}
```
Attempting to index out of bounds will cause a panic. Panics are Rust's way of throwing unrecoverable errors at runtime, similar to exceptions in other languages. More on those later.
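For example, here's a small sketch (not from the notes; the index is contrived so it's only known at runtime) that compiles fine but panics when run:

```rust
fn main() {
    let a = [1, 2, 3];
    // an index only known at runtime, so the compiler can't catch this
    let index = std::env::args().count() + 10;
    // this compiles, but panics with an "index out of bounds" message
    println!("{}", a[index]);
}
```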
Type Annotations
let
bindings can be annotated with their types using the following syntax:
```rust
fn main() {
    let a: u16 = 4;
    let b: i32 = -1;
    let c: char = 'c';
    let d: &str = "hello!"; // More on string types shortly
    let e: (i32, f32) = (12, 12.0);
    let f: [u32; 3] = [1, 2, 3];
}
```
This is rarely necessary, as Rust can infer types most of the time. You can also annotate numeric literals with their types:
#![allow(unused)] fn main() { let a = 4_u16; let b = -1_i32; let c = 3.14_f64; }
Composite Data Types
We can use these basic types to build more complex types. Rust has two ways of doing this.
Structures (5)
Those of you familiar with C will be familiar with struct
s. Structs are types made up of several fields, where each field has a name and type.
- Structs can be created by giving values to each of the fields, using the syntax shown below.
- Fields can be accessed or updated using dot notation.
- If a struct is bound as immutable, its fields are also immutable.
struct Student { name: String, id: String, year: u32, active: bool, } fn main() { let mut you = Student { name: String::from("Your Name"), id: String::from("1234567"), year: 2, active: false, }; println!("My student ID is: {}", you.id); you.year += 1; println!("My year of study is: {}", you.year); }
Enums (6.1)
Enums restrict variables to a predefined finite set of named discrete values. It's short for enumeration, because it enumerates all the allowed options. Many languages have support for enums (e.g. C, Java, Haskell), but Rust's enums are most similar to Java's enums or sum types in functional languages such as Haskell and F#.
Say we want to represent some colours. We have a Thing, and we need it to be either red, green, or blue:
enum Colour { Red, Green, Blue, } fn main() { let thing_colour: Colour = Colour::Red; }
We now have a new type, `Colour`, and three new values: `Colour::Red`, `Colour::Green`, and `Colour::Blue`.
What makes enums really special is that variants can contain values:
#![allow(unused)] fn main() { enum MyEnum { NoValue, ANumber(u32), TwoNumbers(u32, i64), AString(String), } }
This is a really powerful feature, especially when used in combination with pattern matching, and part of what makes Rust's type system so good.
An example, using an enum to represent some shapes:
enum Shape { Circle(u64), Rectangle(u64, u64), Triangle(u64, u64, u64), } fn main() { let circle = Shape::Circle(6); let triangle = Shape::Triangle(1, 2, 3); }
Or a recursive definition of a binary tree (see the footnote below about `Box`):
#![allow(unused)] fn main() { enum Tree { Leaf(i32), Branch(Box<Tree>, Box<Tree>), } }
Option<T>
A commonly used enum is `Option<T>`. Option is used to represent values that may or may not exist. Rust has no notion of `null` (no more `NullPointerException`s!), instead preferring to wrap other types in an `Option` when some similar notion of null is needed. This has advantages, as it forces you to explicitly deal with error cases, among other things. (`T` is a generic type parameter, which we'll cover in more detail later.)
#![allow(unused)] fn main() { enum Option<T> { Some(T), None, } }
Think of it as a type-safe container which may or may not be empty. This enum is frequently used as a return type in methods that may fail:
fn div(x: i64, y: i64) -> Option<i64> { if y == 0 { None } else { Some(x/y) } } fn main() { println!("{:?}", div(15, 3)); }
This particular enum is very important as it's used widely throughout the language, and we'll get to it in more detail later.
(Footnote: the tree example is a little more complex than it looks. `Box`es are smart pointers that allocate on the heap, needed here because the type's size isn't known at compile time. There's more on these in the Raytracer project, so don't worry about them for now.)
Functions (3.3)
Functions in Rust are declared using the `fn` keyword.

- The list of parameters is declared between the parentheses: `name: type`
- The return type is given using an arrow: `-> type`
- Functions with no return type implicitly return `()`, the unit type.
```rust
fn main() {
    println!("this function has no parameters and no return type");
}

fn inc(x: i32) -> i32 {
    println!("this function returns its parameter plus one");
    return x + 1;
}

fn mul(a: f64, b: f64) -> f64 {
    println!("this function returns the product of its parameters");
    a * b
}

fn zip(fst: i32, snd: i32) -> (i32, i32) {
    (fst, snd)
}
```
Note how in the last two examples the `return` keyword was not used and there is no semicolon. This is because in Rust, (almost) everything is an expression that returns a value, even blocks. The value of the last expression in a block is the return value of the block. Ending an expression with a semicolon discards its value. This can be a tricky concept to grasp; see Rust by Example for more detail.
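As a quick illustration (a minimal sketch, not from the notes), blocks themselves are expressions, so you can bind the result of a block directly to a variable:

```rust
fn main() {
    // the block evaluates to its last expression, `a + b`
    let x = {
        let a = 2;
        let b = 3;
        a + b // no semicolon: this value becomes the value of the block
    };
    println!("x = {x}"); // prints "x = 5"
}
```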
Control Flow (3.5)
`if` Expressions
You all hopefully know how these work. The syntax is shown below. In Rust, you don't need parentheses around the condition.
fn main() { let number = 6; if number % 4 == 0 { println!("number is divisible by 4"); } else if number % 3 == 0 { println!("number is divisible by 3"); } else if number % 2 == 0 { println!("number is divisible by 2"); } else { println!("number is not divisible by 4, 3, or 2"); } }
- Boolean expressions are combined with `&&` (and), `||` (or), and `!` (not)
- `&&` and `||` are lazily evaluated
`if` expressions return a value too, so they can be used like a ternary conditional operator:
#![allow(unused)] fn main() { let condition = true; let x = if condition { 5 } else { 6 }; }
`loop` Loops

`loop` expressions repeat ad infinitum:
loop {
println!("again!");
}
You can `break` out of one, if need be. This is preferred to `while true`. You can also use one to construct a do-while loop:
loop {
do_thing();
if !condition {
break;
}
}
`while` Loops

Pretty standard stuff too:
#![allow(unused)] fn main() { let mut number = 3; while number != 0 { println!("{}", number); number -= 1; } }
`for` Loops

`for` loops are more like Python's, or Java's range-based for loop, than C/C++'s. They are used to iterate over a collection, using an iterator (more on those later).
#![allow(unused)] fn main() { let a = [10, 20, 30, 40, 50]; for element in a { println!("the value is: {}", element); } }
If you want to loop over a numerical range, you can create a range iterator. Like Python's range, this is exclusive of the last number:
```rust
#![allow(unused)]
fn main() {
    for number in 1..10 {
        println!("{}", number);
    }

    // this loop counts down from 9 to 1
    for number in (1..10).rev() {
        println!("{}", number);
    }
}
```
Strings
Strings in Rust are not so simple. For now, we will consider only `String`. `String` represents a mutable, variable-length buffer which can hold your `char`s. Create an empty one with `String::new()`, or create one from a literal using `String::from()`, `.to_owned()`, or the `Into` trait.
fn main() { let empty_string = String::new(); let hello_world = String::from("Hello, World!"); let hello_ferris: &str = "Hello, Ferris"; let owned_ferris = hello_ferris.to_owned(); }
The `::` symbol is used to access associated functions which belong to a type. The `from()` and `new()` functions both belong to `String`, so we call them as shown. Think of them as static methods, for those of you into your Java.
Pattern Matching (6.2)
Those of you familiar with functional languages will be pleased to hear that Rust can do this. Those of you not lucky enough to be familiar with functional languages will likely be a bit confused. Think of it as a fancy switch statement for now. Here's an example, using our Colour
enum from earlier:
enum Colour { Red, Blue, Green, White, Black } fn colour_to_rgb(colour: Colour) -> (u8, u8, u8) { match colour { Colour::Red => (255, 0, 0), Colour::Green => (0, 255, 0), Colour::Blue => (0, 0, 255), Colour::White => (255, 255, 255), Colour::Black => (0, 0, 0), } } fn main() { colour_to_rgb(Colour::Blue); }
The cases come before the `=>`, and the return value for that match arm comes after it. The match returns one of those tuples, and then the function returns the value of the match. Remember that we don't need a `return` keyword here, as the match is the last expression in the function.
However, matching is much more powerful than this. We can include arbitrary expressions after the match arm:
enum Colour { Red, Blue, Green, White, Black } fn colour_to_rgb(colour: Colour) -> (u8, u8, u8) { match colour { Colour::Red => (255, 0, 0), Colour::Green => (0, 255, 0), Colour::Blue => { println!("I'm blue, da ba dee da ba di"); (0, 0, 255) }, Colour::White => if 10 % 2 == 0 { (255, 255, 255) } else { unreachable!("Something is very wrong"); }, Colour::Black => (0, 0, 0), } } fn main() { let rgb = colour_to_rgb(Colour::Blue); println!("{:?}", rgb); }
We can also use it to destructure (unpack) and bind values, for use in the match
body. For example, with Option<T>
:
fn maybe_increment(x: Option<i64>) -> Option<i64> { match x { None => None, Some(i) => Some(i + 1), } } fn main() { let five = Some(5); let six = maybe_increment(five); let none = maybe_increment(None); }
Note that matches must be exhaustive. For example, if you're matching on a u8
, you must cover all cases from 0 to 255. This is impractical, so the placeholder _
can be used, as a default case:
fn main() { let val: u8 = 40; match val { 12 => println!("val is 12"), 21 => println!("val is 21"), _ => println!("val is some other number that we don't care about"), } }
Pattern matching can also make use of further conditions (called guards). These require a fallback case to act as an else, even if it is only a _
. There are other places pattern matching can be used, such as in if let
and while let
expressions.
fn maybe_collatz(x: Option<u64>) -> Option<u64> { match x { None => None, Some(1) => None, Some(i) if i % 2 == 0 => Some(i / 2), Some(i) => Some(3 * i + 1) } } fn main() { let mut number = Some(7_u64); while let Some(i) = number { println!("{i}"); number = maybe_collatz(number); } }
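For completeness, here's a small sketch (not from the notes) of `if let`, which is handy when you only care about one pattern and want to ignore the rest:

```rust
fn main() {
    let maybe_number: Option<i32> = Some(7);

    // `if let` matches a single pattern; the `else` runs for anything else
    if let Some(n) = maybe_number {
        println!("got a number: {n}");
    } else {
        println!("got nothing");
    }
}
```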
Part 2 - Fighting the Borrow Checker
Today we're gonna cover ownership and the borrow checker. This is where it will start to get confusing, but eventually it'll click and you'll see why all your C code doesn't work.
Ownership (4.1)
Rust does not have a garbage collector like Java/Python/C#/Go. Instead, it allows you to manually manage memory, but within a set of constraints enforced by the compiler:
- Every value in Rust has a variable that is its owner
- There can only be one owner at a time
- When the owning variable goes out of scope, the value will be dropped
If you're not familiar with the stack and the heap, read this for a quick overview.
In terms of scope in Rust, this can broadly be interpreted in the way that you probably think. A variable is valid from the point at which it is declared, until the end of the current scope
```rust
fn main() {
    // s not valid here, not declared yet
    let s = "string"; // s is valid from here down
} // this scope ends here, s is no longer valid
```
I'm going to use the `String` type as an example, which hopefully you have already seen. `String` is more complex than the types we've seen or written so far, because it represents a mutable string that may vary in length. This is as opposed to string literals, like `s` in the example above.

The values of string literals are hardcoded into the program, meaning if you were to compile the below program, you'd find the actual values of the characters in the executable (you can use the `strings` command to verify this).
fn main(){ println!("Hello from your binary file!"); }
String
, instead, stores the string data on the heap, which allows us to work with text that is unknown at compile time. String
can be mutated:
```rust
fn main() {
    let mut s = String::from("hello");
    s.push_str(", world!"); // push_str() appends a literal to a String
    println!("{s}"); // This will print `hello, world!`
} // s goes out of scope here
```
Since data on the heap is being used, we need to:
- Allocate that memory when we create the string
- Modify the buffer as we grow the string
- Free the memory when we're done with it
Rust is a big fan of zero-cost abstraction, meaning that lots of these details that would ordinarily have to be done manually in C/C++, are abstracted away behind String
's implementation details. The zero-cost bit means it's just as fast as doing it manually, so we get all this simplicity for free with no performance cost.
- Memory is allocated by the call to `String::new()` or `String::from()`
- The buffer is modified behind the scenes when we modify the string's size using `String::push_str()`
- The memory is automatically freed when the owning variable goes out of scope
This last bit is really key. Using the example above, we can see that `s` goes out of scope at the closing brace. `s` is the variable that owns the string, so as the owner goes out of scope, the memory is freed. This is done using a special function called `drop`, which Rust calls for us and which is implemented automatically for most types.
(This will be familiar to you if you know about Resource Acquisition Is Initialization (RAII) in C++.)
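To make the timing of this visible, here's a small sketch (not from the notes) that implements `Drop` by hand, just to print when each value is freed:

```rust
struct Noisy(&'static str);

// Drop lets a type run code just before its memory is freed
impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let _outer = Noisy("outer");
    {
        let _inner = Noisy("inner");
        println!("end of inner scope");
    } // "dropping inner" is printed here
    println!("end of main");
} // "dropping outer" is printed here
```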
Moves
What do you think this code does?
#![allow(unused)] fn main() { let x = 5; let y = x; }
How about this?
#![allow(unused)] fn main() { let s1 = String::from("hello"); let s2 = s1; }
The first example binds the value `5` to the variable `x`, and then the second line makes a copy of the variable `x` and binds it to `y`.
The second example does not do this.
`String` is a struct made up of three parts:

- A number representing its length
- A number representing the capacity of its internal buffer
- A pointer to the buffer where the data is
(For those of you not familiar with pointers, a pointer is just a number representing an address in memory. It is a variable that says something like "hello, the data you're looking for isn't here, but it can be found at this memory address!". The idea is that you then go and look at that memory address on the heap to go get your data.)
Our string looks a little something like this (the left values are on the stack, the heap to the right):
So to copy our data, we could copy both the heap and stack data, giving us two complete copies in memory. This is not what happens either (imagine if your string was very, very long; this would be a very expensive operation).
So what we could do is copy only the data on the stack (the three numbers), giving us a view of the two variables that looks like this:
This is still dangerous. Remember that when variables go out of scope, they are dropped and their memory is freed. If both variables were valid, the heap data would be freed twice, and a double free is a serious error that can corrupt the heap.
What actually happens is that when you assign the `String` to a new variable, its value is moved, and the original is invalidated. A shallow copy is made, copying only the stack data and not the heap data, and to avoid the issue of multiple pointers pointing to the same data, the old variable is invalidated. Try to run this:
#![allow(unused)] fn main() { let s1 = String::from("hello"); let s2 = s1; println!("{}, world!", s1); }
You can't, because `s1` was invalidated when you moved it into `s2`, and `s2` became the new owner.
`Copy` and `Clone`

Of course, this wasn't the case for our first example with the integers:
#![allow(unused)] fn main() { let x = 5; let y = x; println!("x = {}, y = {}", x, y); }
Both variables are still valid. This is because, for certain types which:

- are small/cheap to copy,
- have a known size at compile time,
- and live on the stack,

Rust disregards the whole move semantics thing and just makes copies all over the place, because it is basically free to do so. This is done via the `Copy` trait, a trait that tells the compiler that the type is free to copy. If a type is `Copy`, then you don't have to worry about move semantics. The following types are `Copy` by default:
- All integral types, such as `u32`
- `bool` (`true`/`false`)
- Floats, `f64` and `f32`
- `char`s
- Tuples, but only if they contain types that are also `Copy`. `(u32, f64)` is `Copy`, but `(char, String)` is not
There are other ways to make copies of data too. The `Clone` trait can be implemented on any type, which gives it a `clone()` method. `clone()` may be called explicitly to make a deep copy of your data. The following code clones the data in `s1` into `s2`, meaning both variables own separate, but identical, copies of the data.
#![allow(unused)] fn main() { let s1 = String::from("hello"); let s2 = s1.clone(); println!("s1 = {}, s2 = {}", s1, s2); }
- `Clone` is used to allow types to be explicitly copied when needed, and indicates that this is an expensive operation which may take time
- `Copy` is used to tell the compiler that types may be copied for free, and that move semantics may be disregarded for this type
To learn how to add these traits to your types, see Derivable Traits.
Don't worry too much about traits for now, we'll get into the details later. They're basically typeclasses/interfaces/abstract base classes. Traits define some behaviour in an abstract way (such as the `clone` function), and then types can implement the behaviour.
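As a quick sketch (using a made-up `Point` type), deriving is usually all you need to make your own types `Clone` and/or `Copy`:

```rust
// Copy requires Clone, so derive both.
// This only compiles because both fields (i32) are themselves Copy.
#[derive(Clone, Copy, Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let p1 = Point { x: 1, y: 2 };
    let p2 = p1; // p1 is copied, not moved...
    println!("{:?} {:?}", p1, p2); // ...so both are still usable
}
```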
Ownership and Functions
The semantics for passing values to functions follow the same rules: the value will either be moved or copied, just like when assigning to a variable.
```rust
fn main() {
    let s = String::from("hello"); // s comes into scope

    takes_ownership(s); // s's value moves into the function...
                        // ... and so is no longer valid here

    let x = 5;          // x comes into scope

    makes_copy(x);      // x would move into the function,
                        // but i32 is Copy, so it's okay to still
                        // use x afterward
} // Here, x goes out of scope, then s. But because s's value was moved,
  // nothing special happens.

fn takes_ownership(some_string: String) { // some_string comes into scope
    println!("{}", some_string);
} // Here, some_string goes out of scope and `drop` is called. The backing
  // memory is freed.

fn makes_copy(some_integer: i32) { // some_integer comes into scope
    println!("{}", some_integer);
} // Here, some_integer goes out of scope. Nothing special happens.
```
We can't use `s` after `takes_ownership()` is called, as it is moved into the function; the function's scope then ends and it is dropped.
Returning a value from a function transfers ownership too. Assigning the result of a function call to a variable makes that variable the new owner:
```rust
fn main() {
    let s1 = gives_ownership();        // gives_ownership moves its return
                                       // value into s1

    let s2 = String::from("hello");    // s2 comes into scope

    let s3 = takes_and_gives_back(s2); // s2 is moved into takes_and_gives_back,
                                       // which also moves its return value into s3
} // Here, s3 goes out of scope and is dropped. s2 was moved, so nothing
  // happens. s1 goes out of scope and is dropped.

fn gives_ownership() -> String {
    // gives_ownership will move its return value into the function that calls it
    let some_string = String::from("yours"); // some_string comes into scope

    some_string // some_string is returned and moves out to the calling function
}

// This function takes a String and returns one
fn takes_and_gives_back(a_string: String) -> String { // a_string comes into scope
    a_string // a_string is returned and moves out to the calling function
}
```
References and Borrowing (4.2)
What if we want to use a value in a function, without having to pass ownership around?
```rust
fn main() {
    let s1 = String::from("hello");
    let len = calculate_length(s1); // error: s1 has been moved into calculate_length,
                                    // then dropped when it went out of scope
    println!("The length of '{}' is {}.", s1, len);
}

fn calculate_length(s: String) -> usize {
    s.len()
}
```
This is where borrowing comes in. References allow us to temporarily borrow values from their owner.
```rust
fn main() {
    let mut s1 = String::from("hello");
    let len = calculate_length(&s1); // calculate_length only *borrowed* it;
                                     // ownership remains with s1
    println!("The length of '{}' is {}.", s1, len);
}

fn calculate_length(s: &String) -> usize {
    s.len()
}
```
The ampersand `&` in front of `String` denotes that it is a reference type: the parameter `s` in the function does not own the `String`, it owns a reference to it. References are similar to pointers, except they come with a few more restrictions and static guarantees.

- `&` in front of a type name denotes that the type is a reference to a value of that type, and does not have ownership
- `&` in front of a value creates a reference to that value (see the call site in the listing above, `&s1`)
When you create a reference, you are borrowing the value.
So, using that, try this below:
#[allow(unused_mut)] fn main() { let mut s = String::from("hello"); add_world(&s); } fn add_world(some_string: &String) { some_string.push_str(", world"); }
Look at that compile error. You've tried to modify a value that you do not own, which is not allowed. When you borrow a value with a plain `&` reference, you are not allowed to modify what it refers to: references are immutable by default.
Mutable References
I lied, you can modify values that you don't own, but you need a special kind of reference: &mut
, the mutable reference:
fn main() { let mut s = String::from("hello"); add_world(&mut s); } fn add_world(some_string: &mut String) { some_string.push_str(", world"); }
This works, because we told the function it could accept an `&mut String`, and then created one using `&mut s`. How about this?
fn main() { let mut s = String::from("hello"); let ref_1 = &mut s; let ref_2 = &mut s; add_world(ref_1); add_exclamation(ref_2); } fn add_world(some_string: &mut String) { some_string.push_str(", world"); } fn add_exclamation(some_string: &mut String){ some_string.push_str("!"); }
I think the compiler is pretty clear here: we can only have one mutable reference to a value in scope at a time. This is really annoying, because it makes shared mutable state really hard, but for good reason: shared mutable state is generally regarded as really bad, as it introduces loads of bugs. Thanks to this rule, data races are non-existent in Rust, and there's no pointer aliasing at all: Rust will just straight up refuse to compile any of these bugs, which is very kind of it. Compare this to C, which doesn't give a shit how dumb you are, and will be even dumber in response.
You also cannot combine mutable and immutable references:
```rust
#![allow(unused)]
fn main() {
    let mut s = String::from("hello");

    let r1 = &s; // no problem
    let r2 = &s; // no problem
    let r3 = &mut s; // BIG PROBLEM

    println!("{}, {}, and {}", r1, r2, r3);
}
```
You can have either:
- any number of immutable references or
- one mutable reference.
Remember that this all depends on scope though: a borrow ends when the reference goes out of scope, or after its last use:
```rust
#![allow(unused)]
fn main() {
    let mut s = String::from("hello");

    let r1 = &s; // no problem
    let r2 = &s; // no problem
    println!("{} and {}", r1, r2);
    // variables r1 and r2 will not be used after this point

    let r3 = &mut s; // no problem
    println!("{}", r3);
}
```
Rust will also not allow you to create a reference that outlives the data it refers to:
fn main() { let reference_to_nothing = dangle(); } fn dangle() -> &String { let s = String::from("hello"); &s }
This code won't compile, because `s` goes out of scope at the end of `dangle`, so the string that it owns is dropped. This means that if you were to access the `reference_to_nothing` that `dangle` returns, it would be precisely that: a reference to something that no longer exists. `&s` would be a dangling reference, and Rust doesn't allow this.
The concept of lifetimes deals with when and where references are valid, but that can get really tricky so we'll save that for another time.
Borrowing in Functions
Ownership and Functions above discusses moving and copying with function arguments, but most of the time functions will want to only borrow their arguments. Function arguments can be captured by reference, mutable reference, or by value:
```rust
fn by_value(s: String) {
    println!("{s}");
    // string is dropped here
}

fn by_mut_ref(s: &mut String) {
    s.push_str("!!");
}

fn by_ref(s: &String) {
    // cannot modify the string!
    // comment this line out to make by_ref work
    s.push_str(" world");

    // can read the string to print it
    println!("{s}");
}

#[allow(unused_mut)]
fn main() {
    // convert our &str to a String
    let mut s = "hello".to_owned();

    by_ref(&s);
    by_mut_ref(&mut s);
    by_value(s);

    // by_value above took ownership of s, so cannot use it after
    // comment out this line
    println!("{s}");
}
```
Play with the example above to see where you can and can't modify the string.
When writing a function, consider whether you need to take arguments by value (moving them) or by reference. Prefer immutable references unless you actually need to modify the value, in which case use a mutable reference. If you really need to take ownership of the value, then pass by value.
Of course, when working with `Copy` types, you can always pass by value, as the compiler will make copies for you and you don't need to worry about borrows.
Slices (4.3)
Slices are a special kind of reference that give you a view into a contiguous sequence of elements. Slices allow you to refer to parts of strings, arrays, or `Vec`s. Think of them as references with a length. Say we want to take a string, and return the first word in it:
- We could return some indices, but this would be annoying to work with
- We could create a copy, but this would be expensive (what if the word is really long?)
- Or, we could use a slice, to return a reference to a part of the string:
```rust
fn first_word(s: &String) -> &str {
    let bytes = s.as_bytes();

    // some fancy for loop/iterator stuff
    for (i, &item) in bytes.iter().enumerate() {
        if item == b' ' {
            return &s[0..i];
        }
    }

    &s[..]
}

fn main() {
    let mut s = String::from("hello world");

    let word: &str = first_word(&s);

    // s.clear(); // Error if uncommented

    println!("the first word is: {}", word);
}
```
The compiler will ensure that our slice of the string remains valid, and doesn't get mutated while we're trying to use it. Uncommenting `s.clear()` fails as it creates a mutable reference to `s` (it borrows `s` mutably), which then invalidates our immutable slice in `word`, meaning we can no longer use it past that point.
&str

`&str` is the type of string slices, but it is also the type of string literals. Remember how string literals were stored inside the program binary somewhere? Well, slices allow us to refer to those immutably.
#![allow(unused)] fn main() { let s: &str = "Hello, World!"; let s1: &str = &s[1..3]; }
We don't own the string, but we can slice it immutably.
More detail on `String` and `&str` is available in The Book. Rust uses UTF-8 encoding, so 1 byte != 1 character, and there's a fair amount of complexity associated with this.
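To make that concrete, here's a small sketch (not from the notes) showing how byte length and character count differ for non-ASCII text:

```rust
fn main() {
    let crab = "🦀 says héllo";

    // len() counts bytes, not characters
    println!("bytes: {}", crab.len());           // more bytes than characters
    println!("chars: {}", crab.chars().count()); // counts Unicode scalar values

    // so you can't index a string by character position; string slicing works
    // on byte offsets, and slicing in the middle of a character will panic
}
```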
Another neat thing about the `&str` type worth mentioning is that anywhere that accepts it will also automatically accept an `&String` (recall the important difference between the two). Basically, `String` implements the `Deref` trait, which allows the compiler to perform deref coercion. Don't worry about it too much for now; the main takeaway I want you to have is this: if you ever want a function to accept a string reference, you should probably use `&str` over `&String`.
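Here's a quick sketch (not from the notes; `shout` is a made-up function) of what that looks like in practice:

```rust
// takes &str, so it works with both string literals and &String
fn shout(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let owned = String::from("hello");

    println!("{}", shout("hello")); // &str literal
    println!("{}", shout(&owned));  // &String is coerced to &str
}
```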
Array slices
You can also slice arrays:
#![allow(unused)] fn main() { let a: [i32; 5] = [1, 2, 3, 4, 5]; let slice_1: &[i32] = &a[1..3]; let slice_2: &[i32] = &a[4..5]; println!("Slice 1: {slice_1:?}"); println!("Slice 2: {slice_2:?}"); }
`Vec`s (8.1)

`Vec` is Rust's equivalent of:

- Python's List
- Java's ArrayList
- C++'s std::vector

It's a contiguous, growable, heap-allocated, homogeneous collection of elements.
```rust
fn main() {
    let mut v1: Vec<i32> = Vec::new(); // a new, empty Vec
    let v2: Vec<&str> = Vec::from(["hello", "world"]); // create from an array

    // we can push
    v1.push(5);
    v1.push(6);
    v1.push(7);
    v1.push(8);

    // and pop
    // as the vector may be empty, this returns an Option
    let maybe_tail: Option<i32> = v1.pop();

    match maybe_tail {
        Some(tail) => println!("Tail is {tail}"),
        None => println!("Empty Vec!!"),
    }
}
```
We can also index and slice into vectors.
#![allow(unused)] fn main() { let mut v = vec![1, 2, 3, 4, 5]; let third: &i32 = &v[2]; println!("The third element is {third}"); let mut_slice: &mut [i32] = &mut v[3..5]; mut_slice[1] +=1; println!("The new last element is {}", mut_slice[1]); }
Note that indexing is not a safe operation, and may panic if the index is out of bounds. We can fix this using the get
method, which returns an Option
.
```rust
fn main() {
    let v = vec![1, 2, 3, 4, 5];

    // will panic when uncommented!
    // let does_not_exist = &v[100];

    match v.get(100) {
        Some(i) => println!("100th element is {i}"),
        None => println!("Looks like your index is out of bounds there buddy"),
    }

    match v.get(2) {
        Some(i) => println!("2nd element is {i}"),
        None => println!("Looks like your index is out of bounds there buddy"),
    }
}
```
If you wanted to iterate over a vector, you might think of something like this:
```rust
#![allow(unused)]
fn main() {
    let v = vec![100, 32, 57];
    for i in 0..v.len() {
        print!("{} ", v[i]);
    }
}
```
However, we can use iterators to do better:
#![allow(unused)] fn main() { let v = vec![100, 32, 57]; for elem in &v { print!("{elem} "); } }
This will iterate over an immutable reference to `v`, hence the `&v`. If we wanted to mutate the elements while iterating, we need a mutable iterator, which is created as you'd expect:
#![allow(unused)] fn main() { let mut v = vec![100, 32, 57]; for elem in &mut v { *elem += 1; print!("{elem} "); } }
Note how we have to use the dereference operator `*` on `elem`. Dereferencing is how we access the value behind a reference. This is often done implicitly for us, but sometimes we have to do it ourselves, like here.
Methods, Associated Functions & `impl`s (5.3 & 10.2)

Methods are just like functions, but associated with a type. You've been using methods this whole time, by calling functions using `.` syntax. We can define methods on our own types using `impl` blocks, which are used to implement functions for a type.
struct Rectangle { width: f32, height: f32, } impl Rectangle { fn area(&self) -> f32 { self.width * self.height } } fn main() { let r = Rectangle { width: 2.0, height: 3.5 }; println!("The area of the rectangle is {}", r.area()) }
The `area` method can then be called as you'd expect: `r.area()`, where `r` is any `Rectangle`. Note how `area` takes a special parameter called `self`. `self` is the first parameter to any method, and can capture by reference (`&self`), by mutable reference (`&mut self`), or by value (`self`).

We can also use `impl` blocks to create associated functions, which are similar to methods, but don't capture `self`. These are the equivalent of static methods from many languages.
```rust
struct Rectangle {
    width: f32,
    height: f32,
}

impl Rectangle {
    fn area(&self) -> f32 {
        self.width * self.height
    }
}

// Note: these impl blocks can be separate, but do not need to be
impl Rectangle {
    fn new(size: (f32, f32)) -> Self {
        Rectangle {
            width: size.0,
            height: size.1,
        }
    }
}

fn main() {
    let r = Rectangle::new((2.0, 3.5));
    println!("The area of the rectangle is {}", r.area())
}
```
`new` is a common associated function which is used to provide a constructor shorthand. We call it using `::` syntax: `Rectangle::new()`.

Note that `Self` is the return type (distinct from the parameter `self`): `Self` is simply an alias for the type that the `impl` block applies to (`Rectangle`, in this case).

Methods are just fancy associated functions: `r.area()` is equivalent to calling `Rectangle::area(&r)`.
Traits
The other use for `impl` blocks is to implement a trait for a type. Traits are used to define shared behaviour, and types can implement traits to share functionality. They're very similar to Java's interfaces or Python/C++'s abstract base classes, and almost identical to Haskell's typeclasses. The idea is you implement a trait on a type by filling in some function definitions, and then you get a bunch of other functionality for free, or can use that type in certain places.
```rust
// (assuming the Rectangle struct from above, plus a simple Circle)
struct Rectangle {
    width: f32,
    height: f32,
}

struct Circle {
    radius: f32,
}

trait Shape {
    /// Returns the area of the shape
    fn area(&self) -> f32;
}

impl Shape for Rectangle {
    fn area(&self) -> f32 {
        self.width * self.height
    }
}

impl Shape for Circle {
    fn area(&self) -> f32 {
        self.radius * self.radius * 3.14
    }
}
```
Traits are one of the more complex bits of Rust's type system, and we'll go into more detail next time.
Part 3 - Don't panic
This section will look at error handling in Rust, and will also take a closer look at a few other things that you might have come across already.
Errors (9)
Rust's error handling mechanisms are kind of unique. It draws a lot on functional languages here too, with the `Result<T, E>` and `Option<T>` types being used to represent possible errors that can reasonably be expected to be handled. Panics are the other error mechanism in Rust, and are used for unrecoverable errors that halt execution.
Panics (9.1)
When something goes very, very wrong in your program, and you can't really do anything about it, you panic. Rust has the `panic!()` macro, which prints an error message, unwinds and cleans up the stack, and then exits.
fn main() { panic!("uh oh"); }
See how our error message ends up in the console? By default, no backtrace is shown when code panics, but we can set the environment variable `RUST_BACKTRACE=1` to show one. On your own machine, create a Rust program that you know will panic, such as one that indexes out of bounds in an array, and run it with the environment variable set. You'll get a lot more information, especially in debug mode. Printing the backtrace will help you pin down what is causing your code to panic.
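For example, a minimal program you could use to try this out (a sketch; the out-of-bounds index is deliberate):

```rust
// run with `RUST_BACKTRACE=1 cargo run` to see the backtrace
fn main() {
    let v = vec![1, 2, 3];
    let index = v.len() + 10;  // an index we know is out of bounds
    println!("{}", v[index]);  // panics: "index out of bounds"
}
```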
Results (9.2)
A lot of errors can be handled at runtime, which is what Result<T, E>
is for.
enum Result<T, E> {
Ok(T),
Err(E),
}
Result is generic over `T`, the type of the value returned upon success, and `E`, the type of the error value. Functions that return a `Result` expect you to handle possible error cases explicitly, whereas in other languages you may have to deal with exceptions. This makes error handling non-optional, meaning you're forced to write more robust code by design.
```rust
#![allow(unused)]
fn main() {
    use std::{fs, io};

    let f: Result<fs::File, io::Error> = fs::File::open("foo.txt");
    println!("{f:?}"); // print the result
}
```
We can see in the above snippet that trying to open a file returns a result with either the file handle or an I/O error. Since the code here is run on the Rust Playground servers, where `foo.txt` does not exist, the `Err(io::Error)` variant of the enum is printed, containing some information about the error.

If `foo.txt` did exist, we'd get an `Ok()` variant looking something like:
Ok(File { fd: 3, path: "/Users/Joey/code/rs118/foo.txt", read: true, write: false })
Remember that `f` would still be `Ok(File)`, and we'd need to extract the contained file before we could use it. There are a few different ways to handle `Result`s. Pattern matching with `match` or `if let` is a good option, as then we can destructure the type to get the values we want out of it, and deal with the error however we want.
#![allow(unused)] fn main() { use std::{fs, io}; let f: Result<fs::File, io::Error> = fs::File::open("foo.txt"); match f { Ok(file) => println!("File opened successfully!"), Err(_) => println!("Could not open file, continuing..."), } }
We could also use `unwrap`, which panics if the `Result` is an `Err(E)` and returns the `T` value if not. `expect` behaves the same, except we can specify an error message:
use std::{fs, io}; fn main() { let f: Result<fs::File, io::Error> = fs::File::open("foo.txt"); let file = f.expect("Could not open file"); }
If we don't want to handle the error ourselves, we can propagate the error back up to the caller. If we wanted to do this manually, we would need something like:
fn maybe_func() -> Result<T, E>;
let x = match maybe_func() {
Ok(x) => x,
Err(e) => return Err(e),
};
// equivalent to
let x = maybe_func()?;
In functions that return a `Result<T, E>` or `Option<T>`, we can use the `?` operator as a shorthand to bubble the error back up, where the return type is compatible.
use std::{fs, io}; fn open_my_file() -> Result<String, io::Error> { let contents: String = fs::read_to_string("foo.txt")?; Ok(contents) } fn main() { let error = open_my_file().unwrap_err(); println!("{error}"); }
Note that this only works if the error types of the expressions you use `?` on can be converted into the error type that your function returns.
Alternatively, you can return `Result<T, Box<dyn Error>>`, which uses a neat trick with trait objects and boxes to allow you to return any error type that implements the `Error` trait. `main` can also return this type, allowing you to use `?` within `main`.
use std::{error::Error, fs}; fn do_fallible_things() -> Result<(), Box<dyn Error>> { let _ten = u32::from_str_radix("A", 16)?; let _crab = String::from_utf8(vec![0xf0, 0x9f, 0xa6, 0x80])?; let _contents: String = fs::read_to_string("foo.txt")?; Ok(()) } fn main() -> Result<(), Box<dyn Error>> { let _f = fs::File::create("new_file")?; do_fallible_things() }
Best Practices (9.3)
Getting used to writing idiomatic error handling code in Rust can take a bit of practice. The general idea is to use the type system to your advantage where possible, and panic only where absolutely necessary.
`unwrap` and `expect` are handy when prototyping, before you decide on how you want to handle errors within your code. `unwrap` is also useful in cases where you have more information than the compiler, such as there being an invariant in your code that means the conditions for an error can never occur. `panic!` should generally be used in cases where your code is in a bad state, and execution cannot continue under any circumstances.
`Result` should be used where failure is an expected possibility, and the caller can reasonably be expected to handle it. It is often useful to create custom types that can be returned within `Result::Err(E)`, to describe the kinds of errors that may occur within your program. Encoding information within the type system makes your code more expressive and robust.
#[derive(Debug)] enum MyError { FileOpenError, FileTooLarge, FileParseError, NumberTooSmallError, } fn do_stuff() -> Result<u64, MyError> { let contents = std::fs::read_to_string("data.txt").map_err(|_| MyError::FileOpenError)?; if contents.len() > 10 { return Err(MyError::FileTooLarge); } let num: u64 = contents.parse().map_err(|_| MyError::FileParseError)?; num.checked_sub(100).ok_or(MyError::NumberTooSmallError) } fn main() { println!("{:?}", do_stuff()); }
Note how I'm making use of some of the methods on `Result`, which make working with errors a lot nicer:

- `map_err` changes the `Err` type (the error is transformed by a function/closure; the `|_| MyError::...` closures here ignore their input)
- `ok_or` transforms an `Option` into a `Result`; you need to specify the error to give instead of `None`
I want to shout out two crates here that make error handling much nicer: `anyhow` provides a more ergonomic and flexible type for error handling, and `thiserror` provides a `#[derive]` macro for the `std::error::Error` trait, allowing you to create custom error types much more easily. Have a look at some examples of using the two to see how to structure code that works with errors nicely.
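As a rough sketch (not from the notes) of what the `MyError` enum above might look like rewritten with `thiserror` (assuming the crate is added to `Cargo.toml`; the variant names and messages are made up):

```rust
use thiserror::Error;

// #[derive(Error)] writes the std::error::Error impl for us,
// and #[error(...)] provides the Display message for each variant
#[derive(Error, Debug)]
enum MyError {
    #[error("could not open the data file")]
    FileOpen(#[from] std::io::Error),
    #[error("could not parse the file contents")]
    FileParse(#[from] std::num::ParseIntError),
    #[error("number was too small")]
    NumberTooSmall,
}

fn do_stuff() -> Result<u64, MyError> {
    // #[from] means `?` converts io/parse errors into MyError automatically
    let contents = std::fs::read_to_string("data.txt")?;
    let num: u64 = contents.trim().parse()?;
    num.checked_sub(100).ok_or(MyError::NumberTooSmall)
}

fn main() {
    println!("{:?}", do_stuff());
}
```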
Generics (10.1)
Generics allow for parametric polymorphism in Rust, letting us write type signatures that are generic over type variables. Generics can be used in struct/enum definitions:
struct Point<T> {
x: T,
y: T,
}
enum Option<T> {
Some(T),
None
}
And in function/method definitions:
fn id<T>(ty: T) -> T {
ty
}
impl<T> Point<T> {
fn new(x: T, y: T) -> Self {
Self {x, y}
}
}
At compilation, Rust creates a separate copy of the generic function for each type needed, a technique known as monomorphisation (unlike, for example, Java, which uses a single function that accepts any Object
(type erasure)). Rust's approach is faster at runtime, at the cost of binary size and compilation time.
Generics are most powerful when used in combination with trait bounds, which leads me very nicely into...
Traits (10.2)
Traits define shared behaviour, and tell the compiler about what functionality a particular type has and shares with other types. They're very similar to interfaces/abstract classes in object-oriented languages, and very very similar to typeclasses in Haskell. Traits are defined as shown, with the function bodies omitted.
#![allow(unused)] fn main() { trait Printable { fn format(&self) -> String; } }
Giving a type a trait is much like implementing any other function: you use `impl <trait> for <type>`.
#![allow(unused)] fn main() { trait Printable { fn format(&self) -> String; } struct Person { first_name: String, last_name: String } impl Printable for Person { fn format(&self) -> String { format!("{} {}", self.first_name, self.last_name) } } }
The type can then be used as normal:
trait Printable { fn format(&self) -> String; } struct Person { first_name: String, last_name: String } impl Printable for Person { fn format(&self) -> String { format!("{} {}", self.first_name, self.last_name) } } fn main() { let named = Person { first_name: String::from("Triz"), last_name: String::from("Luss"), }; println!("{}", named.format()); }
Traits can be used to apply bounds to type variables, allowing only types that implement the trait to be used. The syntax is `f<T: Trait1 + Trait2>(a: T, b: T)`:
#![allow(unused)] fn main() { fn bold_fmt<T : Printable>(original: T) -> String { format!("**{}**", original.format()) } }
If there are lots of trait bounds, you might also see them written using a where
clause:
impl<Si, Item, U, Fut, F, E> Sink<U> for With<Si, Item, U, Fut, F>
where
Si: Sink<Item>,
F: FnMut(U) -> Fut,
Fut: Future<Output = Result<Item, E>>,
E: From<Si::Error>,
{
//trait impl for Sink<U>
}
(This is an actual trait implementation from the futures crate, which provides tools for asynchronous programming. This code implements the generic trait `Sink` for the struct `With`, which has 5 generic type parameters. Don't worry about the complexity here; I just wanted to show what the syntax looks like in full.)
This pattern of a `.format()` method seems awfully useful, and luckily enough Rust provides a `Display` trait for printing a struct. `println!()` calls `fmt` automatically on any `Display` type when `{}` is used as the placeholder:
use std::fmt; struct Person { first_name: String, last_name: String } impl fmt::Display for Person { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "{} {}", self.first_name, self.last_name) } } fn main() { let named = Person { first_name: String::from("Triz"), last_name: String::from("Luss"), }; println!("{}", named); }
`Display` is meant for the user-facing representation of a type, but there also exists `Debug`, which is intended to fully describe a type instance to the programmer (like `repr` in Python). This uses the `{:?}` placeholder.
Writing these, especially `Debug`, can get very tedious, so Rust allows you to derive some traits, including `Debug`, `PartialEq`, and `Hash`. This is just like Haskell's `deriving`.
```rust
#[derive(Debug, PartialEq)]
struct Person {
    first_name: String,
    last_name: String,
}

fn main() {
    let named = Person {
        first_name: String::from("Triz"),
        last_name: String::from("Luss"),
    };

    // Try making these match
    let named2 = Person {
        first_name: String::from("Joris"),
        last_name: String::from("Bohnson"),
    };

    println!("{:?} {}", named, named == named2);
}
```
We can also use trait objects to allow us to return any type implementing a trait. These are often used in combination with `Box`es, as trait objects are resolved at runtime, so we don't know their size statically and they have to be stored on the heap.
trait Printable { fn format(&self) -> String; } struct Person { first_name: String, last_name: String } impl Printable for Person { fn format(&self) -> String { format!("{} {}", self.first_name, self.last_name) } } struct Animal { species: String, name: String } impl Printable for Animal { fn format(&self) -> String { format!(" a {} named {}", self.species, self.name) } } fn resident_of_10_downing(is_cat: bool) -> Box<dyn Printable> { if is_cat { Box::new(Animal{ species: String::from("Cat"), name: String::from("Larry") }) } else { Box::new(Person{ first_name: String::from("Triz"), last_name: String::from("Luss") }) } } fn main() { println!("{}", resident_of_10_downing(false).format()); }
Traits and generics form the backbone of the type system, and there's an awful lot you can do with them. A few examples of interesting things to research:

- Conditionally implementing methods by bounding `impl` blocks
- Associated types
- Trait objects & object safety
- `std::marker::PhantomData`
- Const generics
- `Deref` coercion
- Function traits (`Fn` vs `fn`)
- The `Sized` trait
- Interior mutability
- GATs
References & Lifetimes (10.3)
One of Rust's selling points is that it ensures, at compile time, that all references and borrows are always valid. The compiler does this using lifetimes, which is the scope for which a reference is valid. Most of the time the compiler can do this for us, but sometimes we have to step in and help it out.
This is very unique to Rust and will feel weird to start with, but it'll really cement the ideas about borrow checking and ownership.
Recall how the borrow checker works: it makes sure that all of our references are valid, and prevents us from using references after the values have been dropped. Every reference in Rust has a lifetime. Below, `r` has a lifetime of `'a` and `x` has a lifetime of `'b`.
```rust
#![allow(unused)]
fn main() {
    {
        let r;                // ---------+-- 'a
                              //          |
        {                     //          |
            let x = 5;        // -+-- 'b  |
            r = &x;           //  |       |
        }                     // -+       |
                              //          |
        println!("r: {}", r); //          |
    }                         // ---------+
}
```
Have a look at that compile error. We can see that `r` borrows `x`, but `x` is dropped while it's still borrowed, which is not allowed.
Let's look at an example: a function that takes two strings and returns the longest one:
#![allow(unused)] fn main() { fn longest(x: &str, y: &str) -> &str { if x.len() > y.len() { x } else { y } } }
Rust isn't quite sure how to infer lifetimes here, so we need to help it out by adding generic lifetime parameters. Just take a minute to appreciate that compiler error too, which tells us to do exactly what I'm about to explain now.
Lifetime annotations in function signatures tell the compiler which input references the output reference comes from, describing how input and output lifetimes relate to each other. We introduce lifetime names with the same syntax as generics, and then can use the annotations on references to tell the compiler what the lifetime of the reference is.
#![allow(unused)] fn main() { fn longest<'a>(x: &'a str, y: &'a str) -> &'a str { if x.len() > y.len() { x } else { y } } }
The output reference has the same lifetime as the two input references: `'a`. The annotations go in the function signature and become part of the contract of the function, much like types are. Anyone calling the function must uphold the contract, meaning the two input references must have the same lifetime. We are guaranteeing that the returned reference is alive for at least `'a`. The generic lifetime becomes the shorter of `x`'s and `y`'s lifetimes (i.e. the region in which both are alive). Take a look at the following error:
```rust
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}

fn main() {
    let string1 = String::from("long string is long");
    let result;
    {
        let string2 = String::from("xyz");
        result = longest(string1.as_str(), string2.as_str());
    } // string2 dropped: the last point both string1 and string2 are alive

    println!("The longest string is {}", result);
} // string1 dropped
```
`result` could be either `string1` or `string2`. The compiler cannot guarantee that whichever string `result` references outlives it: if `result` referenced `string2`, the target memory would be dropped before the `println!`, making `result` invalid.
When returning a reference from a function, the lifetime of the return type needs to match the lifetime of one of the parameters:
#![allow(unused)] fn main() { fn longest<'a>(x: &str, y: &str) -> &'a str { let result = String::from("really long string"); result.as_str() } }
Have a look at the error. Even though we annotated the lifetime, we're trying to return a reference to memory created inside the function, which will be dropped when the function ends. In cases like this, it is better to return an owning type (`String`) instead of a reference.
Lifetimes in Structs
What if you wanted to create a struct that contained a reference? The tuple struct below contains a reference to the first word in a string.
#![allow(unused)] fn main() { struct Word<'a>(&'a str); impl<'a> Word<'a> { pub fn new(input: &'a str) -> Self { let first_word = input.split_ascii_whitespace().next().unwrap(); Self(first_word) } } }
Just like with functions, we are tying the struct's lifetime to the lifetime of the referenced string (`input`). In this case, `Word` would contain invalid data once `input` is dropped, so it cannot outlive it. The example below won't compile; take a look at the error message.
```rust
struct Word<'a>(&'a str);

impl<'a> Word<'a> {
    pub fn new(input: &'a str) -> Self {
        let first_word = input.split_ascii_whitespace().next().unwrap();
        Self(first_word)
    }
}

fn main() {
    let fst: Word;
    {
        let sentence = "Oh boy, I sure do love lifetimes!".to_owned();
        fst = Word::new(&sentence);
    } // sentence dropped

    println!("First word is: {}", fst.0);
}
```
`fst` references `sentence`, so `fst` cannot outlive `sentence`. Lifetimes are hard, so don't worry if you don't get it all straight away. The best way to get the hang of them, as with anything, is to use them, so try writing some code that needs lifetime annotations, or adding annotations to existing code.
Closures (13.1)
Closures are anonymous functions, similar to lambdas in other languages. What makes closures special in Rust is that they close over their environment, allowing you to use variables from the outer scope in which they are defined. The arguments to a closure go within pipes, `|args|`, and the expression following the pipes is the function body. Like variable types, you don't need to specify the types as they're inferred, but you can if you like.
#![allow(unused)] fn main() { let increment = |x: u32| -> u32 { x+1 }; let also_increment = |x| x+1; let four: i32 = also_increment(3); }
The example above just shows a basic anonymous function. Consider the one below.
fn main() { let pi: f64 = 3.14; let area = |r| pi * r * r; let radii = [1.0, 2.5, 4.1]; let areas = radii.map(area); println!("{areas:?}"); }
The closure references a variable from the enclosing scope (pi) within its function body, and we can still do all the things with it that we usually can with anonymous functions. We then pass the closure to map
, which maps an array of circle radii to their areas by applying the closure to each element. Closures can be passed around to/from functions, bound to variables, stored in structs, etc., like any other value.
Closures capture variables in the least restrictive way they are allowed, in this order: by reference &T, by mutable reference &mut T, then by value T. Variables are captured between the closure's definition and its last use, and this unsurprisingly follows the borrowing rules from before: if a closure mutably references a variable, that variable cannot be borrowed again until after the closure's last use. If a variable is required by value and isn't Copy
able, the closure cannot be run more than once, as the variable is consumed. These capture modes have corresponding traits: Fn, FnMut and FnOnce.
fn main() { let mut idx = 0; let zeros = [0; 5]; println!("{:?}", zeros.map(|_| { idx += 1; idx })); }
The Rust By Example section on closure capturing gives good examples of these rules.
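As a small sketch of the by-value case, here's a closure that consumes a non-Copy String and so can only be called once (it only implements FnOnce):

fn main() {
    let name = String::from("Ferris");
    // `name` is captured by value because the closure moves it out when called,
    // so this closure implements FnOnce but not FnMut or Fn.
    let consume = || name;
    let out = consume();
    println!("{out}");
    // consume(); // error: `consume` was already used (moved) by the first call
}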
Iterators (13.2)
Iterators are used to process sequences of items. Where you would usually have to use loops to process sequences of data, iterators allow common patterns such as maps, filters and folds to be expressed much more concisely. Many languages have analogous concepts: Haskell's lists, Python's Generator
, or Java's Iterator
and Stream
. Iterators can be created from any type that implements IntoIterator
, such as slices, vectors and arrays.
#![allow(unused)] fn main() { let v1 = vec![1, 2, 3]; let v1_iter = v1.iter(); for val in v1_iter { println!("Got: {}", val); } }
for
loops in Rust work using iterators. All iterators implement the Iterator
trait, which requires the next()
method: fn next(&mut self) -> Option<Self::Item>
. Each call to next() returns the next item in the iterator, and it returns None
once the iterator is exhausted.
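To get a feel for how little is required, here's a rough sketch of a hand-rolled iterator (Counter is just a made-up type for illustration, not something from the standard library):

struct Counter {
    count: u32,
    limit: u32,
}

impl Iterator for Counter {
    type Item = u32;

    // next() is the only required method; every other iterator method
    // (map, filter, sum, ...) is built on top of it.
    fn next(&mut self) -> Option<Self::Item> {
        if self.count < self.limit {
            self.count += 1;
            Some(self.count)
        } else {
            None
        }
    }
}

fn main() {
    let counter = Counter { count: 0, limit: 5 };
    // the for loop just calls next() repeatedly until it gets None
    for n in counter {
        println!("Got: {n}");
    }
}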
Iterators have a number of methods, generally grouped into consumers and adaptors. Consumers process the iterator and return some aggregated value, whereas adaptors convert the iterator into another iterator. Iterators in Rust are lazy, so no computation is done until they are consumed.
fn main() { let v1 = vec![4, 7, 12, -9, 18, 1]; //consumers let total: i64 = v1.iter().sum(); let min = v1.iter().min(); let max = v1.iter().max(); let product = v1.iter().fold(1, |acc, x| acc * x); //adaptors let evens = v1.iter().filter(|x| *x % 2 == 0); let squares = v1.iter().map(|x| x * x); //use the iterator in a for loop //zip zips two iterators together for (i, i_squared) in v1.iter().zip(squares){ println!("{i} squared is {i_squared}"); } }
Take a look at the Iterator
trait and std::iter
to see what iterators can do.
Iterators are at least as performant as the equivalent loops, and often allow you to express computation much more cleanly and efficiently. Choosing between for loops and chains of adaptors for more complex operations is often personal preference, but it is best to know both ways. Two methods for calculating the dot product of two slices are shown below. Which do you think is better?
fn dot_1(x: &[i64], y: &[i64]) -> i64 { assert_eq!(x.len(), y.len()); //simple indexing for loop let mut sum = 0; for i in 0..x.len() { sum += x[i] * y[i]; } sum } fn dot_2(x: &[i64], y: &[i64]) -> i64 { assert_eq!(x.len(), y.len()); x.iter() // zip combines two lists into pairs: // [1,2,3] zipped with [4,5,6] is [(1, 4), (2, 5), (3, 6)] .zip(y.iter()) // fold accumulates the result (sum) over each element in the iterator. .fold(0, | sum, (xi, yi) | sum + xi * yi ) } fn main() { let x = vec![4, 7, 12, -9, 18, 1]; let y = vec![1, 0, -8, 63, 72, -11]; let d1 = dot_1(&x, &y); let d2 = dot_2(&x, &y); println!("{d1} == {d2}"); }
Further Reading
Well, thanks for making it this far. Hopefully you consider yourself a confident Rustacean by now (at least in theory; doing the projects should solidify this). If you want to learn more, the list below contains a collection of Rust things that I think are excellent.
- The chapters of the book not covered here are worth a read (11, 12, and 14 onwards)
- The Rustonomicon covers lots of the finer details of Rust, especially the forbidden arts of unsafe Rust
- The Reference aims to be a complete reference for the language
- Learn Rust With Entirely Too Many Linked Lists talks you through building a series of linked lists in Rust, with a surprising amount of detail for something seemingly so simple
- Rust for Rustaceans covers some advanced Rust topics, for those who've already mastered the basics.
- The Rust Cookbook is a collection of examples that demonstrate good practices and common tasks
- The Embedded Rust Book is an introduction to writing Rust for bare metal embedded systems
- Discovery is an introduction to microcontroller-based embedded systems that uses Rust as the teaching language rather than the usual C/C++
- rust-lang.org's own "Learn Rust" page
- This Week in Rust is a weekly Rust newsletter
- https://cheats.rs/ is a handy cheatsheet
- https://github.com/ctjhoa/rust-learning has a bunch of resources for learning Rust
- https://github.com/rust-unofficial/awesome-rust is a curated list of Rust code and resources
- r/rust often has interesting posts worth reading
- The Rust community discord server is cool
Tic-Tac-Toe!
We're gonna use everything we learned to put together a neat little game of tic-tac-toe. I'm assuming you all know the rules. If you get stuck on any of the tasks, try to use your resources (The Book, Rust by Example, Google), or ask someone to help talk you through it, before going straight to the solutions. Remember, the compiler is your friend and will try to tell you where to fix your code when you have an error, and always run cargo clippy
and cargo fmt
! (I recommend setting up VS Code to do this for you on save)
Task 0: cargo new
Create a new Cargo project, cd
into it, and open your editor. Check The Book if you need a reminder on how to do this.
Task 1: Data Types
We're gonna need some types to represent things within our game. In a game with two players and a board, a Player
type and a Board
type both seem sensible.
- There are two players,
X
andO
- The board is a 3x3 grid of either an
X
orO
, or blank.
Task 1.1
Implement a simple data type to represent players. We could just use strings or numbers to do this, but structuring data properly is key to writing good Rust.
Recall that we can use struct
s and enum
s to create our own data types. Which of these could be used to represent a type with only two different values?
For any struct
or enum
you write today, add the line #[derive(Copy, Clone)]
just above it. This derive statement tells the compiler to copy your type whenever it would usually be moved, allowing us to (mostly) ignore the borrow checker and focus on the basics for now. Don't worry too much about what this does exactly; details are available here if you're interested.
Task 1.2
Implement a simple data type to represent the state of the game/board.
There are a few ways of approaching this, most of them involving a fixed size array. You'll want to use your player type, but also think about what types can be used to represent something that may or may not be there (Option
, anyone...?)
Task 1.3
So we have some players and a game board, but what now? Well it's no good if neither of our players can see the board, so you're going to have to come up with some way of printing the board to the terminal.
Create a new, empty instance of your game board type in main, and write some code to print the empty board. Experiment with manually adding some moves to the board and make sure your code can handle printing X
s and O
s properly.
You'll most likely want to iterate through your board array in some way, printing some other characters along with it. You'll need some code to print your Player
type too (using the Display
trait if you feel fancy), but a simple match
expression with some println!()
s will likely do for now.
Note: Rust might not allow you to compare the equality of two custom types so easily. This is also A Good Thing™ because the notion of equality is not so simple for all types, so much so that Rust splits it into two traits, Eq
and PartialEq
. You will probably need to derive them for your type, by adding Eq, PartialEq
alongside Copy, Clone
in the #[derive()]
attribute. Again, The Book has more details on this.
Task 2: Gaming
You're finally ready to write your game. You'll want a game loop in your main function to do a few things:
- Print the state of the board each turn (T1.3)
- Prompt a player for input (T2.1)
- Add the player's guess to the board (T2.2)
- Check if they've won (T3)
- Move to the next turn (T2.3)
The first bit you've already written, but we need to do the rest too
Task 2.1
Write some code to prompt for user input in a loop, storing whatever data they enter.
What kind of loop do you want (for
/while
/loop
), and when do you want to break out of it/jump back to the top of it? Consider your control flow carefully here. You'll also need some way to read user input from the terminal. Rust has a Stdin
struct with a read_line()
method in its standard library. The Book has some good examples of this. You'll need to convert your input from a string into a number too, so check out str::parse
for some help with that.
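If you want a starting point, here's a bare-bones sketch of the read-and-parse step on its own, with no looping or validation yet (the prompt and messages are just examples):

use std::io::stdin;

fn main() {
    println!("Enter a square (1-9):");
    let mut input = String::new();
    // read_line appends the typed line, including the trailing newline, to the buffer
    stdin()
        .read_line(&mut input)
        .expect("failed to read from stdin");
    // trim off the newline before parsing, and handle bad input gracefully
    match input.trim().parse::<u32>() {
        Ok(n) => println!("you entered {n}"),
        Err(_) => println!("that wasn't a number!"),
    }
}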
Task 2.2
Now we have player input, we need to use it to update the state of the game. Use the input to add the player's turn to the board, if it is a valid guess. Have a look at std::io
for input. Numbering squares left-to-right top-to-bottom works well, but if you want to be fancy, how about some chess board style labelling?
What constitutes valid input for a turn? You have 9 squares on your game board, and you can't play in a square that is already taken. If the guess isn't valid, you'll need to get the player to input a new guess.
Task 2.3
At the end of each turn, you need to move the game to the next player. Add some code to make sure players take turns properly in your loop, and make sure your game is mostly coherent at this point.
Task 3: A winner?
Two players should be able to play your game now, taking turns, and specifying only valid moves. But this is no fun if there are no winners.
Add some code to your game loop to see if a move leads to the player winning. If so, print a message to indicate this, and exit the game.
There are multiple cases to consider for a win: 3 rows, 3 columns, and the 2 diagonals. You could hard-code all 8 of these, or save some sanity with some for
loops. Up to you.
Tic-Tac-Toe Solutions
Task 0
If you haven't installed Rust/cargo already, head to https://rustup.rs
cargo new tic-tac-toe
cd tic-tac-toe
code .
Exchange code
for your editor of choice.
Task 1.1
You're looking for an enum:
#[derive(Debug, PartialEq, Eq, Copy, Clone)]
enum Player {
X,
O,
}
Note that I've derive
d some traits on this type, which will come in handy later:
Debug
generates a string representation of the type for debugging, that can be used via thedbg!()
macroEq
andPartialEq
tell the compiler it can compare equality of the type using ==
and!=
Copy
andClone
tell the compiler that it is free to copy the type all over the place
Task 1.2
There are a few different ways to go about this. You could use either a 2D array or a 1D array, whichever you prefer. I opted for a 2D array of Option<Player>
, which represents either Some(Player)
, or None
, when the square is empty. I then wrapped that array in a struct, which also holds whose turn it currently is, and whether there is currently a winner. This allows the struct to represent the state of the entire game rather than just the board, but this is entirely up to you.
struct Board {
grid: [[Option<Player>; 3]; 3],
current_turn: Player,
winner: Option<Player>,
}
You could also opt to not hold the current player or winner in the struct, in which case a type alias would make more sense.
type Board = [[Option<Player>; 3]; 3];
Task 1.3
The let mut
statement creates a new, mutable instance of our Board
struct from above. We then iterate through each square, adding in some decoration too.
let mut board = Board {
grid: [[None, None, None], [None, None, None], [None, None, None]],
current_turn: Player::X,
winner: None,
};
println!("-------------");
for row in board.grid {
for square in row {
print!("|");
match square {
Some(p) => print!(" {} ", p),
None => print!(" "),
}
}
println!("|");
println!("-------------");
}
will print something like:
-------------
| X | O | X |
-------------
| O | X | O |
-------------
| X | O | X |
-------------
Note how we're using our Player
enum within the print!()
macro. This is because I manually implemented the Display
trait on it:
impl Display for Player {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}",
match self {
Player::X => "X",
Player::O => "O",
}
)
}
}
This may look a little scary for now, because it is. Pattern matching on Option<Player>
in the loop is just as good:
println!("-------------");
for row in board.grid {
for square in row {
print!("|");
match square {
Some(Player::X) => print!(" X "),
Some(Player::O) => print!(" O "),
None => print!(" "),
}
}
println!("|");
println!("-------------");
}
You could also implement the Display
trait on the entire Board
if you wished, moving the above code to the fmt
function, similar to Player
.
Task 2.1
You'll need std::io
, which is a module from Rust's standard library. Modules are imported in Rust using the use
keyword, so something like:
use std::io::stdin;
use std::io::stdout;
will import those functions. You'll also need the Write trait in scope for the flush() call used below, so you can combine them all if you want:
use std::io::{stdin, stdout, Write};
You could use a while
loop, with a condition checking for winners, or a loop
with a break
. I opted for the latter approach.
fn main() {
let mut board = Board {
grid: [[None, None, None], [None, None, None], [None, None, None]],
current_turn: Player::X,
winner: None,
};
loop {
//prompt for user input
print!("Player {}, enter a square>>", board.current_turn);
//flush stdout because stdout is weird
stdout().flush();
//create the buffer our input will be copied into
let mut turn = String::new();
//read input into that buffer,
stdin().read_line(&mut turn);
//print the board
println!("-------------");
for row in board.grid {
for square in row {
print!("|");
match square {
Some(Player::X) => print!(" X "),
Some(Player::O) => print!(" O "),
None => print!(" "),
}
}
println!("|");
println!("-------------");
}
}
}
The reason we have to flush stdout
manually is that it is usually only flushed when a newline character is printed or when the buffer fills up, and neither of those happens here.
Our board printing code is also included there at the bottom of the loop, but it doesn't do anything yet really, as using the input comes next. Our game is starting to come together!
Task 2.2
The following snippet validates that the input is a number, that it is within the range of the board, and that the chosen square is blank. It then adds the turn to the board if so.
let guess: Result<usize, _> = turn.trim().parse();
if guess.is_err() {
continue;
}
let square = guess.unwrap() - 1;
let row = square / 3;
let column = square % 3;
if square > 8 || board.grid[row][column].is_some() {
continue;
}
//add the turn to the board
board.grid[row][column] = Some(board.current_turn);
parse()
is a funny little function, as it is generic over any type that a string can be turned into, so we have to tell the compiler which type we want to parse our string into, either with a type annotation (as above) or the affectionately-named turbofish syntax (parse::<usize>()). It also returns a Result<T,E>
, which is like an upgraded version of Option<T>
, expressing that the function returns either our result T
, or some type expressing an error E
. We check that the Result
is not an error, and then unwrap the guess from it.
We use one conditional expression to check if our parse function failed using is_err()
, then we can unwrap()
our number from the Result
. Another condition is used to check the square is not out of range, or already taken. We jump back to the top of the loop in any of these cases.
Task 2.3
We can just add a simple match
expression at the bottom of our loop to switch turns.
board.current_turn = match board.current_turn {
Player::O => Player::X,
Player::X => Player::O,
}
Depending upon how your loop works you might need to put this somewhere else to handle the control flow differently.
Our main function now looks like this. I added a little help text at the top to print at the start of the game, too!
fn main() {
println!("tic tac toe!");
println!("Board squares are numbered as follows:");
println!(
"------------\n\
| 1 | 2 | 3 |\n\
-------------\n\
| 4 | 5 | 6 |\n\
-------------\n\
| 7 | 8 | 9 |\n\
-------------"
);
let mut board = Board {
grid: [[None, None, None], [None, None, None], [None, None, None]],
current_turn: Player::X,
winner: None,
};
loop {
print!("Player {}, enter a square>>", board.current_turn);
stdout().flush().expect("Could not flush stdout");
let mut turn = String::new();
stdin().read_line(&mut turn).expect("Failed to read line");
let guess: Result<usize, _> = turn.trim().parse();
turn.clear();
if guess.is_err() {
continue;
}
let square = guess.unwrap() - 1;
if square > 8 || board.grid[square / 3][square % 3].is_some() {
continue;
}
//add the turn to the board
board.grid[square / 3][square % 3] = Some(board.current_turn);
}
}
Task 3
This bit is a little more complicated. board.grid
is an array of Option<Player>
, so we need to check that each tile in the row is equal, and also that they are not all None
values (done by the is_some()
method). We check each row, each column, and also the two diagonals. If any of these checks end up storing a winner in board.winner
, then the match
at the bottom catches this and ends the game.
//check if we have any winners
//check rows -- easily done
for row in board.grid {
if row[0] == row[1] && row[1] == row[2] && row[0].is_some() {
board.winner = row[0];
}
}
//check columns -- need some indexing for this
for i in 0..3_usize {
if board.grid[0][i] == board.grid[1][i]
&& board.grid[1][i] == board.grid[2][i]
&& board.grid[0][i].is_some()
{
board.winner = board.grid[0][i];
}
}
//check diagonals
if board.grid[0][0] == board.grid[1][1]
&& board.grid[1][1] == board.grid[2][2]
&& board.grid[0][0].is_some()
{
board.winner = board.grid[0][0];
}
if board.grid[0][2] == board.grid[1][1]
&& board.grid[1][1] == board.grid[2][0]
&& board.grid[0][2].is_some()
{
board.winner = board.grid[0][2];
}
The Eq
and PartialEq
derives from earlier allow us to use the ==
operator to compare instances of our Player
type. More info about those can be found in The Book
The Final Product
The full code can be found on github: https://github.com/uwcs/rs118-tic-tac-toe
CHIP-8 Project
CHIP-8 is an interpreted programming language, developed by Joseph Weisbecker. It was initially used on the COSMAC VIP and Telmac 1800 8-bit microcomputers in the mid-1970s. CHIP-8 programs are run on a CHIP-8 virtual machine. It was made to allow video games to be more easily programmed for these computers, but CHIP-8 is still used today due to its simplicity: it can run on practically any platform, and it's a great way to learn about low-level programming and binary numbers.
In short, it's a really basic assembly-language-type specification that lets people build neat games, and we're going to build an interpreter for it, applying some of the Rust we've learned so far.
Our finished interpreter is available on crates.io, so you can install it with cargo install rs118-chip8
to have a play with it and see what your final product should look like. Plenty of ROMs are available online; we recommend Space Invaders and Tetris.
Also, if you haven't read chapter 7 of The Book, I'd recommend doing so. Knowing how to lay out a Rust program is something that's often overlooked but super important.
0: Getting Started
Task 0.1: Read the Docs
Before you do anything, have a read of the CHIP-8 specification, so you have a rough idea of what it is you need to implement and can start thinking about a solution. Think about how you'll represent the different components in Rust.
Task 0.2: Read More Docs
Create a new cargo project, and open up the Cargo.toml
file. Our emulator exposes some stuff as a library for you to base your solution around, so add that to your dependencies:
[dependencies]
chip8_base = "0.2"
Take a look at the chip8_base
library, that we'll be using as the base for our implementation. We can see that it exposes two type aliases, Display
and Keys
, a Pixel
enum, and an Intepreter
trait. The idea is that you create your own interpreter by implementing the trait, and then the run()
method will run it for you. This works because run()
is written to accept any type that implements the Interpreter
trait as an argument.
There are three methods our own interpreter will need to provide for a complete trait implementation:
step(&mut self, keys: &Keys) -> Option<Display>
speed(&self) -> Duration
buzzer_active(&self) -> bool
The first one is the main driver function of your interpreter: it executes the virtual machine one cycle at a time, modifying internal state as necessary. The other two are just helpers that provide the trait with some more info it needs to run your interpreter correctly. Have a look at those function signatures and think about what the types tell you about what you might need to do.
Task 0.3: Git Good
Cargo initialises your new project as a git repo. It does this for a reason, to encourage you to use version control. If you aren't familiar with git, check out our Git Good resources. Make a new commit every time you change or try something, so you can keep track of what you've done and roll back if things break. Commit at least when you've done each task.
1: The Virtual Machine
The first step we're gonna take is to lay out our program using modules. Refer back to The Book if you need a refresher on how to create modules and structure a Rust program.
Task 1.1: Modules
Create a new directory next to main.rs
called interpreter
, and then add interpreter/mod.rs
. Add the line mod interpreter;
to the top of main.rs
so the interpreter
module is added to the module tree. This module is where most of our code is going to live. Feel free to create any other modules you wish alongside mod.rs
too, but don't forget to include them in the module tree.
Task 1.2: The Interpreter Type
In our new interpreter module, we want to create a struct that will represent the state of our CHIP-8 virtual machine.
Create a new struct type, adding fields from the spec: memory, registers, display (following chip8_base::Display
). Also add an impl
with a new()
associated function to return a default copy of our struct. new()
can take whatever arguments you see fit to create a new virtual machine, but for now, none is fine.
Task 1.3: The Interpreter Trait
Before we can use the run()
function from the chip8_base
module, we need to tell the compiler that our interpreter is actually an interpreter, by implementing the Interpreter
trait on it.
Import the chip8_base::Interpreter
trait into your interpreter module, and use another impl
block to create the skeleton of your implementation of the trait on your struct.
Task 1.4: Keep the Compiler Happy
The compiler should be screaming at you right about now, because your type is implementing a trait without providing definitions for any of its methods. Go ahead and add the three required methods to your impl
block, marking them as todo!()
for now.
Note: rust-analyzer
can do this for you if you ask it nicely (in VS Code, press Ctrl
+ .
inside the impl
and select Implement missing members
).
Task 1.5: One Step at a Time
So, now you have your own skeleton of an interpreter, you can call step()
on it to step through execution one clock cycle at a time. We do however need somewhere to create and run things from: main()
.
Head back to main.rs
and use your new()
function from the interpreter
module to create a new virtual machine (check your pub
s). Then, call chip8_base::run()
, passing the struct you just instantiated as an argument. Have a look at the type signature for chip8_base::run()
and think about how exactly to pass your arguments, considering ownership.
This should all compile now, so cargo run
it and see what happens!
Task 1.6: Now I'm Panicking
Well, you left the methods marked as todo!()
, so it panicked. That was silly. We can provide some really barebones implementations though that don't do anything. Look at the return type for the three methods and think about what you might want to return.
Make the three Interpreter
methods return something such that they don't really do anything, but still don't panic.
Task 1.7: Timing
Have another look at the speed()
method. To run games properly, the interpreter needs to run at a fixed clock speed. The return type of speed()
is Duration
, which is Rust's representation of a period of time. You can create Duration
types with a number of associated functions, take a look at the docs to see which one is appropriate here. Your interpreter type should store some representation of the speed it is currently being run at (a clock frequency of 700Hz is generally pretty good for most CHIP-8 games), and be able to return the speed as a Duration
so that the run()
function knows exactly how fast to run it.
Make your speed()
method return a period of time corresponding to the speed of the interpreter. You should not hard-code this; make it a configurable constant or something passed in when instantiating a new interpreter, as different games may require different speeds.
2: Fetch-Decode-Execute
Your interpreter should be happily churning away, burning CPU cycles for now, and the display window should work, letting you drag it around, resize it, etc. So, we have a virtual machine that does nothing. That's not very exciting at all, so let's change that. The virtual machine works pretty much the same as a CPU, with a fetch-decode-execute cycle. This entire cycle is one step, and is what should be implemented in your step function. The basic idea of Interpreter::step()
is:
- Get the next instruction from memory (fetch)
- Increment the program counter
- Work out what that instruction needs to do (decode)
- Execute the instruction (execute)
Task 2.1: Fetch
Let's start with fetch. Looking at the spec, we can see that each CHIP-8 opcode is two bytes, composed of four nibbles (each nibble being four bits, or one hex digit). The program counter is 16 bits, and should point to the address of the next instruction to be executed from CHIP-8's 4kB of memory.
Write a function, fetch
, to return the next opcode to be executed from memory, and increment the program counter. Consider carefully the parameters and return types of your method, how do ownership rules interact with it? Add a call to fetch
within step
. If you haven't already got fields for memory and program counter on your struct, now is the time to add them.
Task 2.2: Fetch, But it Works
So when you run your program now, what should be happening is opcodes are continually fetched from memory, until... it panics? Yes, panics. What's happening is your program is continually fetching until the program counter overflows, which causes a panic when Rust is in debug mode (see this blog post for a good rundown on overflow in Rust). To fix this, we need to manually specify a way to make our program counter wrap around back to 0. The program counter is meant to represent a 12-bit address (for CHIP-8's 4096 bytes of memory), so we should wrap back to 0 when it reaches 4096.
Fix your fetch instruction to wrap back to 0 when it reaches the end of the addressable memory. Add in some debug statements (take a look at the excellent dbg!()
macro) to verify that it is fetching continually from memory, returning 0 each time (we haven't loaded anything into memory yet).
Task 2.3: Logging
When writing code, logging events to the terminal can be really useful to help determine what is going on in your program. The log crate provides some macros used for logging events to the console, and good libraries will generally include logs to help users see what's going on. Libraries can choose to log events, but it's up to the user to initialise a logging implementation to consume the logs. We recommend the simple env_logger
, which lets you configure logging via an environment variable. See the examples on the docs pages for those two crates to get an idea of how they work.
Add both log
and env_logger
to your package manifest, then add a call to env_logger::init()
as the first call in your main()
function. Run your code again, but prepend the cargo command with RUST_LOG=info
(ie, RUST_LOG=info cargo run ...
) and you should see the library, and other crates we depend on such as wgpu
, doing things. Set the log level to debug to see even more.
If you run export RUST_LOG=info
, then that sets the log level for the entire terminal session. Error logs will always be printed by default, but you might find it useful to leave warn
or info
logs on.
Add some logging calls using the log
macros to your code so you can easily trace what's going on. Try to choose an appropriate log level for each call, to make your logs easy to consume and filter. Using trace
and debug
logs is useful for tracking down issues; you'll thank yourself later on.
Task 2.4: Decode
So, we can fetch instructions, but what do we do with them? Well, execute them of course. CHIP-8 has 35 instructions with a varying number of operands. We could write a very very long chain of if/else expressions to decode each instruction, or we could use one match
expression. Each instruction has four nibbles, some of which are fixed to specify the opcode, some of which are operands of varying length (4, 8 or 12 bit). We can use this to write our execute()
function (you could separate decode and execute, but it's easiest to combine them for simplicity).
Write an execute
method that uses a match
expression to decode instructions. Just pattern match for now; there's no need to execute anything yet (that's spoilers for part 2.5). There are many ways you could go about this, but I recommend breaking each instruction down into its four nibbles, then pattern matching on that. For now, make each of your match arms do nothing (return the unit type). Remember that match
expressions in Rust have to cover all possible values, so you can use a wildcard pattern (_ => ()
) to cover any unimplemented instructions, or instructions that don't exist. You don't have to do all of the instructions now, just maybe have a look at the essentials for later (see section 3) and check the advice at the bottom for a little help. Refer back to The Book (Chapter 18 may also be useful) if you need a refresher on how match
works.
Task 2.5: Execute
So we've done fetch and decode, no prizes as to what comes next. Executing an instruction just consists of modifying the internal state of your virtual machine accordingly, so double-check the specification at this point to make sure you have all the fields you need on your interpreter struct.
First implement the opcode 0x0000
as a NOP (No-Operation) instruction, that simply does nothing. Then fill in a few of the arms of your match statement to execute the decoded instructions (perhaps take some inspiration from section 3). Think about how you're going to get the operands out of the instruction when they vary in width. Make your step()
method call execute()
so that the interpreter is now doing the fetch-decode-execute cycle.
Congrats! You're successfully emulating a CPU* (*CHIP-8 is not a CPU but it's an awful lot like one). Take a moment to appreciate how cool this is, even if it does nothing so far. What should be happening is the interpreter is fetching, decoding and executing 0x0000
instructions continually, which aren't real instructions but I added them because I could.
3: The First Few Instructions
In the last section, you implemented one or two random instructions of your choosing, plus our fictitious NOP instruction. Now, I'm gonna talk you through implementing a few more, such that you can run a real program. We've written a UWCS test ROM that does nothing except display the UWCS logo, but using only 6 instructions, so those are what we'll implement.
Task 3.1: 00E0 (clear screen)
This instruction should set all the display's pixels to black. If you don't have some representation of the display in your struct, add one now (look at the Display
type if you need a hint).
The step()
function should return Some(Display)
when the display is updated, so maybe your execute function wants to do the same, and then the step function should return that? Either way, make sure that executing this instruction causes your updated display state to be returned to step()
's caller.
Task 3.2: 1nnn (jump)
This instruction has one operand, nnn
, a 12-bit address. Rust doesn't have a 12-bit number type, so pick another type accordingly and include checks/wrapping to make sure that the value remains within 12 bits. The program counter should simply be set to the value of the operand.
Task 3.3: 6xkk (set register Vx)
CHIP-8 has 16 general purpose 8-bit registers, numbered V0
to VF
. VF
is often used for special flags from some operations. This instruction has two operands, x
, the register, and kk
, a byte. The byte should be put in the register. Easy.
Task 3.4: 7xkk (add to register Vx)
Add the value kk
to the value in the register Vx
. This instruction may overflow (causing a panic), so make sure to handle overflow/wrapping correctly.
Task 3.5: Annn (set index register)
The index register is a separate special 16-bit register, which is generally used to point at locations in memory. This instruction should set the index register to the 12-bit value nnn
.
Task 3.6: Dxyn (draw)
This is certainly the most involved instruction in the whole CHIP-8 spec. CHIP-8 has sprites which are 8 bits wide and up to 15 bytes tall. This instruction draws the sprite starting at the address in the index register to the screen at position (Vx
,Vy
). Info on how sprites should wrap varies, but generally the X and Y coordinates should be modulo the display size, ie an X-coordinate of 69 should be interpreted as a coordinate of 6, and sprites should not partially wrap. Drawing works by XORing the pixel values with the current display values, and the VF
register should also be set to 1
if this causes any pixels to be erased.
- Set
X
to the value inVx
modulo 64 - Set
Y
to the value inVy
modulo 32 - Zero
VF
- For each row in the n-byte sprite
- if y is out of bounds, stop drawing the sprite
- For each bit in the byte/row
- If x is out of bounds, stop drawing the row
- XOR the bit onto the screen
- Set
VF = 1
if this caused a pixel to be erased.
- Set
The Pixel
type has some traits implemented (mainly bitwise operator overloads and conversions) that you may find helpful, so check the docs page for that. Check the resources at the bottom (and also Google for anything else you can find) for more explanations, as implementations and explanations may vary ever so slightly, but the general idea is always the same.
Task 3.7: This Tutorial Not Sponsored By UWCS
Theoretically, you should be able to run the UWCS test ROM now. But first you need a way to load it into memory. CHIP-8 programs start at 0x200 in memory, so you need to write a method to load a ROM from disk into memory and ensure your PC starts there. std::fs::read will load a file from disk and return it as a Vec of bytes, but how to get it into memory is up to you. You could add it to your new() function, or create a separate load() function. Make sure you properly handle the Result that fs::read returns too, in case you give it a file that doesn't exist.
Task 3.8: Debugging
Chances are this doesn't work first try (unless you're some next level god tier genius, in which case, congrats). You'll probably need to do some debugging. Making extensive use of the dbg!()
macro and debug
and trace
logs is a good idea, or maybe slow down the speed of your emulator to make it easier to see what's going on step-by-step. Redirecting stderr
to a file on the command line may come in handy too, so you can take a closer look.
If you're using VS Code, debugging Rust is easy. rust-analyzer
adds a little "debug" button above your main function that you can press to launch the debugger, allowing you to step through and inspect values one step at a time. If you've never used a debugger in VS Code before, have a look at this article, which includes some information about debugging Rust specifically. If you prefer the command line, gdb
has support for Rust too, through rust-gdb
.
Writing unit tests to test each instruction in isolation is a good idea too. Chapter 11 of The Book has some information on writing unit tests in Rust, which is incredibly easy. Obviously you should always test your code, but a lot of the opcodes are fairly simple and we don't expect a full suite of unit tests for just a toy project. Writing a few to cover the more complex instructions and check your edge cases is a good idea, as you can then debug the execution of the tests in isolation too.
4: The Rest
Well, we've got this far. However, you still have about 30 instructions before you can say you're done. A few test ROMs can be found here for testing that all your instructions work. Remember, unit tests are your friend; have a look at some of the unit tests in our implementation if you're stuck on how to write these.
As a little bonus for getting this far, here's a version of main.rs
where you can pass the ROM and frequency as command line arguments.
Some advice:
- Make sure you implement the
buzzer_active()
function correctly. - The timer registers will be quite tricky to line the timing up correctly. You can rely on the fact that your
step()
function will be executed once everyt
seconds, wheret
is whateverDuration
is returned by thespeed()
method. - Make sure you handle wrapping to 8/12/16 bit boundaries correctly, making use of the standard library's wrapping and saturating add/sub methods.
n & 0xfff
will wrapn
to a 12 bit boundary
- Some instructions require you set
VF
under certain conditions. - This is a very specific use case where casting in Rust can be annoying, as CHIP-8 has no strong type system like Rust does. Make sure all your
as
andinto
/from
casts are in the right place. - You don't have to completely re-architect the whole thing to implement the
Fx0A
instruction, trust me. Ask for help with this one if you need (how can you keep the PC from moving on?). - You'll need the
rand
crate from crates.io to generate random numbers - You'll need to initialise the font in memory at some point. Where/how is best to do this? Font usually starts at 0x50, but can be anywhere in the first 512 bytes.
- Ask for help from a friend or lab tutor, or in Discord if you get stuck
- Look at existing implementations if you get really stuck
Not all ROMs you find online will work, as some may be written for Super CHIP-8, an extension of CHIP-8 that adds a few more instructions. Feel free to extend your emulator with these instructions if you want.
Resources
The CHIP-8 Specification is available at http://devernay.free.fr/hacks/chip8/C8TECH10.HTM
A more detailed explanation of each instruction and more of the details of the "hardware" are available at https://tobiasvl.github.io/blog/write-a-chip-8-emulator/
A large collection of CHIP-8 stuff is available at https://chip-8.github.io/links/
A sample solution is available at https://github.com/ericthelemur/chip8 or https://github.com/UWCS/rs118-chip8/tree/main/chip8
CHIP-8 Solutions
These solutions can be seen step-by-step on GitHub here.
Task 1.1
Your directory structure should now look like this:
.
├── Cargo.lock
├── Cargo.toml
└── src
├── interpreter
│ └── mod.rs
└── main.rs
Which gives a module hierarchy like this:
crate root
└── main
└── interpreter
You can add any other modules, for tests or anything else, anywhere you wish.
Task 1.2
The interpreter/CPU/virtual machine struct should look something like this:
pub struct ChipState {
memory: [u8; 4096],
program_counter: u16,
registers: [u8; 16],
display: chip8_base::Display,
stack_pointer: u8,
stack: [u16; 16],
// ... there will be more
}
Only a few of the fields you need are included here; you'll need to add more as you go, and you can represent them however you wish. The corresponding new()
method should look like this:
impl ChipState {
pub fn new() -> Self {
Self {
memory: [0; 4096],
registers: [0; 16],
program_counter: 0x200,
display: [[chip8_base::Pixel::default(); 64]; 32],
stack_pointer: 0,
stack: [0; 16],
}
}
}
Note how both the type and the function are pub
, so the module above (main, the crate root) can use them. The program_counter
is initialized to 0x200
, as this is where CHIP-8 programs start.
Task 1.3 & 1.4
We implement the trait for the type like so
impl chip8_base::Interpreter for ChipState {
fn step(&mut self, keys: &chip8_base::Keys) -> Option<chip8_base::Display> {
todo!()
}
fn speed(&self) -> std::time::Duration {
todo!()
}
fn buzzer_active(&self) -> bool {
todo!()
}
}
Look at how the methods are capturing self
. step()
takes a mutable reference, because it needs to mutate the state of the virtual machine, but it doesn't take ownership, because then we wouldn't be able to do more than one step. The other two take immutable references, because they only need to read state, not modify it.
Task 1.5
main()
should look like this:
use interpreter::ChipState;
mod interpreter;
fn main() {
let vm = ChipState::new();
chip8_base::run(vm);
}
Task 1.6
The following return values don't do anything, and let the interpreter run without panics:
use std::time::Duration;
...
impl chip8_base::Interpreter for ChipState {
fn step(&mut self, keys: &chip8_base::Keys) -> Option<chip8_base::Display> {
None
}
fn speed(&self) -> Duration {
Duration::from_secs(1)
}
fn buzzer_active(&self) -> bool {
false
}
}
Task 1.7
For a clock rate of 700Hz, you can create a Duration
using Duration::from_secs_f64(1_f64/700_f64)
. Don't hardcode this though. The "proper" way to do it is to modify your new()
method to accept a clock speed, then store the duration in the struct to return when requested.
pub struct ChipState {
...
speed: Duration
}
impl ChipState {
pub fn new(clock_freq: u32) -> Self {
Self {
...
speed: Duration::from_secs_f64(1_f64 / clock_freq as f64),
}
}
}
...
impl Interpreter for ChipState {
fn speed(&self) -> Duration {
self.speed
}
}
Task 2.1
fn fetch(&mut self) -> u16 {
let instruction = u16::from_be_bytes([
self.memory[self.program_counter as usize],
self.memory[(self.program_counter + 1) as usize],
]);
self.program_counter += 2;
instruction
}
We're capturing by mutable reference, because we need to mutate, but not take ownership.
Look at the documentation for the from_be_bytes()
method if you don't get what's going on.
There's lots of casting using as usize
going on, because only a usize
type can be used to index an array (an index is ultimately an offset in memory, and usize is defined to be the same width as a pointer on the target platform). Casting the program counter and other numbers to usize
is gonna happen a lot, but you can't store them as usize
types because that wouldn't make sense either, and would also make it much harder to keep track of what a value is meant to represent.
Task 2.2
The self.program_counter &= 0x0FFF;
statement wraps the program counter to 12 bits, discarding the upper nibble. Adding some debug calls too:
fn fetch(&mut self) -> u16 {
dbg!(&self.program_counter);
let instruction = u16::from_be_bytes([
self.memory[self.program_counter as usize],
self.memory[(self.program_counter + 1) as usize],
]);
self.program_counter += 2;
self.program_counter &= 0x0FFF;
dbg!(&instruction);
instruction
}
We don't have to add any additional info to dbg!()
because the expression and line number are printed for us.
Task 2.3
Our main should now look like this:
fn main() {
env_logger::init();
let vm = ChipState::new(700);
chip8_base::run(vm);
}
Don't forget to add the crates to Cargo.toml
. Where you choose to add logs is up to you, but as a rule of thumb, put a log::debug!()
call everywhere you expect something might go wrong. You can use format strings in log macros too, just like println!()
.
Task 2.4 & 2.5
First, we've written a helper method to break the u16
instruction down into four nibbles (add in impl ChipState
):
//break a u16 into its nibbles
fn nibbles(n: u16) -> (u8, u8, u8, u8) {
let n3 = ( n >> 12) as u8;
let n2 = ((n >> 8) & 0b1111) as u8;
let n1 = ((n >> 4) & 0b1111) as u8;
let n0 = ( n & 0b1111) as u8;
(n3, n2, n1, n0)
}
We can then match on this. Below shows NOP (0000
), AND (8xy2
) and RET (00EE
) implemented. Here, you could implement almost anything, but this is just an example of the sort of structure you need.
fn execute(&mut self, instruction: u16) {
match Self::nibbles(instruction) {
// 0000 NOP: Nothing
(0x0, 0x0, 0x0, 0x0) => (),
// 00EE RET: Return from subroutine
(0x0, 0x0, 0xE, 0xE) => {
self.program_counter = self.stack[self.stack_pointer as usize];
self.stack_pointer -= 1;
},
// 8xy2 AND Vx, Vy: Set Vx = Vx AND Vy.
(8, x, y, 2) => self.registers[x as usize] &= self.registers[y as usize],
_ => panic!("Instruction either doesn't exist or hasn't been implemented yet"),
}
}
Note how we can specify constants in the tuple for the pattern, and also variables to bind to if the pattern matches. How you decode operands wider than one nibble is up to you.
step()
now looks like this:
fn step(&mut self, keys: &Keys) -> Option<Display> {
let instr = self.fetch();
self.execute(instr);
None
}
Task 3.1
fn execute(&mut self, instruction: u16) -> Option<chip8_base::Display> {
match Self::nibbles(instruction) {
// 0000 NOP: Nothing
(0x0, 0x0, 0x0, 0x0) => (),
// 00E0 CLS: Clears the display
(0x0, 0x0, 0xE, 0x0) => {
self.display = [[chip8_base::Pixel::default(); 64]; 32];
return Some(self.display);
}
_ => panic!("Instruction either doesn't exist or hasn't been implemented yet"),
};
None
}
...
impl chip8_base::Interpreter for ChipState {
fn step(&mut self, keys: &chip8_base::Keys) -> Option<chip8_base::Display> {
let instr = self.fetch();
self.execute(instr)
}
...
Note that the early return is needed to pass the updated display back, since clearing modifies the display. We also pattern match using hex literals (e.g. 0xE) so the arms stay consistent with how the opcodes are written in the spec.
Task 3.2
fn nnn(instruction: u16) -> u16 {
instruction & 0x0FFF
}
...
fn execute(&mut self, instruction: u16) -> Option<chip8_base::Display> {
...
// 1nnn JP addr: Jump to location nnn
(0x1, _, _, _) => self.program_counter = Self::nnn(instruction),
...
}
Here we use a bitmask to chop off the first nibble to get the lower 12 bits. This approach disregards the last 3 nibbles in the pattern match, since those variables aren't used, and the operand is taken straight from instruction
instead. You could also construct nnn
from those nibbles, though it is more involved.
Task 3.3
fn kk(instruction: u16) -> u8 {
(instruction & 0x00FF) as u8
}
...
fn execute(&mut self, instruction: u16) -> Option<chip8_base::Display> {
...
// 6xkk LD Vx, byte: Set Vx = kk.
(0x6, x, _, _) => self.registers[x as usize] = Self::kk(instruction),
...
Nearly identical to above, but using kk
to match the last byte instead of 12 bits.
Task 3.4
fn execute(&mut self, instruction: u16) -> Option<chip8_base::Display> {
...
// 7xkk ADD Vx, byte: Set Vx = Vx + kk.
(0x7, x, _, _) => {
self.registers[x as usize] = self.registers[x as usize].wrapping_add(Self::kk(instruction));
}
...
As the hint suggested, wrapping_add
wraps around the overflow as required.
Task 3.5
Add index: u16
to the struct and new
.
fn execute(&mut self, instruction: u16) -> Option<chip8_base::Display> {
...
// Annn LD I, addr: Set I = nnn.
(0xA,_,_,_) => self.index = Self::nnn(instruction),
...
Task 3.6
fn execute(&mut self, instruction: u16) -> Option<chip8_base::Display> {
...
// Dxyn DRW Vx, Vy, nibble: Display n-byte sprite starting at memory location I at (Vx, Vy), set VF = collision.
(0xD, x, y, n) => {
let tlx = self.registers[x as usize] % 64;
let tly = self.registers[y as usize] % 32;
self.registers[0xF] = 0;
let ind = self.index as usize;
let sprite = &self.memory[ind..(ind + n as usize)];
for (i, row) in sprite.iter().enumerate() {
let pxy = tly + i as u8;
if pxy > 31 {
break;
}
for j in 0..8 {
let pxx = tlx + j;
if pxx > 63 {
break;
}
let old_px = &mut self.display[pxy as usize][pxx as usize];
let mask = 2_u8.pow(7 - j as u32);
let new_u8 = (row & mask) >> (7 - j);
let new_px: chip8_base::Pixel = new_u8.try_into().unwrap();
if (new_px & *old_px).into() { // if collision
self.registers[0xF] = 1
}
*old_px ^= new_px;
}
}
return Some(self.display)
},
...
This is a translation of the rough pseudocode into Rust. Note how iterating over bits is a bit of a pain. However, iterating over the sprite is super easy: we just grab it as a slice. Remember slices? If not, check The Book.
Task 3.7
Here is a load function to load a ROM into memory from disk:
pub fn load(mut self, filename: &str) -> std::io::Result<Self> {
let program = std::fs::read(filename)?;
self.memory[0x200..(0x200 + program.len())].copy_from_slice(&program);
Ok(self)
}
Note how this takes ownership, and then returns std::io::Result<Self>
. We return Err
if we have some error reading from disk, and the error is returned early to the caller using the ? operator, which we'll cover in more detail next time. Loading a ROM just copies the bytes into memory starting at 0x200, where new() already points the program counter. Finally, we give ownership back to the caller if everything is okay.
You could also do this capturing self
by mutable reference, or handle the I/O error here instead of bubbling it up to the caller. All up to you.
Task 4
You really are on your own here.
Try to ask for help, check your resources, and debug properly first before going straight to the nuclear option of just copying it from my solution, but you can find a full implementation (approximately) following on from this one at ericthelemur/chip8, and one that separates decode and execute at rs118-chip8
.
Note that these solutions are certainly not infallible, so don't rely on them as a source of truth for CHIP-8 implementations!
Raytracer Project
For those of you not familiar with raytracing, it's a 3D graphics rendering technique that works by modelling and simulating individual light rays. It's becoming a more common technique in modern gaming (thanks to NVIDIA). Ray tracing is a bit of an umbrella term, but what we're gonna build is technically a path tracer, and a fairly general one. You'll eventually end up with an image something like this:
This tutorial is adapted from the excellent Ray Tracing in One Weekend, and adapts some illustrations from there as well. I've rewritten it from C++ to Rust, and also added a few other bits to hopefully make it more interesting and explore a bit more of Rust.
There's a fair amount of vector maths involved here but don't let that intimidate you. I'll try to explain it all well enough that you don't need a maths degree to follow what's going on.
Also, unlike the original book, I've not included the code snippets inline, as trying to implement the solution yourself is much more rewarding. Feel free to take a look at the solutions if you get stuck, but try to solve the tasks yourself as you'll get more out of it. Remember to make use of your resources!
Contents
- 1: Images
- 2: Vectors
- 3: Rays
- 4: Spheres
- 5: Surface Normals & Multiple Objects
- 6: Antialiasing
- 7: Diffuse Materials
- 8: Metal
- 9: Glass
- 10: Positionable Camera
- 10.1: FoV
- 10.2: Direction
- 11: Depth of Field
- 12: A Final Render
- What next?
1: Images
What does a renderer render? Well... pictures. An image of a scene. So we're going to need some way to output an image in Rust. We're going to take advantage of the excellent crates.io
ecosystem here and use a crate called image that does pretty much exactly what it says on the tin: provide tools for working with images. Have a look at the docs on docs.rs and have a think about how you might go about creating a new image.
We don't need to support any fancy encoding or anything for our raytracer; we just want each pixel to be composed of 3 bytes: the good old (r, g, b)
.
Task 1.1: Create Image
I'm assuming you've already created a new cargo project and added image
to your dependencies in Cargo.toml
. In your main function, write some code to generate an image and save it to disk. The basic steps are something like:
- Create a new image buffer to hold our pixels (docs). 256x256 should be good to start with.
- Iterate over each pixel (docs)
- Modify each pixel, setting it to a single colour (red) to start with (docs)
- Save the buffer to a file (docs)
Your image should be saved to disk, and look like this:
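If you get stuck, here's a rough sketch of those four steps using the image crate (the output filename is just an example):

use image::{Rgb, RgbImage};

fn main() {
    // 1. a 256x256 buffer of RGB pixels, initially all black
    let mut img = RgbImage::new(256, 256);

    // 2 & 3. visit every pixel and paint it red
    for (_x, _y, pixel) in img.enumerate_pixels_mut() {
        *pixel = Rgb([255, 0, 0]);
    }

    // 4. write the buffer out; the format is inferred from the file extension
    img.save("render.png").expect("failed to save image");
}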
Task 1.2: Gradients
We're gonna extend the code from above to output something a bit nicer. From here on out, I'm going to talk about RGB values as being floats from 0.0 to 1.0. Converting them to bytes can be done just before you write to the pixel. I'm also going to refer to i
and j
as the coordinates of each pixel, where i
is the offset in columns from the top-left corner, and j
is the offset in rows (if you're not already using an iterator that does this for you, try and find it in the image
documentation).
For each pixel:
- Scale
i
to the range 0.0-1.0 based upon the image's width. This will be yourr
value. - Scale
j
to the range 0.0-1.0 based upon the image's height. This will be yourg
value. - Fix
b
at 0.25 - Convert your rgb values to bytes
- Write your pixel to the buffer.
Red will fade from 0 to 1 left to right, and green will fade in from top to bottom. You should get a nice gradient like this:
This is a sort of graphics "Hello World", because once we have an image we can do what we want with it.
2: Vectors
Almost all graphics programs have some data structures for storing geometric vectors and colours. In many systems these vectors are 4D (3D plus a homogeneous coordinate for geometry, and RGB plus an alpha transparency channel for colours). For our purposes, three coordinates work just fine. We’ll use the same struct Vec3
for colours, locations, directions, offsets, whatever. Some people don’t like this because it doesn’t prevent you from doing something silly, like adding a colour to a location. They have a good point, and we could enforce this through Rust's type system, but we're not going to for now because it adds a lot of complexity. We will create some type aliases Colour
and Point
, though, to make our types a little more descriptive where possible.
Task 2.1: Vector Struct
Our Vec3
will require a few methods to make it useful in graphics applications:
- Dot and cross products
- A
len()
method, to get its magnitude - A
normalise()
method, to scale a vector to unity magnitude. - A
map()
method, that applies a function to each element of the vector, consuming it and returning a new vector, similar to themap()
method for arrays.
Create a new vector.rs
file, and include it in the module tree with a mod vector;
statement in main. Then create a simple struct, Vec3
, with three f64
fields: x, y, z. Then, implement all these methods on it. Start with dot()
and len()
, then try cross()
. Do map()
next, as you can use it to then implement normalise()
. Look at the docs for std::array::map for help with your map implementation: you want to take some function as an argument and apply it to all 3 elements of your vector.
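If the signature of map() is giving you trouble, one possible shape for it looks like this (just a sketch with a made-up field layout; yours may differ):

#[derive(Copy, Clone, Debug)]
pub struct Vec3 {
    pub x: f64,
    pub y: f64,
    pub z: f64,
}

impl Vec3 {
    // apply the same function to each component, producing a new vector
    pub fn map(self, f: impl Fn(f64) -> f64) -> Vec3 {
        Vec3 {
            x: f(self.x),
            y: f(self.y),
            z: f(self.z),
        }
    }
}

fn main() {
    let v = Vec3 { x: 1.0, y: 2.0, z: 3.0 };
    let doubled = v.map(|c| c * 2.0);
    println!("{doubled:?}");
}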
Add two type aliases pub type Colour = Vec3
and pub type Point = Vec3
too. You can also add any other general vector methods you think might come in handy.
Since we are using Vec3
for colours, it's also useful to add a method to convert Vec3
into image::Rgb
. Rust has a pair of traits to convert From
a type and Into
a type. These traits are the inverse of each other, so when you implement one, you get the other for free. The conventional way to do this is to implement From
and let the compiler handle Into
for you, so go ahead and implement the trait From<Vec3> for Rgb<u8>
and Rust will ensure vec.into()
can convert to Rgb
(the resulting type of .into()
is inferred).
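A sketch of what that impl might look like, assuming a Vec3 holding colour channels in the 0.0 to 1.0 range (the minimal Vec3 here just stands in for the one in your vector.rs):

use image::Rgb;

#[derive(Copy, Clone)]
pub struct Vec3 {
    pub x: f64,
    pub y: f64,
    pub z: f64,
}

impl From<Vec3> for Rgb<u8> {
    fn from(v: Vec3) -> Self {
        // scale each 0.0-1.0 channel up to a 0-255 byte
        Rgb([
            (v.x * 255.0) as u8,
            (v.y * 255.0) as u8,
            (v.z * 255.0) as u8,
        ])
    }
}

fn main() {
    let white = Vec3 { x: 1.0, y: 1.0, z: 1.0 };
    let pixel: Rgb<u8> = white.into(); // Into comes for free from the From impl
    assert_eq!(pixel, Rgb([255, 255, 255]));
}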
Task 2.2: Vector Operations
We'll also want to overload some operators. Operator overloading allows operators to be implemented to work on custom types, which is done in Rust by implementing the std::ops
traits. You want to be able to:
- Add two vectors
- Subtract two vectors
- Multiply a vector with a float
- Divide a vector by a float
- Negate a vector
- Multiply a vector element-wise by another vector
Implementing all of these means a lot of boilerplate, but we can draft in another crate to help us: derive_more
, which extends the #[derive(...)]
macros we're familiar with by allowing us to derive more traits, including operators. Add it to your Cargo.toml
and have a look at the docs to see how to use it. Add derives for Add
, Sub
, Mul
, Div
, and Neg
. You can also derive a Constructor
! Add Debug
, PartialEq
, and PartialOrd
while you're at it too.
Our vector is only 24 bytes, so can be considered cheap enough to derive Copy
and Clone
for it too. Remember that this disregards move semantics for the vector to let the compiler automatically make copies of it where needed.
derive_more
isn't perfect, so we need to add a few operator overloads manually. Mul<f64> for Vec3
is derived for us, which gives us mul(Vec3, f64)
, but not the other way round (Rust does not assume that multiplication is commutative when implementing these traits). We can get the other way round with an impl Mul<Vec3> for f64
, so we technically implement the trait again for the f64
type. Take a look at the docs for std::ops::Mul
to work out how to do this.
There's also one or two cases where we want to multiply a vector by another vector element-wise. Add another Mul
implementation for Vec3
to do this.
Task 2.3: Macros
We're gonna take a quick adventure down the rabbit hole that is Rust macros to create a dirty hack shorthand for initialising new vectors, since we're going to be doing an awful lot of it. I recommend having a read through this blog post, and some of the Rust by Example chapter, then I'll walk you through it.
Declarative macros are just functions that operate on syntax. Our macro is declared using macro_rules!
, and we'll call it v!
(because it's for vectors).
macro_rules! v {
//patterns
}
The arguments are declared using syntax similar to match
: () => {}
. The macro matches on the pattern in the parentheses, and then expands to the code in the braces. In the parentheses go the arguments to the macro, which are Rust syntax items, specified like $x: ty
, where $x
is the name of the token, and ty
is the type of the syntax token. There's a few kinds of tokens, but we'll just use expr
for now, which matches any expression, so takes almost anything.
macro_rules! v {
($x: expr) => {
Vec3::new($x, $x, $x)
}
}
The macro above takes a single expression as an argument, and replaces it with a call to Vec3::new
with the same expression as all 3 arguments. A call to v!(1)
will be expanded to Vec3::new(1, 1, 1)
. We don't have to just use numbers though, the macro can be called on any valid expression.
We're going to add another pattern too to create a vector with three different arguments. The macro will pattern match on the two sets of arguments, and expand the one that matches. If no patterns match, the code won't compile.
#![allow(unused)] fn main() { macro_rules! v { ($x: expr, $y: expr, $z: expr) => { Vec3::new($x, $y, $z) }; ($x: expr) => { Vec3::new($x, $x, $x) }; } }
We'll add another neat little trick using From
again. f64::from
accepts any value that can be easily converted to a f64, and converts it. For example, we can do f64::from(0_u8)
, f64::from(0_i32)
and f64::from(0.0_f32)
, and get 0.0_f64
from all of them. Using this in our macro lets it be a little more flexible.
#![allow(unused)] fn main() { #[macro_export] macro_rules! v { ($x:expr, $y: expr, $z: expr) => { Vec3::new(f64::from($x), f64::from($y), f64::from($z)) }; ($x:expr) => { Vec3::new(f64::from($x), f64::from($x), f64::from($x)) }; } }
The #[macro_export]
annotation at the top tells Rust to export our macro at the crate root, so other modules in our crate can use it with use crate::v
. Exporting/using macros is a bit funky in Rust, but don't worry about it too much for now.
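As a quick usage sketch, assuming the Vec3 constructor from task 2.1, all of these now expand to a call to Vec3::new with f64 arguments:

let a = v!(0); // Vec3::new(0.0, 0.0, 0.0)
let b = v!(0.5, 0.7, 1.0); // three different components
let c = v!(1_u8, 2, 3.0_f32); // mixed integer and float types, all converted via f64::from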
3: Rays
All ray tracers need some data type to represent rays. Think of a ray as a function $\mathbf{P}(t) = \mathbf{A} + t\mathbf{b}$.
- $\mathbf{P}(t)$ is a position on a line in 3 dimensions
- $\mathbf{A}$ is the ray origin
- $\mathbf{b}$ is the direction the ray is pointing
Using this, you can plug in a different parameter t
to get a position anywhere on the line/ray.
3.1: Ray Struct
Create a new ray
module. Create a new struct in it that stores the origin Point
and direction Vec3
of a ray, and add a method Ray::at(&self, t: f64) -> Point
that returns the point in 3d space that is t
units along the ray. Either create or derive a constructor for your Ray
too.
3.2: Ray Directions
Now we have rays, we can finally trace some. The basic concept is that the ray tracer casts rays from a "camera" and through each pixel, calculating the colour of each pixel. This is like tracing the light paths backwards. We'll start with a simple camera defined with a few basic parameters, and a ray::colour
function that will trace and compute the resulting colour for a ray.
Our basic image will use the very standard 16:9 aspect ratio, partly because with a square image it's easy to introduce bugs by accidentally transposing x
and y
.
The camera will be at $(0, 0, 0)$, with the y axis going up and x to the right, as you'd expect. To maintain a right-handed coordinate system, the camera will face in the -z
direction.
We'll also set up a virtual viewport that represents the screen in the world. For each pixel, we will trace a ray out into the scene. This viewport will be two units tall and one unit away from the camera (facing -z). We will traverse the screen from the upper left-hand corner, and use two offset vectors $\mathbf{h}$ and $\mathbf{v}$ along the screen sides to move the ray across the screen.
- Define your aspect ratio as 16/9, your width as 400, and your height accordingly.
- The viewport height should be 2, and width should be set accordingly as per the aspect ratio.
- The focal length should be 1
- Looking at the diagram above, we can see that the top left corner lies at $\text{origin} - \frac{\mathbf{h}}{2} - \frac{\mathbf{v}}{2} - \mathbf{f}$
  - $\mathbf{v}$ and $\mathbf{h}$ are your image height and width vectors
  - $\mathbf{f}$ is your focal length vector, $(0, 0, \text{focal length})$
- It's tempting to not bother with the vectors here, but it will become very helpful when the camera is moveable.
Write a colour(&Ray) -> Colour
function that for now always returns v!(0, 1.0, 0)
, we'll add a nice pattern later. Update your loop in your main
function to calculate the direction vector of the ray to cast on each iteration based on i
and j
, and then create a ray starting at the origin and going into the pixel. You can do this by scaling your pixel coordinate from 0 to 1, and then multiplying by your height and width vectors. Colour your ray and save the pixel value to the RGB buffer calling Vec3::into
to convert your colour from 0-1 to 0-255. Take care to get your signs right here, ensuring your vectors are all going in the same direction.
You should get a nice green rectangle. I appreciate there's a lot going on here and it doesn't look like much right now, so ask for help or take a look at the solutions if you're not sure.
3.3: Sky
To make the background for our raytraced image, we're gonna add a nice blue-white gradient. In your colour function, add code to normalise the ray's direction vector, then scale it from the range $[-1, 1]$ to $[0, 1]$. We're then gonna do a neat graphics trick called a lerp, or linear interpolation, where we blend two colours: blended_value = (1-t) * start_value + t * end_value
. Use white for your starting colour, a nice (0.5, 0.7, 1.0)
blue for your end colour, and blend based upon the y coordinate. You should end up with something like:
If your colours don't look similar, or it's upside down, check your geometry is correct.
4: Spheres
Spheres are often used in raytracers because it's fairly easy to work out if a ray has hit one or not. The equation for a sphere centred at the origin with radius $r$ is:

$$x^2 + y^2 + z^2 = r^2$$

This means that for any given point $(x, y, z)$, the equation will be satisfied if the point is distance $r$ from the origin. If the sphere is at centre $(C_x, C_y, C_z)$, then the equation gets a little uglier:

$$(x - C_x)^2 + (y - C_y)^2 + (z - C_z)^2 = r^2$$

We need to get our equation in terms of vectors instead of individual coordinates to work with them in a graphics context. Using $\mathbf{P} = (x, y, z)$ and $\mathbf{C} = (C_x, C_y, C_z)$, the vector from $\mathbf{C}$ to $\mathbf{P}$ is $(\mathbf{P} - \mathbf{C})$, so the equation of our sphere in vector form is:

$$(\mathbf{P} - \mathbf{C}) \cdot (\mathbf{P} - \mathbf{C}) = r^2$$

We want to know if our ray ever hits the sphere. We're looking for some $t$ where the following is true:

$$(\mathbf{A} + t\mathbf{b} - \mathbf{C}) \cdot (\mathbf{A} + t\mathbf{b} - \mathbf{C}) = r^2$$

A little vector algebra and we have:

$$t^2\,\mathbf{b} \cdot \mathbf{b} + 2t\,\mathbf{b} \cdot (\mathbf{A} - \mathbf{C}) + (\mathbf{A} - \mathbf{C}) \cdot (\mathbf{A} - \mathbf{C}) - r^2 = 0$$

The only unknown in that equation is $t$, so we have a quadratic. We can use everyone's favourite, the quadratic formula, to find a solution.

$$t = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

where

$$a = \mathbf{b} \cdot \mathbf{b} \qquad b = 2\,\mathbf{b} \cdot (\mathbf{A} - \mathbf{C}) \qquad c = (\mathbf{A} - \mathbf{C}) \cdot (\mathbf{A} - \mathbf{C}) - r^2$$

There are three possible cases, which the discriminant of the formula (the $b^2 - 4ac$ bit) will tell us: if it's negative there are no real solutions and the ray misses the sphere, if it's zero the ray just grazes the sphere at a single point, and if it's positive the ray enters and exits the sphere at two points.
Empowered with some A-level linear algebra, we can go forth and draw balls.
4.1: Sphere Struct
Create another file object.rs
that will contain code to do with objects. In there, create a new struct Sphere
that holds the centre point and radius. Derive a constructor for it. Implement a method hit
that takes a ray as an argument, and returns true
if there is at least one intersection, and false otherwise.
Add a call to Sphere::hit
in your ray::colour
function, checking for intersection with a sphere with radius 0.5
centred on (0, 0, -1)
. If there is a hit, return red instead of our usually lovely lerp background from earlier. The result:
You have a basic ray tracer that can calculate intersections, congrats! This has zero bells and/or whistles so far, but we'll get to shading and reflection later on.
4.2: Rayon (multi-threading)
How long did that take to execute on your machine? You might notice the raytracer starting to chug from here on out, because it's doing a lot of maths, and it'll start to do a lot more maths as we add more code. This is really what GPUs are for, but that's a whole other rabbit hole. We can do a few things to increase performance though. Introducing, my favourite crate: Rayon
.
Rayon is a data parallelism library that works by converting your iterators to parallel iterators, and then distributing work across all the cores in your system. We're going to re-write our main rendering loop as an iterator, and then drop Rayon in to make it a lot faster (depending on how many cores you have).
Where we are using for loops, we generally convert them to iterators using the for_each
adaptor, which just calls a closure on each item of an iterator. The two examples below are equivalent.
#![allow(unused)] fn main() { for x in 0..10_u32 { //loop body let y = f64::sqrt(x.into()); println!("sqrt(x) = {y}"); } }
to:
#![allow(unused)] fn main() { (0..10_u32).for_each(|x| { //the same loop body let y = f64::sqrt(x.into()); println!("sqrt(x) = {y}"); }) }
Convert your rendering loop in main
to use for_each
. Then, import Rayon's prelude with use rayon::prelude::*
, and add a call to par_bridge
before your for_each
to use Rayon's parallel bridge to parallelise your iterator. Run your code to make sure nothing is broken, and you should notice a speed-up.
Another easy way to get free speed is to run in release mode. Instead of just cargo run
, doing cargo run --release
will compile your code with optimisations and no debug symbols to help speed it up, at the cost of longer compile times.
There are more efficient ways to utilise rayon than this (notably par_bridge
is not as performant as regular parallel iterators), and additional optimisations that can be enabled in rustc
. I encourage you to play around with it and experiment to see what makes the renderer fastest.
5: Surface Normals & Multiple Objects
Task 5.1: Normal Shading
A surface normal is a vector that is perpendicular to the surface of an object. You, stood up, are a surface normal to the planet earth. To be able to shade our sphere, we need to know the surface normal at the point where the ray intersects with the sphere.
We'll normalise our normal vectors (make them unit vectors), then visualise them using a colour map, scaling from $[-1, 1]$ to $[0, 1]$ like before. This means we need to do two things.
First, change your hit function to return the solution to the quadratic equation if the ray and sphere intersect, and return nothing if the ray misses. If there are two solutions, you can pick which solution is closest without comparing them directly, but I'll let you figure that out ;P.
Next, re-write your colour function to do the following:
- Check if the ray and sphere intersect
- If they do, use the Ray::at() function from earlier to find the exact point where the ray hits, and then find the surface normal using $\mathbf{n} = \mathbf{P} - \mathbf{C}$
  - Normalise the surface normal and scale it to the range $[0, 1]$
  - Return this as a colour to shade your sphere
- If they do not, then just return the same background colour as before
You should get this lovely image of a shaded sphere:
Task 5.2: Hit
Only one sphere is boring, let's have some more. Let's create a trait to represent objects so we can easily extend our raytracer with whatever we want. The Object
trait will contain our hit()
function, so any shape/object/thing can then implement it to be able to tell us if a ray has hit it or not.
We'll extend the hit()
function a bit here too, to be something more like fn hit(&self, ray: &Ray, bounds: (f64, f64)) -> Option<Hit>
. The bounds
will specify valid bounds for the parameter t
(the solution of our quadratic equation) to lie in, and
the Hit
struct will bundle some more detailed information about a ray-object intersection. Hit
will include:
- The Point where the intersection is
- The surface normal
- The parameter t
Create the Hit
struct, add the Object
trait with its one function, and then implement it for Sphere
. The old Sphere::hit()
can go, as the new and improved Sphere::hit()
should be part of the Object
impl. You still need to determine if there is an intersection or not using the same calculations as before, and you'll now need to do some additional maths to find the closest of the two roots that is in within the bounds given. Calculate the surface normal and the intersection point here too, and pack it all into the Hit
struct. If there is no intersection, continue to return None
.
Update your colour function to use the Object
trait implementation of Sphere::hit
. Put the bounds as $(0, \infty)$ for now, so all intersections in front of the camera are valid. Also update it to deal with the new return type, but doing the same thing as before, shading based upon the normal. Make sure there's no change from the last task so you know everything is working.
Task 5.3: Inside Out
We need to talk about surface normals again. The normal is just a unit vector perpendicular to the surface, of which there are actually two: one pointing out of the sphere, and its inverse pointing inward. At the moment, the normal we're finding always points outward, regardless of whether the ray hits the sphere from the inside or the outside (we will need hits from the inside when we get to glass). We need to know which side a ray hits from, and we'll need to be consistent in which direction the normals point.
For consistency, we're going to make normals always point outward, and then store in the Hit
struct which side of the object the ray hit. Add a boolean field called front_face
to the struct, that will be true
when the ray hits the outer surface, and false
when it hits the inner surface.
The normalised outer surface normal can be calculated by $\mathbf{n} = \frac{\mathbf{P} - \mathbf{C}}{r}$ (the length of $\mathbf{P} - \mathbf{C}$ is $r$, so this normalises it). We can then use a property of the dot product to detect which side the ray hit from:

$$\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|\,|\mathbf{b}|\cos\theta$$

Where $\theta$ is the angle between the two vectors joined tail-to-tail. This means that if the dot product of two vectors is 0, they are perpendicular. If the product is positive the two are at angles of less than 90 degrees, and if negative they lie at angles of between 90 and 180 degrees. So, if ray.direction.dot(outward_normal) > 0
, then the ray has hit from the inside and we need to invert our normal and set front_face
to false.
Implement this logic in your code, making sure that in the current render front_face
is always true. If there are any bugs in your implementation you might not catch them all now because we have no cases where front_face
is false yet, so double and triple check your maths. You could shuffle the sphere and camera positions around to put the camera inside the sphere, and see what your results look like.
Task 5.4: Scenes
We have implemented Object
for Sphere
, but what about multiple spheres? We don't want our colour function to have to check each object for an intersection by hand for every ray, so we're going to implement Object
for a list of Objects
. Well how is that going to work, can you implement a trait for a list of itself? Yes, you can. We're going to use trait objects. Trait objects are pointers to types, where the only thing known about that type is that it implements a specific trait. If you're familiar with dynamic dispatch, this is dynamic dispatch in Rust. We can create a Vec<dyn Object>
, which Rust reads as "a list of items which all implement Object
, and I'll figure the rest out at runtime".
Create a type alias for an entire scene: pub type Scene = Vec<dyn Object>
. The compiler should now be complaining, as it can't know what the size of dyn Object
is at compile time, so we can't just put it into a Vec
so easily. We need to put all our types in boxes, which are smart pointers that work with data of unknown size (allocated on the heap, not on the stack). If you haven't figured it out already, what you actually want is Vec<Box<dyn Object>>
.
Now we have the type nailed down, implement Object for Scene
. The idea is you return the Hit
that is closest to the camera, so the one with the smallest t
of all the spheres. Being able to do this provides some nice abstraction, as we can just check if the ray intersects anywhere in the scene, and get back the Hit
for the closest object, which is the one the ray actually wants to hit.
The code you wrote using Rayon earlier might be complaining now. Rust is very strict about what types can work with multithreaded code, which it enforces through the Send and Sync traits. Rust doesn't know for sure that a dyn Object
is necessarily safe to share between threads, so we simply need to add a Sync
bound on our Scene
type signature: Vec<Box<dyn Object + Sync>>
. All our types implementing Object
are (and will be) Sync
, so we don't need to worry about this, beyond telling the compiler and Rayon that we have everything under control. Change your type signature again to be Vec<Box<dyn Object + Sync>>
, and that should shut the compiler up.
Task 5.5: "Ground"
In main, create a Scene
of the existing sphere, then add another sphere at (0, -100.5, -1)
with radius 100. Change your colour function to take anything that implements Object
(it's probably best to use generics here), and you should end up with something like this:
It's quite convenient the "ground" appears green, since that happens to be the colour for a normal with high y (g) and small x and z (r and b). Note how it occurs on the top of the small sphere too.
6: Antialiasing
You might have noticed the slightly jagged and pixelated edges on your sphere. This is because we are taking only a single ray (sample) per pixel, so each pixel is coloured as either entirely sphere or entirely background, even though many pixels cover a region that is partly both. To combat this, we will use a form of anti-aliasing called supersampling.
Supersampling is simply using multiple rays (samples) per pixel, which are averaged to form the final colour value. We'll be using one of the simplest forms: averaging the colour of a set of randomised samples (rays) per pixel.
Task 6.1: Multisampling
So that each sample isn't identical, we're going to slightly randomise the ray directions within each pixel, for which we will use our old friend, the rand
crate. Add it to your package manifest. rand::random::<T>()
is a generic function that will return a random T
. In the case of floats, rand::random::<f64>()
returns a random number $0 \le x < 1$.
In your main rendering loop, add a slight randomness to your ray direction by adding a random float to your pixel coordinate. Add this before you scale the coords down to the viewport (dividing by image size). This detail is important otherwise you'll be casting rays all over the place and your render will be totally screwed.
Add a variable let samples = n
to the top of main
. Update the render loop to take n
random samples of each pixel, and then set the final pixel colour to the average of the n
samples. Your render should be the same, but with smoother edges on your sphere. Zoom in around the edge of the sphere and you should notice the effect. Use n=100
samples for now, but don't hardcode this in your loop.
Task 6.2: Camera Struct
Now seems like as good a time as any to abstract the camera out into its own type. Create a new file camera.rs
, and in it add a Camera
struct, that contains:
- The camera's position, the ray origin
- The position of the top left corner of the viewport
- The horizontal and vertical viewport size vectors.
Add a function to return the default camera, with the exact same configuration as before. Add also a method get_ray()
and move in the code that takes the coordinates of a pixel and returns the ray from the camera through it.
Check you haven't introduced any bugs by making sure your render is the same as before.
Task 6.3: Progress Bars
Taking 100 samples for each pixel is probably making your renderer start to chug again. If it's really taking too long, try dropping the number of samples, but we can add a progress bar as a nice little touch to help us see how long it's got left. We're going to use another crate: indicatif
(0.16 is required for the below styling to work, 0.17 changes the syntax).
Indicatif works by binding a progress bar to an iterator, and then showing a progress bar in the console as the iterator progresses. Have a read over the docs and examples to get a feel for how it works.
Remember that we're using Rayon's parallel iterators instead of regular iterators? Indicatif has support for those too. In Rust, crates can add optional features that are enabled/disabled in the package manifest. By enabling Indicatif's rayon
feature, it will work correctly with parallel iterators. Add the following line to your dependencies:
[dependencies]
indicatif = { version = "0.16", features = ["rayon"] }
Add a progress bar with a given length to your program by declaring one in main using ProgressBar::new()
. Configure its style and format to your liking (the style I used is shown below). Add it to your parallel iterator using progress_with()
.
bar.set_style(
ProgressStyle::default_bar()
.template(
"{spinner:.green} [{wide_bar:.green/white}] {percent}% - {elapsed_precise} elapsed {msg}",
)
.progress_chars("#>-")
.on_finish(ProgressFinish::WithMessage("-- Done!".into())),
);
By default, indicatif
updates and redraws the progress bar for every update; however, we have hundreds of thousands of updates, so this can add significant lag. Limit this draw rate to 5 times a second with:
bar.set_draw_rate(5);
Indicatif lets you create some really neat progress bars, so have a play around with it to customise it to your liking.
7: Diffuse Materials
We're about ready to start making objects look realistic. Diffuse objects (that don’t emit light) appear as a mix of their own colour spectrum and the colour of the light reflected off their surroundings. Diffuse objects randomly scatter the incoming light rays as they have a rough surface (at least at a microscopic level). So, if we send three rays into a crack between two diffuse surfaces they will each have different random behaviour, as shown.
Light rays may also be absorbed rather than reflected. Darker surfaces have higher levels of absorption (that's why they're dark: they don't reflect much light). Lambertian reflectance is the property that defines an ideal diffusely reflecting surface, and we're going to model it.
Light rays hitting the surface at an angle further from the normal have less colour influence, as they hit at a shallower angle. This property can be closely modelled by sampling in a unit sphere which has the surface as a tangent.
Take the unit radius sphere tangent to the hit point $\mathbf{P}$ of the surface, on the outside of the surface. This has its centre at $\mathbf{P} + \mathbf{n}$, where $\mathbf{n}$ is the normal to the surface at $\mathbf{P}$. Pick a random point $\mathbf{S}$ inside this unit radius sphere and send a ray from the hit point $\mathbf{P}$ to the random point $\mathbf{S}$, to give us a random vector $\mathbf{S} - \mathbf{P}$, that will be the diffuse reflected ray.
We need a way to pick a random point in a unit sphere. A rejection method is the easiest way to do this: pick a random point in a unit cube, and reject it and try again if it's not in the sphere. If we then normalise the vector so it lies on the sphere's surface, it more accurately models Lambertian reflection. The original book has a much more detailed discussion of modelling diffuse reflection, and I encourage you to read over it.
Task 7.1: Lambertian Sampling
Write a function to do this by generating a vector whose elements are random numbers between -1 and 1. If the length of the vector is less than 1, then it's in the sphere.
We're going to update the colour function to be recursive, casting reflection rays in a random direction. The direction of the reflected ray will be $\mathbf{n} + \mathbf{u}$, where $\mathbf{u}$ is a random unit vector, and it will come from the impact point of the original ray (you can get all that data from your Hit
struct, remember). Halve the result of the recursive call so that each reflected ray has less and less intensity, and return that.
Task 7.2: Recursion Limit
If you ran this, you probably blew the stack with infinite recursive calls. We need to limit the number of child rays to prevent infinite recursion. Add a depth parameter to the colour
function to keep track of the current recursion depth. When the recursion hits max depth, say 50 rays for now, stop recursing and just return black.
You should have an image like so
Task 7.3: Gamma Rays
Take a look at the shadowing under the sphere. This picture is very dark, but our spheres only absorb half the energy on each bounce, so they are 50% reflectors. If you can’t see the shadow, don’t worry, we will fix that now. These spheres should look pretty light, about a light grey. The reason for this is that almost all image viewers assume that the image is “gamma corrected”, meaning the 0 to 1 values have some transform before being stored as a byte. There are many good reasons for that, but for our purposes we just need to be aware of it. To a first approximation, we can use “gamma 2” which means raising the colour to the power 1/gamma, or in our simple case 0.5, which is a square root.
Add a line to the From<Vec3> for Rgb<u8>
function to correct for this, taking the square root of all the values before converting them to bytes. Your render should look a bit lighter now, closer to what you'd expect for a 50% reflector:
Task 7.4: Shadow Acne
There is another subtle bug here. Some of the reflected rays aren't reflecting from exactly the impact point on the surface, but a small fraction off it, due to floating point imprecision. This can lead to some rays starting slightly behind their starting surface, erroneously then re-intersecting with it. Ignoring hits very near to 0 can fix this, by passing the minimum bound as 0.00001
instead of 0
to the hit function. The updated render looks lighter and much cleaner:
This fixes the problem known as "shadow acne".
8: Metal
We have diffuse materials modelled by Lambertian reflectance, so let's add a metal material. Before we do that, we'll create some abstractions for dealing with different materials and reflections.
Task 8.1: Lambertian Material
We're going to create a trait to describe how light scatters off different materials:
pub trait Material {
fn scatter(&self, incident_ray: &Ray, hit: &Hit) -> Option<Reflection>;
}
Material::scatter
takes the incident ray and the Hit
struct, and uses them to determine if there is a reflected ray. The Reflection
struct contains the reflected ray, and also a vector to describe how the colour of the reflected ray is attenuated (because different materials will change the colour of reflected rays differently). Create a new file material.rs
, and add Material
and Reflection
there.
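As a rough sketch of what Reflection might look like (the field names here are my own choice, not prescribed, so adjust them to taste):

use crate::ray::Ray;
use crate::vector::Colour;

//a scattered ray, plus how much each colour channel of it is attenuated
pub struct Reflection {
    pub ray: Ray,
    pub colour_attenuation: Colour,
}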
We'll add an implementation of Material
to describe Lambertian reflectance. The Lambertian
struct needs a single field to describe how the colour of reflected rays is attenuated.
The Material
impl for Lambertian
should contain the logic that's currently in ray::colour
. Calculate the scatter direction, create the reflected ray, and return it inside a Reflection
struct. The amount the reflected ray is attenuated by is the colour of the material.
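A minimal sketch of the Lambertian impl, assuming Hit lives in object.rs with the field names from the earlier solutions (impact_point, normal), the rand_unit() helper from task 7.1, and the Reflection fields sketched above:

use crate::object::Hit;
use crate::ray::Ray;
use crate::vector::{Colour, Vec3};

pub struct Lambertian {
    pub albedo: Colour, //the colour of the material
}

impl Material for Lambertian {
    fn scatter(&self, _incident_ray: &Ray, hit: &Hit) -> Option<Reflection> {
        //same diffuse logic as before: bounce in a random direction biased along the normal
        let direction = hit.normal + Vec3::rand_unit();
        Some(Reflection {
            ray: Ray::new(hit.impact_point, direction),
            colour_attenuation: self.albedo,
        })
    }
}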
Task 8.2: Use Materials
We need to make our objects aware of the fact that they can be different materials too. The Sphere
struct needs an extra field, material
, the type of which should be any type implementing the Material
trait. That's right, your struct is going to need to be generic.
Add a field reflection
to the Hit
struct too. The idea is that then the Object::hit
method populates that field if the hit caused a reflected ray to be generated. Update Sphere::hit
to call the Material::scatter
from its material
field, and use that to fill the reflection
field of Hit
.
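One way to sketch the generic struct (how you bound and store the material is up to you; note that if your Scene requires Object + Sync, the material type will need to be Sync too):

use crate::material::Material;
use crate::vector::Point;

pub struct Sphere<M: Material> {
    pub center: Point,
    pub radius: f64,
    pub material: M,
}

//the Object impl then becomes generic too:
//impl<M: Material> Object for Sphere<M> { ... }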
Update ray::colour
to use the new Material
abstraction. If there is a reflected ray, make the recursive call the same as before, returning the result multiplied by the colour attenuation.
You'll need to update the two spheres created in main
too to account for this. Make them both have colour (0.5, 0.5, 0.5).
There's a lot of re-architecting of the raytracer going on here, but nothing actually changes functionally yet. Make sure the rendered image is the same. If you set the random seed to be the same, Git will even tell you if your file has changed or not!
Task 8.3: Edge Cases
Take another closer look at Lambertian::scatter
. If the random unit vector generated is exactly opposite the normal vector, the two will sum to zero and the scatter direction will be zero, meaning we'll have no reflected ray. In our implementation, this means we get a black pixel where we should have colour. We need to avoid this by checking if the scatter direction is zero, and if it is we'll just reset the scatter direction to the normal.
Floating point zero is weird because it isn't exact (we already had to deal with shadow acne), so add a method to your vector to check if it is zero, returning true
if all three of its elements are within some small tolerance of zero (e.g. 1e-8
). If this is the case, replace the scatter direction with the hit normal.
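A sketch of such a method on Vec3 (the name and exact tolerance are up to you):

impl Vec3 {
    //true if all three components are within a small tolerance of zero
    pub fn near_zero(&self) -> bool {
        const EPS: f64 = 1e-8;
        self.x.abs() < EPS && self.y.abs() < EPS && self.z.abs() < EPS
    }
}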
Task 8.4: Smooth Reflection
Now we've built up our abstractions and tested they work on the existing logic, we can extend the raytracer to include a metal material.
For smooth metal the ray won't be randomly scattered. We need to do the maths to work out the direction of reflected rays.
The reflected ray direction in red is $\mathbf{v} + 2\mathbf{b}$. The length of $\mathbf{b}$ is $\mathbf{v} \cdot \mathbf{n}$ and it goes in the same direction as $\mathbf{n}$, so we can calculate it as $\mathbf{b} = -(\mathbf{v} \cdot \mathbf{n})\,\mathbf{n}$. So we have:

$$\text{reflected} = \mathbf{v} - 2(\mathbf{v} \cdot \mathbf{n})\,\mathbf{n}$$

(Note the minus sign, since $\mathbf{v}$ points inwards and $\mathbf{n}$ outward.)
Write a function reflect(v: Vec3, normal: &Vec3) -> Vec3
to implement this. You could put this as a private method of Metal
, or as a module-scope function at the bottom of material.rs
.
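A sketch of the helper, using the formula above and the vector operators from part 2:

use crate::vector::Vec3;

//reflect v about the surface normal n: v - 2(v.n)n
fn reflect(v: Vec3, normal: &Vec3) -> Vec3 {
    v - 2.0 * v.dot(normal) * *normal
}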
This function is used to reflect rays off our metal material. Add a struct Metal
that has a single Colour
field, and implement Material
for it, calculating this reflected ray. Note that the reflected ray should only be returned if the dot product of its direction with the hit normal is greater than zero. A reflection ray that is at an angle of greater than 90 degrees to the normal doesn't make sense (how are you reflecting from under the surface?).
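Putting that together, a sketch of the Metal material (same assumed Reflection and Hit field names as the Lambertian sketch earlier; the fuzz factor from the next task isn't included yet):

pub struct Metal {
    pub albedo: Colour,
}

impl Material for Metal {
    fn scatter(&self, incident_ray: &Ray, hit: &Hit) -> Option<Reflection> {
        let reflected = reflect(incident_ray.direction.normalise(), &hit.normal);
        //only count reflections that leave the surface, not ones going into it
        if reflected.dot(&hit.normal) > 0.0 {
            Some(Reflection {
                ray: Ray::new(hit.impact_point, reflected),
                colour_attenuation: self.albedo,
            })
        } else {
            None
        }
    }
}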
Update your scene so you have four spheres:
- Center (0, 0, -1), radius 0.5, (0.7, 0.3, 0.3) lambertian
- Center (-1, 0, -1), radius 0.5, (0.8, 0.8, 0.8) metal
- Center (1, 0, -1), radius 0.5, (0.8, 0.6, 0.2) metal
- Center (0, -100.5, -1), radius 100, (0.8, 0.8, 0) lambertian
Your new render should look like this. See how the metal spheres are reflecting the centre sphere, and you can see the other half of the ground sphere behind them.
Task 8.5: Fuzziness
Right now the metals look perfectly smooth, which is very rarely the case in real life. We can add a little fuzziness by randomising the reflected rays slightly, with a fuzz factor. We'll change the scattered ray direction to be $\mathbf{r} + f\mathbf{u}$, where $\mathbf{r}$ is the reflected direction, $f$
is a scalar fuzz factor, and $\mathbf{u}$ is a random unit vector (which you hopefully have from earlier). Add a fuzz
field to the Metal
material struct, and update the reflected ray direction to be a little fuzzy, reusing the random unit vector function from earlier.
Make the left sphere have a fuzziness of 0.3
, and the right sphere 1.0
. Your updated render should look like this:
9: Glass
Clear materials such as water and glass are dielectrics. When light hits them, it splits into a reflected ray and a refracted ray. We'll model this in our raytracer by randomly choosing between either reflection or refraction, and only generating one or the other per sample.
Refraction is described by Snell's law:

$$\eta \sin\theta = \eta' \sin\theta'$$

where $\theta$ and $\theta'$ are angles from the normal, and $\eta$ and $\eta'$ are the refractive indices of the two materials. We want to solve for $\sin\theta'$, to get the angle of our new ray:

$$\sin\theta' = \frac{\eta}{\eta'}\sin\theta$$

On the refracted side of the surface there is a refracted ray $\mathbf{R}'$ and a normal $\mathbf{n}'$, and an angle $\theta'$ between them. The ray can be split into the sum of its two components parallel and perpendicular to $\mathbf{n}'$:

$$\mathbf{R}' = \mathbf{R}'_{\perp} + \mathbf{R}'_{\parallel}$$

We can solve for both those components:

$$\mathbf{R}'_{\perp} = \frac{\eta}{\eta'}(\mathbf{R} + \cos\theta\,\mathbf{n}) \qquad \mathbf{R}'_{\parallel} = -\sqrt{1 - |\mathbf{R}'_{\perp}|^2}\,\mathbf{n}$$
click to expand the derivation of this
First we solve for $\mathbf{R}'_{\perp}$. The first step here is to construct the vector perpendicular to the normal $\mathbf{n}$.

The green vector in the diagram is parallel to $\mathbf{R}'_{\perp}$, and is equivalent to the combined vectors of $\mathbf{R}$ then back up along the normal by $\cos\theta$, hence $\mathbf{R} + \cos\theta\,\mathbf{n}$.

The current magnitude of this vector is $\sin\theta$, so to normalise we divide by $\sin\theta$. The magnitude we want is $\sin\theta'$, so multiply by that: $\frac{\sin\theta'}{\sin\theta}(\mathbf{R} + \cos\theta\,\mathbf{n})$. But $\sin\theta' = \frac{\eta}{\eta'}\sin\theta$, leading to our final

$$\mathbf{R}'_{\perp} = \frac{\eta}{\eta'}(\mathbf{R} + \cos\theta\,\mathbf{n})$$

Onto $\mathbf{R}'_{\parallel}$: the total magnitude of $\mathbf{R}'$ is 1. Rearrange $|\mathbf{R}'|^2 = |\mathbf{R}'_{\perp}|^2 + |\mathbf{R}'_{\parallel}|^2$ for $|\mathbf{R}'_{\parallel}|$ to get $\sqrt{1 - |\mathbf{R}'_{\perp}|^2}$, which is simply setting the magnitude of $\mathbf{R}'_{\parallel}$ to whatever makes $\mathbf{R}'$ a unit vector. This is in the opposite direction to the normal, so use $-\mathbf{n}$ as the direction. To simplify, $|\mathbf{R}'_{\perp}|^2$ can also be expressed as a dot product of $\mathbf{R}'_{\perp}$ with itself. This gives us the final

$$\mathbf{R}'_{\parallel} = -\sqrt{1 - \mathbf{R}'_{\perp} \cdot \mathbf{R}'_{\perp}}\,\mathbf{n}$$

Note that this still depends on knowing $\cos\theta$, but we can get this from the dot product:

$$\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|\,|\mathbf{b}|\cos\theta$$

If we restrict $\mathbf{a}$ and $\mathbf{b}$ to be unit vectors, ie $|\mathbf{a}| = |\mathbf{b}| = 1$, then $\mathbf{a} \cdot \mathbf{b} = \cos\theta$.

This gives us $\mathbf{R}'_{\perp}$ in terms of known quantities (note $-\mathbf{R}$, as $\mathbf{R}$ points in the opposite direction to $\mathbf{n}$):

$$\mathbf{R}'_{\perp} = \frac{\eta}{\eta'}(\mathbf{R} + (-\mathbf{R} \cdot \mathbf{n})\,\mathbf{n})$$
Task 9.1: Refraction
Write a small helper function in material.rs
to return the direction of the refracted ray. There's a lot of maths here, but the idea is:
- Take the incident ray, the normal, and the ratio $\frac{\eta}{\eta'}$ as arguments
- Calculate $\cos\theta = -\mathbf{R} \cdot \mathbf{n}$
- Calculate $\mathbf{R}'_{\perp}$ and $\mathbf{R}'_{\parallel}$
- Return the sum of the two (a sketch of this helper follows below)
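A sketch of that helper, assuming the incident direction is already normalised (the operator overloads from part 2 are in play again):

use crate::vector::Vec3;

//direction of the refracted ray; `incident` must be a unit vector
fn refract(incident: Vec3, normal: &Vec3, eta_ratio: f64) -> Vec3 {
    let cos_theta = (-incident).dot(normal).min(1.0);
    let r_perp = eta_ratio * (incident + cos_theta * *normal);
    let r_parallel = -((1.0 - r_perp.dot(&r_perp)).abs().sqrt()) * *normal;
    r_perp + r_parallel
}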
Task 9.2: Dielectric
You can refract rays, so let's add a dielectric material that does just that with its scatter method. Create a new struct Dielectric with a single field, its refraction ratio ($\frac{\eta}{\eta'}$). Create a new Material impl for it, such that scatter returns a new reflected (*technically it's refracted now) ray with a colour attenuation of 1 (no attenuation), and direction vector calculated by your refract function. Don't forget to normalise your incident ray before using it, as we made the assumption that $|\mathbf{R}| = 1$ when we did the maths above.
An interesting thing to note is that if your ray comes from outside the sphere (ie, hit.front_face == true
), then you will need to set the refraction ratio to be its reciprocal, as $\eta$ and $\eta'$ are flipped.
Update the scene to change the left sphere to be a dielectric with a ratio of 1.5 (roughly glass), then render it and see what you get.
Task 9.3: Total Internal Reflection
That might not look quite right, which is because there's a flaw in our approximations.
When the ray is going from a material with a high refractive index to one with a lower one, there is no solution to Snell's law. Referring back to it:

$$\sin\theta' = \frac{\eta}{\eta'}\sin\theta$$

If the ray is inside glass and outside is air ($\eta = 1.5$, $\eta' = 1.0$), then:

$$\sin\theta' = \frac{1.5}{1.0}\sin\theta$$

But the value of $\sin\theta'$ cannot be greater than 1, so the equality is broken and the solution does not exist: the glass cannot refract. In this case the ray must be reflected, which gives us the phenomenon known as total internal reflection.
Update Dielectric::scatter
to account for this, implementing total internal reflection when it cannot refract. You'll need to calculate both $\sin\theta$ and $\cos\theta$, then if $\frac{\eta}{\eta'}\sin\theta > 1$, reflect instead of refract. A sketch of the updated scatter is below.
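A sketch of Dielectric::scatter with the total internal reflection check added (same assumed Reflection fields and helpers as the earlier sketches; the Schlick term from task 9.4 is not in yet):

pub struct Dielectric {
    pub refraction_ratio: f64,
}

impl Material for Dielectric {
    fn scatter(&self, incident_ray: &Ray, hit: &Hit) -> Option<Reflection> {
        //eta/eta' is flipped when the ray hits the outside of the surface
        let ratio = if hit.front_face {
            1.0 / self.refraction_ratio
        } else {
            self.refraction_ratio
        };
        let unit_direction = incident_ray.direction.normalise();
        let cos_theta = (-unit_direction).dot(&hit.normal).min(1.0);
        let sin_theta = (1.0 - cos_theta * cos_theta).sqrt();
        //if Snell's law has no solution, we must reflect (total internal reflection)
        let direction = if ratio * sin_theta > 1.0 {
            reflect(unit_direction, &hit.normal)
        } else {
            refract(unit_direction, &hit.normal, ratio)
        };
        Some(Reflection {
            ray: Ray::new(hit.impact_point, direction),
            colour_attenuation: v!(1), //no attenuation
        })
    }
}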
You should get something that looks a bit more correct:
If you can't see much of a difference between the two for this scene, I wouldn't blame you. For the example scene, total internal reflection is never really visible as it will only happen as a ray tries to go from glass to air, so play around with the scene and see if you can see the difference.
Task 9.4: Schlick
Let's play around with our glass model again. Real glass has a reflectivity that varies with angle (look at a window from a steep angle and it becomes more of a mirror). We're going to implement this behaviour using a neat polynomial approximation called the Schlick approximation (because the actual equation is very big and a bit scary).
The reflectance coefficient, $R$, can be approximated by:

$$R \approx R_0 + (1 - R_0)(1 - \cos\theta)^5$$

where

$$R_0 = \left(\frac{1 - \eta}{1 + \eta}\right)^2$$

$\eta$ is the refractive index of the material, and $\theta$ is the angle between the normal and the incident light ray. Implement a function that calculates $R$, taking $\cos\theta$ and $\eta$ as parameters.
We'll use the function by checking if $R$ is greater than a random double every time we call Dielectric::scatter
, and reflect instead of refract if so. You should also still be reflecting if the conditions for total internal reflection are met.
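A sketch of the reflectance function and how it might slot into the existing check (the function name is my own choice):

//Schlick's approximation to the reflectance coefficient
fn reflectance(cos_theta: f64, eta: f64) -> f64 {
    let r0 = ((1.0 - eta) / (1.0 + eta)).powi(2);
    r0 + (1.0 - r0) * (1.0 - cos_theta).powi(5)
}

//inside Dielectric::scatter, the reflect-or-refract decision then becomes something like:
//if ratio * sin_theta > 1.0 || reflectance(cos_theta, ratio) > rand::random::<f64>() {
//    reflect(...)
//} else {
//    refract(...)
//}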
Notice how the sphere looks a little fuzzier around the edges, and a bit more realistic?
10: Positionable Camera
Cameras are hard, and there's a lot of geometry here so follow closely.
10.1: FoV
We'll start by allowing an adjustable field of view (FoV): the angle that can be seen through the camera. The image isn't square so the FoV will be different horizontally and vertically. We'll specify the vertical one, $\theta$, in degrees.
The image below shows our FoV angle $\theta$ coming from the origin and looking into the $z = -1$ plane, same as before. $h = \tan(\frac{\theta}{2})$ is therefore half the height of our viewport. The total height of our viewport will be $2h$, and the width will be height * aspect_ratio
.
Create a new()
function for Camera
to take an FoV and aspect ratio as arguments, and then calculate the viewport width and height instead of hardcoding them. Make sure you check your angle units!
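A sketch of the new constructor, reusing the field names from the Camera solution in task 6.2; everything after the viewport calculation is the same as default():

impl Camera {
    pub fn new(vfov_degrees: f64, aspect_ratio: f64) -> Self {
        //vertical FoV -> viewport height; remember to work in radians
        let h = (vfov_degrees.to_radians() / 2.0).tan();
        let viewport_height = 2.0 * h;
        let viewport_width = viewport_height * aspect_ratio;
        let focal_length = 1.0;

        let origin: Point = v!(0, 0, 0);
        let horizontal = v!(viewport_width, 0, 0);
        let vertical = v!(0, -viewport_height, 0);
        let top_left: Point = origin - horizontal / 2.0 - vertical / 2.0 - v!(0, 0, focal_length);

        Camera { origin, top_left, horizontal, vertical }
    }
}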
Update your scene to use a camera with a 90 degree vertical FoV and 16:9 aspect ratio. Delete all the current objects, and add two spheres:
- We'll use a constant $R = \cos(\frac{\pi}{4})$ for brevity
- Centre $(-R, 0, -1)$, radius $R$, blue lambertian material
- Centre $(R, 0, -1)$, radius $R$, red lambertian material
Your image should look like this, a wide-angle view of the two spheres. Play around with the FoV and see what it looks like. Making it wider will zoom out on the spheres, and narrower will zoom in. This can be a little fiddly to implement, but do make sure it's completely correct.
10.2: Direction
The next step is being able to move the camera to wherever we want. Consider two points: look_from
, the position of the camera where we are looking from; and look_at
, the point we wish to look at.
This gives us the location and look direction of the camera, but does not constrain the roll, or tilt, of the camera. Think about it as if you're looking from your eyes to a point, but you can still rotate your head about your nose. We need an "up" vector for the camera, which can be any vector orthogonal to the view direction (we can actually just use any vector we want, and then project it onto the plane orthogonal to the view direction). This will be the "view up" vector, $\mathbf{v}_{up}$.
We can use vector cross products to form an orthonormal basis to describe our camera's orientation: 3 unit vectors all at right angles to each other which describe the 3D space.
- $\mathbf{v}_{up}$ is any vector that we specify
- $\mathbf{w}$ is the unit vector in the direction look_from - look_at, which defines our view direction (the camera looks along $-\mathbf{w}$)
- $\mathbf{u}$ is the unit vector of the cross product $\mathbf{v}_{up} \times \mathbf{w}$
- $\mathbf{v}$ is the cross product $\mathbf{w} \times \mathbf{u}$
- You cannot assume $\mathbf{v} = \mathbf{v}_{up}$, as we allow $\mathbf{v}_{up}$ to not be orthogonal to the view direction
Update the camera new()
function to take look_from
and look_at
points as parameters, as well as a vup
vector. Calculate u
, v
, and w
, making sure to normalise them all. The new camera origin is look_from
, and the new horizontal/vertical vectors are u
and v
with magnitudes of the viewport's size. The top left corner of the viewport is calculated as before, but the -z
distance from the origin is now w
instead of just 1.0
.
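A sketch of how the constructor might grow to take the new parameters, building on the FoV version above (u and v aren't stored yet; that comes with depth of field in part 11):

impl Camera {
    pub fn new(
        look_from: Point,
        look_at: Point,
        vup: Vec3,
        vfov_degrees: f64,
        aspect_ratio: f64,
    ) -> Self {
        let h = (vfov_degrees.to_radians() / 2.0).tan();
        let viewport_height = 2.0 * h;
        let viewport_width = viewport_height * aspect_ratio;

        //orthonormal basis describing the camera's orientation
        let w = (look_from - look_at).normalise();
        let u = vup.cross(&w).normalise();
        let v = w.cross(&u);

        let origin = look_from;
        let horizontal = viewport_width * u;
        let vertical = -viewport_height * v; //negated: we scan from the top left downwards
        let top_left = origin - horizontal / 2.0 - vertical / 2.0 - w; //one unit along -w

        Camera { origin, top_left, horizontal, vertical }
    }
}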
Change the scene to the following:
- Sphere centre , Radius 0.5, Lambertian with colour
- Sphere centre , Radius 0.5, Dielectric with ratio 1.5
- Sphere centre , Radius 0.5, Metal with colour and 0 fuzziness
- Sphere centre , Radius 100, Lambertian with colour
- Camera looking from to with
vup
, 90 degree FoV and 16/9 aspect ratio
Your scene, as viewed from far away, should look like this:
You can zoom in a bit too. Change the camera settings to a 20 degree FoV:
11: Depth of Field
Our final feature will be Depth of Field, or defocus blur. Due to camera lenses and focus, only a small range of depths will appear in focus, anything too close or far will be out of focus and blurred. There is a plane in our image where everything will be in perfect focus, and everything closer or further will blur (only slightly at first). The distance between this focus plane and our camera lens is the focus distance. Real cameras have complex compound lenses, but we will use a thin lens approximation, more similar to how the eye works:
The rays that end up on a particular point of the resulting image go through all parts of the circular lens, and point from there towards the focus point. Any object closer (or further) will have these intersection points (that should perfectly match for perfect focus) spread over its surface, making it out of focus.
Since our camera is infinitesimally small, we'll need to pretend it has an actual lens.
We'll implement this by making the origin of each ray a random point within a disc centered on the look_from
point, to model a lens. The larger the disc (lens), the larger the defocus blur.
We'll need a simple function to generate a random point within a unit circle, so using a similar technique as we did earlier, write a function to generate a vector with random x
and y
components with z = 0
that lies within a unit circle.
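A sketch using the same rejection-sampling trick as the unit sphere version (the name is my own choice):

impl Vec3 {
    //a random vector with z = 0 lying inside the unit circle
    pub fn rand_in_unit_disc() -> Self {
        loop {
            let v = v!(
                rand::random::<f64>() * 2.0 - 1.0,
                rand::random::<f64>() * 2.0 - 1.0,
                0
            );
            if v.len() < 1.0 {
                break v;
            }
        }
    }
}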
Update Camera::new()
to take an aperture width (lens size) and focus distance as parameters. The horizontal and vertical image size vectors should be scaled by the focus distance, and the top left corner should be moved backwards by scaling w
by the focus distance too. Add fields to the Camera
struct to store u
and v
, as well as the lens radius (half the aperture width).
Camera::get_ray()
needs updating to randomise the ray origins. Start by generating a random point within the lens, $\mathbf{d}$ (a random point in the unit disc scaled by the lens radius), then calculate the new ray origin as $\text{origin} + d_x\mathbf{u} + d_y\mathbf{v}$. The position of the target pixel and the ray direction can then be calculated as before to generate the ray.
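A sketch of the updated get_ray(), assuming the camera now stores u, v and the lens radius as described above, and that horizontal, vertical and top_left have already been scaled by the focus distance in new():

impl Camera {
    pub fn get_ray(&self, u: f64, v: f64) -> Ray {
        //random point on the lens, scaled by its radius
        let rd = self.lens_radius * Vec3::rand_in_unit_disc();
        let offset = rd.x * self.u + rd.y * self.v;

        let origin = self.origin + offset;
        let px_position = self.top_left + u * self.horizontal + v * self.vertical;
        Ray::new(origin, px_position - origin)
    }
}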
Update the camera to the following settings:
look_from
look_at
vup
- FoV 20
- Aspect ratio 16/9
- Aperture 2.0
- Focus distance as the length between
look_from
andlook_at
The centre of the blue sphere lies in the focus plane, so it is in focus, but the other two spheres are blurred, as they are out of focus:
This specific scene looks a bit rubbish, so let's do one that's a bit nicer.
12: A Final Render
So our ray tracer now has everything we need to make the procedurally generated image I showed you at the very top. I'll just give you the code to generate the scene, because writing out a description of how to do this wouldn't be very practical.
fn random_scene() -> Scene {
let mut objects: Scene = vec![];
let ground = Box::new(Sphere::new(
v!(0, -1000, 0),
1000.0,
Lambertian::new(v!(0.5, 0.5, 0.5)),
));
objects.push(ground);
for a in -11..11 {
for b in -11..11 {
let a = a as f64;
let b = b as f64;
let material_choice: f64 = rand::random();
let center = v!(
a + 0.9 * rand::random::<f64>(),
0.2,
b + 0.9 * rand::random::<f64>()
);
if material_choice < 0.8 {
//diffuse
let material = Lambertian::new(v!(rand::random::<f64>()));
objects.push(Box::new(Sphere::new(center, 0.2, material)));
} else if material_choice < 0.95 {
//metal
let colour = v!(rand::random::<f64>() / 2.0 + 0.5);
let fuzz = rand::random::<f64>() / 2.0;
let material = Metal::new(colour, fuzz);
objects.push(Box::new(Sphere::new(center, 0.2, material)));
} else {
//glass
objects.push(Box::new(Sphere::new(center, 0.2, Dielectric::new(1.5))));
}
}
}
objects.push(Box::new(Sphere::new(
v!(0, 1, 0),
1.0,
Dielectric::new(1.5),
)));
objects.push(Box::new(Sphere::new(
v!(-4, 1, 0),
1.0,
Lambertian::new(v!(0.4, 0.2, 0.1)),
)));
objects.push(Box::new(Sphere::new(
v!(4, 1, 0),
1.0,
Metal::new(v!(0.7, 0.6, 0.5), 0.0),
)));
objects
}
Use the following camera and image settings:
- Image width of 1200 pixels
look_from
look_at
vup
- FoV 20
- Aspect ratio 1.5
- Aperture 0.1
- Focus distance 10
An interesting thing you might note is the glass balls don't really have shadows, which makes them look like they are floating. This is not a bug: you don't see glass balls much in real life, where they also look a bit strange, and indeed seem to float on cloudy days. A glass ball doesn't block light like a solid sphere does, it just bends it, so plenty of light still hits the surface underneath.
What next?
What you've built is actually an incredibly cool thing, modelling physical light using a bit of simple geometry to build the basis of a 3D rendering engine, the principles of which are not dissimilar to that used in real gaming and 3D animation applications.
Play around with the scene, change the objects and their positions, put the camera at weird angles, see what cool pictures you can generate.
There are two more books that follow on from this, Ray Tracing: The Next Week and Ray Tracing: The Rest of Your Life, which go on and add a bunch more features to the ray tracer. This was only up to the end of the first book, so the others are certainly worth a read, though you'll have to carcinise it yourself (or do it in C++, which despite all its problems is still widely used and a good skill to have).
Raytracer Solutions
The full solution is available on Github. Feel free to browse through the commit history to see the stages of how I built my solution.
1: Images
Task 1.1
image
is great, this is dead simple if you find the right bits in the documentation. Learning to use Rust docs is an important skill.
use image::{Rgb, RgbImage};
fn main() {
let mut buffer = RgbImage::new(256, 256);
for (_, _, px) in buffer.enumerate_pixels_mut() {
*px = Rgb([255, 0, 0]);
}
buffer.save("render.png").expect("Could not save image");
}
This should yield you a big red square. Don't forget to include image
in your Cargo.toml
:
[dependencies]
image = "0.24.1"
Task 1.2
fn main() {
let width = 400;
let height = 400;
let mut buffer = RgbImage::new(width, height);
for (i, j, px) in buffer.enumerate_pixels_mut() {
let r = i as f64 / (width - 1) as f64;
let g = j as f64 / (height - 1) as f64;
let b = 0.25;
*px = Rgb([r, g, b].map(|c| (c * 255.999) as u8))
}
buffer.save("render.png").expect("Could not save image");
}
We scale the range 0-1 from 0-255 by multiplying by 255.999, as the as
cast from float to int in Rust rounds down. I also increased the size of the image here to show off our nice gradient a bit better. I changed the size of the image here to demonstrate that it should work for images of any size (not just 256x256, and not just square). Try playing around with different image sizes and gradients.
2: Vectors
2.1
Our Vec3
struct with all its methods:
pub struct Vec3 {
pub x: f64,
pub y: f64,
pub z: f64,
}
impl Vec3 {
pub fn len(&self) -> f64 {
(self.x * self.x + self.y * self.y + self.z * self.z).sqrt()
}
pub fn normalise(self) -> Self {
self / self.len()
}
pub fn dot(&self, other: &Self) -> f64 {
self.x * other.x + self.y * other.y + self.z * other.z
}
pub fn cross(&self, other: &Self) -> Self {
Self {
x: self.y * other.z - self.z * other.y,
y: self.z * other.x - self.x * other.z,
z: self.x * other.y - self.y * other.x,
}
}
pub fn map<F>(self, mut f: F) -> Vec3
where
F: FnMut(f64) -> f64,
{
Vec3 {
x: f(self.x),
y: f(self.y),
z: f(self.z),
}
}
}
impl From<Vec3> for image::Rgb<u8> {
fn from(v: Vec3) -> Self {
image::Rgb(
[v.x, v.y, v.z].map(|c| (c * 255.999) as u8),
)
}
}
2.2
You want the #[derive]
to look like:
use derive_more::{Add, Constructor, Div, Mul, Neg, Sub};
#[derive(Debug, PartialEq, PartialOrd, Clone, Copy, Add, Div, Mul, Sub, Neg, Constructor)]
pub struct Vec3 {
pub x: f64,
pub y: f64,
pub z: f64,
}
Your two handwritten Mul
impls:
impl Mul<Vec3> for f64 {
type Output = Vec3;
fn mul(self, rhs: Vec3) -> Self::Output {
rhs.map(|x| x * self)
}
}
impl Mul for Vec3 {
type Output = Self;
fn mul(self, rhs: Self) -> Self::Output {
Vec3 {
x: self.x * rhs.x,
y: self.y * rhs.y,
z: self.z * rhs.z,
}
}
}
// Optionally a `impl Div for Vec3` like the mult may be useful later
2.3
Your macro should look as shown in the instructions. Don't worry if it was kinda confusing, macros are hard.
3: Rays
3.1
You should have a new file ray.rs
, and a mod ray;
statement in main.rs
. In ray.rs
:
use derive_more::Constructor;
use crate::vector::{Point, Vec3};
#[derive(Debug, PartialEq, PartialOrd, Clone, Constructor)]
pub struct Ray {
pub origin: Point,
pub direction: Vec3,
}
impl Ray {
pub fn at(&self, t: f64) -> Point {
self.origin + self.direction * t
}
}
3.2
Our updated main
function, with all the camera and geometry definitions:
fn main() {
//image
let aspect_ratio = 16.0 / 9.0;
let img_width: u32 = 400;
let img_height = (img_width as f64 / aspect_ratio) as u32;
//camera and viewport
let view_height = 2.0;
let view_width = view_height * aspect_ratio;
let focal_length = 1.0;
//geometry
let origin: Point = v!(0);
let horizontal: Vec3 = v!(view_width, 0, 0); //horizontal size vector
let vertical: Vec3 = v!(0, -view_height, 0); //vertical size vector, negated because we start in the top left and move *down* when rendering
let top_left: Point = origin - horizontal / 2.0 - vertical / 2.0 - v!(0, 0, focal_length); //the position of the top left corner of our image
let mut buffer = RgbImage::new(img_width, img_height);
for (i, j, px) in buffer.enumerate_pixels_mut() {
//pixel coordinates as scalars from 0.0 <= t <= 1.0
let u = i as f64 / (img_width - 1) as f64;
let v = j as f64 / (img_height - 1) as f64;
//the direction of the ray
//start at top left, then go horizontally scaled by u and vertically by v
let ray_direction: Vec3 = top_left + u * horizontal + v * vertical - origin;
//save pixel colour to buffer
*px = ray::colour(&Ray::new(origin, ray_direction)).into();
}
buffer.save("render.png").expect("Could not save image");
}
And the simple green colour
function, under ray.rs
:
pub fn colour(ray: &Ray) -> Colour {
v!(0,1,0)
}
3.3
Our lerp:
pub fn colour(ray: &Ray) -> Colour {
let direction = ray.direction.normalise();
let t = 0.5 * (direction.normalise().y + 1.0); //scale from -1 < y < 1 to 0 < t < 1
//two colours to blend
let white: Colour = v!(1);
let blue: Colour = v!(0.5, 0.7, 1);
//blend
blue * t + white * (1.0 - t)
}
4: Spheres
4.1
The entirety of object.rs
is shown below. Pay careful attention to the quadratic formula in hit
.
use derive_more::Constructor;
use crate::{ray::Ray, vector::Point};
//a sphere
#[derive(Debug, Constructor)]
pub struct Sphere {
pub center: Point,
pub radius: f64,
}
//calculate ray-sphere intersection stuff
impl Sphere {
pub fn hit(&self, ray: &Ray) -> bool {
let oc = ray.origin - self.center;
let a = ray.direction.dot(&ray.direction);
let b = 2.0 * oc.dot(&ray.direction);
let c = oc.dot(&oc) - self.radius * self.radius;
let discriminant = b * b - 4.0 * a * c;
discriminant >= 0.0
}
}
This is the condition you want to add to your colour function too
let sphere = object::Sphere::new(v!(0, 0, -1), 0.5);
if sphere.hit(ray) {
return v!(1, 0, 0);
}
4.2
Your new parallel for_each
iterator:
buffer.enumerate_pixels_mut() //create the iterator over the buffer
.par_bridge() // bridge it to a parallel iterator
.for_each(|(i, j, px)| { //for each item in the iterator, execute this closure
//loop body is unchanged
});
If you're still really struggling with performance, ask someone to have a look over your code with you and we'll see if there's anything else we can do to speed it up.
5: Surface Normals & Multiple Objects
5.1
The updated Sphere::hit()
method:
impl Sphere {
pub fn hit(&self, ray: &Ray) -> Option<f64> {
let oc = ray.origin - self.center;
let a = ray.direction.dot(&ray.direction);
let b = 2.0 * oc.dot(&ray.direction);
let c = oc.dot(&oc) - self.radius * self.radius;
let discriminant = b * b - 4.0 * a * c;
if discriminant < 0.0 {
None
} else {
Some((-b - discriminant.sqrt()) / (a * 2.0))
}
}
}
Since a is positive and the square root is non-negative, the -b - discriminant.sqrt() root is always the smaller t, so we can discard the -b + discriminant.sqrt() case: the smaller t is the closer intersection.
And Ray::colour()
:
pub fn colour(ray: &Ray) -> Colour {
let sphere = object::Sphere::new(v!(0, 0, -1), 0.5);
//if the sphere and ray return Some(t)
if let Some(t) = sphere.hit(ray) {
//calculate normal, scale and return it
let normal = (ray.at(t) - sphere.center).normalise();
(normal + v!(1)) / 2.0
} else { //else, same as before
let direction = ray.direction.normalise();
let t = 0.5 * (direction.normalise().y + 1.0); //scale from -1 < y < 1 to 0 < t < 1
//two colours to blend
let white: Colour = v!(1);
let blue: Colour = v!(0.5, 0.7, 1);
//blend
blue * t + white * (1.0 - t)
}
}
5.2
The Hit
struct:
pub struct Hit {
pub impact_point: Point,
pub normal: Vec3,
pub paramater: f64,
}
And the Object
trait:
// Represents objects within the scene
pub trait Object {
//determines if an object has been hit by a ray
//returns the impact point, the surface normal to the impact point, and the solution to the impact equation
//if there is no intersection, return None
fn hit(&self, ray: &Ray, bounds: (f64, f64)) -> Option<Hit>;
}
Sphere
still has a hit method, but it's now part of its Object
implementation:
impl Object for Sphere {
fn hit(&self, ray: &Ray, bounds: (f64, f64)) -> Option<Hit> {
//calculate intersection
let oc = ray.origin - self.center;
let a = ray.direction.dot(&ray.direction);
let b = 2.0 * oc.dot(&ray.direction);
let c = oc.dot(&oc) - self.radius * self.radius;
let d = b * b - 4.0 * a * c;
if d < 0.0 {
return None;
}
//get the correct root, if one lies in the bounds
let mut root = (-b - d.sqrt()) / (2.0 * a);
if !(bounds.0..bounds.1).contains(&root) {
root = (-b + d.sqrt()) / (2.0 * a);
if !(bounds.0..bounds.1).contains(&root) {
return None;
}
}
let impact_point = ray.at(root);
let normal = (impact_point - self.center) / self.radius;
Some(Hit {
impact_point,
normal,
paramater: root,
})
}
}
Sphere is still a sphere, but it's also an object too. Rust makes it really easy for us to build expressive abstractions like this, which we do more of down the line when we start working with different materials too.
5.3
Something like this will work:
let impact_point = ray.at(root);
//the normal that is always opposite to the incident ray
let normal = (impact_point - self.center) / self.radius;
//make sure the normals always point outward from the sphere's surface regardless of incident ray direction
//set front_face accordingly
let (normal, front_face) = if ray.direction.dot(&normal) > 0.0 {
(-normal, false)
} else {
(normal, true)
};
5.4
Your Scene
type and its Object
impl. See how we're making nice use of that object trait from earlier?
pub type Scene = Vec<Box<dyn Object + Sync>>;
impl Object for Scene {
fn hit(&self, ray: &Ray, bounds: (f64, f64)) -> Option<Hit> {
self.iter()
.filter_map(|o| o.hit(ray, bounds)) //filter out the ones that don't intersect
.min_by(|h1, h2| h1.paramater.partial_cmp(&h2.paramater).unwrap()) //sort by smallest parameter, returning lowest
}
}
Try not to worry about trait objects too much now, there's a lot of complexity associated with them (vtables, object safety) once you start to dig into it. All you need to understand is that dyn Object + Sync
is a type that implements both Object
and Sync
, and we need to Box
it on the heap because we don't know what those types are at compile time, so we can't reason about how big they are.
5.5
Our entire scene is defined like so in main()
:
//world
let objects: Scene = vec![
Box::new(Sphere::new(v!(0, 0, -1), 0.5)),
Box::new(Sphere::new(v!(0, -100.5, -1), 100.0)),
];
We then pass this to ray::colour
, which is updated as shown:
pub fn colour(scene: &impl Object, ray: &Ray) -> Colour {
if let Some(hit) = scene.hit(ray, (0.0, f64::INFINITY)) {
(hit.normal + v!(1)) / 2.0
} else {
let direction = ray.direction.normalise();
let t = 0.5 * (direction.normalise().y + 1.0); //scale from -1 < y < 1 to 0 < t < 1
//two colours to blend
let white: Colour = v!(1);
let blue: Colour = v!(0.5, 0.7, 1);
//blend
blue * t + white * (1.0 - t)
}
}
6: Antialiasing
6.1
Pay careful attention to where the randomness is added here. Note also how the colour is not accumulated into an RGB
type, but one of our own Vec3
types, and then converted to rgb at the last stage for precision. The body of the updated rendering loop:
//colour is a vector
let mut colour = v!(0);
for _ in 0..samples {
//randomness here
let u = (i as f64 + rand::random::<f64>()) / (img_width - 1) as f64;
let v = (j as f64 + rand::random::<f64>()) / (img_height - 1) as f64;
let ray_direction: Vec3 = top_left + u * horizontal + v * vertical - origin;
colour = colour + ray::colour(&objects, &Ray::new(origin, ray_direction));
}
//save pixel colour to buffer
*px = (colour / (samples as f64)).into(); //convert to RGB here
You could also draw the entire scene 100 times and average those frames out if you wanted, but it would require a bit more work to implement, so this is the easy route.
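If you did want to try that route, a rough sketch might look something like this (assuming a row-major accumulation buffer; the names here are illustrative, not from the actual project):
//one Vec3 accumulator per pixel
let mut acc = vec![v!(0); (img_width * img_height) as usize];
for _ in 0..samples {
    //render the whole frame once, adding into the accumulator
    for (idx, px) in acc.iter_mut().enumerate() {
        let i = idx as u32 % img_width;
        let j = idx as u32 / img_width;
        let u = (i as f64 + rand::random::<f64>()) / (img_width - 1) as f64;
        let v = (j as f64 + rand::random::<f64>()) / (img_height - 1) as f64;
        let ray_direction: Vec3 = top_left + u * horizontal + v * vertical - origin;
        *px = *px + ray::colour(&objects, &Ray::new(origin, ray_direction));
    }
}
//then divide each accumulated colour by the sample count and convert to RGB when writing out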
6.2
The camera file should look as follows. We're literally just moving stuff over from `main` and encapsulating a few bits, which will come in handy later when we make the camera a bit fancier.
use crate::{ray::Ray, v, Point, Vec3};
pub struct Camera {
origin: Point,
top_left: Point,
horizontal: Vec3,
vertical: Vec3,
}
impl Camera {
pub fn default() -> Self {
let aspect_ratio = 16.0 / 9.0;
let viewport_height = 2.0;
let viewport_width = aspect_ratio * viewport_height;
let focal_length = 1.0;
let origin: Point = v!(0, 0, 0);
let horizontal = v!(viewport_width, 0, 0);
let vertical = v!(0, -viewport_height, 0);
//the top left of our image is the origin, pushed focal_length away from the camera (along -z) and up and to the left by half the viewport height/width
let top_left: Point = origin - horizontal / 2.0 - vertical / 2.0 - v!(0, 0, focal_length);
Camera {
origin,
top_left,
horizontal,
vertical,
}
}
pub fn get_ray(&self, u: f64, v: f64) -> Ray {
let px_position = self.top_left + u * self.horizontal + v * self.vertical;
Ray::new(self.origin, px_position - self.origin)
}
}
`main` is also edited to remove all this and add a single call to `camera::Camera::default()` instead. The compiler will tell you which variables are now unused and what you can remove from `main`. The render loop should get its rays from the camera using `Camera::get_ray()`. Calculate `u` and `v` the same as before, but pass them to the camera:
let ray = camera.get_ray(u, v);
colour = colour + ray::colour(&objects, &ray);
6.3
The `indicatif` code added in `main`:
println!("Rendering Scene...");
let bar = ProgressBar::new((img_width * img_height) as u64);
bar.set_style(
ProgressStyle::default_bar()
.template(
"{spinner:.green} [{wide_bar:.green/white}] {percent}% - {elapsed_precise} elapsed {msg}",
)
.progress_chars("#>-")
.on_finish(ProgressFinish::WithMessage("-- Done!".into())),
);
`.progress_with(bar)` is added to the iterator chain just before the `for_each()` call:
buffer
.enumerate_pixels_mut()
.par_bridge()
.progress_with(bar)
.for_each(|(i, j, px)| {...
Again, I encourage you to style the progress bar yourself.
7: Diffuse Materials
7.1 & 7.2
I added my random unit vector function to the `Vec3` struct, but you can put it wherever you think makes sense.
pub fn rand_unit() -> Self {
loop {
//random f64 range 0-1, scale it -1 to 1
let v = v!(rand::random::<f64>() * 2.0 - 1.0);
//if the vector lies in the unit sphere
if v.len() < 1.0 {
//normalise so it lies *on* the sphere and is a unit vector
break v.normalise();
}
}
}
Your updated `ray::colour` function should look as shown:
pub fn colour(scene: &impl Object, ray: &Ray, depth: u8) -> Colour {
if depth == 0 {
return v!(0);
}
if let Some(hit) = scene.hit(ray, (0.0, f64::INFINITY)) {
let direction = hit.normal + Vec3::rand_unit();
let origin = hit.impact_point;
0.5 * colour(scene, &Ray::new(origin, direction), depth - 1)
} else {
//... as before
Make sure to update the call site for the function to add the `max_depth` parameter.
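For example, the call in the render loop might end up looking something like this (50 for `max_depth` is just a reasonable default, pick whatever you like):
let max_depth: u8 = 50; //illustrative value, defined near the top of main
//...
//in the render loop
let ray = camera.get_ray(u, v);
colour = colour + ray::colour(&objects, &ray, max_depth);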
7.3
I added a call to `map()` in `Vec3::to_rgb()` to take the square root of everything before we do the byte conversion.
pub fn to_rgb(self) -> image::Rgb<u8> {
image::Rgb(
[self.x, self.y, self.z]
.map(|c| c.sqrt())
.map(|c| (c * 255.999) as u8),
)
}
Image encoding and colours are a much more complex topic than you might expect, so it's worth looking into if you're interested.
7.4
Just changed the `0.0` to `0.00001` in the call to `Scene::hit` in `ray::colour`:
if let Some(hit) = scene.hit(ray, (0.00001, f64::INFINITY)) { //...
8: Metal
8.1
The entire contents of `material.rs` are shown below:
use derive_more::Constructor;
use crate::{
object::Hit,
ray::Ray,
vector::{Colour, Vec3},
};
#[derive(Debug)]
pub struct Reflection {
pub ray: Ray,
pub colour_attenuation: Colour,
}
pub trait Material {
fn scatter(&self, incident_ray: &Ray, hit: &Hit) -> Option<Reflection>;
}
#[derive(Debug, Constructor)]
pub struct Lambertian(Colour);
impl Material for Lambertian {
fn scatter(&self, _: &Ray, hit: &Hit) -> Option<Reflection> {
//calculate reflected ray
let scatter_direction = hit.normal + Vec3::rand_unit();
let reflected_ray = Ray::new(hit.impact_point, scatter_direction);
//return it, along with the colour attenuation of it for this material
Some(Reflection {
ray: reflected_ray,
colour_attenuation: self.0,
})
}
}
8.2
The new `Sphere` struct should look as follows, with the bounded generic type parameter.
#[derive(Debug, Constructor)]
pub struct Sphere<M: Material> {
center: Point,
radius: f64,
material: M,
}
`Hit` should have a new field, `pub reflection: Option<Reflection>`, and it should be filled in at the bottom of `Sphere::hit`:
impl<M: Material> Object for Sphere<M> {
fn hit(&self, ray: &Ray, bounds: (f64, f64)) -> Option<Hit> {
// all the same as before
//...
let mut h = Hit {
impact_point,
normal,
paramater: root,
front_face,
reflection: None,
};
h.reflection = self.material.scatter(ray, &h);
Some(h)
}
}
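For reference, the updated `Hit` struct would look something like this (a sketch, with the field types inferred from how they're used):
#[derive(Debug)]
pub struct Hit {
    pub impact_point: Point,
    pub normal: Vec3,
    pub paramater: f64,
    pub front_face: bool,
    pub reflection: Option<Reflection>,
}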
`ray::colour` should look like this now too:
pub fn colour(scene: &impl Object, ray: &Ray, depth: u8) -> Colour {
if depth == 0 {
return v!(0);
}
if let Some(hit) = scene.hit(ray, (0.00001, f64::INFINITY)) {
if let Some(reflection) = hit.reflection {
reflection.colour_attenuation * colour(scene, &reflection.ray, depth - 1)
} else {
v!(0, 0, 0)
}
} else {
let direction = ray.direction.normalise();
let t = 0.5 * (direction.y + 1.0); //scale from -1 < y < 1 to 0 < t < 1
//two colours to blend
let white: Colour = v!(1);
let blue: Colour = v!(0.5, 0.7, 1);
//blend
blue * t + white * (1.0 - t)
}
}
Don't forget to update the two spheres in `main`:
let objects: Scene = vec![
Box::new(Sphere::new(v!(0, 0, -1), 0.5, Lambertian::new(v!(0.5)))),
Box::new(Sphere::new(
v!(0, -100.5, -1),
100.0,
Lambertian::new(v!(0.5)),
)),
];
8.3
I added `Vec3::is_zero()` as a method, but you could also add it as a private helper function at the bottom of the file if you wanted, or just inline it. It should look like this:
pub fn is_zero(&self) -> bool {
let tolerance: f64 = 1e-8;
self.x.abs() < tolerance && self.y.abs() < tolerance && self.z.abs() < tolerance
}
This conditional check is then added to `Lambertian::scatter`:
if scatter_direction.is_zero() {
scatter_direction = hit.normal;
}
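Putting that together, the whole of `Lambertian::scatter` ends up looking something like this (note that the `scatter_direction` binding now needs to be `mut`):
impl Material for Lambertian {
    fn scatter(&self, _: &Ray, hit: &Hit) -> Option<Reflection> {
        //calculate the scattered ray direction
        let mut scatter_direction = hit.normal + Vec3::rand_unit();
        //catch the degenerate case where the random vector cancels out the normal
        if scatter_direction.is_zero() {
            scatter_direction = hit.normal;
        }
        let reflected_ray = Ray::new(hit.impact_point, scatter_direction);
        Some(Reflection {
            ray: reflected_ray,
            colour_attenuation: self.0,
        })
    }
}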
8.4
The metal struct and impl should look like this:
#[derive(Debug, Constructor)]
pub struct Metal(Colour);
impl Material for Metal {
fn scatter(&self, incident_ray: &Ray, hit: &Hit) -> Option<Reflection> {
//the reflected ray direction
let reflection = reflect(incident_ray.direction, &hit.normal);
//the scattered ray
let scattered = Ray::new(hit.impact_point, reflection);
if scattered.direction.dot(&hit.normal) > 0.0 {
Some(Reflection {
ray: scattered,
colour_attenuation: self.0,
})
} else {
None
}
}
}
fn reflect(v: Vec3, normal: &Vec3) -> Vec3 {
v - 2.0 * v.dot(normal) * *normal
}
The new `Scene` with four spheres is shown below too. This bit isn't hard, it's just boilerplate with constructors, so I wouldn't blame you for copy-pasting it.
let objects: Scene = vec![
Box::new(Sphere::new(
//center
v!(0, 0, -1),
0.5,
Lambertian::new(v!(0.7, 0.3, 0.3)),
)),
Box::new(Sphere::new(
//ground
v!(0, -100.5, -1),
100.0,
Lambertian::new(v!(0.8, 0.8, 0.0)),
)),
Box::new(Sphere::new(
//left
v!(-1.0, 0.0, -1.0),
0.5,
Metal::new(v!(0.8, 0.8, 0.8)),
)),
Box::new(Sphere::new(
//right
v!(1.0, 0.0, -1.0),
0.5,
Metal::new(v!(0.8, 0.6, 0.2)),
)),
];
8.5
You'll need a new field in `Metal`:
#[derive(Debug, Constructor)]
pub struct Metal {
colour: Colour,
fuzz: f64,
}
The new reflected ray direction in `Metal::scatter` should add a small random vector, as shown:
let reflection = reflect(incident_ray.direction, &hit.normal) + self.fuzz * Vec3::rand_unit();
9: Dielectrics
9.1
The `refract` function should look like this:
fn refract(incident: Vec3, normal: &Vec3, ratio: f64) -> Vec3 {
let cos_theta = -incident.dot(normal);
let r_out_perp = ratio * (incident + cos_theta * *normal);
let r_out_par = -(1.0 - r_out_perp.dot(&r_out_perp)).abs().sqrt() * *normal;
r_out_par + r_out_perp
}
9.2
`Dielectric` and its `Material` impl:
#[derive(Debug, Constructor)]
pub struct Dielectric {
ratio: f64,
}
impl Material for Dielectric {
fn scatter(&self, incident_ray: &Ray, hit: &Hit) -> Option<Reflection> {
let ratio = if hit.front_face { 1.0 / self.ratio } else { self.ratio };
let refracted = refract(incident_ray.direction.normalise(), &hit.normal, ratio);
let out_ray = Ray::new(hit.impact_point, refracted);
Some(Reflection {
ray: out_ray,
colour_attenuation: v!(1),
})
}
}
9.3
The updated `Dielectric::scatter` method:
fn scatter(&self, incident_ray: &Ray, hit: &Hit) -> Option<Reflection> {
let ratio = if hit.front_face { 1.0 / self.ratio } else { self.ratio };
let unit_direction = incident_ray.direction.normalise();
let cos_theta = -unit_direction.dot(&hit.normal);
let sin_theta = (1.0 - cos_theta * cos_theta).sqrt();
let scatter_direction = if (ratio * sin_theta) > 1.0 {
reflect(unit_direction, &hit.normal)
} else {
refract(unit_direction, &hit.normal, ratio)
};
let out_ray = Ray::new(hit.impact_point, scatter_direction);
Some(Reflection {
ray: out_ray,
colour_attenuation: v!(1),
})
}
9.4
The reflectance function for the Schlick approximation:
fn reflectance(cos_theta: f64, n: f64) -> f64 {
let r0 = f64::powi((1.0 - n) / (1.0 + n), 2);
r0 + (1.0 - r0) * f64::powi(1.0 - cos_theta, 5)
}
`powi` raises a float to an integer power.
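For example, this assertion holds:
assert_eq!(f64::powi(1.5, 2), 2.25);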
The `if` expression that binds to `scatter_direction` needs updating to add an extra condition for reflectance:
let scatter_direction = if (ratio * sin_theta > 1.0)
|| reflectance(cos_theta, ratio) > rand::random() {
//reflect
} else {
//refract
}
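Filled in, that looks something like this:
let scatter_direction = if ratio * sin_theta > 1.0 || reflectance(cos_theta, ratio) > rand::random() {
    //either total internal reflection, or the Schlick approximation says to reflect
    reflect(unit_direction, &hit.normal)
} else {
    refract(unit_direction, &hit.normal, ratio)
};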
10: Positionable Camera
10.1
The `new()` method for the camera:
pub fn new(fov: f64, aspect_ratio: f64) -> Self {
let theta = fov.to_radians();
let h = f64::tan(theta / 2.0);
let view_height = 2.0 * h;
let view_width = aspect_ratio * view_height;
let focal_length = 1.0;
let origin: Point = v!(0, 0, 0);
let horizontal = v!(view_width, 0, 0);
let vertical = v!(0, -view_height, 0);
let top_left: Point = origin - horizontal / 2.0 - vertical / 2.0 - v!(0, 0, focal_length);
Camera {
origin,
top_left,
horizontal,
vertical,
}
}
10.2
More changes to `Camera::new()`:
pub fn new(look_from: Point, look_at: Point, vup: Vec3, fov: f64, aspect_ratio: f64) -> Self {
let theta = fov.to_radians();
let h = f64::tan(theta / 2.0);
let view_height = 2.0 * h;
let view_width = aspect_ratio * view_height;
let w = (look_from - look_at).normalise();
let u = vup.cross(&w).normalise();
let v = w.cross(&u);
let origin = look_from;
let horizontal = view_width * u;
let vertical = -view_height * v;
let top_left: Point = origin - horizontal / 2.0 - vertical / 2.0 - w;
Camera {
origin,
top_left,
horizontal,
vertical,
}
}
The code for the new scene too, because it's long:
let camera = camera::Camera::new(v!(-2, 2, 1), v!(0, 0, -1), v!(0, 1, 0), 20.0, 16.0/9.0);
let objects: Scene = vec![
Box::new(Sphere::new(
v!(0, 0, -1),
0.5,
Lambertian::new(v!(0.1, 0.2, 0.5)),
)),
Box::new(Sphere::new(
v!(-1.0, 0.0, -1.0),
0.5,
Dielectric::new(1.5),
)),
Box::new(Sphere::new(
v!(1.0, 0.0, -1.0),
0.5,
Metal::new(v!(0.8, 0.6, 0.2), 0.0),
)),
Box::new(Sphere::new(
v!(0, -100.5, -1),
100.0,
Lambertian::new(v!(0.8, 0.8, 0.0)),
)),
];
11: Defocus Blur
The function for getting a random vector in the unit circle:
use rand::distributions::Distribution; //bring the Distribution trait into scope for .sample()

fn random_in_unit_circle() -> Vec3 {
//want random numbers -1 to 1
let dist = rand::distributions::Uniform::new_inclusive(-1.0, 1.0);
let mut rng = rand::thread_rng();
loop {
let v = v!(dist.sample(&mut rng), dist.sample(&mut rng), 0);
//if the vector lies in the unit circle
if v.len() < 1.0 {
//normalise so it lies *on* the circle
break v.normalise();
}
}
}
The updated `Camera::new()`:
pub fn new(
look_from: Point,
look_at: Point,
vup: Vec3,
fov: f64,
aspect_ratio: f64,
aperture: f64,
focus_distance: f64,
) -> Self {
let theta = fov.to_radians();
let h = f64::tan(theta / 2.0);
let view_height = 2.0 * h;
let view_width = aspect_ratio * view_height;
let w = (look_from - look_at).normalise();
let u = vup.cross(&w).normalise();
let v = w.cross(&u);
let origin = look_from;
let horizontal = view_width * u * focus_distance;
let vertical = -view_height * v * focus_distance;
let top_left: Point = origin - horizontal / 2.0 - vertical / 2.0 - w * focus_distance;
let lens_radius = aperture / 2.0;
Camera {
origin,
top_left,
horizontal,
vertical,
u,
v,
lens_radius,
}
}
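The `Camera` struct itself needs the new fields too; based on what the constructor returns, it would look something like this:
pub struct Camera {
    origin: Point,
    top_left: Point,
    horizontal: Vec3,
    vertical: Vec3,
    u: Vec3,
    v: Vec3,
    lens_radius: f64,
}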
And the updated `Camera::get_ray()`. Note how the parameters have been changed from `(u, v)` to `(s, t)`, because `u` and `v` now refer to the camera geometry instead of pixel positions.
pub fn get_ray(&self, s: f64, t: f64) -> Ray {
let rand = random_in_unit_circle() * self.lens_radius;
let origin = self.origin + self.u * rand.x + self.v * rand.y;
let px_position = self.top_left + s * self.horizontal + t * self.vertical;
//return the ray pointing at those pixels from camera origin
Ray::new(origin, px_position - origin)
}