mirror of https://github.com/rust-lang/book.git
synced 2026-04-02 07:03:37 -04:00

Propagate nostarch edits to src

committed by Carol (Nichols || Goulding)
parent 5550c34f85
commit 765318b844
@@ -9,18 +9,16 @@ Correctness in our programs is the extent to which our code does what we intend
it to do. Rust is designed with a high degree of concern about the correctness
of programs, but correctness is complex and not easy to prove. Rust’s type
system shoulders a huge part of this burden, but the type system cannot catch
every kind of incorrectness. As such, Rust includes support for writing
automated software tests within the language.
everything. As such, Rust includes support for writing automated software tests.

As an example, say we write a function called `add_two` that adds 2 to whatever
number is passed to it. This function’s signature accepts an integer as a
parameter and returns an integer as a result. When we implement and compile
that function, Rust does all the type checking and borrow checking that you’ve
learned so far to ensure that, for instance, we aren’t passing a `String` value
or an invalid reference to this function. But Rust *can’t* check that this
function will do precisely what we intend, which is return the parameter plus 2
rather than, say, the parameter plus 10 or the parameter minus 50! That’s where
tests come in.
Say we write a function `add_two` that adds 2 to whatever number is passed to
it. This function’s signature accepts an integer as a parameter and returns an
integer as a result. When we implement and compile that function, Rust does all
the type checking and borrow checking that you’ve learned so far to ensure
that, for instance, we aren’t passing a `String` value or an invalid reference
to this function. But Rust *can’t* check that this function will do precisely
what we intend, which is return the parameter plus 2 rather than, say, the
parameter plus 10 or the parameter minus 50! That’s where tests come in.

We can write tests that assert, for example, that when we pass `3` to the
`add_two` function, the returned value is `5`. We can run these tests whenever
@@ -19,21 +19,20 @@ attribute. Attributes are metadata about pieces of Rust code; one example is
the `derive` attribute we used with structs in Chapter 5. To change a function
into a test function, add `#[test]` on the line before `fn`. When you run your
tests with the `cargo test` command, Rust builds a test runner binary that runs
the functions annotated with the `test` attribute and reports on whether each
the annotated functions and reports on whether each
test function passes or fails.

When we make a new library project with Cargo, a test module with a test
function in it is automatically generated for us. This module helps you start
writing your tests so you don’t have to look up the exact structure and syntax
of test functions every time you start a new project. You can add as many
Whenever we make a new library project with Cargo, a test module with a test
function in it is automatically generated for us. This module gives you a
template for writing your tests so you don’t have to look up the exact
structure and syntax every time you start a new project. You can add as many
additional test functions and as many test modules as you want!

We’ll explore some aspects of how tests work by experimenting with the template
test generated for us without actually testing any code. Then we’ll write some
real-world tests that call some code that we’ve written and assert that its
behavior is correct.
test before we actually test any code. Then we’ll write some real-world tests
that call some code that we’ve written and assert that its behavior is correct.

Let’s create a new library project called `adder`:
Let’s create a new library project called `adder` that will add two numbers:

```console
$ cargo new adder --lib
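The generated listing itself is pulled in from the listings directory and not shown in this diff; for reference, the module that `cargo new --lib` places in *src/lib.rs* looks roughly like this (exact contents vary by Cargo version):

```rust
// Sketch of the test module `cargo new adder --lib` generates in src/lib.rs.
// Recent toolchains produce approximately this shape.
#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
        let result = 2 + 2;
        assert_eq!(result, 4);
    }
}
```

The `#[cfg(test)]` attribute keeps this module out of non-test builds; `cargo test` compiles and runs it.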
@@ -63,16 +62,16 @@ cd ../../..
<span class="caption">Listing 11-1: The test module and function generated
automatically by `cargo new`</span>

For now, let’s ignore the top two lines and focus on the function to see how it
works. Note the `#[test]` annotation before the `fn` line: this attribute
indicates this is a test function, so the test runner knows to treat this
function as a test. We could also have non-test functions in the `tests` module
to help set up common scenarios or perform common operations, so we need to
indicate which functions are tests by using the `#[test]` attribute.
For now, let’s ignore the top two lines and focus on the function. Note the
`#[test]` annotation: this attribute indicates this is a test function, so the
test runner knows to treat this function as a test. We might also have non-test
functions in the `tests` module to help set up common scenarios or perform
common operations, so we always need to indicate which functions are tests.

The function body uses the `assert_eq!` macro to assert that 2 + 2 equals 4.
This assertion serves as an example of the format for a typical test. Let’s run
it to see that this test passes.
The example function body uses the `assert_eq!` macro to assert that `result`,
which contains the result of adding 2 and 2, equals 4. This assertion serves as
an example of the format for a typical test. Let’s run it to see that this test
passes.

The `cargo test` command runs all tests in our project, as shown in Listing
11-2.
@@ -84,36 +83,35 @@ The `cargo test` command runs all tests in our project, as shown in Listing
<span class="caption">Listing 11-2: The output from running the automatically
generated test</span>

Cargo compiled and ran the test. After the `Compiling`, `Finished`, and
`Running` lines is the line `running 1 test`. The next line shows the name
of the generated test function, called `it_works`, and the result of running
that test, `ok`. The overall summary of running the tests appears next. The
text `test result: ok.` means that all the tests passed, and the portion that
reads `1 passed; 0 failed` totals the number of tests that passed or failed.
Cargo compiled and ran the test. We see the line `running 1 test`. The next
line shows the name of the generated test function, called `it_works`, and that
the result of running that test is `ok`. The overall summary `test result: ok.`
means that all the tests passed, and the portion that reads `1 passed; 0
failed` totals the number of tests that passed or failed.

Because we don’t have any tests we’ve marked as ignored, the summary shows `0
ignored`. We also haven’t filtered the tests being run, so the end of the
summary shows `0 filtered out`. We’ll talk about ignoring and filtering out
tests in the next section, [“Controlling How Tests Are
Run.”][controlling-how-tests-are-run]<!-- ignore -->
It's possible to mark a test as ignored so it doesn't run in a particular
instance; we'll cover that in the [“Ignoring Some Tests Unless Specifically
Requested”][ignoring]<!-- ignore --> section later in this chapter. Because we
haven't done that here, the summary shows `0 ignored`. We can also pass an
argument to the `cargo test` command to run only tests whose name matches a
string; this is called filtering and we'll cover that in the [“Running a Subset
of Tests by Name”][subset]<!-- ignore --> section. We also haven’t filtered the
tests being run, so the end of the summary shows `0 filtered out`.

The `0 measured` statistic is for benchmark tests that measure performance.
Benchmark tests are, as of this writing, only available in nightly Rust. See
[the documentation about benchmark tests][bench] to learn more.

[bench]: ../unstable-book/library-features/test.html

The next part of the test output, which starts with `Doc-tests adder`, is for
the results of any documentation tests. We don’t have any documentation tests
yet, but Rust can compile any code examples that appear in our API
documentation. This feature helps us keep our docs and our code in sync! We’ll
discuss how to write documentation tests in the [“Documentation Comments as
The next part of the test output starting at `Doc-tests adder` is for the
results of any documentation tests. We don’t have any documentation tests yet,
but Rust can compile any code examples that appear in our API documentation.
This feature helps keep your docs and your code in sync! We’ll discuss how to
write documentation tests in the [“Documentation Comments as
Tests”][doc-comments]<!-- ignore --> section of Chapter 14. For now, we’ll
ignore the `Doc-tests` output.

Let’s change the name of our test to see how that changes the test output.
Change the `it_works` function to a different name, such as `exploration`, like
so:
Let’s start to customize the test to our own needs. First change the name of
the `it_works` function to a different name, such as `exploration`, like so:

<span class="filename">Filename: src/lib.rs</span>
@@ -128,12 +126,12 @@ Then run `cargo test` again. The output now shows `exploration` instead of
{{#include ../listings/ch11-writing-automated-tests/no-listing-01-changing-test-name/output.txt}}
```

Let’s add another test, but this time we’ll make a test that fails! Tests fail
when something in the test function panics. Each test is run in a new thread,
and when the main thread sees that a test thread has died, the test is marked
as failed. We talked about the simplest way to cause a panic in Chapter 9,
which is to call the `panic!` macro. Enter the new test, `another`, so your
*src/lib.rs* file looks like Listing 11-3.
Now we'll add another test, but this time we’ll make a test that fails! Tests
fail when something in the test function panics. Each test is run in a new
thread, and when the main thread sees that a test thread has died, the test is
marked as failed. In Chapter 9, we talked about how the simplest way to panic
is to call the `panic!` macro. Enter the new test as a function named
`another`, so your *src/lib.rs* file looks like Listing 11-3.

<span class="filename">Filename: src/lib.rs</span>
@@ -156,17 +154,17 @@ test fails</span>

Instead of `ok`, the line `test tests::another` shows `FAILED`. Two new
sections appear between the individual results and the summary: the first
section displays the detailed reason for each test failure. In this case,
`another` failed because it `panicked at 'Make this test fail'`, which happened
on line 10 in the *src/lib.rs* file. The next section lists just the names of
all the failing tests, which is useful when there are lots of tests and lots of
displays the detailed reason for each test failure. In this case, we get the
details that `another` failed because it `panicked at 'Make this test fail'` on
line 10 in the *src/lib.rs* file. The next section lists just the names of all
the failing tests, which is useful when there are lots of tests and lots of
detailed failing test output. We can use the name of a failing test to run just
that test to more easily debug it; we’ll talk more about ways to run tests in
the [“Controlling How Tests Are Run”][controlling-how-tests-are-run]<!-- ignore
--> section.

The summary line displays at the end: overall, our test result is `FAILED`.
We had one test pass and one test fail.
The summary line displays at the end: overall, our test result is `FAILED`. We
had one test pass and one test fail.

Now that you’ve seen what the test results look like in different scenarios,
let’s look at some macros other than `panic!` that are useful in tests.
@@ -176,14 +174,13 @@ let’s look at some macros other than `panic!` that are useful in tests.

The `assert!` macro, provided by the standard library, is useful when you want
to ensure that some condition in a test evaluates to `true`. We give the
`assert!` macro an argument that evaluates to a Boolean. If the value is
`true`, `assert!` does nothing and the test passes. If the value is `false`,
the `assert!` macro calls the `panic!` macro, which causes the test to fail.
Using the `assert!` macro helps us check that our code is functioning in the
way we intend.
`true`, nothing happens and the test passes. If the value is `false`, the
`assert!` macro calls `panic!` to cause the test to fail. Using the `assert!`
macro helps us check that our code is functioning in the way we intend.

In Chapter 5, Listing 5-15, we used a `Rectangle` struct and a `can_hold`
method, which are repeated here in Listing 11-5. Let’s put this code in the
*src/lib.rs* file and write some tests for it using the `assert!` macro.
*src/lib.rs* file, then write some tests for it using the `assert!` macro.

<span class="filename">Filename: src/lib.rs</span>
@@ -220,8 +217,8 @@ a glob here so anything we define in the outer module is available to this

We’ve named our test `larger_can_hold_smaller`, and we’ve created the two
`Rectangle` instances that we need. Then we called the `assert!` macro and
passed it the result of calling `larger.can_hold(&smaller)`. This expression
is supposed to return `true`, so our test should pass. Let’s find out!
passed it the result of calling `larger.can_hold(&smaller)`. This expression is
supposed to return `true`, so our test should pass. Let’s find out!

```console
{{#include ../listings/ch11-writing-automated-tests/listing-11-06/output.txt}}
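The listing contents aren't rendered in this diff; a sketch of the `Rectangle` code and the `larger_can_hold_smaller` test, assuming the Chapter 5 field names `width` and `height`:

```rust
#[derive(Debug)]
pub struct Rectangle {
    width: u32,
    height: u32,
}

impl Rectangle {
    // A rectangle can hold another only if it is strictly wider and taller.
    pub fn can_hold(&self, other: &Rectangle) -> bool {
        self.width > other.width && self.height > other.height
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn larger_can_hold_smaller() {
        let larger = Rectangle { width: 8, height: 7 };
        let smaller = Rectangle { width: 5, height: 1 };
        assert!(larger.can_hold(&smaller));
    }
}
```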
@@ -245,8 +242,8 @@ result, our test will pass if `can_hold` returns `false`:
```

Two tests that pass! Now let’s see what happens to our test results when we
introduce a bug in our code. Let’s change the implementation of the `can_hold`
method by replacing the greater than sign with a less than sign when it
introduce a bug in our code. We’ll change the implementation of the `can_hold`
method by replacing the greater-than sign with a less-than sign when it
compares the widths:

```rust,not_desired_behavior,noplayground
@@ -265,20 +262,19 @@ less than 5.

### Testing Equality with the `assert_eq!` and `assert_ne!` Macros

A common way to test functionality is to compare the result of the code under
test to the value you expect the code to return to make sure they’re equal. You
could do this using the `assert!` macro and passing it an expression using the
`==` operator. However, this is such a common test that the standard library
A common way to verify functionality is to test for equality between the result
of the code under test and the value you expect the code to return. You could
do this using the `assert!` macro and passing it an expression using the `==`
operator. However, this is such a common test that the standard library
provides a pair of macros—`assert_eq!` and `assert_ne!`—to perform this test
more conveniently. These macros compare two arguments for equality or
inequality, respectively. They’ll also print the two values if the assertion
fails, which makes it easier to see *why* the test failed; conversely, the
`assert!` macro only indicates that it got a `false` value for the `==`
expression, not the values that led to the `false` value.
expression, without printing the values that led to the `false` value.

In Listing 11-7, we write a function named `add_two` that adds `2` to its
parameter and returns the result. Then we test this function using the
`assert_eq!` macro.
parameter, then we test this function using the `assert_eq!` macro.

<span class="filename">Filename: src/lib.rs</span>
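Listing 11-7 is pulled in by an include directive; a sketch of its shape, a function under test plus an `assert_eq!` test:

```rust
// The function under test: adds 2 to its parameter.
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_adds_two() {
        // assert_eq! compares its two arguments with ==.
        assert_eq!(4, add_two(2));
    }
}
```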
@@ -295,13 +291,12 @@ Let’s check that it passes!
{{#include ../listings/ch11-writing-automated-tests/listing-11-07/output.txt}}
```

The first argument we gave to the `assert_eq!` macro, `4`, is equal to the
result of calling `add_two(2)`. The line for this test is `test
tests::it_adds_two ... ok`, and the `ok` text indicates that our test passed!
We pass `4` as the argument to `assert_eq!`, which is equal to the result of
calling `add_two(2)`. The line for this test is `test tests::it_adds_two ...
ok`, and the `ok` text indicates that our test passed!

Let’s introduce a bug into our code to see what it looks like when a test that
uses `assert_eq!` fails. Change the implementation of the `add_two` function to
instead add `3`:
Let’s introduce a bug into our code to see what `assert_eq!` looks like when it
fails. Change the implementation of the `add_two` function to instead add `3`:

```rust,not_desired_behavior,noplayground
{{#rustdoc_include ../listings/ch11-writing-automated-tests/no-listing-04-bug-in-add-two/src/lib.rs:here}}
@@ -313,53 +308,52 @@ Run the tests again:
{{#include ../listings/ch11-writing-automated-tests/no-listing-04-bug-in-add-two/output.txt}}
```

Our test caught the bug! The `it_adds_two` test failed, displaying the message
`` assertion failed: `(left == right)` `` and showing that `left` was `4` and
`right` was `5`. This message is useful and helps us start debugging: it means
the `left` argument to `assert_eq!` was `4` but the `right` argument, where we
had `add_two(2)`, was `5`.
Our test caught the bug! The `it_adds_two` test failed, and the message tells
us that the assertion that fails was `` assertion failed: `(left == right)` ``
and what the `left` and `right` values are. This message helps us start
debugging: the `left` argument was `4` but the `right` argument, where we had
`add_two(2)`, was `5`. You can imagine that this would be especially helpful
when we have a lot of tests going on.

Note that in some languages and test frameworks, the parameters to the
functions that assert two values are equal are called `expected` and `actual`,
and the order in which we specify the arguments matters. However, in Rust,
they’re called `left` and `right`, and the order in which we specify the value
we expect and the value that the code under test produces doesn’t matter. We
could write the assertion in this test as `assert_eq!(add_two(2), 4)`, which
would result in a failure message that displays `` assertion failed: `(left ==
right)` `` and that `left` was `5` and `right` was `4`.
Note that in some languages and test frameworks, the parameters to equality
assertion functions are called `expected` and `actual`, and the order in which
we specify the arguments matters. However, in Rust, they’re called `left` and
`right`, and the order in which we specify the value we expect and the value
the code produces doesn’t matter. We could write the assertion in this test as
`assert_eq!(add_two(2), 4)`, which would result in the same failure message
that displays `` assertion failed: `(left == right)` ``.

The `assert_ne!` macro will pass if the two values we give it are not equal and
fail if they’re equal. This macro is most useful for cases when we’re not sure
what a value *will* be, but we know what the value definitely *won’t* be if our
code is functioning as we intend. For example, if we’re testing a function that
is guaranteed to change its input in some way, but the way in which the input
is changed depends on the day of the week that we run our tests, the best thing
to assert might be that the output of the function is not equal to the input.
what a value *will* be, but we know what the value definitely *shouldn’t* be.
For example, if we’re testing a function that is guaranteed to change its input
in some way, but the way in which the input is changed depends on the day of
the week that we run our tests, the best thing to assert might be that the
output of the function is not equal to the input.

Under the surface, the `assert_eq!` and `assert_ne!` macros use the operators
`==` and `!=`, respectively. When the assertions fail, these macros print their
arguments using debug formatting, which means the values being compared must
implement the `PartialEq` and `Debug` traits. All the primitive types and most
of the standard library types implement these traits. For structs and enums
that you define, you’ll need to implement `PartialEq` to assert that values of
those types are equal or not equal. You’ll need to implement `Debug` to print
the values when the assertion fails. Because both traits are derivable traits,
as mentioned in Listing 5-12 in Chapter 5, this is usually as straightforward
as adding the `#[derive(PartialEq, Debug)]` annotation to your struct or enum
definition. See Appendix C, [“Derivable Traits,”][derivable-traits]<!-- ignore
--> for more details about these and other derivable traits.
implement the `PartialEq` and `Debug` traits. All primitive types and most of
the standard library types implement these traits. For structs and enums that
you define yourself, you’ll need to implement `PartialEq` to assert equality of
those types. You’ll also need to implement `Debug` to print the values when the
assertion fails. Because both traits are derivable traits, as mentioned in
Listing 5-12 in Chapter 5, this is usually as straightforward as adding the
`#[derive(PartialEq, Debug)]` annotation to your struct or enum definition. See
Appendix C, [“Derivable Traits,”][derivable-traits]<!-- ignore --> for more
details about these and other derivable traits.

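As a quick illustration of the derive step described above (the `Point` type here is hypothetical, not from the book):

```rust
// Deriving PartialEq (so == works) and Debug (so failing assertions can
// print the values) lets a custom type be used with assert_eq!/assert_ne!.
#[derive(PartialEq, Debug)]
struct Point {
    x: i32,
    y: i32,
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn points_compare() {
        assert_eq!(Point { x: 1, y: 2 }, Point { x: 1, y: 2 });
        assert_ne!(Point { x: 1, y: 2 }, Point { x: 3, y: 4 });
    }
}
```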

### Adding Custom Failure Messages

You can also add a custom message to be printed with the failure message as
optional arguments to the `assert!`, `assert_eq!`, and `assert_ne!` macros. Any
arguments specified after the one required argument to `assert!` or the two
required arguments to `assert_eq!` and `assert_ne!` are passed along to the
arguments specified after the required arguments are passed along to the
`format!` macro (discussed in Chapter 8 in the [“Concatenation with the `+`
Operator or the `format!`
Macro”][concatenation-with-the--operator-or-the-format-macro]<!-- ignore -->
section), so you can pass a format string that contains `{}` placeholders and
values to go in those placeholders. Custom messages are useful to document
values to go in those placeholders. Custom messages are useful for documenting
what an assertion means; when a test fails, you’ll have a better idea of what
the problem is with the code.
@@ -379,8 +373,8 @@ so instead of checking for exact equality to the value returned from the
`greeting` function, we’ll just assert that the output contains the text of the
input parameter.

Let’s introduce a bug into this code by changing `greeting` to not include
`name` to see what this test failure looks like:
Now let’s introduce a bug into this code by changing `greeting` to exclude
`name` to see what the default test failure looks like:

```rust,not_desired_behavior,noplayground
{{#rustdoc_include ../listings/ch11-writing-automated-tests/no-listing-06-greeter-with-bug/src/lib.rs:here}}
@@ -393,10 +387,10 @@ Running this test produces the following:
```

This result just indicates that the assertion failed and which line the
assertion is on. A more useful failure message in this case would print the
value we got from the `greeting` function. Let’s change the test function,
giving it a custom failure message made from a format string with a placeholder
filled in with the actual value we got from the `greeting` function:
assertion is on. A more useful failure message would print the value from the
`greeting` function. Let’s add a custom failure message composed of a format
string with a placeholder filled in with the actual value we got from the
`greeting` function:

```rust,ignore
{{#rustdoc_include ../listings/ch11-writing-automated-tests/no-listing-07-custom-failure-message/src/lib.rs:here}}
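The included listing isn't rendered in this diff; a sketch of what the custom-message version of the test looks like:

```rust
pub fn greeting(name: &str) -> String {
    format!("Hello {}!", name)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn greeting_contains_name() {
        let result = greeting("Carol");
        // Arguments after assert!'s required one go to format!, so the
        // actual value shows up in the failure message.
        assert!(
            result.contains("Carol"),
            "Greeting did not contain name, value was `{}`",
            result
        );
    }
}
```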
@@ -413,17 +407,16 @@ debug what happened instead of what we were expecting to happen.

### Checking for Panics with `should_panic`

In addition to checking that our code returns the correct values we expect,
it’s also important to check that our code handles error conditions as we
expect. For example, consider the `Guess` type that we created in Chapter 9,
Listing 9-13. Other code that uses `Guess` depends on the guarantee that `Guess`
instances will contain only values between 1 and 100. We can write a test that
ensures that attempting to create a `Guess` instance with a value outside that
range panics.
In addition to checking return values, it’s important to check that our code
handles error conditions as we expect. For example, consider the `Guess` type
that we created in Chapter 9, Listing 9-13. Other code that uses `Guess`
depends on the guarantee that `Guess` instances will contain only values
between 1 and 100. We can write a test that ensures that attempting to create a
`Guess` instance with a value outside that range panics.

We do this by adding another attribute, `should_panic`, to our test function.
This attribute makes a test pass if the code inside the function panics; the
test will fail if the code inside the function doesn’t panic.
We do this by adding the attribute `should_panic` to our test function. The
test passes if the code inside the function panics; the test fails if the code
inside the function doesn’t panic.

Listing 11-8 shows a test that checks that the error conditions of `Guess::new`
happen when we expect them to.
@@ -462,14 +455,14 @@ We don’t get a very helpful message in this case, but when we look at the test
function, we see that it’s annotated with `#[should_panic]`. The failure we got
means that the code in the test function did not cause a panic.

Tests that use `should_panic` can be imprecise because they only indicate that
the code has caused some panic. A `should_panic` test would pass even if the
test panics for a different reason from the one we were expecting to happen. To
make `should_panic` tests more precise, we can add an optional `expected`
parameter to the `should_panic` attribute. The test harness will make sure that
the failure message contains the provided text. For example, consider the
modified code for `Guess` in Listing 11-9 where the `new` function panics with
different messages depending on whether the value is too small or too large.
Tests that use `should_panic` can be imprecise. A `should_panic` test would
pass even if the test panics for a different reason from the one we were
expecting. To make `should_panic` tests more precise, we can add an optional
`expected` parameter to the `should_panic` attribute. The test harness will
make sure that the failure message contains the provided text. For example,
consider the modified code for `Guess` in Listing 11-9 where the `new` function
panics with different messages depending on whether the value is too small or
too large.

<span class="filename">Filename: src/lib.rs</span>
@@ -477,18 +470,17 @@ different messages depending on whether the value is too small or too large.
{{#rustdoc_include ../listings/ch11-writing-automated-tests/listing-11-09/src/lib.rs:here}}
```

<span class="caption">Listing 11-9: Testing that a condition will cause a
`panic!` with a particular panic message</span>
<span class="caption">Listing 11-9: Testing for a `panic!` with a particular
panic message</span>

This test will pass because the value we put in the `should_panic` attribute’s
`expected` parameter is a substring of the message that the `Guess::new`
function panics with. We could have specified the entire panic message that we
expect, which in this case would be `Guess value must be less than or equal to
100, got 200.` What you choose to specify in the expected parameter for
`should_panic` depends on how much of the panic message is unique or dynamic
and how precise you want your test to be. In this case, a substring of the
panic message is enough to ensure that the code in the test function executes
the `else if value > 100` case.
100, got 200.` What you choose to specify depends on how much of the panic
message is unique or dynamic and how precise you want your test to be. In this
case, a substring of the panic message is enough to ensure that the code in the
test function executes the `else if value > 100` case.

To see what happens when a `should_panic` test with an `expected` message
fails, let’s again introduce a bug into our code by swapping the bodies of the
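The Listing 11-9 code is pulled in from the listings directory; a sketch of the shape it takes, with the `expected` string matching a substring of the too-large panic message:

```rust
pub struct Guess {
    value: i32,
}

impl Guess {
    pub fn new(value: i32) -> Guess {
        // Distinct messages for the two error conditions, so a should_panic
        // test with `expected` can tell them apart.
        if value < 1 {
            panic!(
                "Guess value must be greater than or equal to 1, got {}.",
                value
            );
        } else if value > 100 {
            panic!(
                "Guess value must be less than or equal to 100, got {}.",
                value
            );
        }
        Guess { value }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    #[should_panic(expected = "less than or equal to 100")]
    fn greater_than_100() {
        Guess::new(200);
    }
}
```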
@@ -512,15 +504,15 @@ figuring out where our bug is!

### Using `Result<T, E>` in Tests

So far, we’ve written tests that panic when they fail. We can also write tests
that use `Result<T, E>`! Here’s the test from Listing 11-1, rewritten to use
`Result<T, E>` and return an `Err` instead of panicking:
Our tests so far all panic when they fail. We can also write tests that use
`Result<T, E>`! Here’s the test from Listing 11-1, rewritten to use `Result<T,
E>` and return an `Err` instead of panicking:

```rust,noplayground
{{#rustdoc_include ../listings/ch11-writing-automated-tests/no-listing-10-result-in-tests/src/lib.rs}}
```

The `it_works` function now has a return type, `Result<(), String>`. In the
The `it_works` function now has the `Result<(), String>` return type. In the
body of the function, rather than calling the `assert_eq!` macro, we return
`Ok(())` when the test passes and an `Err` with a `String` inside when the test
fails.
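The included listing isn't rendered in this diff; the `Result`-returning rewrite looks roughly like this:

```rust
#[cfg(test)]
mod tests {
    #[test]
    fn it_works() -> Result<(), String> {
        let result = 2 + 2;
        // Return Ok(()) to pass, or Err to fail, instead of panicking.
        if result == 4 {
            Ok(())
        } else {
            Err(String::from("two plus two does not equal four"))
        }
    }
}
```

Returning `Result` lets the test body use the `?` operator on fallible operations.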
@@ -540,6 +532,9 @@ test`.

[concatenation-with-the--operator-or-the-format-macro]:
ch08-02-strings.html#concatenation-with-the--operator-or-the-format-macro
[bench]: ../unstable-book/library-features/test.html
[ignoring]: ch11-02-running-tests.html#ignoring-some-tests-unless-specifically-requested
[subset]: ch11-02-running-tests.html#running-a-subset-of-tests-by-name
[controlling-how-tests-are-run]:
ch11-02-running-tests.html#controlling-how-tests-are-run
[derivable-traits]: appendix-03-derivable-traits.html
@@ -2,37 +2,36 @@

Just as `cargo run` compiles your code and then runs the resulting binary,
`cargo test` compiles your code in test mode and runs the resulting test
binary. You can specify command line options to change the default behavior of
`cargo test`. For example, the default behavior of the binary produced by
`cargo test` is to run all the tests in parallel and capture output generated
during test runs, preventing the output from being displayed and making it
easier to read the output related to the test results.
binary. The default behavior of the binary produced by `cargo test` is to run
all the tests in parallel and capture output generated during test runs,
preventing the output from being displayed and making it easier to read the
output related to the test results. You can, however, specify command line
options to change this default behavior.

Some command line options go to `cargo test`, and some go to the resulting test
binary. To separate these two types of arguments, you list the arguments that
go to `cargo test` followed by the separator `--` and then the ones that go to
the test binary. Running `cargo test --help` displays the options you can use
with `cargo test`, and running `cargo test -- --help` displays the options you
can use after the separator `--`.
can use after the separator.

### Running Tests in Parallel or Consecutively

When you run multiple tests, by default they run in parallel using threads.
This means the tests will finish running faster so you can get feedback quicker
on whether or not your code is working. Because the tests are running at the
same time, make sure your tests don’t depend on each other or on any shared
state, including a shared environment, such as the current working directory or
environment variables.
When you run multiple tests, by default they run in parallel using threads,
meaning they finish running faster and you get feedback quicker. Because the
tests are running at the same time, you must make sure your tests don’t depend
on each other or on any shared state, including a shared environment, such as
the current working directory or environment variables.

For example, say each of your tests runs some code that creates a file on disk
named *test-output.txt* and writes some data to that file. Then each test reads
the data in that file and asserts that the file contains a particular value,
which is different in each test. Because the tests run at the same time, one
test might overwrite the file between when another test writes and reads the
file. The second test will then fail, not because the code is incorrect but
because the tests have interfered with each other while running in parallel.
One solution is to make sure each test writes to a different file; another
solution is to run the tests one at a time.
test might overwrite the file in the time between another test writing and
reading the file. The second test will then fail, not because the code is
incorrect but because the tests have interfered with each other while running
in parallel. One solution is to make sure each test writes to a different file;
another solution is to run the tests one at a time.

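The first solution mentioned above can be sketched as follows; the `write_then_read` helper and the file-name scheme are our own illustration, not code from the book:

```rust
use std::fs;

// Derive each file name from the test's own name so that tests running in
// parallel never touch the same path on disk.
fn write_then_read(test_name: &str, data: &str) -> String {
    let path = std::env::temp_dir().join(format!("test-output-{}.txt", test_name));
    fs::write(&path, data).expect("failed to write test file");
    let contents = fs::read_to_string(&path).expect("failed to read test file");
    let _ = fs::remove_file(&path); // best-effort cleanup
    contents
}

#[cfg(test)]
mod tests {
    use super::*;

    // These two tests can safely run in parallel: each touches a distinct file.
    #[test]
    fn writes_alpha() {
        assert_eq!(write_then_read("writes_alpha", "alpha"), "alpha");
    }

    #[test]
    fn writes_beta() {
        assert_eq!(write_then_read("writes_beta", "beta"), "beta");
    }
}
```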
If you don’t want to run the tests in parallel or if you want more fine-grained
control over the number of threads used, you can send the `--test-threads` flag
@@ -100,8 +99,8 @@ code in a particular area, you might want to run only the tests pertaining to
that code. You can choose which tests to run by passing `cargo test` the name
or names of the test(s) you want to run as an argument.

To demonstrate how to run a subset of tests, we’ll create three tests for our
`add_two` function, as shown in Listing 11-11, and choose which ones to run.
To demonstrate how to run a subset of tests, we’ll first create three tests for
our `add_two` function, as shown in Listing 11-11, and choose which ones to run.

<span class="filename">Filename: src/lib.rs</span>

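Listing 11-11 itself is not reproduced in this hunk; a sketch consistent with the test names discussed below (`one_hundred` appears in the output later; the other two names are assumptions) would look like:

```rust
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn add_two_and_two() {
        assert_eq!(4, add_two(2));
    }

    #[test]
    fn add_three_and_two() {
        assert_eq!(5, add_two(3));
    }

    // Running `cargo test one_hundred` selects only this test by name.
    #[test]
    fn one_hundred() {
        assert_eq!(102, add_two(100));
    }
}
```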
@@ -128,8 +127,8 @@ We can pass the name of any test function to `cargo test` to run only that test:
```

Only the test with the name `one_hundred` ran; the other two tests didn’t match
that name. The test output lets us know we had more tests than what this
command ran by displaying `2 filtered out` at the end of the summary line.
that name. The test output lets us know we had more tests that didn’t run by
displaying `2 filtered out` at the end.

We can’t specify the names of multiple tests in this way; only the first value
given to `cargo test` will be used. But there is a way to run multiple tests.

@@ -2,12 +2,12 @@

As mentioned at the start of the chapter, testing is a complex discipline, and
different people use different terminology and organization. The Rust community
thinks about tests in terms of two main categories: *unit tests* and
*integration tests*. Unit tests are small and more focused, testing one module
in isolation at a time, and can test private interfaces. Integration tests are
entirely external to your library and use your code in the same way any other
external code would, using only the public interface and potentially exercising
multiple modules per test.
thinks about tests in terms of two main categories: unit tests and integration
tests. *Unit tests* are small and more focused, testing one module in isolation
at a time, and can test private interfaces. *Integration tests* are entirely
external to your library and use your code in the same way any other external
code would, using only the public interface and potentially exercising multiple
modules per test.

Writing both kinds of tests is important to ensure that the pieces of your
library are doing what you expect them to, separately and together.
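A minimal sketch of the unit-test side of this distinction, likely close to the `internal_adder`/`internal` example referenced later as Listing 11-12 (exact listing not shown in this hunk): the test lives in the same file as the code and can call a private function.

```rust
// Private function: invisible to integration tests, but reachable from a
// unit test module in the same file.
fn internal_adder(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    // Compiled only when running `cargo test`, thanks to #[cfg(test)].
    #[test]
    fn internal() {
        assert_eq!(internal_adder(2, 2), 4);
    }
}
```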
@@ -89,8 +89,8 @@ tests, you first need a *tests* directory.

We create a *tests* directory at the top level of our project directory, next
to *src*. Cargo knows to look for integration test files in this directory. We
can then make as many test files as we want to in this directory, and Cargo
will compile each of the files as an individual crate.
can then make as many test files as we want, and Cargo will compile each of the
files as an individual crate.

Let’s create an integration test. With the code in Listing 11-12 still in the
*src/lib.rs* file, make a *tests* directory, create a new file named
@@ -105,9 +105,9 @@ Let’s create an integration test. With the code in Listing 11-12 still in the
<span class="caption">Listing 11-13: An integration test of a function in the
`adder` crate</span>

We’ve added `use adder` at the top of the code, which we didn’t need in the
unit tests. The reason is that each file in the `tests` directory is a separate
crate, so we need to bring our library into each test crate’s scope.
Each file in the `tests` directory is a separate crate, so we need to bring our
library into each test crate’s scope. For that reason we add `use adder` at the
top of the code, which we didn’t need in the unit tests.

We don’t need to annotate any code in *tests/integration_test.rs* with
`#[cfg(test)]`. Cargo treats the `tests` directory specially and compiles files
@@ -123,15 +123,11 @@ seeing: one line for each unit test (one named `internal` that we added in
Listing 11-12) and then a summary line for the unit tests.

The integration tests section starts with the line `Running
target/debug/deps/integration_test-1082c4b063a8fbe6` (the hash at the end of
your output will be different). Next, there is a line for each test function in
tests/integration_test.rs`. Next, there is a line for each test function in
that integration test and a summary line for the results of the integration
test just before the `Doc-tests adder` section starts.

Similarly to how adding more unit test functions adds more result lines to the
unit tests section, adding more test functions to the integration test file
adds more result lines to this integration test file’s section. Each
integration test file has its own section, so if we add more files in the
Each integration test file has its own section, so if we add more files in the
*tests* directory, there will be more integration test sections.

We can still run a particular integration test function by specifying the test
@@ -147,25 +143,22 @@ This command runs only the tests in the *tests/integration_test.rs* file.

#### Submodules in Integration Tests

As you add more integration tests, you might want to make more than one file in
the *tests* directory to help organize them; for example, you can group the
test functions by the functionality they’re testing. As mentioned earlier, each
file in the *tests* directory is compiled as its own separate crate.
As you add more integration tests, you might want to make more files in the
*tests* directory to help organize them; for example, you can group the test
functions by the functionality they’re testing. As mentioned earlier, each file
in the *tests* directory is compiled as its own separate crate, which is useful
for creating separate scopes to more closely imitate the way end users will be
using your crate. However, this means files in the *tests* directory don’t
share the same behavior as files in *src* do, as you learned in Chapter 7
regarding how to separate code into modules and files.

Treating each integration test file as its own crate is useful to create
separate scopes that are more like the way end users will be using your crate.
However, this means files in the *tests* directory don’t share the same
behavior as files in *src* do, as you learned in Chapter 7 regarding how to
separate code into modules and files.

The different behavior of files in the *tests* directory is most noticeable
when you have a set of helper functions that would be useful in multiple
integration test files and you try to follow the steps in the [“Separating
Modules into Different Files”][separating-modules-into-files]<!-- ignore -->
section of Chapter 7 to extract them into a common module. For example, if we
create *tests/common.rs* and place a function named `setup` in it, we can add
some code to `setup` that we want to call from multiple test functions in
multiple test files:
The different behavior of *tests* directory files is most noticeable when you
have a set of helper functions to use in multiple integration test files and
you try to follow the steps in the [“Separating Modules into Different
Files”][separating-modules-into-files]<!-- ignore --> section of Chapter 7 to
extract them into a common module. For example, if we create *tests/common.rs*
and place a function named `setup` in it, we can add some code to `setup` that
we want to call from multiple test functions in multiple test files:

<span class="filename">Filename: tests/common.rs</span>
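The *tests/common.rs* listing itself is not shown here; a self-contained sketch of the shared `setup` idea follows, with the module inlined into one file for illustration (in a real project the `common` module would live in *tests/common.rs* or *tests/common/mod.rs* and each test file would declare `mod common;`; the `Vec<i32>` payload is a placeholder assumption):

```rust
// Stand-in for tests/common.rs: helpers shared by many integration tests.
mod common {
    // Hypothetical shared setup; real contents depend on the crate under test.
    pub fn setup() -> Vec<i32> {
        vec![1, 2, 3]
    }
}

#[cfg(test)]
mod tests {
    use super::common;

    // Each test calls the shared helper instead of duplicating setup code.
    #[test]
    fn uses_shared_setup() {
        let data = common::setup();
        assert_eq!(data, vec![1, 2, 3]);
    }
}
```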