See the "Requires to run" section for prerequisites.
In the root of the curl repository:
./configure && make && make test
To run a specific set of tests (e.g. 303 and 410):
make test TFLAGS="303 410"
To run the tests faster, pass the -j (parallelism) flag:
make test TFLAGS="-j10"
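Since the whole TFLAGS value is handed over to the test runner, flags and test numbers can be combined in one invocation:

```
make test TFLAGS="-j10 303 410"
```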
"make test" builds the test suite support code and invokes the 'runtests.pl'
perl script to run all the tests. The value of TFLAGS
is passed
directly to 'runtests.pl'.
When you run tests via make, the flags -a
and -s
are passed, meaning
to continue running tests even after one fails, and to emit short output.
If you'd like to not use those flags, you can run 'runtests.pl' directly.
You must chdir
into the tests directory, then you can run it like so:
./runtests.pl 303 410
You must have run make test
at least once first to build the support code.
To see what flags are available for runtests.pl, and what output it emits, run:
man ./tests/runtests.1
After a test fails, examine the tests/log directory for stdout, stderr, and output from the servers used in the test.
The test suite needs an available `en_US.UTF-8` locale.

The Python-based test servers support both recent Python 2 and Python 3. You can figure out your default Python interpreter with `python -V`.
Please install the Python package 'impacket' in the correct Python environment. You can use pip or your OS' package manager to install it; on Debian/Ubuntu and FreeBSD, distribution packages are also available. On any system where pip is available:

    pip3 install impacket

You may also need to manually install the Python package 'six', as that may be a missing requirement for impacket on Python 3.
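To check whether impacket is visible to the interpreter the tests will use, a plain Python one-liner (not part of the test suite) does the job:

```shell
# Report whether the 'impacket' module is importable by the default
# Python 3 interpreter; prints one of two fixed strings.
python3 -c "import importlib.util as u, sys; sys.stdout.write('impacket OK' if u.find_spec('impacket') else 'impacket missing')"
```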
All test servers run on "random" port numbers. All tests should be written to use suitable variables instead of fixed port numbers so that test cases continue to work independently of which port numbers the test servers actually use. See FILEFORMAT for the port number variables.
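For illustration, here is a sketch of a test's protocol verification referencing the server address through variables instead of a hard-coded port. `%HOSTIP` and `%HTTPPORT` are variables documented in FILEFORMAT; the request itself is made up:

```
<verify>
<protocol>
GET /we/want/1 HTTP/1.1
Host: %HOSTIP:%HTTPPORT
Accept: */*

</protocol>
</verify>
```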
The test suite runs stand-alone servers on random ports to which it makes requests. For SSL tests, it runs stunnel to handle encryption to the regular servers. For SSH, it runs a standard OpenSSH server.
The listen port numbers for the test servers are picked randomly to allow users to run multiple test cases concurrently and to not collide with other existing services that might listen to ports on the machine.
The HTTP server supports listening on a Unix domain socket; the default location is 'http.sock'.
For HTTP/2 and HTTP/3 testing an installed `nghttpx` is used. HTTP/3 tests check if nghttpx supports the protocol. To override the nghttpx used, set the environment variable `NGHTTPX`. The default can also be changed by specifying `--with-test-nghttpx=<path>` as an argument to `configure`.
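For example (the path below is a placeholder for wherever your nghttpx build lives):

```
# point a test run at a specific nghttpx binary:
NGHTTPX=/path/to/nghttpx make test
# or set the default at configure time:
./configure --with-test-nghttpx=/path/to/nghttpx
```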
Tests which use the ssh test server (the SCP/SFTP tests) might be badly influenced by the output of system-wide or user-specific shell startup scripts (.bashrc, .profile, /etc/csh.cshrc, .login, /etc/bashrc, etc.) that print text messages or escape sequences on user login. When such messages or escape sequences are emitted, they can corrupt the expected stream of data flowing to the sftp-server or from the ssh client, which can result in bad test behavior or even prevent the test server from running.

If the test suite ssh or sftp server fails to start up and logs the message 'Received message too long', then you are almost certainly suffering from the unwanted output of a shell startup script. Locate, clean up, or adjust the offending script.
The test script will check that all allocated memory is freed properly IF curl has been built with the `CURLDEBUG` define set. The script will automatically detect if that is the case, and it will use the `memanalyze.pl` script to analyze the memory debugging output.

Also, if you run tests on a machine where valgrind is found, the script will use valgrind to run the test (unless you use `-n`) to further verify correctness.
The `runtests.pl` `-t` option enables torture testing mode. It runs each test many times and makes each different memory allocation fail on each successive run. This tests the out of memory error handling code to ensure that memory leaks do not occur even in those situations. It can help to compile curl with `CPPFLAGS=-DMEMDEBUG_LOG_SYNC` when using this option, to ensure that the memory log file is properly written even if curl crashes.
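A torture run of a single test could then look like this (test number 303 is just the example used elsewhere in this document; `--enable-debug` enables the memory debugging that torture mode relies on):

```
./configure --enable-debug CPPFLAGS=-DMEMDEBUG_LOG_SYNC && make
cd tests && ./runtests.pl -t 303
```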
If a test case fails, you can conveniently get the script to invoke the debugger (gdb) for you with the server running and the same command line parameters that failed. Just invoke `runtests.pl <test number> -g` and then type 'run' in the debugger to perform the command through the debugger.
All logs are generated in the log/ subdirectory (it is emptied first in the runtests.pl script). They remain in there after a test run.
A curl build with `--enable-debug` offers more verbose output in the logs. This applies not only to test cases, but also when running curl standalone with `curl -v`. While a curl debug build is not suitable for production, it is often helpful in tracking down problems.
Sometimes one needs detailed logging of operations but does not want to drown in output. The newly introduced connection filters allow one to dynamically increase log verbosity for a particular filter type. For example:

    CURL_DEBUG=ssl curl -v https://curl.se

will make the `ssl` connection filter log more details. One may do that for every filter type and also use a combination of names, separated by `,` or space:

    CURL_DEBUG=ssl,http/2 curl -v https://curl.se
The order of filter type names is not relevant. Names used here are case insensitive. Note that these names are implementation internals and subject to change.
Some likely stable names are `tcp`, `ssl` and `http/2`. For a current list, one may search the sources for `struct Curl_cftype` definitions and find the names there. Also, some filters are only available with certain build options, of course.
All test cases are put in the `data/` subdirectory. Each test is stored in the file named according to the test number. See FILEFORMAT for a description of the test case file format.
gcc provides a tool that can determine the code coverage figures for the test suite. To use it, configure curl with `CFLAGS='-fprofile-arcs -ftest-coverage -g -O0'`. Make sure you run both the normal and the torture tests to get fuller coverage, i.e. do:

    make test
    make test-torture

The graphical tool `ggcov` can be used to browse the source and create coverage reports on *nix hosts:

    ggcov -r lib src

The text mode tool `gcov` may also be used, but it does not handle object files in more than one directory correctly.
The runtests.pl script provides some hooks to allow curl to be tested on a machine where perl cannot be run. The test framework in this case runs on a workstation where perl is available, while curl itself is run on a remote system using ssh or some other remote execution method. See the comments at the beginning of runtests.pl for details.
Test cases used to be numbered by category ranges, but the ranges filled up. Subsets of tests can now be selected by passing keywords to the runtests.pl script via the make `TFLAGS` variable.

New tests are added by finding a free number in `tests/data/Makefile.inc`.
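For example, runtests.pl can list the existing test cases (numbers and names) with `-l`, and a keyword can then be passed through TFLAGS to select a subset (the keyword below is just an example):

```
cd tests && ./runtests.pl -l
make test TFLAGS="HTTP"
```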
Here is a quick description of writing test cases. We basically have three kinds of tests: the ones that test the curl tool, the ones that build small applications and test libcurl directly, and the unit tests that test individual (possibly internal) functions.

Each test has a master file that controls all the test data: what to read, what the protocol exchange should look like, what exit code to expect, what command line arguments to use, etc.

These files are `tests/data/test[num]` where `[num]` is just a unique identifier described above, and the XML-like file format of them is described in the separate FILEFORMAT document.
A curl test case runs the curl tool and verifies that it gets the correct data, that it sends the correct data, that it uses the correct protocol primitives, etc.
The libcurl tests are identical to the curl ones, except that they use a specific and dedicated custom-built program to run instead of "curl". This tool is built from source code placed in `tests/libtest`, and if you want to make a new libcurl test, that is where you add your code.
Unit tests are placed in `tests/unit`. There is a `tests/unit/README` describing the specific set of checks and macros that may be used when writing tests that verify behaviors of specific individual functions.

The unit tests depend on curl being built with debug enabled.
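As a rough sketch of the shape such a file takes (based on the harness macros that `tests/unit/README` describes; this fragment only compiles inside the curl tree, and the checked expression is a placeholder, not a real libcurl call):

```c
/* curlcheck.h supplies the UNITTEST_START/UNITTEST_STOP harness and
 * fail_unless(); unit_setup()/unit_stop() run around the test body. */
#include "curlcheck.h"

static CURLcode unit_setup(void)
{
  return CURLE_OK; /* allocate fixtures here, if any */
}

static void unit_stop(void)
{
  /* free fixtures here */
}

UNITTEST_START
{
  int result = 0; /* placeholder for a call to the function under test */
  fail_unless(result == 0, "expected the function to succeed");
}
UNITTEST_STOP
```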