# Guide to Testing

## Prerequisites

Before we get into writing tests, please make sure you have the pre-commit hooks for the styling tools set up, so CI won't fail on those.

Instructions here
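
If the linked instructions boil down to the usual pre-commit flow, the setup is typically something like this (a sketch, assuming the repo ships a .pre-commit-config.yaml):

```bash
pip install pre-commit
pre-commit install  # install the git hooks defined in .pre-commit-config.yaml
```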

## Writing Tests for Press

Writing tests involves running them locally (duh). So let's get that set up. (You'll only have to do this once.)

### Make a test site

Tests can leave fake records behind, which will pollute your local setup. So, get yourself a dedicated test site. You can get these commands from the CI workflow file too, but I'll save you some time. You can name the site and set the passwords to whatever.

```bash
bench new-site --db-root-password admin --admin-password admin test_site
bench --site test_site install-app press
bench --site test_site add-to-hosts # in case you wanna call APIs
bench --site test_site set-config allow_tests true
```

Finally, you need to start bench, as some of the tests may trigger background jobs, which would fail if background workers aren't running:

```bash
bench start
```

As you write tests, you'll want to wipe the test data from your test site from time to time. So, here ya go:

```bash
bench --site test_site reinstall --yes
```

### Writing tests

This is the hard part. Because of Press's dependence on the outside world, it's hard to isolate unit tests to this project. Regardless, it's still possible with plain old Python and its built-in libraries.

The majority of this is done with the help of Python's unittest.mock library. We use it to mock the parts of the code that reference things outside Press's control.

Eg: We can mock all Agent Job creation calls by decorating the TestCase class like so:

```python
@patch.object(AgentJob, "enqueue_http_request", new=Mock())
class TestSite(unittest.TestCase):
    ...
```

We use the patch.object decorator here so that every instance of AgentJob has its enqueue_http_request method replaced by whatever we pass in the new argument. In this case that's Mock(), which does nothing; you can think of it as a pass. But it has other uses, as you'll find if you keep reading.

Note: Class decorators aren't inherited, so you'll have to do this on every class where you want to mock HTTP request creation for Agent Jobs.
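
For instance, with hypothetical test classes:

```python
@patch.object(AgentJob, "enqueue_http_request", new=Mock())
class TestSiteBackup(unittest.TestCase):
    ...


@patch.object(AgentJob, "enqueue_http_request", new=Mock())
class TestSiteUpdate(unittest.TestCase):
    ...
```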

### Mocking Agent Jobs End-to-end

There's also a fake_agent_job helper (used as a context manager) to fake the result of an Agent Job. For example:

```python
with fake_agent_job("Install App on Site", "Success"):
    install_app(site.name, app2.name)
    poll_pending_jobs()

site.reload()
self.assertEqual(len(site.apps), 2)
```

This way, you pass the name of the job type and fake a response for it.

You may also fake the data returned by the job, which you can then use to test the callback that consumes it:

```python
with fake_agent_job(
    "Restore Site",
    "Success",
    data=frappe._dict(
        output="""frappe 15.0.0-dev HEAD
insights 0.8.3 HEAD
"""
    ),
):
    restore(
        site.name,
        {
            "database": database,
            "public": public,
            "private": private,
        },
    )
    poll_pending_jobs()
```

It is also possible to fake multiple jobs in the same context, for when multiple jobs are processed in the same request or background job:

```python
with fake_agent_job("Update Site Configuration"), fake_agent_job(
    "Backup Site",
    data={
        "backups": {
            "database": {
                "file": "a.sql.gz",
                "path": "/home/frappe/a.sql.gz",
                "size": 1674818,
                "url": "https://a.com/a.sql.gz",
            },
            "public": {
                "file": "b.tar",
                "path": "/home/frappe/b.tar",
                "size": 1674818,
                "url": "https://a.com/b.tar",
            },
            "private": {
                "file": "a.tar",
                "path": "/home/frappe/a.tar",
                "size": 1674818,
                "url": "https://a.com/a.tar",
            },
            "site_config": {
                "file": "a.json",
                "path": "/home/frappe/a.json",
                "size": 595,
                "url": "https://a.com/json",
            },
        },
        "offsite": {
            "a.sql.gz": "bucket.frappe.cloud/2023-10-10/a.sql.gz",
            "a.tar": "bucket.frappe.cloud/2023-10-10/a.tar",
            "b.tar": "bucket.frappe.cloud/2023-10-10/b.tar",
            "a.json": "bucket.frappe.cloud/2023-10-10/a.json",
        },
    },
), fake_agent_job("New Site from Backup"), fake_agent_job(
    "Archive Site"
), fake_agent_job(
    "Remove Site from Upstream"
), fake_agent_job(
    "Add Site to Upstream"
), fake_agent_job(
    "Update Site Configuration"
):
    site_migration.start()
    poll_pending_jobs()
    poll_pending_jobs()
```

Note that with this, you can't fake 2 different results for the same type of job. This is still a limitation. As a workaround, you can use multiple with statements for such cases.
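
For instance, to fake two different outcomes for the same job type (the site.backup() call here is illustrative):

```python
# first run: fake a successful backup
with fake_agent_job("Backup Site", "Success"):
    site.backup()
    poll_pending_jobs()

# second run: fake a failure for the same job type
with fake_agent_job("Backup Site", "Failure"):
    site.backup()
    poll_pending_jobs()
```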

All of this is done with the help of the responses library, by intercepting the HTTP requests that would otherwise reach the agent.
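
If you're not familiar with responses, this is roughly what it does in general (the endpoint and payload here are purely illustrative, not the agent's real API):

```python
import responses


@responses.activate
def test_intercepts_post():
    # any POST to this URL is intercepted and answered with this JSON,
    # without any real network traffic
    responses.add(
        responses.POST,
        "https://n1.frappe.cloud:443/agent/benches",
        json={"job": 1},
        status=200,
    )
```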

Note that you shouldn't mock AgentJob.enqueue_http_request when using fake_agent_job, as that will interfere with the request interception needed to fake the job results.

Now that we've learned to mock the external things, we can go about testing the internal things. The basic flow of every test (see the skeleton after this list) is:

1. Make test records
2. Perform the operation (i.e. run the code that would run on production)
3. Check the test records for the expected results
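
As a shape, most tests end up looking roughly like this (the helper and fields are illustrative):

```python
def test_deactivate_sets_site_status(self):
    # 1. make test records
    site = create_test_site("some-subdomain")
    # 2. perform the operation (the code that runs on production)
    site.deactivate()
    # 3. check the test records for the expected results
    self.assertEqual(site.status, "Inactive")
```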

### Making test records

Making test records is also kind of a pain, as we have validations all over the code that need to pass every time you create a doc. That's too much cognitive load. Therefore, we create utility functions (with sensible defaults) that make a test record of the corresponding DocType, kept in that DocType's own test file (for organization reasons). These functions do the bare minimum to make a valid document of that DocType.

Eg: create_test_bench in test_bench.py can be imported and used whenever you need a valid Bench (which itself depends on many other doctypes).

You can also add default args to these utility functions as the need arises. Just append them at the end so you won't have to rewrite pre-existing tests.
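
A minimal sketch of what such a utility might look like (the field names and defaults here are illustrative; check the actual test files for the real ones):

```python
def create_test_site(subdomain: str = "testsubdomain", bench=None, team=None):
    """Create the bare minimum valid Site for tests."""
    return frappe.get_doc(
        {
            "doctype": "Site",
            "subdomain": subdomain,
            "bench": bench or create_test_bench().name,
            "team": team or create_test_team().name,
        }
    ).insert()
```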

You write a test by writing a method in the TestCase. Make the method name as long as you want. Test methods are supposed to test a specific case. When the test eventually breaks (serving its purpose), the reader should be able to tell what it's trying to test without even having to read the code. Making the method name short is pointless; we're never going to reference this method anywhere in code, ever. Eg:

```python
def test_default_domain_is_renamed_along_with_site(self):
    """Ensure default domains are renamed when site is renamed."""
    site = create_test_site("old-name")
    old_name = site.name
    new_name = "new-name.fc.dev"
    self.assertTrue(frappe.db.exists("Site Domain", site.name))
    site.rename(new_name)
    rename_job = self._fake_succeed_rename_jobs()
    process_rename_site_job_update(rename_job)
    self.assertFalse(frappe.db.exists("Site Domain", old_name))
    self.assertTrue(frappe.db.exists("Site Domain", new_name))
```

You can also go the extra mile and write a docstring for the test. This docstring will be shown in the output when the test runner reports that the test has failed.

### Rerunnability

Not a real word, but I like to be able to re-run my tests without having to nuke the database. Leaving the database in an "empty state" after every test is a very easy way to achieve this. It also makes testing things like doc counts super easy. Lucky for us, there's a method on TestCase that runs after every individual test in the class: tearDown.

We can simply do:

```python
def tearDown(self):
    frappe.db.rollback()
```

and every doc you create (in the foreground at least) will not be committed to the database.

Note: If the code you're testing calls frappe.db.commit, be sure to mock it, because otherwise docs will get committed up to that point regardless.

You can mock specific calls while testing a piece of code with the patch decorator too. Eg:

```python
from unittest.mock import MagicMock, patch


# this will mock all the frappe.db.commit calls in server.py while in this test suite
@patch("press.press.doctype.server.server.frappe.db.commit", new=MagicMock())
class TestBench(unittest.TestCase):
    ...
```

You can use the patch decorator on test methods too. Eg:

@patch("press.press.doctype.site.backups.GFS")
@patch("press.press.doctype.site.backups.FIFO")
def test_press_setting_of_rotation_scheme_works(self, mock_FIFO, mock_GFS):
"""Ensure setting rotation scheme in press settings affect rotation scheme used."""
press_settings = create_test_press_settings()
press_settings.backup_rotation_scheme = "FIFO"
press_settings.save()
cleanup_offsite()
mock_FIFO.assert_called_once()
mock_GFS.assert_not_called()

The decorator passes the mocked function (which is a Mock() object) along as an argument, so you can later do asserts on it (if you want to). Note that the mocks are passed bottom-up: the patch closest to the function becomes the first argument (mock_FIFO here) and the outermost becomes the last.

You can even use patch as a context manager if you don't want things mocked for the entirety of the test:

```python
with patch.object(
    OffsiteBackupCheck,
    "_get_all_files_in_s3",
    new=lambda x: ["remote_file1", "remote_file2", "remote_file3"],
):
    OffsiteBackupCheck()
```

Here, we're faking the output of a method that usually calls a remote endpoint outside our control, by supplying the replacement via the new argument.

Note: When you use asserts on Mock objects, Document comparisons will mostly work as expected, as we override `__eq__` of the Document class during tests (check before_test.py). By default, when 2 Document objects are compared, only their id() is checked, which returns False as the objects are different in memory.
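
Conceptually, the override amounts to something like this (a sketch; the actual code in before_test.py may differ):

```python
from frappe.model.document import Document


def _document_eq(self, other):
    # compare by doctype and name instead of memory address
    return (
        isinstance(other, Document)
        and self.doctype == other.doctype
        and self.name == other.name
    )


Document.__eq__ = _document_eq
```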

Note: If you need to mock some callable while preserving its behaviour (in case you want to do asserts on it), you can use the wraps kwarg instead of new. Eg:

```python
@patch(
    "press.press.doctype.database_server_mariadb_variable.database_server_mariadb_variable.Ansible",
    wraps=Ansible,
)
@patch(
    "press.press.doctype.database_server.database_server.frappe.enqueue_doc",
    wraps=foreground_enqueue_doc,
)
def test_ansible_playbook_triggered_with_correct_input_on_update_of_child_table(
    self, mock_enqueue_doc: Mock, Mock_Ansible
):
    server = create_test_database_server()
    server.append(
        "mariadb_system_variables",
        {"mariadb_variable": "innodb_buffer_pool_size", "value_int": 1000},
    )
    server.save()
    args, kwargs = Mock_Ansible.call_args
```

Here, we check what args the Ansible constructor was called with.
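
From there you can assert on them, e.g. (the kwarg and playbook name here are illustrative):

```python
self.assertEqual(kwargs.get("playbook"), "mysqld_variable.yml")
```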

That's pretty much all you need to write safe, rerunnable tests for Press. You can check out https://docs.python.org/3/library/unittest.mock.html for more things you can do with the standard Python libraries. If your editor and plugins are configured nicely, you can even do TDD with ease.

Protip: When you want the same test records across a whole TestCase, you can simply create them in its setUp method and assign them to member variables. Eg:

```python
def setUp(self):
    self.team = create_test_team()
```

### Background jobs

Since background jobs are forked off in a different process, our mocks and patches won't hold there. Not only that, we can't control or predict when a background job will run and finish. So, when your code involves creating a background job, we can simply mock the call so that it runs in the foreground instead. There's a utility function you can use to achieve this with ease:

```python
@patch(
    "press.press.doctype.database_server.database_server.frappe.enqueue_doc",
    new=foreground_enqueue_doc,
)
def test_skip_implies_persist_and_not_dynamic(self):
    # note: since we pass new=, patch doesn't inject a mock argument here
    ...
```
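
The gist of such a helper is just running the method synchronously. A rough sketch (the real foreground_enqueue_doc in Press likely handles more arguments):

```python
import frappe


def foreground_enqueue_doc(doctype, name, method, **kwargs):
    # drop queue-related kwargs and call the method in the current
    # process, so mocks and patches stay in effect
    for key in ("queue", "timeout", "now", "enqueue_after_commit"):
        kwargs.pop(key, None)
    return getattr(frappe.get_doc(doctype, name), method)(**kwargs)
```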

## Running tests

You can run all of the tests with the following command:

```bash
bench --site test_site run-tests --app press
```

But you'll never have to. That's what CI is for. Instead, you'll mostly want to use:

```bash
bench --site test_site run-tests --app press --module press.press.doctype.some_doctype.test_some_doctype
```

This is because while writing bugs, your changes will mostly affect that one module, and since we don't have many tests to begin with, running a module's tests by themselves won't take very long anyway. Give your eyes a break while this happens.

You can also run an individual test with:

```bash
bench --site test_site run-tests --module press.press.doctype.some_doctype.test_some_doctype --test test_very_specific_thing
```

You most likely won't enjoy running commands manually like this, so you'd want to check out this vim plugin or this vscode plugin.

Note: The frappe_test plugin doesn't populate vim's quickfix list yet. Though Frappe's test runner output isn't very pyunit errorformat friendly, you can still make it work with a custom errorformat and some hacks to set makeprg.
