From Chaos to Clarity: Streamlining End-to-End Testing with Django and SvelteKit
Learn the secrets to boosting your web development process with Django and SvelteKit. Our article reveals how to seamlessly blend these powerful frameworks for unmatched speed and reliability.
For reasons listed in my Model W Architecture document, my framework of choice for the backend is Django (tl;dr: the ORM), and until a better option emerges in the JavaScript world this is not going to change. On the other hand, my experience has shown that if you build a professional website you will eventually outgrow htmx and other lightweight frameworks, making it a necessity to turn towards meta-frameworks such as SvelteKit, Nuxt or Astro — to quote the most famous.
This is what we’re systematically doing at WITH, and the combination works well. But you absolutely must figure out ways to align all this properly — and there is no official way to do it.
Today we’re going to explore one specific friction point: end-to-end testing.
Why test?
Some will tell you that you need to cover 100% of your code base with unit and e2e tests, while others will say “testing is doubting”. We’re not here for a theoretical lesson on the benefits of tests; instead we’ll focus on why we would want them, which in turn lets us decide what we want to test.
The speed factor
First, nobody gets the code right the first time. Personally, with my 20 years of coding, I think I once managed to land about 1000 lines of code that worked on the first try, while being extremely focused on what I was doing. The typical development cycle looks more like: write a bunch of lines, see where it breaks, repeat until it works.
As a developer, you will learn to code faster and with fewer mistakes over time, but there is nothing you can do about that right now. Just code more and it will sink in. This leaves you with the second part of the process: how fast can you see where it breaks?
Obviously the answer to that question is largely dependent on what you are currently testing. If you’re talking about CSS, then a second screen with the page you’re currently integrating along with a good meta-framework that implements HMR properly should be the easiest way to go.
On the other hand, if you’re creating Django models and/or APIs using DRF, a lot of the code you write is going to be declarative — only to be picked up later by Django’s machinery and turned into a usable project. Which means that there is literally no code for you to test: it’s mostly configuration[1].
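To make this concrete, here is a minimal sketch of the kind of declarative code we’re talking about (the names are hypothetical, not taken from any real project): a few lines that Django and DRF turn into tables, validation and endpoints, with barely any logic of our own to unit-test.

```python
from django.db import models
from rest_framework import serializers, viewsets


class Item(models.Model):
    # Django turns this declaration into a table, migrations and validation.
    name = models.CharField(max_length=100)


class ItemSerializer(serializers.ModelSerializer):
    # DRF derives the (de)serialization logic from the model.
    class Meta:
        model = Item
        fields = ["id", "name"]


class ItemViewSet(viewsets.ModelViewSet):
    # A full CRUD API, declared rather than written.
    queryset = Item.objects.all()
    serializer_class = ItemSerializer
```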
But if you are working on the typical front/back architecture that we’ve discussed earlier, most of the things that you’re ever going to want to test in an automated way are the end-to-end user stories.
If you test those manually, you will be clicking many buttons and filling in many forms, over and over again, with test cycles usually lasting anywhere from 30 seconds to 5 minutes.
On the other hand if you automate those tests you can probably drop the testing time to a couple of seconds. We can estimate that on average it’s going to be about 10 times faster than manual testing.
Now let’s put this into a simple equation. Let’s consider that:
- The time spent testing manually is equal to the time spent coding
- The automated test is 10 times faster than the manual test
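Spelling out the arithmetic behind those two assumptions (my reconstruction, with $t$ standing for the time spent coding):

$$
\frac{\text{manual cycle}}{\text{automated cycle}} = \frac{t + t}{t + t/10} = \frac{2t}{1.1\,t} \approx 1.8
$$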
Even if you don’t care for the math formalism, you can see that in the end, testing your code automatically while you develop makes you almost twice as fast. The bias here, of course, is that you still need to write the test. That’s why we’ll explore tools that make this as easy as possible, so that the benefits are not swallowed by the plumbing.
Overall it’s hard to quantify exactly how much productivity gain[2] we’re talking about, but it should help you go about two times faster — and in the worst-case scenario it seems unlikely to be slower than testing manually. More importantly, these are just the immediate benefits of testing.
Ease of mind when changing things
Any application that lives long enough will reach the point where no single human brain can comprehend the entirety of its features at once. There are just too many moving parts. And this point arrives much sooner than you think, especially in environments like mine where people move from project to project all the time.
Essentially: how do you know if something that you change will break anything in the project without testing everything? Leading to the subsequent question: how do you even know what to test?
The answer is that you cannot know what broke if you don’t test it, so indeed you have to test everything. Which can be done, for example, with a large testing booklet written and maintained manually — aka not — or with automated tests that run every time you push your code to the repo (and on your machine while you develop).
The second option is absolutely better in the sense that:
- If all the tests are written, the coverage will be exhaustive
- Since it’s all automated, each test should be extremely fast
This way you reduce a QA process to a few seconds of tests instead of potentially hours of human time, with the guarantee that everything is executed in stable conditions and in a repeatable way.
Onboarding of newcomers
Overall tests will show you how to use the app and how to use the code. All a newcomer has to do to understand everything that you can do with the application is to watch the tests unfold.
Let’s note that this is only partly true, because tests will often be cryptic and hard to document. A better way to approach this topic is with BDD and — spoiler alert — pytest-bdd. But that’s for another article; here we’re focused on the Django/Svelte integration.
Picking the right tools
While I am not going to list every single test runner and framework out there — that would be an entirely different article — here are the constraints I’m setting for myself in this quest for automated tests.
The first aspect is that Django-based tests have the ability to write directly into the database, which is in turn cleaned up after each test. When your application is essentially just transforming a DB schema into an API, that’s really something you want to be able to do. Without that you’re in for some very awkward mocking. The core idea is thus to run tests from Django — I even considered wrapping Django’s tests from JS but in the end that was not necessary.
The default test framework in Django is the standard unittest, and while it is honorable, there are friendlier and more powerful options out there. Namely pytest, which as you will see below will be the backbone of our strategy. The first step is to integrate it with Django’s tests, and this happens with pytest-django.
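As a taste of what that integration buys you, here is a minimal sketch (the app and model names are made up for illustration): the django_db marker grants the test database access, and everything is rolled back once the test is over.

```python
import pytest

from items.models import Item  # hypothetical app and model


@pytest.mark.django_db
def test_create_item():
    # Writes go to a test database that is cleaned up after the test.
    Item.objects.create(name="Foo")
    assert Item.objects.filter(name="Foo").count() == 1
```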
The main issue I have with testing in Django, however, is that while it has a LiveServerTestCase (and the pytest equivalent), it kind of wants you to use Selenium — and no offense to that precursor tool, but oh boy is it unusable. Last time I wrote e2e tests with Django and Selenium, I ended up writing more utils than tests.
Thankfully things have changed and we are now able to use Playwright through the pytest-playwright plugin. While I don’t particularly like Microsoft, I must admit that Playwright has two very interesting characteristics.
Firstly, it has very semantic selectors, which use accessibility attributes to find elements on the page. This is great because if you can test your features without resorting to crude CSS selectors, it means that what you test is at least more or less decent in terms of accessibility.
And secondly, it auto-waits on all selectors, which removes by far the most annoying chore you end up doing all the time with Selenium.
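A quick illustration of both points (a sketch with made-up page content, not code from the sample project): get_by_role finds elements through their accessibility role, and actions and assertions automatically wait and retry until they succeed or time out.

```python
from playwright.sync_api import expect, sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:4173/")  # assumed local URL

    # Semantic selector: the button is found through its accessibility
    # role and name, no CSS classes or XPath involved.
    page.get_by_role("button", name="Add item").click()  # auto-waits

    # expect() retries until the assertion holds or the timeout is hit.
    expect(page.get_by_text("Item added")).to_be_visible()
    browser.close()
```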
To summarize, we’re going to go with:
- Django itself and its testing facilities
- pytest as the test runner
- pytest-django for the Django integration
- pytest-playwright for the browser testing
Implementation time!
To demonstrate how all those tools work together, I created a sample project on GitHub which contains mostly the boilerplate you will need, along with an example of how to use everything together.
The project is extremely simple in itself: there is one model exposed through an API, and one page that displays all the instances returned by that API. Really just the bare minimum to write a test that shows all we discussed above.
Lots of small details are left out of this explanation, which focuses on the big picture. The source code being entirely available, any shadow can be lifted by inspecting it. If you intend to run the project yourself, have a read of the README.
Boilerplate
We’ve got two projects which are fairly close to default Django and SvelteKit projects located in the api and front folders.
API
Let’s first have a look at our dependencies. Quite obviously, we’ll find Django there alongside its best friend DRF for the API management part.
On the testing side we have 3 plugins on top of pytest:
- pytest-django — takes care of the pytest/Django integration, and specifically of managing the database and the live server
- pytest-playwright — integrates pytest with Playwright in order to be able to test things within a browser
- pytest-env — a small utility that lets you define environment variables when pytest runs, which is super useful if, like me, you follow the 12-factor philosophy: it allows you to have a static configuration for running tests
Since we’re talking about end-to-end tests, I figured that it would not necessarily make sense to pin them to a specific Django app, so I’ve created a dedicated test folder for them instead.
In order to be able to run the tests, you need to make sure to configure the settings module and the environment in the pyproject.toml file. The DJANGO_ALLOW_ASYNC_UNSAFE variable is needed because Playwright’s sync API runs an event loop under the hood, and Django would otherwise refuse ORM calls made from within it:
```toml
[tool.pytest.ini_options]
DJANGO_SETTINGS_MODULE = "e2e_django.settings"
env = [
    "DJANGO_ALLOW_ASYNC_UNSAFE=true",
]
```
Front
Honestly, I’ve changed nothing in the front-end except creating the page that displays the thing we want to test.
Front/API sync
The part that was elusive to me for the longest time was: how can I synchronize the front-end and the back-end, especially with regard to the database management I mentioned earlier?
Turns out, with a little bit of elbow grease and pytest magic, it’s fairly easy.
First we need to talk about pytest’s fixtures. If you’re a Django developer, you probably hear “fixture” and think “right, files that load data into the database”. But it’s not that at all: pytest fixtures are a dependency-injection mechanism specialized for tests.
For example you could say: I have a “user” fixture that is a user from the database, scoped to each individual test. If a test requires the “user” fixture, then the user will be created in the database before the test and cleaned up after it.
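In pytest terms, that example would look something like this (a sketch: the user model and the names are assumptions):

```python
import pytest
from django.contrib.auth.models import User


@pytest.fixture
def user(db):
    # Runs before each test that declares "user" as a parameter; the
    # db fixture ensures the database is rolled back afterwards.
    return User.objects.create_user(username="alice")


def test_user_is_injected(user):
    # The fixture's return value is injected as the argument.
    assert User.objects.filter(pk=user.pk).exists()
```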
Both the Playwright and the Django plugins use them heavily to give you access to their various features. Typically, if your test asks for the page fixture, then Playwright will be started; otherwise it will not. The same applies to Django’s live_server fixture, and in our case we’ll be able to leverage this mechanism to start and stop the front-end while testing.
This can be done relatively easily if you exploit the fact that both the front-end and the API live in the same repository: you can accurately compute the absolute path of the front-end and start scripting from there.
Which is exactly what front_server() and its friends are doing in the conftest.py file, a file that provides shared fixtures to all the tests below it in the directory tree. While you can read the source code directly, let’s review the key points (a sketch follows the list):
- We use Popen to start the Vite server in preview mode, which is close enough to production for our needs. A fixture can simply yield an object, and the function will stay suspended until all the tests that need it are done. This is what we do; once the yield returns, we shut the process down.
- The process is bound to port 0. This is a special way of telling the system “just pick any available port”, which spares us from choosing a static port number and thus limits the risk of failure. The Vite server prints the chosen port when starting, so we just parse stdout to get it.
- In the end we simply yield the base URL of this front-end server, and our tests can then connect to it in any way they want.
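Put together, the fixture could look roughly like this. It’s a sketch under the assumptions above; the npm run preview invocation, the repository layout and the URL-parsing regex are mine, not necessarily the sample project’s exact code.

```python
# conftest.py (sketch)
import re
import subprocess
from pathlib import Path

import pytest

FRONT_DIR = Path(__file__).parent.parent / "front"  # assumed layout


@pytest.fixture(scope="session")
def front_server():
    # Port 0 asks the OS to pick any free port; Vite prints the one it got.
    proc = subprocess.Popen(
        ["npm", "run", "preview", "--", "--port", "0"],
        cwd=FRONT_DIR,
        stdout=subprocess.PIPE,
        text=True,
    )
    try:
        for line in proc.stdout:
            if match := re.search(r"http://localhost:\d+", line):
                # Suspend here until every test using the fixture is done.
                yield match.group(0)
                break
        else:
            raise RuntimeError("Vite never reported its URL")
    finally:
        proc.terminate()
        proc.wait()
```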
This example is done with Vite because that is what powers SvelteKit. The exact commands would differ, but every single front-end framework has an equivalent, so you’ll just need to adapt accordingly.
Writing the test
Now that we’re able to summon the front-end (through the code above) and the browser (through Playwright) it’s time for us to write a test!
Be careful, this is actually very disappointing because it’s way too simple. First we create the items that we want to see through a fixture:
```python
@pytest.fixture
def some_items(transactional_db):
    return [
        Item.objects.create(name="Foo"),
        Item.objects.create(name="Bar"),
    ]
```
Now we create a test that requires 3 fixtures:
- front_server — the server we’ve created above
- some_items — the items defined here
- page — the Playwright control object
```python
@pytest.mark.django_db(transaction=True)
def test_content(front_server, some_items, page: Page):
    page.goto(str(httpx.URL(front_server).join("/")))

    for item in some_items:
        # repr() escapes any quotes in the name so it can safely be
        # embedded in the :has-text() selector.
        item_name_escaped = repr(item.name)[1:-1]
        assert (
            page.locator(f"li:has-text('{item.id}: {item_name_escaped}')").count() == 1
        )
```
This way we’re able to send the browser to the front-end and check the content of the page based on the expected items we’re looking for. That’s it!
Running the GitHub Action
If you’re writing automated tests, it’s usually a good idea to run them automatically, and fortunately that’s really easy to do with GitHub Actions. We’ll define a workflow that triggers on push.
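For context, the skeleton those steps live in looks roughly like this (a sketch: the workflow and job names are assumptions, and the dependency-installation steps are elided):

```yaml
name: Tests
on: push

jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... install Python and Node dependencies, plus Playwright browsers ...
      # The steps discussed below slot in here.
```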
Beyond the installation of dependencies, let’s check some interesting steps of that workflow:
```yaml
- name: Run tests
  run: >
    .venv/bin/python -m pytest --junitxml=/tmp/test-results.xml
    --tracing=on --video=on --screenshot=on
  working-directory: ./api
```
When running the tests, we keep the results in JUnit format and ask Playwright to record screenshots and videos of all tests. Note that as your project scales up, you’ll probably want to record only the failing tests (for example with --tracing=retain-on-failure) rather than all of them, otherwise you’ll eat up artifact storage pretty quickly.
```yaml
- name: Publish test report
  uses: mikepenz/action-junit-report@v4
  if: always()
  with:
    report_paths: "/tmp/test-results.xml"
    check_name: "API Pytest Report"
```
Since we’re able to export the outcome as a JUnit file, we use an action that turns it into a nice recap attached to the workflow run.
```yaml
- name: Keep Playwright artifacts
  uses: actions/upload-artifact@v4
  if: always()
  with:
    name: playwright-traces
    path: api/test-results/
```
Finally, we temporarily save the Playwright traces, videos and screenshots into a GitHub Actions artifact, which allows failed tests to be analyzed in depth (for example using the online Trace Viewer).
Conclusion
After establishing that automated testing is well worth the trouble of setting up a well-oiled testing infrastructure, we set out to explore how this can be accomplished with Django and a JavaScript meta-framework such as SvelteKit.
While this requires a little bit of boilerplate and adaptation — after all, those two worlds were not exactly designed to work together — we end up with both the convenience of Django’s tests, with their database management, and the power of modern front-end testing tools such as Playwright.
In the end the tests run completely autonomously on GitHub Actions and produce both nice reports and in-depth traces that allow analysis in case of failure.
This whole structure is easy to use on a daily basis and can boost your coding speed by up to a factor of two!
[1] For a broad meaning of configuration. And of course you can write specific functions and algorithms in the backend, for which unit tests are perfect. But the vast majority of the code in a Django project is actually written by Django. Which is why I like Django.
[2] If anyone has heard of a valid experiment on the topic, I’ll take it. What I’ve found is mostly studies with about 12 subjects, which I’m not going to treat as solid evidence.