Kysely•12mo ago
manda putra

Question: How do you write integration tests and avoid flakiness?

Whenever I write tests for the backend, I write integration tests that hit a real database; I don't really like creating mocks. The downside is that tests depending on many tables can sometimes become flaky. So we apply some rules when writing integration tests:
1. Always clean up the data after the test run.
2. Don't manipulate the database seeder.
3. Create a dedicated data seeder for tests.
4. Run a daily CI job that executes all tests, to detect flaky ones.

That helps, but we still get the occasional flaky test. I found the library https://www.npmjs.com/package/pg-promise-sandbox, which looks promising: it can isolate a test suite by wrapping it in a single transaction. But I'm not sure how to apply that with Kysely.

My questions are:
1. Are there any rules I should know about when writing integration tests, especially ones that hit the database? Is there anything you would add?
2. Can Kysely be integrated with pg-promise-sandbox so that all database calls run inside a transaction and tests don't conflict over data?

Thanks in advance! Waiting for your opinions! :))
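For reference, the "wrap each test in a transaction and roll it back" idea from question 2 can be sketched with plain Kysely, without pg-promise-sandbox. The snippet below is only an illustration under assumptions: the Database interface, the person table, the withRollback helper, and the Jest/Vitest-style test/expect globals are all made up for the example, not the poster's or the library's actual setup.

import { Kysely, PostgresDialect, Transaction } from 'kysely'
import { Pool } from 'pg'

// Hypothetical schema -- replace with your own tables.
interface Database {
  person: { id: number; name: string }
}

const db = new Kysely<Database>({
  dialect: new PostgresDialect({
    pool: new Pool({ connectionString: process.env.DATABASE_URL }),
  }),
})

// Sentinel error thrown only to force the transaction to roll back.
class Rollback extends Error {}

// Runs a test body inside a transaction and always rolls it back,
// so nothing the test writes is ever committed to the shared database.
async function withRollback(
  fn: (trx: Transaction<Database>) => Promise<void>
): Promise<void> {
  try {
    await db.transaction().execute(async (trx) => {
      await fn(trx)
      throw new Rollback()
    })
  } catch (err) {
    if (!(err instanceof Rollback)) {
      throw err
    }
  }
}

test('creates a person', async () => {
  await withRollback(async (trx) => {
    await trx.insertInto('person').values({ id: 1, name: 'manda' }).execute()

    const row = await trx
      .selectFrom('person')
      .select('name')
      .where('id', '=', 1)
      .executeTakeFirstOrThrow()

    expect(row.name).toBe('manda')
  })
})

The catch with this approach is that the code under test has to accept the trx instance instead of using a global db; otherwise its queries run outside the transaction and the isolation is lost.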
3 Replies
Solution
koskimas•12mo ago
I always clear the database before each individual test. It's also important to make sure each test is 100% independent and after a test, no queries keep running. If those are taken care of, there should be no flakiness. Wrapping tests in a transaction and rolling it back after each test might be slightly faster, but the database clearing speed has never been an issue in my tests. I've had projects with over a thousand tests and the runtime hasn't been an issue (well, it has, but not due to clearing the DB). It's much easier to just nuke+populate the DB before each test.
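To make that concrete, here is a minimal sketch of the "clear and repopulate before each test" approach with Kysely on Postgres. Everything specific in it is assumed for illustration: the person and pet tables, the seed rows, and the ./test-setup module that exports the shared db instance; adapt it to your own schema and test runner (the beforeEach hook is Jest/Vitest-style).

import { sql } from 'kysely'
// Hypothetical helper module that builds and exports the shared Kysely instance.
import { db } from './test-setup'

async function nukeAndPopulate(): Promise<void> {
  // Wipe the tables (CASCADE also clears anything referencing them) and
  // reset the sequences so every test starts from the same known state.
  await sql`truncate table ${sql.table('person')}, ${sql.table('pet')} restart identity cascade`.execute(db)

  // Re-insert a small, fixed seed data set the tests can rely on.
  await db
    .insertInto('person')
    .values([{ name: 'Jennifer' }, { name: 'Arnold' }])
    .execute()
}

// Before *each* test, not once per file -- that's what keeps tests independent.
beforeEach(async () => {
  await nukeAndPopulate()
})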
manda putra•12mo ago
@koskimas how do you nuke and populate? Is it just a simple drop of the database followed by repopulating it from a predefined SQL backup? And by "before each individual test", do you mean like this?
describe('test 1', () => {
  test('1.0', async () => {
    await nukeAndPopulate()
  })

  test('2.0', async () => {
    await nukeAndPopulate()
  })
})
Or is it like this:
beforeAll(async () => {
  await nukeAndPopulate()
})

describe('test 1', () => {
  test('1.0', async () => {
    //...
  })

  test('2.0', async () => {
    //...
  })
})
koskimas•12mo ago
You can take a look at Kysely's test suite (https://github.com/kysely-org/kysely/blob/master/test/node/src/where.test.ts). It seems that I actually nuke the DB after the test there 😅