Stress Testing/Performance Testing
Hello, I'd like to get some input from you guys on stress testing and performance testing.
Some friends and I have been working together on various projects, and lately we've been trying to optimize and research different server forks and how various setups perform. In doing so we've run into a lot of conflicting claims about which forks are better and what actually helps performance. So we've started talking about benchmarking different server software and settings ourselves, and I'd like your input on how to make that testing as accurate and as automatable as possible.
Here are some questions I'd like addressed.
For stress testing, would it be most accurate to use a pregenerated world of a fixed size, with default settings, on each piece of software being tested?
Would using bots rather than actual players change performance at all, or does the server see and treat bots exactly like regular players? Specifically, I mean bots that are created and run outside the server process and made to join it, something like the sketch below.
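For context, this is roughly the kind of external bot setup I have in mind. It's only a sketch using mineflayer as an example library; the host, port, bot count, and staggered join delay are placeholders rather than a final design:

```ts
// Sketch of "external" bots: a separate Node process opens real protocol
// connections, so the server should see each bot as an ordinary client.
import { createBot } from 'mineflayer'

const HOST = 'localhost'   // assumption: offline-mode test server on the same box
const PORT = 25565
const BOT_COUNT = 50       // placeholder load level

for (let i = 0; i < BOT_COUNT; i++) {
  // Stagger the joins so the login burst itself doesn't skew the first samples
  setTimeout(() => {
    const bot = createBot({ host: HOST, port: PORT, username: `stress_bot_${i}` })

    bot.once('spawn', () => {
      // Keep the bot moving so it loads chunks and generates packet/entity load
      // instead of idling at spawn
      bot.setControlState('forward', true)
      bot.setControlState('jump', true)
    })

    bot.on('kicked', (reason) => console.log(`bot ${i} kicked:`, reason))
    bot.on('error', (err) => console.log(`bot ${i} error:`, err))
  }, i * 500)
}
```

The idea is that each bot is a separate protocol-level connection, so from the server's point of view it should look like a real player, but I'd like confirmation that this actually produces representative load.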
What tools or methods should we use to most accurately measure performance differences between server software, as well as between running with and without plugins, datapacks, etc.?
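On the automation side, this is the rough shape of the harness we were imagining: give every jar the exact same pregenerated world, run it under bot load for a fixed window, then stop it cleanly before moving to the next one. The jar names, paths, JVM flags, and run length below are all placeholders, and we haven't settled on what profiler or metric collection to run during the window:

```ts
import { spawn } from 'node:child_process'
import { cpSync, rmSync } from 'node:fs'

const JARS = ['paper.jar', 'purpur.jar', 'vanilla.jar']  // whatever forks we end up comparing
const RUN_SECONDS = 600                                  // fixed measurement window per run

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms))

async function runOnce(jar: string): Promise<void> {
  // Reset the world so every run starts from the identical pregenned template
  rmSync('world', { recursive: true, force: true })
  cpSync('world-template', 'world', { recursive: true })

  // Same JVM flags for every jar so only the server software differs
  const server = spawn('java', ['-Xms4G', '-Xmx4G', '-jar', jar, 'nogui'], {
    stdio: ['pipe', 'inherit', 'inherit'],
  })

  await sleep(RUN_SECONDS * 1000)   // bots + whatever metric collection we pick run during this window
  server.stdin?.write('stop\n')     // clean shutdown so the next run isn't tainted
  await new Promise<void>((resolve) => server.once('exit', () => resolve()))
}

async function main(): Promise<void> {
  for (const jar of JARS) {
    await runOnce(jar)
  }
}

main()
```

Does that general approach make sense, or is there a better-established way to do comparable runs?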
I'm sure I'll have more questions, but I will post them here as needed.