Normal timers vs y_timers
Benchmarking test with GetServerTickrate() function


Introduction
I was looking at a benchmark made by Southclaws the other day which documented single timers vs global timers. I asked about it on Discord because I was unsure myself, and when reading the conclusion I saw that he mentioned that timers in general are very cheap. Now, in my gamemode I have used y_timers because I fancy the precision, unlike SetTimer() which drifts off the accuracy chart rather quickly, which isn't really a secret. So I thought:
  • Would GetServerTickrate() show a big difference in outcome when running the same code Southclaws used in his benchmarking example, but translated to y_timers?
I have followed Southclaws' definition of what GetServerTickrate gives, which is also stated on the wiki page: "The tickrate values show the output of GetServerTickrate which is essentially the server's version of FPS. This is a rough guide to server performance, most of the time it should be sticking around 150-180. Dropping below 100 means degraded server performance and slower update response for players and sync."



Disclaimer
I am very much aware that I do not have the "proper" knowledge to perform actual benchmarking, and I am only doing this to hopefully start a discussion that I will also learn something from regarding the topic. I am also aware that gathering up to 700 different test results and then taking their average would be a lot more precise, but I thought I would give this a chance and see if I could conclude something. Last but not least, this was mostly a fun project and a fun thing to do! It reminded me of old-school testing and writing a somewhat "school-like" report, which I haven't done in a long while. While it's not perfect, I hope it is informative enough.



Code & Tests
Before talking about the results: I followed the instructions in Southclaws' gist (https://gist.github.com/Southclaws/7...8e3b67c632483d), and for the "normal timer" testing I used this code:
Code:
#include <a_samp>

// A and B are placeholders; their values are listed in "Definition of A and B" below.
static const UPDATE_INTERVAL = 100;    // ms between update() calls
static const SAMPLES = 40;             // number of measured update() calls
static const WARM_UP_PERIOD = 10;      // update() calls skipped before sampling starts
static const TIMERS_PER_UPDATE = A;    // timers created per update() call
static const TIMERS_INTERVAL = B;      // base interval (ms) of each created timer
static const TIMERS_STAGGER = 1;       // each timer's interval is offset by this much
static lastID;
static tick;

main() {
    SetTimer("update", UPDATE_INTERVAL, true);
}

// Empty callback used by all the timers created below.
forward none();
public none() {
}

forward update();
public update() {
    if(tick < WARM_UP_PERIOD) {
        tick += 1;
        return;
    }

    if(tick == SAMPLES + WARM_UP_PERIOD) {
        return;
    }

    for(new i; i < TIMERS_PER_UPDATE; ++i) {
        lastID = SetTimer("none", TIMERS_INTERVAL + (TIMERS_STAGGER * i), true);
    }

    printf("[%04d]: lastID: %08d tickrate: %04d", tick - WARM_UP_PERIOD, lastID, GetServerTickRate());
    tick += 1;
}
When doing the y_timers tests I used this code, which is more or less the same as the one above but "translated", so to speak:
Code:
#include <a_samp>
#include <YSI\y_timers>

// A and B are placeholders; their values are listed in "Definition of A and B" below.
static const UPDATE_INTERVAL = 100;    // ms between update task calls
static const SAMPLES = 40;             // number of measured update calls
static const WARM_UP_PERIOD = 10;      // update calls skipped before sampling starts
static const TIMERS_PER_UPDATE = A;    // timers deferred per update call
static const TIMERS_INTERVAL = B;      // base interval (ms) of each deferred timer
static const TIMERS_STAGGER = 1;       // each timer's interval is offset by this much
static lastID;
static tick;
static nonei; // passes the stagger offset into the interval of "none"

main() {
    // Nothing needed here; y_timers starts the task below automatically.
}

task update[UPDATE_INTERVAL]() {
    if(tick < WARM_UP_PERIOD) {
        tick += 1;
        return;
    }

    if(tick == SAMPLES + WARM_UP_PERIOD) {
        return;
    }

    for(new i; i < TIMERS_PER_UPDATE; ++i) {
        nonei = i;
        lastID = defer none();
    }

    printf("[%04d]: lastID: %08d tickrate: %04d", tick - WARM_UP_PERIOD, lastID, GetServerTickRate());
    tick += 1;
}

// Empty timer; its interval is staggered through the global "nonei" variable.
timer none[TIMERS_INTERVAL + (TIMERS_STAGGER * nonei)]()
{
}
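As a side note: if I understand the y_timers documentation correctly, a timer's interval expression can also use the timer's own parameters, which would avoid the global nonei variable altogether. I did not use this for the tests above, so take the following as a rough, untested sketch (the values here are just examples, not the A/B from the tests):
Code:
#include <a_samp>
#include <YSI\y_timers>

static const TIMERS_INTERVAL = 1000;   // example value only
static const TIMERS_STAGGER = 1;

// Assumed y_timers feature: the interval expression reads the "offset" parameter,
// so each deferred call gets its own staggered interval.
timer none[TIMERS_INTERVAL + (TIMERS_STAGGER * offset)](offset)
{
    #pragma unused offset
}

main() {
    for(new i; i < 5; ++i) {
        defer none(i);   // offset = 0, 1, 2, ...
    }
}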
Important! Only "staggering" tests were performed, as that is where Southclaws' conclusion that timers are cheap came from.
Important! The tests were not done simultaneously.


Definition of "A" and "B"
Four bigger tests were made, each using the same values of A and B for both timer types (so 8 tests in total).
  • Test 1: A = 1000 | B = 1000
  • Test 2: A = 1000 | B = 10
  • Test 3: A = 10000 | B = 1000
  • Test 4: A = 10000 | B = 10


Results
All charts' X-axes show the number of samples (i.e. how many times update() was called) while the Y-axes display the tickrate. I forgot to add those labels in Excel. The general formula is the X-axis value times the number of timers created every update() call:
x * timers/update = total number of timers created throughout the session
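For example, in Test 4 below (A = 10000), that gives 40 samples * 10000 timers per update = 400000 timers created over the course of the session.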


Test 1
In this first test, where 1000 timers were created every 100 ms with an interval of 1000 ms each, there was not a lot of difference, but there is a clear difference at the end, where y_timers holds a higher and steadier tickrate.



Test 2
The second test had a similar result to the first, with y_timers giving a more stable tickrate toward the end while the normal timers made the server lose tickrate. This test had 1000 timers created every 100 ms, each with an interval of 10 ms.



Test 3
When creating 10000 timers every 100 ms, each with an interval of 1000 ms, there was not much difference at all between normal timers and y_timers. This was however the first test that showed a tickrate below 100, which according to the definition quoted above ("[...]below 100 means degraded server performance[...]") is bad. This should however come naturally, as a lot of timers are being created and run at the same time.



Test 4
In the last test both timer types also gave a similar result, with y_timers holding a slightly higher tickrate, though barely noticeable, especially since both of them went below 100. In this test 10000 timers were created every 100 ms, each with an interval of 10 ms.




Conclusion
It seems that when a lot of timers are created at such small intervals (100 ms updating), it won't matter which type of timer you are using; it will become a heavy burden for the server in any case. However, when the server doesn't have such challenging timer counts to take care of, it is highly noticeable that y_timers perform better than the normal timer type.
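For comparison, the "global timer" approach from Southclaws' single vs global timer benchmark (mentioned in the introduction) avoids creating lots of timers in the first place: one repeating task loops over everything that needs updating. A minimal y_timers sketch of that idea, with a made-up name and interval:
Code:
#include <a_samp>
#include <YSI\y_timers>

// One repeating task that goes through all players, instead of one timer per player.
task GlobalPlayerUpdate[1000]() {
    for(new playerid = 0; playerid < MAX_PLAYERS; ++playerid) {
        if(!IsPlayerConnected(playerid)) {
            continue;
        }
        // per-player work would go here
    }
}

main() {
}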



Additional personal notes
If one were to go into more testing, perhaps a more accurate "breaking point" could be found, showing exactly when the normal timer tests start to go below 100 tickrate, and the same for y_timers. However, I suppose this all also depends on what kind of resources the server has to play around with.
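As a very rough idea of how such a breaking-point test could look for normal timers (untested, and TIMERS_STEP/TIMERS_INTERVAL are just assumed values), one could grow the number of timers created per update until the tickrate first drops below 100:
Code:
#include <a_samp>

static const UPDATE_INTERVAL = 100;   // ms between update() calls
static const TIMERS_STEP = 500;       // extra timers added per update (assumed)
static const TIMERS_INTERVAL = 1000;  // interval of each created timer (assumed)
static timersPerUpdate;
static bool:reported;

main() {
    SetTimer("update", UPDATE_INTERVAL, true);
}

forward none();
public none() {
}

forward update();
public update() {
    timersPerUpdate += TIMERS_STEP;

    for(new i; i < timersPerUpdate; ++i) {
        SetTimer("none", TIMERS_INTERVAL + i, true);
    }

    // Print once, the first time the tickrate drops below 100.
    if(!reported && GetServerTickRate() < 100) {
        reported = true;
        printf("Tickrate first dropped below 100 at %d timers per update.", timersPerUpdate);
    }
}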

I just wanted to mention Southclaws again, because without his documentation I wouldn't even have known where to start (or gotten the idea in the first place), so again, thank you, and here is the link to his benchmarking tests: https://gist.github.com/Southclaws/7...8e3b67c632483d

All data can be downloaded here by clicking "Download". The file format is Excel ("xlsx").