As a client you are sitting in front of a monitor that refreshes 60 (or, if set faster, 75) times a second, because at this rate movements look smooth and our eyes don't get tired or bothered by the flicker. So 60 times a second the screen is redrawn, possibly based on new data in the GPU.
Next we have the more important frame rate (fps), where we also get 60 fps in most cases if the GPU is fast enough.
I just tested it: if I set the fps in the client to 75, I still get 60 fps - so let's take 60 as the base from here on.
So the screen, with the update rate of monitor and GPU, will show us the changes of a moving item or player
60 times per second = every 1/60 s = 0.0166 s, i.e. every 16.666 ms a refresh becomes visible to our eyes.
So we can't see any movement in the time between two frames, as there are no further updates in between, and they would be useless to calculate for the GPU (and sometimes also the CPU), as they are simply never shown; only the last data that came in and was calculated will be seen. Everything in between is wasted...
So far correct?
OK, which other parameters come into play here:
* a client far away has 300 ms / 0.3 s (RTT 0.300)
--> the client gets data late, and its data arrives late at the server, but data can only be processed once it is available.
* a client connected to a local server, say 20 ms / 0.020 s (RTT 0.020)
--> remember, 60 fps means a new picture every 0.0166 s, so here new data arrives almost just in time to be used for the next refresh calculation, and it gets used with little useless, never-seen work
* I am sitting right next to the server and have an RTT of 0.001 s or 1 ms
--> more on that later (see the sketch below)
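To make the relation between RTT and the 60 Hz frame interval concrete, here is a small plain-Lua sketch (the numbers are just the three RTT examples from the list above):
- Code: Select all
-- Sketch: how many 60 Hz frames pass during one round trip.
local frame_interval = 1 / 60          -- 0.01666 s = 16.66 ms per frame
local rtts = { 0.300, 0.020, 0.001 }   -- the RTT examples above, in seconds

for _, rtt in ipairs(rtts) do
    print(string.format("RTT %.3f s = %.1f frames of delay",
        rtt, rtt / frame_interval))
end
-- RTT 0.300 s = 18.0 frames of delay
-- RTT 0.020 s = 1.2 frames of delay
-- RTT 0.001 s = 0.1 frames of delay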
- Code: Select all
# Length of a server tick and the interval at which objects are generally updated over network.
dedicated_server_step = 0.09
- Code: Select all
# Length of time between NodeTimer execution cycles
nodetimer_interval = 0.4
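For context: NodeTimers are the per-node timers started via minetest.get_node_timer(); nodetimer_interval only controls how often the engine walks through them. A minimal sketch of a node using one (the node name "mymod:blinker" and the 5 s timeout are made up):
- Code: Select all
-- Sketch: a node whose on_timer fires roughly every 5 seconds.
minetest.register_node("mymod:blinker", {
    description = "Blinker",
    on_construct = function(pos)
        -- start the node timer; it is only checked every nodetimer_interval
        minetest.get_node_timer(pos):start(5.0)
    end,
    on_timer = function(pos, elapsed)
        print("timer fired at " .. minetest.pos_to_string(pos))
        return true  -- true = restart the timer with the same timeout
    end,
})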
Is there any other setting I have missed that plays a role in this chain: server CPU calculates something - sends it to the client - client calculates the screen with the GPU and shows it?
conclusion so far:
* as a client monitor/screen can't show more than (mostly) 60 fps, new data is only needed every 0.01666 s / 16.66 ms for each newly shown frame on screen.
Anything faster would never be seen on screen, as new data would overrun before the last calculated frame could be displayed.
Imagine you get an update every 0.008 s / 8 ms: every second packet can't be shown, as the screen only updates every 16.66 ms, meaning about 50% of all the work of calculating new positions etc. is of no use...
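The same arithmetic as a small plain-Lua sketch (the 8 ms figure is the made-up example from above):
- Code: Select all
-- Sketch: fraction of updates that can never reach the screen.
local frame_interval  = 1 / 60     -- 16.66 ms between displayed frames
local update_interval = 0.008      -- hypothetical: new data every 8 ms

local updates_per_frame = frame_interval / update_interval  -- ~2.08
local wasted = 1 - 1 / updates_per_frame                    -- ~0.52
print(string.format("%.0f%% of updates are never displayed", wasted * 100))
-- prints: 52% of updates are never displayed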
and so we go to the next step:
dtime
- Code: Select all
on_step(self, dtime) — Callback method called every server tick.
dtime — time since last call (usually 0.05 seconds)
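A minimal sketch of how an entity receives dtime, with the usual accumulator pattern to avoid doing expensive work on every single step (the entity name "mymod:stepper" is made up):
- Code: Select all
-- Sketch: an entity that accumulates dtime and acts once per second.
minetest.register_entity("mymod:stepper", {
    initial_properties = { physical = false },
    timer = 0,
    on_step = function(self, dtime)
        -- dtime = time since the last call, usually one server step
        self.timer = self.timer + dtime
        if self.timer >= 1.0 then
            self.timer = 0
            -- do the actual (expensive) work only once per second
        end
    end,
})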
Here I am not sure yet, as I also put a second parameter into the .conf:
- Code: Select all
# Length of a server tick and the interval at which objects are generally updated over network.
dedicated_server_step = 0.09
server_step = 0.05
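If a mod needs to know the configured step at runtime, it can read the setting itself; a sketch for newer engine versions (the 0.09 fallback just mirrors the default above):
- Code: Select all
-- Sketch: read the configured server step from minetest.conf.
local step = tonumber(minetest.settings:get("dedicated_server_step")) or 0.09
print("server step: " .. step .. " s = " .. (step * 1000) .. " ms")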
forgot these, added later:
- Code: Select all
# Time in between active block management cycles
active_block_mgmt_interval = 2.0
- Code: Select all
# Length of time between ABM execution cycles
abm_interval = 1.0
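For completeness, this is how an ABM is registered; abm_interval only sets how often the engine runs one ABM execution cycle over the active blocks. A minimal sketch (nodename and values are made up):
- Code: Select all
-- Sketch: an ABM that runs over nodes in active blocks.
minetest.register_abm({
    nodenames = {"default:dirt"},  -- example node
    interval = 10.0,               -- desired seconds between runs
    chance = 50,                   -- ~1 in 50 matching nodes per run
    action = function(pos, node)
        -- do something with the node at pos
    end,
})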
However, when I put an output into a globalstep
- Code: Select all
local t1 = os.clock()                    -- measurement start (CPU time!)
local handle_active_blocks_timer = 0

minetest.register_globalstep(function(dtime)
    handle_active_blocks_timer = handle_active_blocks_timer + dtime
    if dtime < 0.2 or handle_active_blocks_timer >= (dtime * 3) then
        -- note: os.clock() returns CPU time used, not wall-clock time
        print(string.format("elapsed time: %.2fms",
            (os.clock() - t1) * 1000) .. " get " .. dtime)
        handle_active_blocks_timer = 0.1
        -- entity movement handlers defined elsewhere in the mod
        move_entities_globalstep_part1(dtime)
        move_entities_globalstep_part2(dtime)
    end
end)
- Code: Select all
elapsed time: 123.61ms get 0.090000003576279
elapsed time: 123.85ms get 0.090000003576279
elapsed time: 124.16ms get 0.090000003576279
elapsed time: 124.36ms get 0.090000003576279
elapsed time: 124.62ms get 0.090000003576279
elapsed time: 124.89ms get 0.090000003576279
elapsed time: 125.18ms get 0.090000003576279
elapsed time: 125.50ms get 0.090000003576279
elapsed time: 125.75ms get 0.090000003576279
elapsed time: 126.02ms get 0.090000003576279
elapsed time: 126.28ms get 0.090000003576279
elapsed time: 126.62ms get 0.090000003576279
elapsed time: 126.97ms get 0.090000003576279
elapsed time: 127.27ms get 0.090000003576279
elapsed time: 127.56ms get 0.090000003576279
elapsed time: 127.85ms get 0.090000003576279
elapsed time: 128.17ms get 0.090000003576279
elapsed time: 128.49ms get 0.090000003576279
elapsed time: 128.78ms get 0.090000003576279
elapsed time: 129.05ms get 0.090000003576279
elapsed time: 129.36ms get 0.090000003576279
elapsed time: 129.61ms get 0.090000003576279
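One caveat about the measurement above: os.clock() returns the CPU time used by the process, not wall-clock time, so the ~0.3 ms deltas are CPU time spent between two prints, while the printed dtime of 0.09 s is the actual wall-clock interval between globalstep calls. To log wall-clock intervals, minetest.get_us_time() fits better; a sketch:
- Code: Select all
-- Sketch: measure the wall-clock interval between globalstep calls.
local last_us = minetest.get_us_time()

minetest.register_globalstep(function(dtime)
    local now_us = minetest.get_us_time()
    print(string.format("wall interval: %.2f ms, dtime: %.3f s",
        (now_us - last_us) / 1000, dtime))
    last_us = now_us
end)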
Let's recall:
* a client can show a new frame to the human user every 0.01666 s / 16.66 ms,
* but, if we take the elapsed-time deltas above at face value, the globalstep updates about 3 times per ms to force new data to be created whenever possible.
For me this means the globalstep "starts actions" 16.666 ms / 0.333 ms = 50 times before one new data packet would even be needed by the client, while data is only sent every 90 ms.
Taken that way, 90 ms / 0.333 ms ≈ 270 times the globalstep forces work for nothing, as that data is never sent to the clients
or
* "dedicated_server_step = 0.09" just send data every 90 ms ?
Should this be then better nearby the possible fps rate needed update of data, mean 0.016 ??
This might mean, just before a new frame should be shown to player, it will also be generated with new data
For nothing ? what are this updates of positions and items, nodes done by the server CPU worth when cant be used ?
* neither from server as send all 90 ms, better would be 0.016
* neither from the client as it just can show every 0.0166 ms given update as of fps ?
(The code up is my most using CPU time on server, provider gives it 40% - 40% where maybe 1% would be enough - 40% / 50 times useless updated = just 1% left)
So how do I slow down the globalstep?
And how would I change it over to minetest.after?
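Two common patterns for this, as a sketch (the 0.09 s interval is only an example, chosen to match dedicated_server_step above):
- Code: Select all
-- Pattern 1: throttle a globalstep with an accumulator.
local timer = 0
minetest.register_globalstep(function(dtime)
    timer = timer + dtime
    if timer < 0.09 then
        return        -- too early, skip this step cheaply
    end
    timer = 0
    -- expensive work goes here, now at most every ~0.09 s
end)

-- Pattern 2: a self-rescheduling minetest.after loop.
local function my_loop()
    -- expensive work goes here
    minetest.after(0.09, my_loop)
end
minetest.after(0.09, my_loop)
Note that minetest.after also cannot fire more often than the server step; it only guarantees at least the given delay.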
* I suspect it also depends on "dedicated_server_step"; it should still be fast enough when repeated every
--> 1 ms, which would already save about 66% of CPU time, but would still run 90 times within one send-data interval, or 16 times within one screen refresh!
--> anything in between would probably still be fast enough and save a lot of CPU time on the server
--> 16 ms, which would mean near-synchrony with the newest possible data at the client, together with also setting dedicated_server_step to 0.016 = new data is sent more often, but just often enough to deliver fresh data for every new frame
Or, the other way around: also tune "dedicated_server_step" to better match the possible fps rate?
* I suggest 0.016 instead of 0.09 if the server can handle it, or multiples of it in steps: 0.033, 0.05, 0.066, 0.084, ... to match the sent data to what is needed to refresh the screen.
(I am running with it set to 0.001 and it works - it's just bad for slow/poorly connected clients, as I flood them with data...)
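If you want to experiment with this, the tuning from the last point would look like this in minetest.conf (a value to test, not a recommendation):
- Code: Select all
# Experimental: match the server step to the 60 Hz frame interval.
# This costs considerably more CPU and bandwidth than the 0.09 default.
dedicated_server_step = 0.016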
Everything will depend on the server's CPU and network capabilities, and also on what the client can contribute with its CPU and GPU.
And then: how often does the client send new data?
Is it the same interval the server sets, or can the client decide?
* Here, too, it would be better if it were close to the screen update, i.e. 0.016, if the network can handle it.