How to determine how often an item needs to be updated?

Postby Festus1965 » Sun Mar 31, 2019 11:36 am

I am stuck on a basic problem ...

As a client you sit in front of a monitor that refreshes 60 (or, if a faster one is used, 75) times per second. At that rate movement appears smooth and our eyes don't get tired or notice flicker. So 60 times per second the screen is redrawn, each time from the newest data in the GPU.

Next we have the more important frame rate (fps); with a fast GPU we usually also get 60 fps.
I just tested: if I set the fps limit in the client to 75, I still get 60 fps - so let us take 60 as the base from here on.

So the monitor and GPU show us the changes of a moving item or player
60 times per second = every 1/60 s = 0.0166 s = 16.66 ms a refresh reaches our eyes.

So we cannot see any movement in the time between two frames: there are no refreshes in between, so updates inside that window may be useless to calculate for the GPU (and sometimes the CPU), because they are simply never shown; only the last data received and calculated is seen. Everything in between is wasted ...

So far, right?

Ok, which other parameters come into play here:
* a far-away client has ~300 ms / 0.3 s latency (RTT 0.300)
--> the client gets data late, and its data arrives late at the server; data can only be processed once it is available.

* a client connected to a nearby server, say 20 ms / 0.020 s (RTT 0.020)
--> remember, 60 fps means a new picture every 0.0166 s, so here the fresh data arrives nearly in time to be used for the next refresh calculation, with little wasted, never-shown work

* I am sitting right next to the server, with an RTT of 0.001 s (1 ms)
--> more on that later

Code: Select all
#    Length of a server tick and the interval at which objects are generally updated over network.
dedicated_server_step = 0.09

--> data is sent over the network every 0.09 s, i.e. an update of the data goes out every 90 ms. 0.09 / 0.0166 ≈ 5.4, so about 5 frames are rendered from old data - or from whatever the client is able to interpolate itself. Or does the server calculate a lot of data between two send times for nothing, since it is not sent that often?

Code: Select all
#    Length of time between NodeTimer execution cycles
nodetimer_interval = 0.4
--> what are node timers used for? 0.4 / 0.0166 s (the time until a new screen is shown) ≈ 24 frames pass before a node timer cycle repeats?
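For context on what node timers do: they are per-node countdowns started from Lua, and nodetimer_interval only controls how often the engine checks which of them have expired. A minimal sketch - the node name "mymod:clock" and the 5-second period are invented for illustration:

```lua
-- Hypothetical node whose timer fires every 5 s; names and numbers are examples.
minetest.register_node("mymod:clock", {
    description = "Clock node",
    on_construct = function(pos)
        -- start this node's timer when the node is placed
        minetest.get_node_timer(pos):start(5)
    end,
    on_timer = function(pos, elapsed)
        minetest.chat_send_all("tick at " .. minetest.pos_to_string(pos))
        return true -- returning true restarts the timer with the same period
    end,
})
```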


Is there any other setting I missed that plays a role in this chain: server CPU calculates something - sends it to the client - client calculates the screen with the GPU and shows it?

conclusion so far:
* as a client monitor/screen cannot show more than (typically) 60 fps, new data is only needed every 0.01666 s / 16.66 ms, one update per newly shown frame.
Anything faster cannot be seen on screen, as new data would arrive before the previously calculated frame has even been shown.
Imagine you get an update every 0.008 s / 8 ms: every second packet cannot be shown, because the screen only updates every 16.66 ms, meaning 50% of all the work calculating new positions etc. is of no use ...


and so we go to the next step:
dtime
Code: Select all
 on_step(self, dtime) — Callback method called every server tick.
    dtime — time since last call (usually 0.05 seconds)
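The on_step signature above is what Lua entities implement; a common pattern is to accumulate dtime so the entity only acts every so often instead of on every tick. A minimal sketch - the entity name and the 1-second period are invented for illustration:

```lua
-- Hypothetical entity; "mymod:drifter" and the 1 s period are examples.
minetest.register_entity("mymod:drifter", {
    initial_properties = { physical = false },
    timer = 0,
    on_step = function(self, dtime)
        self.timer = self.timer + dtime   -- dtime: seconds since last call
        if self.timer >= 1 then
            self.timer = self.timer - 1
            -- do the periodic work here instead of on every server tick
        end
    end,
})
```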

I am not sure about this part yet, because I also put a second parameter into the .conf
Code: Select all
#    Length of a server tick and the interval at which objects are generally updated over network.
dedicated_server_step = 0.09
server_step = 0.05
but this dedicated_server_step seems to be the one used for dtime? (I tested changing it)

forgotten, added later:
Code: Select all
#    Time in between active block management cycles
active_block_mgmt_interval = 2.0

how does this fit in with all of this? and likewise every
Code: Select all
#    Length of time between ABM execution cycles
abm_interval = 1.0
what does this one do?


however, when I put some output into a globalstep
Code: Select all
local t1 = os.clock()
local handle_active_blocks_timer = 0

minetest.register_globalstep(function(dtime)
   handle_active_blocks_timer = handle_active_blocks_timer + dtime
   if dtime < 0.2 or handle_active_blocks_timer >= (dtime * 3) then
      print(string.format("elapsed time: %.2fms", (os.clock() - t1) * 1000).." get "..dtime)
      handle_active_blocks_timer = 0.1
      move_entities_globalstep_part1(dtime)
      move_entities_globalstep_part2(dtime)
   end
end)
so I get
Code: Select all
elapsed time: 123.61ms get 0.090000003576279
elapsed time: 123.85ms get 0.090000003576279
elapsed time: 124.16ms get 0.090000003576279
elapsed time: 124.36ms get 0.090000003576279
elapsed time: 124.62ms get 0.090000003576279
elapsed time: 124.89ms get 0.090000003576279
elapsed time: 125.18ms get 0.090000003576279
elapsed time: 125.50ms get 0.090000003576279
elapsed time: 125.75ms get 0.090000003576279
elapsed time: 126.02ms get 0.090000003576279
elapsed time: 126.28ms get 0.090000003576279
elapsed time: 126.62ms get 0.090000003576279
elapsed time: 126.97ms get 0.090000003576279
elapsed time: 127.27ms get 0.090000003576279
elapsed time: 127.56ms get 0.090000003576279
elapsed time: 127.85ms get 0.090000003576279
elapsed time: 128.17ms get 0.090000003576279
elapsed time: 128.49ms get 0.090000003576279
elapsed time: 128.78ms get 0.090000003576279
elapsed time: 129.05ms get 0.090000003576279
elapsed time: 129.36ms get 0.090000003576279
elapsed time: 129.61ms get 0.090000003576279
Looking at this result with the default 0.09, I get dtime = 0.090000003576279, and the check fires on average 3 times EVERY 1 ms (by os.clock). If I change dedicated_server_step to 0.05, I get 0.050000000745058, also checked about 3 times per ms.
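A side note on the measurement: os.clock() returns CPU time consumed by the process, not wall-clock time, so the intervals it reports can look much shorter than the real time between globalstep calls. A sketch of the same measurement using minetest.get_us_time() (wall clock, in microseconds) instead:

```lua
-- Measure the real wall-clock interval between globalstep calls.
local last = minetest.get_us_time()
minetest.register_globalstep(function(dtime)
    local now = minetest.get_us_time()
    print(string.format("wall interval: %.1f ms, dtime: %.3f s",
        (now - last) / 1000, dtime))
    last = now
end)
```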

I want to recall:
* a client can show a new frame to the human user every 0.01666 s / 16.66 ms,

* but the globalstep fires about 3 times per ms, forcing new data to be created whenever possible.
To me this means the globalstep "starts actions" 16.666 ms / 0.333 ms ≈ 50 times within the window before the client would even need one new data packet, while data is only sent every 90 ms.
Taken together: 90 ms / 0.333 ms ≈ 270 times the globalstep forces work for nothing, since the results are not sent to the clients
or
* does "dedicated_server_step = 0.09" simply mean data is sent every 90 ms?
Should it then rather sit near the update rate the fps actually needs, i.e. 0.016?
That would mean new data is generated just before each new frame is shown to the player


For nothing? What are all these updates of positions, items and nodes done by the server CPU worth when they cannot be used?
* not to the server, as it only sends every 90 ms (0.016 would be better)
* not to the client, as it can only show an update every 0.0166 s, because of the fps


(The code above is the biggest CPU consumer on my server; the provider grants it 40% - 40% where maybe 1% would be enough - 40% divided by ~50 useless updates leaves just the 1% that is actually used)

so how do I slow down the globalstep? -
or how do I change it to minetest.after?
* I suspect it also depends on "dedicated_server_step"; it would still be fast enough repeating every
--> 1 ms: that already saves 66% of CPU time, but still runs 90 times inside one send interval, or 16 times within one screen refresh!
--> anything in between would still be fast enough, I think, and would save a lot of server CPU time
--> 16 ms: that would be nearly synchronous with the client's possible update rate; together with setting dedicated_server_step to 0.016, new data would be sent more often - just often enough to deliver fresh data for every new frame
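Both options can be sketched: a common way to throttle work inside a globalstep is to accumulate dtime and only act once a threshold is reached, and minetest.after can be made into a loop by rescheduling itself. The do_work name and the 0.1 s period are illustrative, not from the original code:

```lua
-- Hypothetical periodic job; do_work and the 0.1 s period are examples.
local function do_work()
    -- expensive logic that should not run on every server tick
end

-- Variant 1: throttle a globalstep with a dtime accumulator.
local acc = 0
minetest.register_globalstep(function(dtime)
    acc = acc + dtime
    if acc >= 0.1 then
        acc = 0
        do_work()
    end
end)

-- Variant 2: a self-rescheduling minetest.after loop.
local function loop()
    do_work()
    minetest.after(0.1, loop)
end
minetest.after(0.1, loop)
```

Variant 1 still wakes up on every server step but only does real work at the chosen period; variant 2 is independent of the globalstep rate altogether.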

or, from the other direction: should "dedicated_server_step" also be tuned to better fit the possible fps rate?
* I suggest 0.016 instead of 0.09 if the server can handle it, or else multiples of it: 0.033, 0.05, 0.066, 0.084, ... to match the sent data to the screen refresh.
(I ran it set to 0.001 and it works - just hard on slowly/poorly connected clients, as I flood them with data ...)


All of this depends on the server's CPU and network capabilities, and on what the client can handle with its CPU and GPU.


And then: how often does the client send new data?
Is it the same interval set by the server, or can the client decide?
* Here too it would be better to stay close to the screen update, i.e. 0.016, if the network can sustain it.
Festus1965
Member
 
Posts: 841
Joined: Sun Jan 03, 2016 11:58 am
GitHub: Minetest-One
In-game: Thomas Explorer

Re: How to determine, how often an item needs to be updated

Postby Festus1965 » Sun Mar 31, 2019 2:29 pm

one last thing, from the other direction:
the code stripped down to this
Code: Select all
local t1 = os.clock()
local done = 0

minetest.register_globalstep(function(dtime)
      local t2 = os.clock()
      move_entities_globalstep_part1(dtime)
      move_entities_globalstep_part2(dtime)
      done = done + 1
      print(string.format("elapsed t1: %.3f ms", (os.clock()-t1)*1000).." : "..string.format("elapsed t2: %.3f ms", (os.clock()-t2)*1000).." repeats: "..done.." / dtime: "..string.format("%.3f",dtime))
end)
meaning: no parameter checks, let it run every time it wants.

Guess what?
* set dedicated_server_step = 0.09 (default)
getting 113 repeats of the code per 10 s - meaning one every 0.0885 s / 88.5 ms
= 90/88.5 ≈ 1.02, so the code runs about as often as data is sent to the client
= 88.5/16.6 ≈ 5.3, so only every 5.3th screen update is based on new data - the rest of the server's work is never seen

* set dedicated_server_step = 0.05 (see wiki)
getting 207 repeats per 10 s - meaning one every 0.0482 s / 48.2 ms
= 50/48.2 ≈ 1.04, almost exactly as fast as data is sent over the network = perfect, just in time
= 48.2/16.6 ≈ 2.9, so only every 2.9th screen update still gets new data for a change

* set dedicated_server_step = 0.016 (near fps)
getting 337 repeats in 10 s - meaning one every 0.0296 s / 29.6 ms
= 16/29.6 ≈ 0.54, i.e. the code runs almost 2x slower than the intended send interval, so only every 2nd data packet would contain new data
= 29.6/16.6 ≈ 1.8, so every 1.8th screen update is based on new data

Still too slow? To see really smooth movement we would need new data for every update of the screen.

* set dedicated_server_step = 0.001 (test)
getting 338 repeats ... nothing more to squeeze out of the CPU or the globalstep ...
same with 0.0001 ... 339 repeats

So where is the optimum, given that we cannot reach every 16 ms, but should also not keep sending the same old data to the client?
We are stuck here at just under 30 ms.

* 0.033 still gives 314 repeats in 10 s = one every 32.8 ms
but look: every 33 ms data should be sent, and new data arrives every 32.8 ms ...
WOW
The CPU here (i7-4770, 3.4 GHz) cannot or will not go much faster, as I see less than 1% change in usage
(does that mean something else is limiting it?)
so then we set the sending to the client to just about this rate, so that most sends also carry new data?
and thus every 2nd frame at 60 fps is a fresh one.

All wrong?

Re: How to determine, how often an item needs to be updated

Postby Festus1965 » Sun Mar 31, 2019 2:47 pm

and after this we have to check
Code: Select all
active_block_mgmt_interval = 4.0
abm_interval = 2.0
nodetimer_interval = 0.4
some to read here

when we are also sending new data to the client every 33 ms (this depends on the pipeworks globalstep check; others will follow), why not optimize the other cycles as well?
but for that I need to understand these settings first

one part of this is the setting of a warning for when an ABM run took longer, at serverenvironment.cpp#L1374
I still don't know whether ABM runs are cut off after that time, or whether all ABMs run through and it merely reports the overrun ...

However, I set it to 300 and get silence. But when all ABMs reliably finish within a time frame of 300 ms, or mostly 200 ms,
then waiting much longer than 500 ms for the next run of all ABMs does not make much sense: while waiting - so I think - the list of things the next ABM run has to do keeps growing,
and the earlier they restart, the less each run has to work through.
BUT when data is sent to the client every 33 ms, then with a 200 ms cycle only every 6th packet (200/33) contains the updated ABM data, and only every ~5th screen refresh reflects it - that might be visible.


so which of them does what?
Code: Select all
active_block_mgmt_interval = 4.0
Length of time between active block management cycles
something is managing the active blocks - building a list, or what?

and
Code: Select all
abm_interval = 2.0
Length of time between Active Block Modifier (ABM) execution cycles
the interval at which ABMs get started (and can cause the 'took longer' warning) - but does it mean every 2 seconds, or 2 seconds after the last run finished?
Even if the former is meant - every 2000 ms - the environment only allows 200 ms to do it?

Here I can only guess for now, but setting this to 0.5 would run them every 500 ms; for me they never take longer than 300 ms - and if they did, what would happen?

But if the list is only updated every 4000 ms, why work through it every 2 s when the list is fresh only every 2nd time ...
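For reference, each ABM also brings its own interval and chance on top of the global abm_interval cycle; a minimal sketch of a registration (the label, node name, numbers and empty action are illustrative, not defaults):

```lua
-- Hypothetical ABM: names and numbers are examples only.
minetest.register_abm({
    label = "example ticker",
    nodenames = {"default:dirt"},
    interval = 2.0,  -- seconds between attempts, executed in abm_interval cycles
    chance = 10,     -- on average 1 in 10 matching nodes is processed per cycle
    action = function(pos, node, active_object_count, active_object_count_wider)
        -- per-node work goes here
    end,
})
```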

No, here I am completely in the dark!

So please, someone give me a clear hint.

Re: How to determine, how often an item needs to be updated

Postby rubenwardy » Sun Mar 31, 2019 3:29 pm

The server step should not match the render step. This is completely unnecessary, and will actually result in worse performance. The client already interpolates stuff so things appear smooth. For animating things, make sure you're using move_to rather than set_pos.

Don't go below 0.05 on a server. If you have more than 30 users online, you should probably even increase it to something like 0.2!
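rubenwardy's move_to point can be sketched like this; the object handle and target position are illustrative. move_to with the second argument set to true lets the client interpolate the movement between server steps, while set_pos teleports the object:

```lua
-- obj is assumed to be an ObjectRef, e.g. from inside an entity's on_step.
local target = vector.new(10, 2, -5)  -- example position

-- jumps there instantly; looks jerky when called once per server step
obj:set_pos(target)

-- moves there with client-side interpolation (second arg: continuous)
obj:move_to(target, true)
```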
rubenwardy
Moderator
 
Posts: 5706
Joined: Tue Jun 12, 2012 6:11 pm
GitHub: rubenwardy
In-game: rubenwardy

Re: How to determine, how often an item needs to be updated

Postby Festus1965 » Sun Mar 31, 2019 4:34 pm

rubenwardy wrote:The server step should not match the render step. This is completely unnecessary, and will actually result in worse performance. The client already interpolates stuff so things appear smooth. For animating things, make sure you're using move_to rather than set_pos.

Don't go below 0.05 on a server. If you have more than 30 users online, you should probably even increase it to something like 0.2!
????

I didn't post about rendering up there ...
I didn't ask about programming a mod with set_pos or ...
oh, right, about the server step ... but it is named "dedicated_server_step", as there is also a "server_step"
and I set "dedicated_server_step = 0.033" on my main server; two hours later everything still runs
(before that I had 0.0001 for nearly two months ... and lag under 0.2 as well)

added later: learning about rendering, and why it might be important:
found in the dev wiki
Code: Select all
The main loop: Invokes the client, the server, the environment and the rendering.
...
Render step
Create a list of MapBlocks in rendering range
foreach (MapBlock in rendering range):
    if it is not in front of the player, skip it
    draw the faces of the MapBlock
...
The mesh generator is managed by the Client.
In Minetest 0.3.1, occlusion culling was added to the render step
In Minetest 0.4.3, the list of MapBlocks to be rendered is cached, and sorted by texture
...
but the loop is the best part
Code: Select all
Loop
* Read input (who - both server and client?)
* Run client (also steps the environment)
* Run server
* Update camera (who? we are just after "run server")
* Calculate which block the crosshair is pointing to
* If the player left/right clicked, send a remove/add node command to the server (and here we are at the client, but that should have happened further up, at the 2nd entry)
* Render scene (who - the client now, or the server again?)

a quick idea:
* kick all players with an RTT higher than 20-50 ms? - as they slow down the server, which has to wait for their data

Where is the render_step set, if that is even its real name?
"Create a list of MapBlocks" - and which setting belongs to that?