Changes between Version 8 and Version 9 of Ticket #9801, comment 2


Timestamp: Oct 4, 2023, 6:49:51 PM
Author: Tom Goddard

• Ticket #9801, comment 2

v8 → v9
The C# reading code allocates an array for the full size of the message and fills it as data is read, so there is no concatenation to slow down the read.

One possibility is that the async read is not getting CPU time very often because Unity is not querying the task often enough, maybe only once per rendering update loop. The code currently reads while data is available and then does a stream.ReadAsync() call. I could test that by always doing a synchronous stream.Read() until the whole message is read. The same problem could be slowing the writing to the socket using WriteAsync(), so I changed that to Write() as a test. With synchronous read and write it took the same amount of time. Printing the synchronous read time gives 1.5 seconds for 31 Mbytes with a synchronous write. That is not too far off the expected time, and the gap could be due to stream buffering that is not optimal for large messages. If I use async read and write, the read time is reported as 1.7 seconds, so only about 15% slower with async. Decoding the message and opening the model took 0.9 seconds after the message read, and encoding the message took 0.8 seconds (not including serializing by concatenating the length, message type, and JSON). These timings account for the ~3 second transfer time. Each of these timings was taken on the second load, so file data and module initializations were already done.
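The synchronous read test described above can be sketched as follows. This is an illustrative sketch, not the actual ticket code: the method name `ReadMessage` and its parameters are hypothetical, and the message length is assumed to come from the length prefix in the message framing.

```csharp
using System.IO;

static byte[] ReadMessage(Stream stream, int msgLength)
{
    // Preallocate the full message buffer so no concatenation is
    // needed while filling it, as the ticket describes.
    byte[] buffer = new byte[msgLength];
    int offset = 0;
    while (offset < msgLength)
    {
        // Stream.Read() may return fewer bytes than requested,
        // so loop until the whole message has arrived.  Blocking
        // here avoids waiting on Unity to poll an async task once
        // per rendering update.
        int n = stream.Read(buffer, offset, msgLength - offset);
        if (n == 0)
            throw new EndOfStreamException(
                "Socket closed before the full message was read.");
        offset += n;
    }
    return buffer;
}
```

An async variant would replace `stream.Read(...)` with `await stream.ReadAsync(...)`, which per the timings above is only about 15% slower, provided the task is polled often enough.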
The slightly slower than expected socket data transfer may be because both headsets are on the same RBVIVR wifi, and streaming data to both headsets simultaneously lowers the wifi bandwidth available to each.
Maybe the large JSON message encoding and decoding is also slow? Yes. JsonUtility converting JSON to a Unity OpenModelMessage instance took 0.6 seconds, and converting the OpenModelMessage to JSON took 0.7 seconds. Decoding the message bytes from UTF-8 to the JSON string took only 0.04 seconds. That 1.3 seconds should be close to 0 if more efficient code were used.
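The timed round trip could look roughly like the sketch below. The field layout of `OpenModelMessage` is hypothetical (the ticket does not show it); only the `Encoding.UTF8` and `JsonUtility` calls reflect the steps being measured.

```csharp
using System;
using System.Text;
using UnityEngine;

// Hypothetical message shape; JsonUtility requires a [Serializable]
// class with public fields.  Large arrays like this are what make
// ToJson/FromJson dominate the timings.
[Serializable]
public class OpenModelMessage
{
    public string modelName;
    public float[] vertices;
}

public static class MessageCodec
{
    public static OpenModelMessage Decode(byte[] messageBytes)
    {
        // UTF-8 decode: fast (~0.04 s in the ticket's timings)...
        string json = Encoding.UTF8.GetString(messageBytes);
        // ...while JsonUtility.FromJson dominates (~0.6 s).
        return JsonUtility.FromJson<OpenModelMessage>(json);
    }

    public static byte[] Encode(OpenModelMessage msg)
    {
        // JsonUtility.ToJson took ~0.7 s for the large message.
        string json = JsonUtility.ToJson(msg);
        return Encoding.UTF8.GetBytes(json);
    }
}
```

A binary encoding of the numeric arrays, or a faster JSON serializer, would be the obvious way to reclaim most of that 1.3 seconds.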