Hi,
I'm planning to use streams, so I'm running some tests while developing the project. My current MultiChain setup is 2 nodes, both required to mine, so a block cannot be validated unless both nodes are up and running (that may not be mandatory for the project itself, but it is how I came across the 'issue' I may have).
Then I simulated 'a node crash'. With only one node left running, no more blocks are validated, so items published to the stream after the 'crash' stay unconfirmed and have no 'blocktime'. When I relaunch the second node, the pending items start getting validated again. Everything looks normal, except that several blocks are validated at the same moment, so I end up with successive items in the stream carrying the exact same 'blocktime'.
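For reference, here is roughly how I inspect the items after the restart. This is a minimal sketch over the node's JSON-RPC interface; the RPC URL, credentials and stream name ("timeseries") are placeholders for my local test chain, not anything standard:

```python
import requests

# Placeholder connection details for my local test chain (see multichain.conf)
RPC_URL = "http://127.0.0.1:8570"
RPC_AUTH = ("multichainrpc", "rpc-password")

def rpc(method, *params):
    """Minimal JSON-RPC helper against the MultiChain node."""
    payload = {"method": method, "params": list(params), "id": 1}
    resp = requests.post(RPC_URL, json=payload, auth=RPC_AUTH)
    resp.raise_for_status()
    return resp.json()["result"]

# Last 20 items of the test stream, oldest first
items = rpc("liststreamitems", "timeseries", False, 20, -20)
for item in items:
    # Unconfirmed items have no blocktime yet; after the restart,
    # several consecutive confirmed items show the same blocktime.
    print(item["txid"][:16], item.get("confirmations"), item.get("blocktime"))
```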
Hence my question/note/remark: assuming streams can be used for time-based series (values submitted on a regular basis, every X minutes), it seems to me, both in this situation and more generally, that 'blocktime' cannot be used to deduce the exact time a value was submitted to the stream; it does not reflect that 'for sure'. If one has to fetch 'timereceived' from the transaction's data to retrieve the submission timestamp, how can one extract, as fast as a single 'liststreamkeyitems' call, the last 100 values (for example) from a stream together with their effective 'submitted' time, rather than the time at which they were validated into the chain?
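The workaround I'm considering, in case someone can confirm it or suggest something better: embed the submitter's own timestamp in the published JSON payload, so that a single 'liststreamkeyitems' call already returns it alongside each value. A sketch below, reusing the same placeholder RPC helper; the stream/key names are made up and I'm assuming the MultiChain 2.x JSON-data form of 'publish':

```python
import time
import requests

RPC_URL = "http://127.0.0.1:8570"             # placeholder RPC endpoint
RPC_AUTH = ("multichainrpc", "rpc-password")  # placeholder credentials

def rpc(method, *params):
    payload = {"method": method, "params": list(params), "id": 1}
    resp = requests.post(RPC_URL, json=payload, auth=RPC_AUTH)
    resp.raise_for_status()
    return resp.json()["result"]

# Publish: store the submission timestamp inside the item itself,
# so it does not depend on when the item ends up in a block.
def publish_value(stream, key, value):
    payload = {"json": {"value": value, "submitted": int(time.time())}}
    return rpc("publish", stream, key, payload)

# Retrieve: one liststreamkeyitems call returns the last 100 items for the key,
# and the submitted time is read from the payload rather than from blocktime.
def last_100_with_submitted_time(stream, key):
    items = rpc("liststreamkeyitems", stream, key, False, 100, -100)
    return [(i["data"]["json"]["value"], i["data"]["json"]["submitted"])
            for i in items]
```

The obvious downside is that the timestamp is self-declared by the publisher, so it is only as trustworthy as the publishing node, whereas 'blocktime' at least comes from consensus.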
Let me know if I'm missing something here. Any thoughts welcome. Thanks.