
DigiView Plug-in Guide


Timestamps and TimeScale usage


Timestamps

DigiView hardware uses very large timestamp counters (> 50 bits). All time is measured relative to the TRIGGER POINT: time before the trigger is represented with negative numbers, and time after the trigger with positive numbers. All timestamp values passed between the application and the plug-in are SIGNED INT64s.

All timestamps sent to your plug-in are guaranteed to be in chronological order. Any given timestamp will be larger than any previously received timestamp.  We require that your plug-in send us chronological data as well.  For convenience, we allow a few exceptions where your plug-in can send us back-to-back fields with the SAME timestamp.
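For instance (a minimal sketch with hypothetical names, not part of the plug-in API), the ordering rule for fields a plug-in sends back reduces to a signed 64-bit compare, with equality allowed for the back-to-back field exception:

```c
#include <stdint.h>

/* Trigger-relative time: negative before the trigger, positive after.
 * Hypothetical alias for the SIGNED INT64 timestamps described above. */
typedef int64_t dv_time;

/* Returns nonzero if ts may follow prev in the plug-in's output stream.
 * Equal timestamps are allowed (the back-to-back field exception). */
int in_order(dv_time prev, dv_time ts)
{
    return ts >= prev;
}
```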

TimeScale

DigiView uses scaled timestamps in its internal data structures to avoid dealing with floating point values, which greatly improves parsing, display and search performance. For example, a 400MHz sample rate results in a 2.5ns resolution. When we store these timestamps, we scale the time to a whole number by multiplying it by 2. In this case, TimeScale would be 2, telling your plug-in that all timestamps (to and from your plug-in) are scaled 2x. This approach lets the entire application (including your plug-ins) work with 64 bit integer time.
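As a minimal illustration (the function name is hypothetical, not part of the plug-in API), scaling the 2.5ns resolution above by a TimeScale of 2 yields a whole number of scaled units:

```c
#include <stdint.h>

/* Illustration only: convert a real-time value to scaled time.
 * With a 2.5ns sample period and TimeScale = 2, the result is the
 * integer 5 -- no fractional part remains. */
int64_t to_scaled(double realtime_ns, int64_t timescale)
{
    return (int64_t)(realtime_ns * (double)timescale);
}
```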
 
Many plug-ins do not care about absolute time. The fields generated by the plug-in usually use the timestamp from a particular event. The FIELD timestamp is simply set equal to the EVENT timestamp; there is no need to scale it. In these cases, you can ignore the fact that the timestamps have been scaled.
 
The only time a plug-in cares about absolute time is when it performs timing analysis or decodes an ASYNC type protocol. In those cases, the plug-in must deal with real time and compensate for the scaled values it receives and returns. You might be tempted to convert each received timestamp to real time by dividing it by the TimeScale; then you could subtract timestamps directly to measure real-time durations, and multiply by the TimeScale again whenever you send a field back. DON'T! This results in a lot of needless floating point math and can have a considerable performance impact.
 
Instead of converting scaled-time timestamps to real time, convert your real-time parameters to scaled time. This is a single integer operation that occurs once, before data streaming starts. Then, during the parse calls, you continue working with scaled numbers. Many field timestamps will be set to some timestamp received from an event (no math required). Anywhere you need a calculated time, you can use integer math to compute a scaled time. This converts all of the math in the parse-time routines to integer operations, and it confines any math at all to the time checks themselves (rather than every received event and every sent field).
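A sketch of that structure (function and variable names are hypothetical; the actual plug-in entry points are defined elsewhere in this guide): real-time parameters are scaled once up front, and the per-event path stays pure integer.

```c
#include <stdint.h>

static int64_t g_timescale;       /* TimeScale supplied by the application */
static int64_t g_scaled_timeout;  /* timeout parameter, pre-scaled         */

/* Runs once before streaming starts: the only place scaling occurs. */
void on_configure(int64_t timescale, int64_t timeout_realtime)
{
    g_timescale      = timescale;
    g_scaled_timeout = timeout_realtime * timescale; /* one integer multiply */
}

/* Runs per event: compares scaled timestamps directly, no conversion. */
int event_timed_out(int64_t new_ts, int64_t old_ts)
{
    return (new_ts - old_ts) > g_scaled_timeout;
}
```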
 
Examples:
- If you have a timeout configuration item, multiply it by the TimeScale before storing it for internal use. To check timestamps for the timeout condition:
if ((newScaledTimestamp - oldScaledTimestamp) > scaledTimeout)   // timed out
 
- If you have a BAUD RATE parameter, you would immediately convert it to a scaled bit duration: ScaledBitTime = (1/baudrate) * TimeScale.
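The baud-rate conversion above can stay in integer math as well. A hedged sketch (it assumes the unscaled time unit is nanoseconds, which the formula leaves implicit; names are hypothetical):

```c
#include <stdint.h>

/* Scaled duration of one bit at the given baud rate.
 * ASSUMPTION: unscaled time is measured in nanoseconds.
 * Multiply before dividing to preserve precision in integer math. */
int64_t scaled_bit_time(int64_t baudrate, int64_t timescale)
{
    return (1000000000LL * timescale) / baudrate;
}
```

At 1 Mbaud with a TimeScale of 2, this gives 2000 scaled units per bit.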
 
TimeScale usage is demonstrated in the AsyncWD example.