I've started http://wiki.linuxcnc.org/cgi-bin/emcinfo.pl?ContribuedHalFiles
for that purpose
can someone tell me when I got disconnected (and/or the last thing you see from me)?
on this channel the last thing I see from you is at 22:25 last night
ok, "several RT filters ..." ?
there was no indication here that I was disconnected - I said more stuff to Jon, then some to Alex this morning
nothing in reply, and I was surprised that Jon appeared to have left his machine connected all night :)
my many disconnects result in a "connection timed out" after about a half hour or so
yeah, normally Chatzilla will see the disconnect and try reconnecting a few times (after various wait periods)
SWPadnos: you got client quit, not timeout
05:18 -!- SWPadnos [n=Me@emc/developer/SWPadnos] has quit [Remote closed the
I think I may have done it myself - at some point, I accidentally hit the "offline" button on the main Mozilla window. I guess Chatzilla didn't reconnect when I hit it again to go back online
[18:12:38] <jepler> http://emergent.unpy.net/index.cgi-files/sandbox/timedeltas.png
this is interesting behavior -- the time differences recorded by timedelta tend to vary in big jumps
is that in CPU clocks or nanoseconds?
rtapi_get_time() so it's nanoseconds and dependent on the rtos to provide a good estimate
so it's probably just granularity in that function
somewhere around 800ns in this case
as I recall, that function took a very long time to execute
I've heard that too
yeah, get_time sucks
3 s32 RO 9203 timedelta.0.time
about 9000 of that is probably get_time
what exactly does timedelta do?
tells you the time between thread runs
jmkasunich__: it's part of the hal-only latency test, scripts/latency-test
calls rtapi_get_time() and does a little bookkeeping
in a hal-scope-able way
so the 25K offset is roughly the period
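For illustration, the bookkeeping timedelta does could be sketched roughly like this (a simplified stand-in, not the real component; rtapi_get_time() is replaced with a fake clock so the sketch is self-contained):

```c
#include <stdint.h>

/* Simplified sketch of timedelta-style bookkeeping: on every thread
 * run, record the time elapsed since the previous run so it can be
 * plotted in halscope.  The real component calls rtapi_get_time();
 * here a fake clock stands in. */
static int64_t fake_now_ns;                       /* pretend time source */
static int64_t get_time_stub(void) { return fake_now_ns; }

typedef struct {
    int64_t last;   /* timestamp of the previous invocation, ns */
    int32_t time;   /* "pin": delta between successive runs, ns */
} timedelta_t;

static void timedelta_update(timedelta_t *td)
{
    int64_t now = get_time_stub();
    td->time = (int32_t)(now - td->last);
    td->last = now;
}
```

With a 25 us thread period, the `time` value would hover around 25000, which is why the plotted values cluster near 25K.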
actually, that granularity leads me to believe that the timing source is the 1.19mumble MHz clock
and the +/- ~1000 variations are latency
SWPadnos: that sounds like a good explanation
jepler: can you zoom in and determine the actual granularity (to a few percent anyway)?
and/or change the rtapi_get_time to readTSC
see if it matches 1/1.19mumble
the dominant value seems to be 25143, with the value just below being 24305 and the one just above is 25891
they aren't always exactly the same -- there are some 25152 and some 25143 as the 'middle value'
[235309.833545] RTAI[sched]: Linux timer freq = 250 (Hz), CPU freq = 1193180 hz.
[235309.833795] RTAI[sched]: timer setup = 838 ns, resched latency = 2514 ns.
1/(25143-24305) = 1.19
ok, so the minor variations are due to code/cache randomness, and the major variation is due to the scheduling tick coming from the 1.193180 MHz clock
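The arithmetic checks out: one tick of the 1.193180 MHz PIT is 10^9 / 1193180 ≈ 838 ns, which is exactly the spacing between the dominant measured values (25143 - 24305 = 838). A tiny sanity check:

```c
/* One PIT tick in nanoseconds: 1e9 / 1193180 ≈ 838.1 ns, matching
 * the ~838 ns spacing between the dominant timedelta values. */
static double pit_tick_ns(void)
{
    return 1e9 / 1193180.0;
}
```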
isn't there an RTAI call for converting clocks to nanoseconds and/or vice versa?
no, I think there's one for converting from ns to interrupt clock times
it would be nice to have such a function in rtapi, and maybe implement rtapi_get_time in terms of it
I thought there was one, but it's very time consuming. I was thinking that calling it once to get a scale factor would be a good thing
actually, it would make sense to have the clock/ns scale in HAL data somewhere - it seems that it's useful in a number of places
kernel 2.6 (and maybe earlier) have cpu_khz
the last time I tried to figure out clock to seconds conversion, I gave up
finding a portable method is tough
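As a non-portable illustration of the cpu_khz approach mentioned above (Linux-specific; real kernel code would read the exported cpu_khz variable, the value used in the test below is just an assumed example):

```c
#include <stdint.h>

/* Convert TSC clocks to nanoseconds given the CPU frequency in kHz:
 *   clocks / (cpu_khz * 1000) seconds  =  clocks * 1e6 / cpu_khz ns
 * 64-bit math keeps the intermediate product from overflowing for
 * realistic deltas. */
static uint64_t clocks_to_ns(uint64_t clocks, uint64_t cpu_khz)
{
    return clocks * 1000000ull / cpu_khz;
}
```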
I remember that
I have something for ns to clocks in hal_parport now
but I got it wrong the first few times so I dunno if it's right now
and it's certainly not accurate in all the available bits
hey - don't forget that I helped to get it wrong :)
you mean you helped me write the opposite conversion function, and like a chump I used that instead of the right one
that means both functions are written, then
oh, then it's just a matter of choosing the correct one
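The two directions really are just reciprocal scalings, which is why it's so easy to pick up the wrong one. A hypothetical pair (freq_khz is an example parameter, not the actual hal_parport code):

```c
#include <stdint.h>

/* The two conversions are reciprocals of each other -- grabbing the
 * wrong one gives plausible-looking but wrong numbers.  freq_khz is
 * the clock rate in kHz (illustrative, not real HAL code). */
static uint64_t ns_to_clocks(uint64_t ns, uint64_t freq_khz)
{
    return ns * freq_khz / 1000000ull;
}

static uint64_t clocks_to_ns(uint64_t clocks, uint64_t freq_khz)
{
    return clocks * 1000000ull / freq_khz;
}
```

Round-tripping a value through both functions should (up to integer truncation) give it back, which is one quick way to check that the right function ended up in the right place.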
jmkasunich__ is now known as jmkasunich
fenn_ is now known as fenn
using the cycle counter (rtapi_get_clocks()) there are a lot more distinct values measured
do a fourier transform on it?
for the purposes of a latency test, rtapi_get_time() doesn't seem so bad -- it even gives the base thread something to do, instead of being nearly empty
hmm, I have some recollection that rtapi_get_time() freezes or doesn't work on some platforms
or was it some specific version of rtapi_get_time()..
I thought it was just said to be slow
I think there are systems where the rtai part is missing, and a kernel call was thrown in
which is preemptible, I think
you mean that on some rtai system, rt_get_cpu_time_ns() is not realtime? or that on some rtlinux system gethrtime() isn't realtime?
that would be really really crappy
rtapi_get_time() used to call do_gettimeofday() a while ago
check rtai_rtapi.c line 560
I remember we tried 4-5 iterations to make it work right, but that didn't work
as it is now it's safe, but takes quite a while on some systems
that's fine with me, as long as it's 5-10us -- similar to the amount of CPU that would be used to actually do step generation
yeah, might be ok in this case
can you run latency-test on sim?
yes but what would the results mean?
maybe that it's good enough for a servo machine?
sim isn't just non-RT
it also doesn't have any I/O capability
rtapi_app can easily get I/O capability with a setuid bit and a call to iopl()
that's why I said earlier: "rtapi_app could be hacked in short order to use whatever delay technique is in fashion now, to do I/O, and refuse creation of more than one thread.."
yep, some "RT=userspace" option is possible, but not the way sim is right now
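A minimal sketch of the iopl() trick (Linux/x86 specific; this is a hedged illustration of how a setuid rtapi_app could gain raw port access, not the actual sim code):

```c
#include <sys/io.h>     /* iopl() -- Linux/x86 only */
#include <unistd.h>     /* setuid(), getuid() */

/* Raise the I/O privilege level so outb()/inb() work from userspace,
 * then drop root immediately.  Returns 0 on success, -1 if the
 * process lacks the privilege (i.e. isn't root / setuid root). */
static int enable_port_io(void)
{
    if (iopl(3) < 0)
        return -1;              /* not privileged */
    if (setuid(getuid()) < 0)
        return -1;              /* couldn't drop root */
    return 0;                   /* outb(0x55, 0x378) would now work */
}
```

As noted above, a real "RT=userspace" option would also need a delay technique and a limit of one thread; this only covers the I/O part.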