I'm getting a compile error on trunk.
Anyone else seeing it? rs274ngc.hh
I believe that I tested the stuff I added to that file recently.
I think lerman also made a commit to that file. you may have a conflict now after updating.
If there's a "<<<<" in the file then that's what happened.
phrase not found
ok maybe you're right then
I jumped to the conclusion that this was the cause because the compile farm was OK
EMC: 03jepler 07TRUNK * 10emc2/src/emc/rs274ngc/rs274ngc.hh: apparently that usage is OK on older machines, but not on Hardy Heron's gcc
there, that should fix it
Ah. thanks guy.
probably no one here... :-) But here it goes anyway...
I understand that there is a shared mem i/f between the emctask and the rt motion stuff
this shared mem is basically a struct being updated once per servo loop
assuming that the trajectory planner spits out info to this buffer at the _same_ rate as the servo loop
wouldn't this be a problem?
just an example: servo loop rate is 1 ms
trajectory is 1 ms as well
there is a buffer though (inside RT)
since emctask is a userspace prog it cannot guarantee that it puts stuff in the struct in a timely manner
for motion stuff
yep, i know the buffer (some 2000), but during startup ?
or during end for that matter?
I don't follow
my bad, just rushing into it... :)
ok, as I see it, we have emctask comm with motion using the shmem i/f
please correct me if im wrong... :)
emctask takes care of reading the g-code file
using the rs274 intepreter
emctask also takes care of the trajectory computations; this can be for example at 1 ms intervals
emctask sends motion commands to the rt motion subsystem
through the shmem if
emctask does no such thing as traj computations
ok, so i misunderstood something... :)
emctask calls the interpreter once per cycle
interpreter converts the g-code to canonical commands
which go on a buffer
emctask takes stuff from that buffer and passes it to motion
isn't the interpreter part of emctask?
it gets linked together
but different logical parts
AH! So the canonical commands go to the motion prog?
so you get a command like: linear move to end location
(the move itself is lots longer than 1 ms)
so the rt subsystem is actually also doing a lot of math computations?
ok, that's where I went wrong then... In my mind I would have put all the math stuff in emctask and then made a big buffer of shmem and make the rt stuff only do simple things
That's what happens when you guess instead of reading the code... :)
pmbdk: what you say has some advantages
but it also bears some problems
for example if you want to do threading
or any other synchronized movements
the sync needs to be RT, and motion needs to follow an external source (not time anymore)
yes, i see...
what's the external source?
encoder on the spindle
another painful complication is feed override
we have 3 sources of feed override
FO in the GUI
adaptive FO while machining
and there was something else that I forget right now
these are some reasons why it's hard to connect USB thingies to emc2
ok, i can see why the encoder following makes sense; if this is not available time is default i guess? (as you can see i'm not an emc expert :-))
hopefully though I will be able to do so anyway... :)
it depends on the move
the only alternative I have is to do everything myself; I did this 20 years ago in fortran and don't want to do it again... :)
for encoder synced moves there are special g-codes
pmbdk: what do you want to accomplish?
well, basically i want to control my hexapod with emc...
ok, sounds good
so you probably want to look at kins first
however i don't really need the rt stuff
well, i already did the kinematics and my controller works fine this way
so what I need is actually a way to interface with my user-space driver
since my user-space driver buffers everything there is no rt requirements
the electronics on the hexapod takes care of safety critical stuff
so in principle i need to:
1) Send movement commands to each of the axes every ms; my driver does all the computations so I only need x/y/z/a/b/c from emc
2) sync time with emc somehow (so emc does not send me too many movement commands)
3) interface to my userspace driver from emc in order to control i/o (again no rt requirements)
well.. you can do more things..
and thats it... :)
I would connect at canon level
you get GUI + task + interp
And canon level is where? :-)
I guess it needs to be after the rt motion computations
do you have the sources somewhere?
canon is still in userlevel
no motion computations
look at emc2/src/emc/task/emccanon.hh
i would _really_ like to have motion do all the work so i got 1 ms samples out.
yes, i looked there first; but then i need to do all the motion computations myself... Not that it is that difficult, but still... No need to invent the wheel... :)
might be easier to just get the code from src/emc/motion/tp*
i was hoping it all could be done by using emc compiled in sim mode
well.. theoretically you can do it, but I don't see a way for syncing stuff
in sim mode everything runs fine, so I "just" need to interface to the three points mentioned above
yes, the syncing also bothers me
yes, but in sim mode you don't get 1ms outputs
on the other hand; currently it must be done also in sim mode
i'm trying to figure out myself what i _do_ get in sim mode... :)
if I look at motion.c
the very last part is enclosed in #ifndef RTAPI
is this the sim part?
here it seems that it simply waits ~10 ms and then goes on
this should be relatively easy to change in order to do external sync
with my userspace driver
but I don't know if I then miss a lot of stuff
RTAPI stuff gets compiled both for sim and RT
it just works a bit differently
otoh on sim you don't have any timing certainties
it can take 1ms this time, and 10 the next time
well, that's not a problem
since I need to buffer everything at the user-space driver anyway
this obviously means that i get a latency of a few hundred ms between actual movement and computed movement, but this is only critical when something odd happens
at these times I need to do some manual tweaking anyway
so what you are saying is that the #ifndef RTAPI part of motion.c is not for sim mode?
hmmm, that was ambiguous... :-)
let me ask another way... :-) Is the #ifndef RTAPI part of motion.c used for sim mode?
ifndef is if not defined
RTAPI gets defined even if in sim mode
ok, so the last part of motion.c is actually not used in sim mode... Then I need to figure out how the syncing (or at least timing) is done in sim mode
it's done pretty much like in RT
there is a HAL thread (that runs like a pthread iirc)
and there are functions added to that thread
the functions get executed in order
so you would have to add your function to that thread
I'd suggest you start reading the HAL docs first
ok, i'll do that... :)
on a totally unrelated matter: Does AXIS only get the current position or does it get all positions?
what do you mean?
So does it need to interpolate between positions it gets, or does it have an interface to the actual traversed path?
I'm assuming that the AXIS GUI does not have an interface to the RT modules as such
it only talks to task
it gets the current position from motion
and task only knows about the end-points of all segments of a trajectory
since motion does the math stuff
So in principle AXIS does not know the exact path; only at the points where it interacted with task?
I'm probably wrong here... :)
there is a periodic update
where task gets the current position from motion
and AXIS uses that to put the tooltip there
AXIS otoh also uses the same interpreter to generate the preview of the part (converting it to lines/arcs, etc and displaying it before motion actually happened)
ah... :-) That would have been my next question how that magic happened... :)
But in reality AXIS does not know the actual traversed path completely?
but it doesn't know how it's broken down
it knows the tooltip will move from 0,0,0 to 10,10,10
but it only knows there's a line from start to stop
Yes, from the interpreter it knows this; but during milling it will also need the actual path (I assume); this is only known by motion (I assume) at the actual time-point.
I assume that there can be a difference from computed and actual path?
there is a periodic update
in the status buffer
motion puts the current position there
and AXIS polls it from there
it might be interesting to try if that works good enough in your case too
pmbdk: try getting used to emc2/src/emc/usr_intf/halui.cc
that's the simplest connection to emc2 using NML channels
The reason why I'm asking is that once I'm done implementing my hexapod stuff (haha, that'll be the day :-)) i'd like to try updating AXIS to visualize the actual milled part (not just the path)
pmbdk: look at this if you haven't yet: http://axis.unpythonic.net/01169689661
though it isn't true 3d, more like 2.5d
Cool! :-) Is this based on simple height "rods" you simply cut off?
Because for hexapods that will not work... :-(
Any chance this will be part of newer EMC versions?
uh.. dunno, i think it would be a cool addition to vismach
But it sure looks impressive! .-)
it needs to be modified to show the part being changed, instead of just the end result
yes, that would be even more impressive...
My idea was to use GTS and simply remove material based on mesh differences
But I have no idea about the performance of such a thing
alex_joni: I don't think this would be fast enough; I would sure miss a lot of commanded positions....
on the other hand i could add a small buffer containing the last N points...
and then use nml to get it...
hmm, interesting idea....
although from what i've read it is not easy to add new nml channels...?
pmbdk: why can't you just write a hal comp for it?
or use emc's kinematics layer
(or both if your hardware is weird enough)
fenn: I don't know enough about hal components to know what i'm talking about... :-) Can a hal component get access to the x/y/z coordinates and i/o pins at the same time?
i'm currently reading up on the hal users guide
and are hal components also used in simulated setups?
I would really like to run it in sim mode as it is then much easier to use on non-rt patched kernels
yes, sim uses hal including "realtime" components. no realtime hardware drivers, though.
so you can have 'and2' but not 'hal_parport'
ok I can see from the first section in the hal user manual that iocontrol and motion interacts with hal...
what "interacts" means is then what i need to read up on... .)
It means that they read hal inputs and drive hal outputs. for instance, motion drives axis.0.motor-pos-cmd with a commanded motor position, and reads axis.1.motor-pos-fb and uses that value to e.g., determine if there is a "following error" condition.
so, since I need to interface to both motion and io could I then simply (although I don't know if it's simple) write a hal component to get commanded motion and i/o from the gui and then somehow pass this on to a user-space driver?
and do this in sim mode... :)
you create a component that has whatever assortment of inputs you need, then you hook them together using hal signals. For instance, if you start with the sim configuration, it has already hooked the pin axis.1.motor-pos-fb to the signal Ypos. If you want the Y motor position, you would also hook Ypos to yourcomp.y-position-input (or whatever you decided to call the pin)
(you hook things together in the "hal" files using the "link" or "net" commands)
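A hal-file fragment along the lines jepler describes might look like this; "yourcomp" and its pin name are hypothetical, and the exact signal names depend on the configuration you start from:

```
# load the hypothetical component and add its function to the servo thread
loadrt yourcomp
addf yourcomp.update servo-thread

# the sim config already hooks axis.1.motor-pos-fb to the signal Ypos;
# hooking the same signal to your pin gives you the Y motor position
net Ypos => yourcomp.y-position-input
```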
how is hardy-i386 running?
ok, will have to look more into hal components...
is there an already existing hal component which works under sim mode and still have access to io and motion?
just so i can take a look at the source ...
pmbdk: as jepler said, you don't have to have access to io and motion
you export hal pins from your component
and those get connected to some motion pins and to some io pins
(but for this to make sense you need to be accustomed to HAL)
as long as I know that it can be done, no problem... Then I just need to read up on things and try it out
so this would work in sim mode as well?
yes, but not as "well"
the biggest problem for me now is that i don't know where to start; i think that i have now gotten four answers to the same question... :)
alex_joni: not as "well"?
yes, RT works better than sim
but if the hal approach is the "best" (whatever that is) approach I will go down that road...
alex_joni: ah, ok... For me it's not a problem that i get some latency; i will _need_ all positions though... So
if sim mode means for example that hal pins would not received all position commands (i don't know the correct hal nomenclature) then I would need to go in another direction
with a "rt" component you will get all positions, no matter whether you're running with true or simulated realtime. with a "userspace" component, you will never be guaranteed to get all positions.
such as hacking a few hooks in the c code where I know the positions are and the io
how does the external device know when to command the positions? are they timestamped or something?
jepler: Then the hal approach will not work... :-(
at least in sim mode
I setup an internal counter in the microcontroller which i then refer to when I communicate with it from the PC
seems like you need a different kind of 'simulator' that runs the servo thread ahead, generating positions and timestamps, until a queue fills
the count frequency is configurable, but currently set to 1 ms update rate
yes, I will need to sync the emc sim to my external userspace driver somehow... I was hoping that this could be done simply in the simulated mode for example by slowing down the simulator at a central point... For example in motion or wherever the time is taken care of in the simulated mode
I think you want a different scheme - "hurry up and wait" - generate positions as fast as possible until the queue is full, then try to keep it full.
(maybe I'm just saying the obvious things)
yes, that would nice... :)
but I think I still need some guidance on where I need to hook things up in the source...
I guess HAL is out the question?
I think lots of people are interested in this style of running, where an external device is needed but realtime is not. I hope you can find a nice solution.
cradek: me too :)
cradek: another option is using sampler
[13:37:51] <jepler> http://emergent.unpy.net/files/sandbox/positionlog.hal http://emergent.unpy.net/files/sandbox/positionlog-ini.patch
that's what I was just toying with
Actually before I knew much other than the name of emc I thought that this was already an integral part of emc
from listening to jepler it sounds like you can't just use hal/sampler/etc if you don't have realtime
right.. I wouldn't trust it on sim
pmbdk: no, emc doesn't currently need/use these kinds of external devices. they are very limiting.
cradek: you can modify the way rtapi_app tries to approximate the requested thread timing
cradek: I mean, one could
cradek: for instance you could modify rtapi_app to run as fast as possible, and the sampler "realtime component" to sleep when the queue is full.
how about block instead of sleep?
and halt the whole thread?
yes, you would have to halt the whole (only) thread
anyway, once you've modified your configuration as shown above, you can run 'halsampler' and it will spew 1ms-sampled positions onto the terminal. Or you could run 'halsampler|yourapp' to put that on yourapp's standard input, or popen("halsampler") inside yourapp
jepler: this sounds too easy... :-)
but do I understand you correctly that this will only work with RT?
no, I was running it on a machine with simulated realtime
unless some modifications are made in sim mode
but there's no guarantee that it won't fall behind
I will have to try this at home! :)
so it depends what you mean by "work"
"work" is "work when I test it" .-)
No, obviously I would at least need to have an indicator that I fell behind...
I believe it does what I said, but there's no guarantee that the average rate at which 1ms-sampled positions are given to you is not (say) 1.001ms
ok, so I still need to sync it somehow of course....
is there any way to send messages to the timing "subsystem" somehow?
there's certainly no "try harder" button
do you know where the timing for sim mode is done?
in src/rtapi/rtapi_sim.c there is a function sim_rtapi_run_threads. it calls the function maybe_sleep.
so somehow I will need to change that... And to add an interface of some sorts to this....
Piece of cake... .-)
probably a stupid question based on my limited knowledge of HAL: Wouldn't it be possible in the HAL component (maybe a customized version of it) to send something back in order to slow the timing down whenever say half the buffer size is reached, and then back again when it goes the other way?
So something like changing the timing from ~0 ms to ~2 ms whenever I want to run at 1 ms
so my question is really: Can I change something in order for the hal component to send something back to the sim_rtapi functions
not as it is
you would have to add some "delay" parameter which gets used between thread executions
alex_joni: yes, something like that...
I _really_ have to read up on the hal components... :)
alex_joni: are you traveling?
skunkworks_: not atm
will be next week
ok - you wanted me to remind you. You had some links on the puma iirc
[14:14:31] <jepler> http://emergent.unpy.net/files/sandbox/timer_control.patch
but with this patch you've lost all relationship to external time, except if it's enforced by whatever is reading from halsampler in userspace
jepler: Cool! :-)
Can you leave it in your sandbox for a few hours? :-)
pmbdk: oh stuff has stayed in there for *years*
I guess the external time relationship is lost in any case in sim mode...
as long as I know that the samples are equidistant from emc's point of view...
In any case this is a great starting point for me!
yes; in sampler.c you're getting 1ms samples. in halsampler.c you are getting all those 1ms samples unless it prints "overrun".
(well, not necessarily 1ms -- whatever the period is of the thread you add the sampler to)
jepler: yep... :-)
thanks alot! :-)
i can't wait to get home a try it out...
dammit i can't type...
btw if anyone is interested in seeing a crappy video of the hexapod doing absolutely nothing other than moving, I can upload it somewhere... :-)
ok, uploaded it...
I should mention that this is a crappy video from the first time we tested it after putting it upside down
the video is taken using a mobile phone, so the quality is really lousy
and the video format is some weird 3gp which will probably not run off-the-shelf on linux...
quicktime is needed afaik
[14:53:47] <pmbdk> http://bohn-hansen.dk/usenet/MOV00020.3gp
shows fine here with 8.04
that looks very well made
ok, I have never actually tried it on linux... I figured it was some proprietary format which only QT knew anything about... :-)
qt as in quicktime, not the toolkit
fenn: Thanks... :-)
It did take some time to do... But it's good fun... :-)
I can't take credit for everything, though; one of my colleagues did most of the mechanics
are you using emc to run it?
nope, but hopefully I will be soon... :-)
neat, that looks great
I "just" need to add an interface for non-realtime machines to emc... :-)
with some help from you guys... :-)
well you don't NEED to. running in realtime would work just fine and guarantee you have no queue starvation ever.
that is pretty cool. - what are you running with it now?
pmbdk: Why do you think it is easier to modify EMC2 to the needs of your machine than to modify your machine to the ordinary signals provided by EMC for such a machine?
cradek: Well, one of my design goals was to make the interface to the PC very limited... I really don't like to have a modern PC with its complexities running anything dangerous... And believe me, this machine _is_ dangerous... :-)
i hope you don't think your ad-hoc software is any safer
I also needed electronics doing fast pulse generation, i.e. > 400 kHz, and although I could probably have used something emc supported, my original design goal of separating the PC and the machine left me with some custom electronics
fenn: Actually I'
fenn: Actually I'm quite sure that my customized firmware (which runs the critical part of the machine) is safer than anything running on a pc... :-) Simply due to its simplicity
what do you mean ad-hoc software?
perhaps but you still shouldn't rely on anything electronic for safety
fenn: Well, at some points you need electronic sensors... Mechanics cannot do everything... :-)
cradek: i'm referring to the fact that his firmware has probably not been tested very thoroughly
I bet my magnetic contact on the door to the room - which is directly connected to the power - is safer than anything mechanical I could come up with... :-)
yep, that's electrical though, not electronic
fenn: The firmware has been tested quite thoroughly; even though emc might have a lot of users and have been tested by many persons, it is extremely large and complex, and for this reason alone I think I can safely say that my very simple firmware is better tested than emc (no offense :-)). In any case the operating system also has an impact, and I know for a fact that during the test of the...
...controller two of my PC's died and the controller still runs with no problems
This controller does all the kinematics and stepper driving?
If so, what information do you send it to move in X from 0 to 1?
The controller is very simple; it simply takes commands from the PC and steps the servos, as well as controlling the I/O and doing safety-critical checks. In principle I could probably have used some of emc's supported boards, but at the time we began our development we had no idea what emc was... :-)
The PC implements the kinematics (this runs in something like 3x realtime with 1 ms sample rate so I would have needed a PC on the controller anyway to implement this on the controller)
What I'm trying to get at is what language does your controller use to get information from the PC?
What do the commands look like?
rayh: Oh, that's a home-made and extremely simple protocol.
No problem. I'm just asking how my PC would tell your controller to move from X0 to X1 at a feedrate of 45mm/min.
something like <start of frame> N x<pulses per next timestep><checksum>
something even simpler for IO
rayh: This is the task of my user-space driver
And you would send <start of frame> N x<pulses per next timestep><checksum> for each motor?
One of the great things about separating the stuff is also that it is quite difficult to do anything harmful (hexapods have many more degrees of freedom in the harmful area :-)) since the controller has limits on the extents
No, the N x is for N axes. I actually have 7 axes... :-)
I'm not asking why, I'm asking how?
So the x<pulses per next timestep> is the X command?
So for each timepoint (say for example 1 ms) I send 7 pulse integers within a single frame
So something like <start of frame> <pulses per next timestep for axis 1>...<pulses per next timestep for axis 7> <checksum>
at the controller it is buffered
But each of <pulses per next timestep for axis #> is simply a command to step a single motor?
so probably the most simple language to do such stuff... :-)
<start of frame> also indicates what kind of data comes after
and the sync is done by sending back ack from the controller
it really couldn't be simpler...
So there is no kinematics translation from individual motors to a Cartesian system down on that controller?
in principle not... I did however implement something simple in order to test for obviously wrong axis motion
it _is_ still possible to get some weird motion, but not likely though
and by weird I mean weird as in harming one or several of the legs
I did this once; don't want to do it again... :)
Well, during testing of the legs I simply sent out pulses to see how it looked
So EMC2 would need to run a hexapod kinematic module and then somehow send the resulting number of steps for each motor to your controller.
there were no checks of anything and the magnetic inductors were not used... :)
I know a bit about hexapod singularities. They can be scary.
at some point my colleague needed to do some tests and just spawned the very limited test gui and entered some wrong number
I've done that.
it took something like 400 ms before the alarm on the servo went off and closed down... A hexapod with 6 degrees of freedom is not fun when suddenly one of the degrees is missing and the other 5 servos still run
instead of just shutting down the power my colleague actually jumped in between the legs to save the leg...
Believe me, THAT was scary!
i still dont understand what went wrong though
VERY bad idea.... Luckily he wasn't harmed...
well first of all at that point the hexapod was upside down wrt. the video
The original K&T hexapod would occasionally pick the wrong kinematic solution and rip the balls out of a ballnut.
when you only have 5 working servos left and the sixth one is drifting the legs can at some point touch one of the other legs
this can of course also happen with all 6 servos on...
rayh: how was that fixed? pick the solution closest to the previous solution?
That is how we do it.
rayh: Heh... :-) I have implemented sanity checks on virtually everything in the userspace driver and we have tested it a lot, so hopefully this will not happen... But who knows, when some day we need to change the geometry slightly... :-)
But we don't have any way of preventing the platform from moving into a singularity.
or banging into a wall :)
(different kind of singularity :)
fenn: Tried that too... :-)
That is the walking version of a hexapod.
But pmbdk aren't you thinking of replacing that user space driver with something related to EMC2?
Well you don't need a walking version to do that... .-)
EMC2 has good joint or axis travel limits in software.
What we don't have is a complex work envelope definition.
For a trivial kinematic machine this might exclude a vice or other work holding device.
For a puma type robot it might be the reach and exclude the pedestal.
For a Hexapod it would be all the space inside the singularities.
I should mention that the hexapod we currently have has something like 96 calibration parameters, so it's not really a generic hexapod... :-)
The geometry you use is very similar to the NIST cable geometry.
Well I guess all hexapods have somewhat equivalent geometries... :-)
Have you tried the programs that jepler wrote for you?
The most significant change is in the joints I guess
nope, im not home yet...
but I really look forward to it...
By joints you mean the swivels that you made for the ends of the arms?
I thought they were named joints... :-)
I'm sure they are. In the EMC literature we refer to joint as the entire arm including motor, leadscrew, and ends.
i better leave now; thanks for all the help guys!
catch you later.
that's a big joint...
will probably be back this evening (gmt)
does the gmt even have "evening"?
no, they only have morning there :)
funny, i'm reading an article titled "8 am of the world"
update should be out for rtai
Yep and I just installed it on one of the fest boxes without any problems.
thanks for the confirmation