#emc-devel | Logs for 2006-07-01

[03:36:54] <lilo> [Global Notice] Hi all. Please be aware that we're still getting people using that DCC exploit to knock people offline. If there's a chance you're on an affected router, please connect to freenode via chat.freenode.net port 8001, rather than port 6667 (the default). Thanks!
[05:05:27] <SWPadnos> SWPadnos is now known as SWP_Away
[13:05:40] <lilo> [Global Notice] Hi all. Please be aware that there are still kiddies using the D C C exploit on freenode. If you've been knocked off freenode, or if you think you might be using a vulnerable router, please connect to the network using port 8001, rather than 6667 (the default).
[13:06:21] <lilo> [Global Notice] Apologies for the inconvenience, and thank you for using freenode.
[15:31:16] <SWP_Away> SWP_Away is now known as SWPadnos
[18:59:06] <jepler> jmkasunich: are you around this weekend?
[19:30:25] <jmkasunich> on and off
[20:07:00] <jepler> jmkasunich: I've been thinking about the problem of knowing when a userspace hal component is ready
[20:07:09] <jepler> jmkasunich: why not have a flag in the hal_comp_t which is set to 1 after it's done
[20:07:28] <jepler> set by a new hal API call
[20:07:53] <jepler> so halcmd just waits for the flag to be set, polling every 1ms or 10ms or whatever
[20:08:01] <jepler> I have a patch ready
[20:10:51] <jepler> http://emergent.unpy.net/files/sandbox/hal-usercomp-ready.patch
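(A minimal sketch of what the component side of that patch implies, assuming the new API call is named hal_ready() and simply sets the flag in hal_comp_t as described above; pin and parameter export is elided since only the ordering matters here:)

    /* Hypothetical userspace component using the proposed ready flag.
     * hal_init()/hal_exit() are the existing HAL calls; hal_ready() is
     * the new call being discussed, shown here only as a sketch. */
    #include <stdio.h>
    #include "hal.h"

    int main(void)
    {
        int comp_id = hal_init("mycomp");   /* "mycomp" is a made-up name */
        if (comp_id < 0) {
            fprintf(stderr, "hal_init failed\n");
            return 1;
        }

        /* ... export all pins and parameters here ... */

        /* proposed call: marks the component ready, so that
         * "halcmd loadusr -W mycomp" stops polling and continues */
        hal_ready(comp_id);

        /* ... main loop doing the component's real work ... */

        hal_exit(comp_id);
        return 0;
    }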
[20:12:46] <jmkasunich> sorry, was away for a bit
[20:12:47] <jmkasunich> looking now
[20:13:24] <jepler> np
[20:13:42] <jmkasunich> uppercase W means wait till the program says it's ready?
[20:13:45] <jepler> yes
[20:13:58] <jmkasunich> and lowercase w means wait till it finishes executing...
[20:14:04] <jmkasunich> what does n mean?
[20:14:17] <jepler> it is the name of the component, if it's different from the name of the executable
[20:14:39] <jmkasunich> you just added that?
[20:14:41] <jepler> yes
[20:14:50] <jepler> the name of the new component wasn't ever needed before
[20:14:57] <jmkasunich> ah
[20:14:58] <jepler> -w and -i were already there
[20:15:25] <jmkasunich> -i is ignore errors, right?
[20:15:28] <jepler> right
[20:15:48] <jmkasunich> for some reason I thought it was -k for "keep going", as on halcmd itself
[20:16:28] <jepler> seems to be -i
[20:16:36] <jepler> hm, a comment refers to 'loadusr -r' but I don't see any documentation for that
[20:16:46] <jepler> any implementation either
[20:16:56] <jmkasunich> heh, I guess loadusr needs to be reviewed
[20:17:00] <jepler> how do you feel about the general idea?
[20:17:05] <jmkasunich> I like it
[20:17:15] <jmkasunich> the -n part is kinda un-pretty
[20:17:27] <jepler> yeah a bit
[20:17:33] <jepler> but you can't know otherwise
[20:17:48] <jmkasunich> a parent process knows pretty much nothing about its child?
[20:17:53] <jmkasunich> doesn't it know the pid?
[20:17:58] <jepler> it does know the pid
[20:18:29] <jmkasunich> so hal_ready could change the flag from 0 to pid maybe?
[20:18:31] <jepler> but it can't know what string the child will pass to hal_init()
[20:19:02] <jepler> hm -- so the algorithm becomes, wait for ready to equal the child pid?
[20:19:11] <jmkasunich> yeah, but that sucks too
[20:19:25] <jepler> it means you can't call a script which actually runs the component, for instance
[20:19:27] <jmkasunich> cause you don't know _which_ ready is gonna become equal to the pid
[20:19:44] <jmkasunich> good point, so that's not gonna fly
[20:20:10] <jmkasunich> hmm, how about decoupling the loadrt and the -W
[20:20:19] <jepler> waitcomp zzz ?
[20:20:20] <jmkasunich> add a new command "waitusr <name>"
[20:20:29] <jmkasunich> yeah, something like that
[20:20:53] <jmkasunich> so you could actually have several loadusr commands, then do the waits afterward, before you actually hook things to the signals
[20:21:13] <jmkasunich> the loads and inits would actually run in parallel
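(A rough sketch of what a decoupled "waitusr <name>" could look like inside halcmd, assuming the ready flag from the patch above; the struct and the lookup helper here are simplified stand-ins, not the real hal_priv.h definitions:)

    /* Hypothetical "waitusr <name>" for halcmd: poll the proposed ready
     * flag until the named component reports it has finished its setup. */
    #include <unistd.h>            /* usleep() */
    #include <stddef.h>            /* NULL */

    typedef struct {
        int ready;                 /* proposed flag, 0 until hal_ready() */
        /* ... existing hal_comp_t fields ... */
    } hal_comp_t;

    hal_comp_t *find_comp_by_name(const char *name);   /* stand-in lookup */

    int do_waitusr(const char *name)
    {
        for (;;) {
            hal_comp_t *comp = find_comp_by_name(name);
            if (comp != NULL && comp->ready)
                return 0;          /* component exists and is ready */
            usleep(10000);         /* poll every 10 ms, as suggested above */
        }
    }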
[20:21:45] <jmkasunich> hmm, that has issues too
[20:22:00] <jmkasunich> actually, it has the same issue that loadusr -n <name> does
[20:22:02] <jepler> what issues?
[20:22:27] <jmkasunich> if you might want to have more than one instance of a user component, the convention is to append the pid onto the name
[20:22:38] <jmkasunich> which means a script won't know the full name
[20:22:47] <jepler> yeah this kinda assumes the names do not vary
[20:22:57] <jmkasunich> to date, the only "component" that does that is halcmd (I think)
[20:23:09] <jmkasunich> actually, I'm not sure, but I think halvcp might also do it
[20:23:46] <jepler> before this I wrote the shell script 'wait-for-pin' which uses a pin name, and I thought waiting on a component was better
[20:23:50] <jepler> maybe it's not
[20:25:17] <jepler> with wait-for-pin you have to know the order the pins are created in, and wait for the last one .. which sucks
[20:25:23] <jmkasunich> the more I think about it, the more I like the approach in your patch - except for the -n part
[20:25:59] <jmkasunich> what we really want is the hal_become_daemon() call, which I decided was too tough to implement at the fest
[20:26:19] <jmkasunich> that puts the decision where it belongs, in the component itself
[20:26:53] <jmkasunich> (why should the hal file author need to know that component A needs to be waited for, and component B doesn't, etc... or what pin compA exports last, etc)
[20:27:20] <jepler> what component *wouldn't* you want to wait for?
[20:27:39] <jmkasunich> ones that become daemons
[20:27:52] <jmkasunich> (if they don't actually become daemons, but instead just run in the background)
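(hal_become_daemon() did not exist at this point; a plausible sketch of the idea is simply "fork after setup and let the parent exit", so a plain "loadusr -w" returns as soon as the component is ready while the child keeps running, and the decision stays inside the component:)

    /* Hypothetical hal_become_daemon(): called by a component after it has
     * exported everything.  The parent exits, so whoever launched the
     * component (e.g. "halcmd loadusr -w") sees it "finish" immediately,
     * while the child continues running in the background. */
    #include <sys/types.h>
    #include <unistd.h>

    int hal_become_daemon(void)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;             /* fork failed */
        if (pid > 0)
            _exit(0);              /* parent: setup is done, report success */
        setsid();                  /* child: detach and keep running */
        return 0;
    }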
[20:29:07] <jmkasunich> user space comps need more thought
[20:29:17] <jmkasunich> shutting them down is also an interesting task
[20:29:29] <jmkasunich> for rt comps, you can do "unloadrt all"
[20:29:44] <jmkasunich> for user comps, you have to figure out their pids and kill them
[20:30:10] <jepler> another arrow pointing towards a requirement to put a PID in the comp structure
[20:30:16] <jmkasunich> yeah
[20:30:28] <jmkasunich> then we could implement "unloadusr <comp_name>"
[20:30:40] <jmkasunich> look up name, use the stored pid to kill it
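(That would make "unloadusr <comp_name>" roughly the following, using the same kind of simplified stand-ins as the waitusr sketch above; the pid field is the proposed addition:)

    /* Hypothetical "unloadusr <comp_name>" for halcmd: once hal_init()
     * records the process id in the component structure, halcmd can look
     * the component up by name and signal the process to exit. */
    #include <signal.h>
    #include <sys/types.h>
    #include <stddef.h>

    typedef struct {
        pid_t pid;                 /* proposed field, recorded at hal_init() */
        /* ... other hal_comp_t fields ... */
    } hal_comp_t;

    hal_comp_t *find_comp_by_name(const char *name);   /* stand-in lookup */

    int do_unloadusr(const char *name)
    {
        hal_comp_t *comp = find_comp_by_name(name);
        if (comp == NULL)
            return -1;                      /* no such component */
        return kill(comp->pid, SIGTERM);    /* ask it to exit and clean up */
    }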
[20:31:23] <jmkasunich> gotta go for a couple hours, will think on it
[20:31:33] <jepler> ok
[20:31:36] <jepler> I won't check anything in yet
[20:56:49] <alex_joni> "We believe that ongoing improvements, coupled with recent progress in
[20:56:49] <alex_joni> our ability to monitor site performance, will result in noticeable
[20:56:49] <alex_joni> reductions in service interruptions going forward."
[20:57:47] <jepler> sourceforge?
[20:58:47] <alex_joni> yeah...
[20:59:16] <alex_joni> now they can monitor site performance... ohh shiny
[21:03:50] <alex_joni> maybe their hardware supplier provided them some bargraphs
[21:04:44] <fenn> maybe they actually hired an employee
[21:05:17] <fenn> you know, someone to watch the computers while they were busy spending all that money..
[21:05:24] <alex_joni> lol
[21:05:34] <alex_joni> 101
[22:19:43] <jmkasunich> * jmkasunich is back
[22:19:59] <jmkasunich> we just finished making a batch of fresh strawberry jam
[22:37:03] <jmkasunich> jepler: I'm adding PID to the component data struct
[22:37:21] <jmkasunich> leaving soon, can't do much beyond that right now
[22:46:03] <jmkasunich> one small step....
[22:57:22] <jmkasunich> time to leave, we can figure out the rest tomorrow
[23:34:14] <jepler> some small conflicts with my tree when updating, but I've fixed them