[02:27:36] <jmkasunich> SWPadnos: I thought you were away...
[02:27:43] <SWPadnos> I was
[02:27:51] <jmkasunich> and now you're not ;-)
[02:27:53] <SWPadnos> how's that?
[02:28:08] <jmkasunich> I have a question for you about dreamhost accounts
[02:28:22] <SWPadnos> go ahead
[02:28:31] <jmkasunich> you made one for me (for compile farm results)
[02:28:39] <jmkasunich> I can ssh into it with a password
[02:28:55] <jmkasunich> I can also ftp into it (which isn't very secure, it sends the password in the clear)
[02:29:12] <jmkasunich> and I can scp files into it instead of ftp
[02:29:22] <SWPadnos> all good so far
[02:29:37] <jmkasunich> ftp can be made non-interactive (password stored in a rc file)
[02:30:10] <jmkasunich> scp would be better, but can it be made to work from a script?
[02:30:25] <SWPadnos> I would think so, but I'm not sure how
[02:30:53] <jmkasunich> the accounts on the cvs server use public/private key authentication, no password needed
[02:31:09] <jmkasunich> but cradek had to set that up, and copy the public keys to the right place on the box
[02:31:18] <jmkasunich> do you have that level of access to dinero?
[02:31:58] <SWPadnos> probably not, but there are options to scp that may not need any setup on dinero
[02:32:26] <SWPadnos> like -F ssh_config or -i identity_file
[02:32:40] <jmkasunich> one of my goals is to allow users other than me to be able to set up a compile farm slot
[02:33:15] <jmkasunich> (there's nothing magic about the hardware in my basement, the cvs checkouts are anon, all that is needed is a way to get the results to linuxcnc)
[02:33:35] <SWPadnos> as long as they all use the same ssh key file (using -i, I suspect), they should all be able to upload stats
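A minimal sketch of the key-based, non-interactive upload being discussed, assuming a public key can be placed in the account's ~/.ssh/authorized_keys on dinero (file names, paths, and the destination are illustrative):

    # on the farm slot, one time: generate a passphrase-less key
    ssh-keygen -t rsa -N "" -f ~/.ssh/farm_key
    # append ~/.ssh/farm_key.pub to ~/.ssh/authorized_keys on dinero (by hand, or with ssh-copy-id)
    # after that, uploads from a script need no password:
    scp -i ~/.ssh/farm_key buildlog.txt jmkasunich@linuxcnc.org:farm-results/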
[02:34:20] <jmkasunich> next question: (I think you answered this before, but I don't remember)
[02:34:27] <jmkasunich> can dinero send mail?
[02:34:44] <SWPadnos> your user on dinero can, yes
[02:34:54] <jmkasunich> I want to send farm results to emc-commit and/or cia
[02:35:37] <jmkasunich> so again, the trick is to ssh into dinero without a password, and send a mail created by the script on the farm slot
[02:35:47] <SWPadnos> right
[02:35:54] <jmkasunich> while making sure nobody can get in there and send spam the same way
[02:36:03] <SWPadnos> that should be possible, but again, I'm not sure of the details
[02:36:10] <jmkasunich> or just wreck things in my account
[02:36:15] <SWPadnos> presumably sendmail or mail are on that system
[02:36:39] <jmkasunich> I can actually send mail directly from the farm slots in my basement
[02:36:55] <jmkasunich> but its a bit of a kluge, and not usable by anyone else
[02:37:19] <jmkasunich> cradek helped me set it up, I'm actually using cvs2.linuxcnc.org (my backup cvs server) to send the mail
[02:37:36] <SWPadnos> yeah - using something like "mail -S Farm Slot foo is fubar'ed" from dinero is probably better
[02:37:38] <jmkasunich> (its a relay, but not an open one, it relays only for hosts on my local firewalled net)
[02:38:50] <jmkasunich> if I let others do "mail -S "farm fubared" " using my dinero account, they can do anything else using my dinero account too
[02:38:56] <jmkasunich> not so happy about that
[02:39:08] <jmkasunich> actually, same applies to scp
[02:39:14] <SWPadnos> right
[02:39:23] <SWPadnos> if scp can get there, then ssh can get there, and it's all over
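One well-known way to limit that exposure, sketched on the assumption that the account owner can edit his own ~/.ssh/authorized_keys on dinero: tie the shared key to a forced command, so it can only run a fixed upload wrapper instead of a general shell (the wrapper script name is hypothetical):

    # one line in ~/.ssh/authorized_keys on dinero
    command="/home/jmkasunich/bin/accept-upload.sh",no-port-forwarding,no-pty,no-X11-forwarding ssh-rsa AAAA...key... farm-upload-key
    # accept-upload.sh would inspect $SSH_ORIGINAL_COMMAND and refuse anything
    # other than the expected scp invocation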
[02:40:05] <jmkasunich> you are allowed to set up users on dinero... any limit to how many?
[02:40:09] <jmkasunich> can you set up groups?
[02:40:30] <SWPadnos> the limit is either ~12000 or ~350 users, and yes, I have group control
[02:40:41] <jmkasunich> if each person who hosts a compile farm slot has their own account, that is more secure
[02:40:52] <SWPadnos> but also more of a pain
[02:41:10] <jmkasunich> put them all in a group "emccf", and I would make one directory only that is group writable
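A sketch of that layout (on a shared host the user and group creation would go through the hosting panel rather than these commands, and the path is just an example):

    groupadd emccf
    usermod -a -G emccf slot_owner             # one per farm slot host
    mkdir -p /home/jmkasunich/farm-results
    chgrp emccf /home/jmkasunich/farm-results
    chmod 2770 /home/jmkasunich/farm-results   # group-writable only; setgid so new files keep the emccf group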
[02:41:17] <SWPadnos> I wonder if the mail can be set up through the cvs server, since everyone compiling probably has cvs access
[02:41:45] <jmkasunich> cradek set up the cvs server to be very secure
[02:42:02] <jmkasunich> we all have accounts on it, but they can't do anything except invoke cvs in a chroot jail
[02:42:45] <SWPadnos> ah - 12000 email accounts, 375 shell users
[02:43:10] <jmkasunich> thats a lot
[02:43:16] <SWPadnos> I suppose that anyone could use a mail account on dreamhost to send their farm results
[02:43:29] <SWPadnos> they don't need shell access for the mail part
[02:43:50] <jmkasunich> hmmm
[02:44:29] <jmkasunich> what I had in mind up till now was ftp/scp for the relatively large build log, and one or two short mails (one to SF for the commit list, one to CIA)
[02:44:56] <jmkasunich> I wonder if they could just mail the whole thing?
[02:45:18] <SWPadnos> I'm not sure what kind of filtering I can set up to mess with emails
[02:45:18] <jmkasunich> that means I'd have to have scripts _reading_ mail at dinero, that makes things a lot more complex
[02:46:10] <SWPadnos> it should be reasonably easy - look at one or two things (subject line, maybe x-mailer), and if either isn't exactly some expected value, throw out the mail
[02:46:57] <jmkasunich> * jmkasunich has never done any scripting or any other mail handling, other than using a GUI mail client
[02:46:59] <SWPadnos> you can almost parse out the body of an email with sed - the first blank line is the end of the header, the rest is body (unless there are mime parts)
[02:47:31] <SWPadnos> I'm not sure how to get the mails delivered to a script. once that's done, it's a lot easier
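A rough sketch of that kind of filter, assuming the raw message arrives on stdin (the expected subject text is made up for illustration):

    #!/bin/sh
    # read one message on stdin; drop it unless the subject matches,
    # otherwise print everything after the first blank line (the body)
    msg=$(cat)
    printf '%s\n' "$msg" | grep -q '^Subject: emc2 farm results' || exit 1
    printf '%s\n' "$msg" | sed '1,/^$/d'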
[02:47:40] <jmkasunich> the build logs can be large - are there limits to body size that would force them to be attachments?
[02:47:55] <SWPadnos> I don't think there are any limits in the protocols
[02:48:10] <SWPadnos> attachments are also part of the body, they're just treated differently by MUAs
[02:48:30] <jmkasunich> what about binary ones?
[02:48:48] <SWPadnos> they're generally re-encoded to Base64 MIME
[02:49:02] <SWPadnos> that's why they take so long to send, you only get 6 bits per byte
[02:49:06] <jmkasunich> so attachments = mime parts, which means more parsing
[02:49:16] <SWPadnos> yes
[02:49:28] <SWPadnos> perl, python, php all have email libraries
[02:49:49] <SWPadnos> there might even be a c++ lib for email
[02:50:42] <jmkasunich> the hard part is having a script at dinero wake up when a mail is received, and getting the mail to the script
[02:50:47] <jmkasunich> I'm convinced I could parse it
[02:51:01] <SWPadnos> yep. I agree with that
[02:51:05] <jmkasunich> the other hard part is for the guy setting up the farm slot
[02:51:24] <SWPadnos> though I can set up a cron job to look for mails every X minutes, by checking the POP server
[02:51:25] <jmkasunich> he needs to be able to mail from a script, and hopefully in a consistent way
[02:51:41] <SWPadnos> remember - mail isn't delivered, it's asked for
[02:51:42] <jmkasunich> because the farm scripts should be usable with minimum modification
[02:51:58] <jmkasunich> right, thats the problem with the mail approach
[02:52:06] <jmkasunich> scp is push, not pull
[02:52:16] <jmkasunich> as are mails to SF and CIA
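Since the mail approach is pull rather than push, the dinero side would have to poll; a rough sketch of the cron idea SWPadnos mentions above (the script name and interval are made up, and the actual POP fetch would still need a mail client configured for the account):

    # crontab entry on dinero: look for new farm result mails every 5 minutes
    */5 * * * * $HOME/bin/check-farm-mail.sh >> $HOME/farm-mail.log 2>&1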
[02:52:30] <SWPadnos> maybe there can be a config file - that has user name and email address stuff in it
[02:52:42] <jmkasunich> CIA must poll frequently, because their relay is rarely more than a minute or so
[02:52:57] <SWPadnos> MTAs are push and pull
[02:53:13] <SWPadnos> CIA is probably the listener on the SMTP port
[02:53:26] <SWPadnos> (which I know I can't set up on dinero)
[02:53:30] <jmkasunich> right
[02:53:55] <jmkasunich> how do those 12000 email accounts work?
[02:54:26] <SWPadnos> they just don't have shell access. otherwise, they're normal POP/SMTP/IMAP/WebMail accounts
[02:54:26] <jmkasunich> user's MUA contacts dinero to pull his received mail and push his outgoing?
[02:54:33] <SWPadnos> yes
[02:54:41] <SWPadnos> just like att or yahoo
[02:54:49] <jmkasunich> "normal" mail accounts are still deep magic to me
[02:54:59] <SWPadnos> heh
[02:55:11] <jmkasunich> my att or yahoo webmail works from anywhere
[02:55:25] <jmkasunich> but I've never been able to get a mail client to send from here
[02:55:26] <SWPadnos> I used to deal with email servers, before the internet really existed
[02:55:37] <jmkasunich> let alone a script sending mail
[02:55:47] <SWPadnos> you'd be surprised at how easy it is
[02:56:15] <SWPadnos> you can send an email using telnet to port 25 (I think that's where SMTP is)
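It really is that bare in the raw protocol; a sketch of the classic hand-typed session (addresses are placeholders, and most servers today require authentication or refuse to relay at this point):

    telnet mail.example.com 25
    HELO myhost.example.com
    MAIL FROM:<farm@example.com>
    RCPT TO:<emc-commit@example.org>
    DATA
    Subject: farm results

    build log would go here
    .
    QUIT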
[02:56:21] <jmkasunich> is it easy to set up an account that can be accessed by a script to send mails, but can't be hacked by spammers to send mails?
[02:56:35] <SWPadnos> not really, without full admin access
[02:56:41] <jmkasunich> seems like you need some authentication
[02:57:07] <SWPadnos> yep. that's a problem. that's why a lot of places stopped relaying external mail
[02:57:25] <jmkasunich> I can't believe any responsible mail server admin these days permits anonymous telnet users to send mail
[02:57:54] <SWPadnos> they don't necessarily let you do that these days
[02:57:59] <jmkasunich> so that gets us back to "how do the 12000 email accounts work"?
[02:58:30] <jmkasunich> must be some authentication
[02:58:43] <SWPadnos> the users set up their MUAs (or scripts) to point to {mail,smtp,pop,imap}.dreamhost.com, and they log in with their passwords
[02:58:57] <SWPadnos> just like gmail, yahoo, etc.
[02:59:17] <SWPadnos> err - not really like gmail
[02:59:34] <jmkasunich> not really like any of the webmail providers
[02:59:43] <jmkasunich> they don't do smtp outside their own webserver
[02:59:51] <SWPadnos> yahoo and gmail actually do provide pop and smtp access
[03:00:11] <jmkasunich> I didn't know that
[03:00:28] <SWPadnos> I could be wrong, but I think they do
[03:00:48] <jmkasunich> so how would I use my existing yahoo email account to send mail from a script?
[03:00:49] <jmkasunich> ;-)
[03:01:14] <SWPadnos> well, if it's possible from something like Mozilla or Thunderbird, then a script should also be able to do it
[03:01:54] <jmkasunich> my ISP (att) uses ssl tunneling or some such on their mail
[03:02:04] <jmkasunich> which is why I've never been able to get it to work
[03:02:13] <jmkasunich> (that and I don't know wth I'm doing)
[03:02:27] <SWPadnos> weird - what MUA(s) did you try?
[03:02:41] <jmkasunich> its been a while since the last time I tried
[03:02:45] <jmkasunich> Kmail IIRC
[03:02:54] <SWPadnos> ok - never tried that one
[03:02:55] <jmkasunich> (from kde, I was running BDI-4.20 at the time)
[03:03:17] <SWPadnos> on my big machine, I had Thunderbird or evolution working on my ISP in no time
[03:03:35] <SWPadnos> (ubuntu 5.10)
[03:04:07] <jmkasunich> http://www.wurd.com/settings.php#Accessing
[03:04:16] <jmkasunich> thats all I have to work from
[03:04:43] <jmkasunich> the "SSL required" part is what I couldn't figure out
[03:05:16] <SWPadnos> in Mozilla, there are two checkboxes:
[03:05:25] <SWPadnos> 1) Use Secure Connection (SSL)
[03:05:34] <SWPadnos> 2) Use Secure Authentication
[03:05:43] <SWPadnos> I'd imagine turning those on would be the ticket
[03:06:04] <SWPadnos> (those are per server)
[03:06:05] <jmkasunich> probably
[03:06:19] <jmkasunich> like I said, I haven't even tried lately, I just use the webmail
[03:06:33] <SWPadnos> that sucks though - you have no record of sent mails
[03:06:43] <SWPadnos> (offline, anyway)
[03:06:44] <jmkasunich> anyway, for scripts, I'd have to configure sendmail to use SSL (I guess)
[03:06:52] <jmkasunich> not offline
[03:06:57] <SWPadnos> true for your ATT account, but not necessarily for DH
[03:07:20] <jmkasunich> otoh, I have a record of sent and received emails anywhere in the world, not just at home
[03:07:29] <jmkasunich> webmail has pros and cons
[03:07:30] <SWPadnos> IMAP does that for you as well
[03:07:43] <SWPadnos> in fact, many webmail systems use IMAP
[03:07:43] <jmkasunich> whats IMAP?
[03:07:58] <SWPadnos> err - Isomething Mail Access Protocol?
[03:08:06] <jmkasunich> ;-)
[03:08:11] <SWPadnos> it has folders on the server, shared folders, etc.
[03:08:22] <SWPadnos> server based filtering also
[03:08:25] <jmkasunich> my webmail has everything on the server
[03:08:41] <jmkasunich> so I can use it anywhere I can find a basic browser
[03:08:44] <SWPadnos> right, but you can't access it unless you're connected to the web
[03:08:54] <jmkasunich> (lynx probably wouldn't work, but...)
[03:09:12] <jmkasunich> heh, if I can't connect to the web, I probably can't connect to anything
[03:09:21] <jmkasunich> the web is the least common denominator these days
[03:09:36] <jmkasunich> surf from work, yes.... ssh from work, no (firewall)
[03:09:37] <SWPadnos> if you have a laptop and IMAP, you can download a bunch of messages and read them on a plane, or compose messages on the plane, and send them when you next connect (which will also synchronize the inbox/sent/other folders)
[03:09:40] <jmkasunich> same with hotels and such
[03:10:07] <jmkasunich> I suppose I would think differently if I had/carried a laptop
[03:10:08] <SWPadnos> plus stuff like save draft
[03:10:12] <SWPadnos> heh - could be
[03:10:20] <jmkasunich> then I have "my computer" not just "a computer"
[03:10:50] <jmkasunich> but since I usually have "a computer", webmail really works for me
[03:10:54] <SWPadnos> I want to set up IMAP here, so I can send emails remotely, and still have them in the sent mail folder from my main work machine
[03:11:10] <jmkasunich> see, I never have that problem
[03:11:18] <jmkasunich> my sent mail folder is at the server
[03:11:34] <jmkasunich> however, its really slow to access
[03:11:53] <jmkasunich> back when I had dialup and a normal mail client worked fine, I would sometimes go grepping thru my mailbox
[03:11:55] <SWPadnos> I have 3 accounts I use daily, with a dozen subfolders off one of the inboxes, for list mails (automatically sorted for me)
[03:12:05] <jmkasunich> thousands of messages per second, can't do anything like that now
[03:12:55] <SWPadnos> IMAP gives the advantages of something like Thunderbird, plus the advantages of webmail
[03:13:27] <SWPadnos> (even messages you send with a webmail interface should end up in the sent folder on all machines you use)
[03:13:48] <jmkasunich> that would be nice
[03:14:23] <SWPadnos> anyway, the DH mail accounts have all the protocols, including webmail
[03:14:50] <jmkasunich> I'm browsing the DH wiki....
[03:15:27] <SWPadnos> on mail?
[03:15:36] <jmkasunich> yeah
[03:16:13] <SWPadnos> try accessing webmail.linuxcnc.org
[03:17:34] <jmkasunich> unknown user or password incorrect
[03:17:44] <jmkasunich> I use the same name and pw as my shell account?
[03:17:53] <SWPadnos> I would think so. let me check that
[03:18:42] <jmkasunich> shell name and pw work, just checked
[03:18:49] <jmkasunich> (work for shell, not for mail)
[03:19:04] <SWPadnos> ok - it should work as jmkasunich plus your password
[03:19:30] <jmkasunich> nope
[03:19:47] <jmkasunich> this is a SquirrelMail page, right?
[03:19:51] <SWPadnos> strange. my cncman login works
[03:19:53] <SWPadnos> yes
[03:20:14] <jmkasunich> is setting up a mail account and setting up a shell account two different things?
[03:20:24] <jmkasunich> maybe you did both when you set up cncman, and only one for me?
[03:20:31] <SWPadnos> yes, but shell accounts include email by default, I think
[03:24:00] <SWPadnos> have you changed the password on the shell account?
[03:24:05] <jmkasunich> yes
[03:24:22] <SWPadnos> hmm - maybe try with the original password
[03:24:36] <jmkasunich> bet it didn't change the other one (and I don't recall the original - I changed it to something I can remember)
[03:24:49] <SWPadnos> ok. I'll try it and let you know what happens
[03:24:57] <SWPadnos> yep
[03:30:59] <SWPadnos> well - I'm getting tired. I'll be around for at least part of the weekend.
[03:31:07] <jmkasunich> ok
[03:31:25] <jmkasunich> I'm gonna try to figure out how to send mail from a script thru the DH acct
[03:31:47] <jmkasunich> if somebody else wants to host a farm slot or two, you can give them an email, right?
[03:31:53] <SWPadnos> ok. did you see all the addresses to use for DH mail?
[03:31:54] <SWPadnos> yep
[03:32:15] <jmkasunich> (easier than a shell acct and easier than a shell acct plus a group)
[03:32:23] <SWPadnos> should be
[03:32:35] <jmkasunich> under "email client configuration"
[03:33:04] <jmkasunich> Incoming mail server: mail.linuxcnc.org
[03:33:05] <jmkasunich> Incoming mail server type: POP3 or IMAP (your choice)
[03:33:05] <jmkasunich> Incoming mail server username: jmkasunich
[03:33:05] <jmkasunich> Outgoing (SMTP) mail server: mail.linuxcnc.org
[03:33:05] <jmkasunich> My server requires authentication: checked. (use same settings as incoming mail server)
[03:33:05] <jmkasunich> Use secure password authentication: not checked.
[03:33:26] <jmkasunich> the authentication stuff will be the fun part
[03:33:54] <SWPadnos> try it without at first, and use POP/SMTP rather than IMAP
[03:34:18] <SWPadnos> once that works, then try with authentication, and then try with IMAP
[03:34:20] <jmkasunich> I really have no idea what I'm doing
[03:34:32] <SWPadnos> heh - maybe I'll take a look at it tomorrow
[03:34:39] <jmkasunich> on the farm, cradek had me set up "esmtp"
[03:34:46] <jmkasunich> which I also installed here
[03:34:55] <SWPadnos> that's a transfer agent, I think
[03:35:04] <jmkasunich> "esmtp -t <message" works
[03:35:18] <jmkasunich> (on the farm)
[03:35:19] <SWPadnos> ah, OK. relay only MTA
[03:35:32] <jmkasunich> I guess I need to configure it to send to DH instead?
[03:35:53] <SWPadnos> I'm not sure
[03:36:02] <SWPadnos> sending shouldn't be the problem, it's receiving at the DH end
[03:36:36] <jmkasunich> well, for plan A, all I needed was send
[03:36:42] <jmkasunich> send to SF and CIA
[03:37:02] <jmkasunich> but if I want to use mail instead of scp for the big logfiles, then it gets messy
[03:37:55] <jmkasunich> I guess I'll try to get esmtp working tonight, then we can talk more tomorrow about receiving
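For reference, esmtp is normally driven by a small rc file; a sketch of pointing it at the dreamhost server with authentication (the option names should be checked against the esmtprc man page, and the credentials are placeholders):

    # ~/.esmtprc on the farm slot
    hostname = mail.linuxcnc.org:25
    username = "jmkasunich"
    password = "secret"
    starttls = enabled

    # the farm script then sends with something like:
    #   esmtp -t < message.txt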
[03:37:56] <SWPadnos> you know, one possibility is to have the web page be a php script, which looks for mail, and re-generates the page if necessary
[03:38:23] <jmkasunich> my head spins
[03:38:27] <SWPadnos> sort of lazy evaluation for the web page
[03:38:38] <jmkasunich> clever, reduces polling
[03:38:42] <SWPadnos> right
[03:39:12] <jmkasunich> I'm redoing all the farm scripts, maybe something like that makes sense
[03:39:26] <jmkasunich> right now I have a test page working that does SSI for the results
[03:39:38] <jmkasunich> so the farm no longer has to read-modify-write the index.html
[03:39:47] <SWPadnos> maybe we should leave it alone until we can get this stuff into a database
[03:40:00] <SWPadnos> that would be the next logical step, I think
[03:40:25] <jmkasunich> maybe I'll just concentrate on getting my farm to work nicely
[03:40:29] <SWPadnos> btw - I noticed that emc2 is passing on all farm slots - good job (anyone who helped)
[03:40:49] <jmkasunich> mostly jepler, some cradek, a little bit me
[03:41:02] <jmkasunich> my main role was not letting people forget it was failing
[03:41:16] <jmkasunich> (which is why I want the farm to send to emc-commit and/or CIA ;-)
[03:41:17] <SWPadnos> heh
[03:43:57] <SWPadnos> hey - are you on an Ubuntu machine right now?
[03:44:03] <jmkasunich> yeah
[03:44:19] <SWPadnos> can you see if LXR is available in the package database?
[03:45:28] <jmkasunich> looks like it is
[03:45:37] <jmkasunich> (I used synaptic's search function)
[03:45:54] <SWPadnos> cool. that would be a great thing for understanding emc code
[03:46:06] <SWPadnos> (assuming that it can be used on "normal" projects)
[03:46:13] <jmkasunich> there are two packages
[03:46:27] <SWPadnos> exuberant ctags or something, plus lxr?
[03:46:27] <jmkasunich> lxr 0.3.1-2 Linux Cross Reference
[03:46:48] <jmkasunich> lxr-cvs 0.9.2-5 A general hypertext cross-referencing tool
[03:47:04] <SWPadnos> lxr-cvs is the one we'd want, I think
[03:47:06] <jmkasunich> guessing the latter is the one you want
[03:47:14] <SWPadnos> you've seen lxr on the web, right?
[03:47:27] <jmkasunich> yeah
[03:48:00] <jmkasunich> depends: apache (and others)
[03:48:07] <jmkasunich> (including exuberant-ctags)
[03:48:22] <jmkasunich> does that mean you need to set up a server to use it?
[03:48:38] <SWPadnos> yep - it uses ctags to parse the source, and generates web pages to display things
[03:48:40] <jmkasunich> sounds like something we'd set up at linuxcnc, not something to use locally on a cvs checkout
[03:48:48] <SWPadnos> it's great locally, actually
[03:48:59] <jmkasunich> faster
[03:49:00] <SWPadnos> it could be set up at linuxcnc as well
[03:49:02] <SWPadnos> yep
[03:49:13] <jmkasunich> but isn't setting up apache just a little complicated?
[03:49:15] <SWPadnos> remember the laptop on a plane ;)
[03:49:20] <SWPadnos> nope - apt-get install apache
[03:49:31] <SWPadnos> in fact, it's probably set up already
[03:49:34] <jmkasunich> install is one thing, configure is another
[03:49:39] <SWPadnos> try http://localhost/
[03:49:55] <SWPadnos> you'll probably get the "this server is probably not configured" page
[03:50:14] <jmkasunich> no apache installed according to synaptic
[03:50:46] <SWPadnos> if you install it, then you just dump HTML under /home/http/html (maybe somewhere else on Debian)
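A sketch of the local setup being described; the apache document root and the lxr configuration live in distribution-specific places, so treat the paths as examples:

    sudo apt-get install apache lxr-cvs     # lxr-cvs pulls in exuberant-ctags
    # Debian/Ubuntu serve /var/www by default; lxr's own config (lxr.conf)
    # and its indexer still need to be pointed at the emc2 checkout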
[03:51:01] <SWPadnos> setup is pretty easy
[03:51:24] <jmkasunich> easy for you to say
[03:51:29] <SWPadnos> yes, it was ;)
[03:51:36] <jmkasunich> I spend more time doing admin stuff....
[03:52:00] <SWPadnos> I do setup of this kind of thing, but not all the security stuff
[03:52:06] <SWPadnos> ah well
[03:52:24] <SWPadnos> anyway - time for bed. good night, and see you tomorrow and/or Sunday.
[03:52:31] <jmkasunich> goodnight
[03:52:36] <SWPadnos> SWPadnos is now known as SWP_Away
[03:53:40] <SWP_Away> http://www.unixreview.com/documents/s=8989/ur0407a/
[03:54:20] <jmkasunich> mmmm.... bash.... ;-)
[03:54:21] <jmkasunich> thanks
[03:54:25] <SWP_Away> sure
[15:52:52] <jepler> If we create a new kind of g-code curve primitive, I wonder if it should be "cubic spline". The planner would break it down into one or more splines, with the restriction that no axis reverses direction within the subsplines. I think this case will prove to be plannable, and many splines generated by curve-fitting methods will already be in the necessary "no reversal" form.
[15:56:30] <jepler> one problem is that you need a lot of values to specify a cubic spline: the current position, two control points, and the end position.
[15:56:41] <jepler> one quickly runs out of letters
[15:57:01] <SWP_Away> E is available
[15:57:18] <cradek> how do you get the max vel and accel along the spline?
[15:58:02] <cradek> can you just take the derivatives of the parametric equations?
[15:58:29] <SWP_Away> I think it's close to that
[15:58:40] <jepler> I haven't tried to do any of the math yet.
[16:00:25] <SWP_Away> there are special categories of spline (like curves) that have reasonably simple equations for getting path lengths
[16:00:31] <SWP_Away> I don't remember what they're called
[16:00:57] <jepler> Unlike the obvious way to express arcs and lines as parametric functions (x(t), y(t), z(t)) you'll find that the distance-per-t is not constant with cubic splines.
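For a cubic Bézier segment the quantities cradek asks about can be written down directly; a sketch in the standard parametric form (nothing emc-specific):

    B(t)  = (1-t)^3 P_0 + 3(1-t)^2 t \, P_1 + 3(1-t) t^2 \, P_2 + t^3 P_3, \qquad t \in [0,1]
    B'(t) = 3(1-t)^2 (P_1 - P_0) + 6(1-t) t \, (P_2 - P_1) + 3 t^2 (P_3 - P_2)

The speed along the path is |B'(t)|, which varies with t (jepler's point that distance-per-t is not constant), and per-axis velocity and acceleration limits constrain the components of B'(t) and B''(t) once the curve is reparameterized by arc length.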
[16:03:09] <jepler> maybe a better kind of spline for machining is one which passes through all the control points
[16:12:11] <jepler> My "insight" (if you can call it that) is to have the "no axis reversal" rule, which I think will make planning the curve easier
[16:22:30] <alex_joni> jepler: b-spline passes through the control points
[16:22:39] <alex_joni> opposed to bezier which only uses them for tangents
[16:52:47] <jepler> "passes through all points" vs "passes through some points" seems irrelevant when you intend for computers to generate the curves.
[16:58:31] <alex_joni> I really meant that bezier doesn't pass through any of the control points
[17:04:02] <alex_joni> but then again, there are lots of parametric splines out there..
[17:54:45] <jepler> "The sync process between developer and anonymous CVS (ViewCVS, etc.) is disabled now until the new infrastructure is in place, to ensure we have maximum coverage for the small number of data corruption issues that have been detected. We understand this is sub-optimal, but strongly believe that the protection of the data is paramount." -- sourceforge
[17:54:52] <jepler> boy it would suck if we were still on sf
[17:55:16] <jepler> (they want to get it fixed by the end of the month, but warn that it's an 'aggressive schedule' to do so)
[17:55:21] <jepler> http://sourceforge.net/docman/display_doc.php?docid=2352&group_id=1#1145038684
[17:55:43] <SWP_Away> SWP_Away is now known as SWPadnos
[17:56:02] <SWPadnos> I like the syntax hilighting on the new cvs server
[17:57:16] <SWPadnos> is it possible to add a link in the commit messages that goes to the complete changed file, with differences hilighted
[17:57:33] <jepler> I thought one of the links was to the differences
[17:57:46] <SWPadnos> it has only the diff, not an annotated copy of the whole file
[17:57:56] <SWPadnos> essentially what used to be a part of the message
[17:58:01] <jepler> oh
[17:59:11] <jepler> so you'd prefer this link: http://cvs.linuxcnc.org/cvs/emc2/tcl/bin/halconfig.tcl.diff?r1=1.29;r2=1.30;f=h
[17:59:23] <SWPadnos> sort of.
[17:59:26] <jepler> wow, I find the partial-line changes really irritating
[17:59:59] <jepler> around line 660 it's almost useful, but around line 681 it just obscures what's going on (IMO)
[18:00:07] <SWPadnos> I'm not sure if cvsweb has the mode I'd like
[18:00:24] <SWPadnos> basically something like kdiff would show - the whole file, with diffs hilighted
[18:00:36] <jepler> there's this format: http://cvs.linuxcnc.org/cgi-bin/cvsweb.cgi/emc2/tcl/bin/halconfig.tcl.diff?r2=1.30&r1=1.29&f=sc
[18:00:43] <jepler> never used kdiff
[18:00:46] <jepler> bbl
[18:00:48] <SWPadnos> ok - that one would do
[18:00:52] <SWPadnos> see you later
[18:01:29] <SWPadnos> hmmm - it truncates the lines, that's no good
[18:03:03] <SWPadnos> I think this one would do: http://cvs.linuxcnc.org/cgi-bin/cvsweb.cgi/emc2/tcl/bin/halconfig.tcl.diff?r2=1.30&r1=1.29&f=H
[18:03:34] <SWPadnos> it gives more context than the standard context diff, but you don't get lost in the whole file
[18:41:37] <alex_joni> hello
[18:42:10] <cradek> hi alex
[18:42:20] <alex_joni> seen your commit.. so I thought you're around :D
[18:42:30] <cradek> yep
[18:42:46] <alex_joni> what's up?
[18:42:58] <alex_joni> things seem rather slow lately
[18:43:11] <cradek> I should be cleaning the house, but I want to play with the lathe instead, so out of guilt I'm doing neither
[18:43:24] <alex_joni> lol..
[18:43:45] <alex_joni> I know the feeling.. too much to do, can't decide which one to start.. nothing gets done ;)
[18:43:55] <cradek> right
[18:44:22] <alex_joni> I finished installing most of the pc's today, got network & phones running
[18:44:29] <cradek> yay
[18:44:37] <cradek> so you'll have a little weekend for yourself leftover?
[18:44:37] <alex_joni> yeah, things work great :)
[18:44:46] <alex_joni> yes & no ;)
[18:44:58] <alex_joni> gotta visit my gf's family tomorrow.. easter & all
[18:45:34] <cradek> yeah somehow I got talked into having lunch with my dad's parents
[18:45:48] <cradek> I guess it won't be bad, but I'd rather not go
[18:47:05] <alex_joni> grandparents?
[18:47:13] <cradek> yes
[19:19:55] <jmkasunich> hi all
[19:20:00] <SWPadnos> hi there
[19:20:03] <cradek> hi
[19:20:09] <alex_joni> hello
[19:20:54] <SWPadnos> did you get anywhere with bash + POP last night?
[19:21:12] <jmkasunich> I focused on sending rather than receiving
[19:21:23] <jmkasunich> that works (esmtp to dreamhost)
[19:21:24] <SWPadnos> ah - the SMTP side
[19:21:27] <SWPadnos> cool
[19:21:56] <jmkasunich> if I stick with ftp or scp for the web stuff I don't need to worry about receiving
[19:22:04] <SWPadnos> true
[19:22:16] <jmkasunich> I'm redoing all the scripts, breaking things up into independent chunks
[19:22:30] <alex_joni> * alex_joni passes jmkasunich a shiny new hammer
[19:22:36] <jmkasunich> so far I have check_commit, check_cvs <tree>, and check_build <tree>
[19:22:40] <SWPadnos> * SWPadnos hands over the chisel
[19:22:49] <cradek> I thought you had that all working...
[19:22:55] <jmkasunich> need to do results_to_web <tree> PASS|FAIL
[19:23:05] <jmkasunich> results_to_list, and results_to_cia
[19:23:24] <alex_joni> results_to_fbi
[19:23:24] <cradek> one thing you could do that would be nice is to get the /last-commit over the net only once, and share it somehow to all your machines
[19:23:50] <SWPadnos> re-sync CVS, then pull locally
[19:23:57] <jmkasunich> that goes against one of my other goals, which is to allow distributed farm slots
[19:24:02] <SWPadnos> though that wouldn't exactly help with bandwidth, I think
[19:24:15] <jmkasunich> check_commit only accesses one small file
[19:24:33] <jmkasunich> check_cvs is only executed when check_commit says there's been a commit
[19:24:51] <jmkasunich> check_build only when check_cvs says there was a change in that particular tree
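A sketch of how those pieces might chain together in the top-level farm loop (the tree names, exit-status conventions, and sleep interval are all made up for illustration):

    #!/bin/bash
    # only do expensive work when something actually changed
    while true; do
        if ./check_commit; then                  # exit 0 = new commit seen
            for tree in emc2-head emc2-2_0-branch; do
                if ./check_cvs "$tree"; then     # exit 0 = this tree changed
                    if ./check_build "$tree"; then
                        ./results_to_web "$tree" PASS
                    else
                        ./results_to_web "$tree" FAIL
                    fi
                fi
            done
        fi
        sleep 300
    done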
[19:25:06] <SWPadnos> I wonder if it's a good idea to have basically anyone able to post compilation status to the status page
[19:25:32] <jmkasunich> right now they need ftp access
[19:25:45] <SWPadnos> having more than one location is good, but a generic solution for most anybody may be overkill
[19:26:48] <jmkasunich> I'm not aiming for "most anybody"
[19:27:22] <jmkasunich> just for "if someone wants to host a slot, they don't have to rewrite all the scripts"
[19:27:39] <SWPadnos> right
[19:28:00] <SWPadnos> some small config file that gets sourced might be good (like that pop script example)
[19:28:05] <alex_joni> jmkasunich: speaking from experience, they probably will
[19:28:21] <jmkasunich> will rewrite all the scripts?
[19:28:47] <alex_joni> jmkasunich: usually yes, people doing such a task usually aren't shell newbies
[19:28:59] <alex_joni> and most are more comfortable with their own scripts & setup
[19:29:07] <jmkasunich> right now there are two chunks of site/user specific info
[19:29:21] <jmkasunich> ftp username and password, and email username and password
[19:29:25] <alex_joni> but I didn't say that in order to stop you..
[19:30:05] <jmkasunich> don't worry
[19:33:45] <jmkasunich> It would be nice to have a farm entry with a newer bdi, 4.38 or 4.40
[19:33:52] <jmkasunich> but I'm too lazy to set that up....
[19:33:59] <alex_joni> apt-get update?
[19:34:05] <alex_joni> apt-get upgrade ?
[19:34:07] <jmkasunich> from what?
[19:34:10] <alex_joni> 4.20
[19:34:17] <alex_joni> and the bdi4emc repo
[19:34:34] <jmkasunich> seems like thats not gonna be the same setup as if somebody got the iso and installed it
[19:34:46] <alex_joni> why not?
[19:35:11] <alex_joni> probably some packages might differ (installed vs. not installed)
[19:35:17] <jmkasunich> each iso paul makes has a different set of packages
[19:35:18] <alex_joni> but you can do that from the installer as well
[19:35:56] <alex_joni> I think an upgraded 4.20 should be the same as a fresh-installed 4.40
[19:36:03] <jmkasunich> no way
[19:36:08] <alex_joni> if it's not the case, it's fscked
[19:36:18] <cradek> I agree that's unlikely
[19:36:23] <jmkasunich> not unless you somehow get the list of packages that are on 4.40 and stuff that into apt
[19:36:26] <cradek> I think 4.40 doesn't even exist right now
[19:36:30] <cradek> who knows what will be on it
[19:36:41] <alex_joni> cradek: he announced it a few days ago?
[19:36:53] <alex_joni> didn't bother to look for it though
[19:36:55] <jmkasunich> and then it disappeared from all the mirrors
[19:37:10] <cradek> he retracted it because it doesn't install right or something
[19:38:09] <cradek> jmkasunich: have you made any progress with encoders?
[19:38:25] <jmkasunich> I didn't know I was working on encoders?
[19:38:30] <SWPadnos> upgrades were not the same as later versions back in the 4.20 timeframe, and I suspect that's still true
[19:38:36] <jmkasunich> oh, the canonical encoder interface stuff
[19:38:37] <SWPadnos> heh
[19:38:41] <cradek> yeah that
[19:38:47] <jmkasunich> fsck no
[19:38:55] <jmkasunich> got sidetracked as always
[19:39:08] <cradek> of course
[19:39:29] <jmkasunich> that would probably be a better way to spend my time than the farm stuff
[19:39:38] <jmkasunich> but I want to have cia and/or list notification
[19:40:01] <cradek> I think the farm is not so much needed now that the makefiles, etc, are more settled
[19:40:50] <jmkasunich> if I wait till the next time its needed, 1) I won't have time to make it work 2) I will have forgotten a lot of stuff that is fresh in my mind now
[19:41:37] <jmkasunich> another thing that is on my mind is cvs backup
[19:41:40] <cradek> true
[19:41:57] <jmkasunich> we still don't have anything rsync'ing the repository do we?
[19:41:59] <cradek> maybe we should figure that out now
[19:42:14] <cradek> I am making backups here
[19:42:58] <jmkasunich> that still makes you a single point of failure (the bus scenario)
[19:43:00] <cradek> I'm also worried about dreamhost - I don't know about any backup scheme
[19:43:19] <cradek> I'm afraid of buses so it'll probably be ok
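A sketch of the kind of off-site mirror being discussed, assuming the backup machine has ssh access to the repository host (paths, hostnames, and schedule are illustrative, and the chroot jail on the real cvs server may not allow this as-is):

    # nightly crontab entry on the backup box
    0 3 * * * rsync -az --delete -e ssh cvsbackup@cvs.linuxcnc.org:/cvs/ /var/backups/emc2-cvs/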
[19:43:30] <jmkasunich> yeah, theres a non-trivial amount of work invested in the joomla stuff
[19:43:36] <cradek> right
[19:44:04] <cradek> what's cvs2's IP again?
[19:44:26] <alex_joni> maybe we should register the ip's in DNS?
[19:44:38] <cradek> they change
[19:44:43] <jmkasunich> alex: its dynamic
[19:44:48] <alex_joni> oh, ok..
[19:44:55] <alex_joni> mine will probably stay fixed
[19:45:04] <jmkasunich> 66.167.25.76
[19:45:57] <jmkasunich> mine usually stays fixed, but a power failure (reboot of DSL modem) or any change to the router config changes it
[19:46:18] <jmkasunich> when I set up the port forwarding for the server I went thru 3 IPs in an hour
[19:46:23] <alex_joni> I see.. well my home ip (DHCP router) never got changed..
[19:46:40] <alex_joni> anyways.. think I'll head to bed now
[19:46:44] <jmkasunich> goodnight
[19:46:58] <jmkasunich> alex: before you leave
[19:46:59] <cradek> night alex
[19:47:09] <jmkasunich> what do you know about joomla backup capabilities?
[19:47:18] <alex_joni> jmkasunich: you need 2 things
[19:47:24] <alex_joni> a tar.gz with the actual folder
[19:47:31] <alex_joni> and a backup of the mysql database
[19:47:52] <alex_joni> both usually are provided by hosters
[19:48:03] <alex_joni> sometimes even in one single file..
[19:48:11] <alex_joni> I know my hoster does that
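Those two pieces map to something like the following (a sketch; the paths, database name, and user are placeholders):

    # snapshot the joomla directory
    tar czf joomla-files-$(date +%Y%m%d).tar.gz /home/linuxcnc/joomla
    # dump the mysql database behind it
    mysqldump -u joomla_user -p joomla_db > joomla-db-$(date +%Y%m%d).sql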
[19:49:23] <jmkasunich> ok, we can ask SWP about that
[19:49:37] <alex_joni> realwebhost.com <- that's them
[19:49:38] <SWPadnos> when he's not sick and on the phone
[19:50:45] <alex_joni> * alex_joni is off
[19:50:48] <alex_joni> good night all
[20:05:14] <cradek> jmkasunich: I can't seem to ftp from cvs2
[20:05:24] <cradek> trying to install a package from ftp.freebsd.org
[20:05:27] <jmkasunich> probably blocked
[20:05:42] <jmkasunich> I have to open specific IPs
[20:05:45] <jmkasunich> stand by
[20:05:49] <cradek> I'll wing it
[20:06:40] <cradek> I copied it from here, np
[20:09:39] <jmkasunich> try the ftp now
[20:09:48] <cradek> I'm already done, sorry
[20:09:56] <jmkasunich> (phone rang while I was doing it)
[20:10:10] <jmkasunich> I'll leave it open for a while
[20:11:05] <cradek> ok the backup is running, it'll take a while the first time
[20:11:18] <cradek> up speed is only about 35kB/s
[20:11:42] <jmkasunich> where is the backup going to be? some user directory on cvs2?
[20:11:53] <cradek> /var/cvs, ready to run
[20:12:09] <jmkasunich> ready to run as in ready to be a server?
[20:12:12] <cradek> yes
[20:12:16] <jmkasunich> cool
[20:14:26] <jmkasunich> is it normal that there are two sendmail processes running (three actually, but 2 appear to be identical)
[20:17:45] <cradek> yeah I think so
[20:20:42] <cradek> jmkasunich: did you see sf says they "may" have anon/cvsweb working by the end of April?
[20:21:04] <jmkasunich> saw somebody mention it when reading back IRC, didn't go to SF to look
[20:21:55] <cradek> As of 2006-04-14 we have an estimate on when the replacement CVS hardware for the new infrastructure will arrive. As soon as we get it into our hands, we'll actively work on it, with a goal of having it online by the end of the month of April.
[20:22:59] <cradek> "two weeks after the problems started, we ordered a new computer. It'll be here sometime soon we hope."
[20:27:26] <jepler> "when it does, somebody's even going to work on it during his lunch break"
[20:27:41] <cradek> "for a few weeks"
[20:28:55] <jepler> if they didn't spring for the expensive shipping on that server, it won't be there before the end of this week
[20:29:00] <jepler> a week of lunches sounds about right to get CVS set up
[20:29:17] <cradek> but when will the poor guy eat?
[20:29:58] <jepler> I feel sorry for him too