Wednesday, December 23, 2009
This post is slightly more tech-heavy than most of my recent posts. If that's not your thing, feel free to move on now.
[Update 20091223: NEVERMIND. I got it sorted. The URL you redirect a user to with OAuth doesn't need extra OAuth headers. So you really could just use webbrowser.open(). My bad.]
Suppose I want to interact with the twitter API via some python code running on my personal computer. Suppose, for grins, that instead of using Basic Authentication I'd rather try OAuth (even though it's all running locally...). Part of the OAuth authentication flow is that my script is supposed to direct the user to an address at twitter (oauth/authorize), with some OAuth headers in the HTTP request for that address (I hope I'm saying things within a few shades of correctly). Well, python provides the webbrowser module, which should open a URL in the user's browser of choice. And it does, pretty easily (based on my 1-test sample). The problem is that for the OAuth dealings I'm supposed to pass additional HTTP headers, and I can't figure out how to do that with the webbrowser module. I tried creating a Request object from the urllib2 module. If I were just making a URL request using that library, I could make the Request object with the extra headers, and things would go fine. But the webbrowser.open() method seems to want its url parameter to be a string, not a Request object.
So... how am I supposed to do this? Or am I not supposed to do this?
Am I supposed to use some other existing python-based browser? How is the user supposed to trust that I'm not sitting in the middle of the authentication process? I mean, if my script displays a webpage using some graphical widget, and waits for the user's input, then I could just be grabbing their username and password while they log in to twitter, no? The point of having the user go to twitter and get a PIN is that the user then tells me that PIN. I don't put myself in the middle and grab the PIN (or their username/password) somehow.
Does any of this make sense? Can somebody point me at a solution? Existing code that solves this problem?
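For what it's worth, given the update at the top, the flow ends up looking roughly like the sketch below. It uses the requests_oauthlib library (nothing I actually had at the time), and the endpoint URLs, the placeholder keys, and the 'oob' PIN callback are assumptions about twitter's OAuth 1.0a setup, so treat it as an outline rather than tested code.

```python
# A minimal sketch of the PIN-based (out-of-band) OAuth flow.
# Endpoint URLs and credentials below are placeholders/assumptions.
import webbrowser
from requests_oauthlib import OAuth1Session

CONSUMER_KEY = "your-app-key"        # hypothetical app credentials
CONSUMER_SECRET = "your-app-secret"

oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET, callback_uri="oob")

# Step 1: the signed request-token exchange happens here, in the script, not in the browser.
request_token = oauth.fetch_request_token("https://api.twitter.com/oauth/request_token")

# Step 2: the authorize URL needs no extra OAuth headers; the token rides along
# as a query parameter, so webbrowser.open() is enough.
auth_url = oauth.authorization_url("https://api.twitter.com/oauth/authorize")
webbrowser.open(auth_url)

# Step 3: the user reads the PIN off twitter's page and types it back to the script.
pin = input("Enter the PIN twitter gave you: ")
access_token = oauth.fetch_access_token("https://api.twitter.com/oauth/access_token", verifier=pin)
print(access_token)
```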
Monday, September 7, 2009
Parametric Explorer
If you've been following my twitter stream the past few days, you may have noticed that I've been mentioning (and shamefully linking to) a webpage I've been working on to explore parametric curves. Today I decided to make another round of improvements and it was going well, so I thought I'd share here as well. I like to think it is something that calculus teachers might find useful.
Anyway, the page is sort of introduced on my personal website, and the actual "play around with this" page is here. I've only gotten it to work in Firefox 3.5 and Google Chrome. That's enough for me to play with it, so I don't intend to do much more in terms of browser compatibility work on it.
The idea is that your mouse coordinates describe a parametric curve as you move around the screen. The webpage then also graphs the individual curves $x(t)$ and $y(t)$.
There's certainly room for improvement, but I'm happy enough with it as it is to not worry about it for now.
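If you'd rather see the idea in code than fight with browser compatibility, here is a rough Python/matplotlib sketch of the same layout, with a canned curve standing in for the recorded mouse coordinates; this isn't the page's actual code, just an illustration.

```python
# One panel for the parametric curve (x(t), y(t)), and one each for x(t) and
# y(t) against t. The sample curve is arbitrary; on the actual page, x(t) and
# y(t) come from the mouse coordinates instead.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 400)
x = np.cos(3 * t)          # stand-in for the mouse's x-coordinate over time
y = np.sin(2 * t)          # stand-in for the mouse's y-coordinate over time

fig, (ax_curve, ax_x, ax_y) = plt.subplots(1, 3, figsize=(12, 4))
ax_curve.plot(x, y)
ax_curve.set_title("parametric curve (x(t), y(t))")
ax_x.plot(t, x)
ax_x.set_title("x(t)")
ax_y.plot(t, y)
ax_y.set_title("y(t)")
plt.show()
```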
Tuesday, May 26, 2009
A New Kind of Wiki
Well, not really... I'll just explain.
So, when Wolfram|Alpha (referred to as w|a below because I'm lazy) came out, I, like many of you, was pretty excited to play with it. I was primarily interested in its use as a free, online, computer algebra system (CAS). So when I tested it, I gave it the sorts of questions that I give my calculus students (in fact, I essentially tested it with exams I've given students). In many areas it was obvious what to do, in some areas I could mess around and get a reasonable answer, and in a remaining few areas, w|a seemed to come up lacking.
I thought it would be great to have a resource telling how to input questions you might typically ask a CAS, since apparently entering straight-up Mathematica code doesn't always work (I guess Wolfram still wants to sell copies of Mathematica). One of my early thoughts was that I should make one. And then I thought, surely somebody else has already done so. In fact, the folks at w|a probably already have some nice documentation online. I made a note to look into it, and thought it funny that I was hoping to find documentation for such an online system.
Not long after that, and before I did any more playing with things, Maria Anderson, @busynessgirl on Twitter, posted a tweet: "I am toying with the idea of taking a standard algebra TOC and putting up a webpage that shows which topics W|A can do." A fantastic idea (which she quickly refined: webpage -> wiki). Extend it to calculus, and I'm there. And show not just what it can do, but what it can't do, what it does wrong (or oddly), and ways to make it do what it can do.
I think such a thing should come into being. Perhaps it already has, and I missed it? Or perhaps there is some nice documentation for w|a that I've not yet found? If either of these is the case, could somebody point me to it?
If there is no such thing yet, I say it's time to make one. I'm getting antsy. In the comments below, if you want such a wiki to exist, would you please leave some helpful feedback? I'm particularly interested in: (1) What (free, hosted) wiki software would you suggest or suggest avoiding? I think right now I'm leaning toward wikispaces, though I've not looked into things a whole lot. (2) What should it be called? (3) Any other comments or suggestions you have.
To get things rolling, I'll say that this coming Saturday (May 30), if no links are provided to an existing webpage, I'll start a wiki somewhere that seems to fit the consensus of the comments (I hope there are comments, and they have a consensus). I'll then let you know where it is.
Update 20090526: Derek Bruff left a comment that he was starting one, and posted the link http://walphawiki.wikidot.com/calculus-i via twitter. Looks promising!
Thursday, May 7, 2009
Changing Calculus
Calculus, or at least the derivative half of it, is the (a?) study of rates of change. What I've been wondering recently is how instructors are thinking about change - in their curricula.
I know we've had calculators for quite some time that can do lots of the work we assign our kids. There has always been a price barrier for students using them though. I'm thinking Wolfram Alpha is about to change that (when it goes live later this month).
There has always (well, for quite some time, anyway) been integrals.wolfram.com, which will compute integrals (a big part of a calc 2 course). However, no indication is given there about how to obtain the solution. According to the ReadWriteWeb account of Wolfram Alpha, you can ask it to do an integral, and also ask to see the steps in the computation.
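To make the "machines will do the computation" point concrete, here's a small sketch using sympy, a free Python library; it's standing in for Wolfram Alpha here, so take it as an illustration of the general phenomenon rather than of W|A itself.

```python
# A free, scriptable tool happily doing a typical calc-2 integral symbolically.
import sympy as sp

x = sp.symbols("x")
integrand = x * sp.exp(x)              # a standard integration-by-parts exercise
antiderivative = sp.integrate(integrand, x)
print(antiderivative)                  # an antiderivative of x*e^x, i.e. x*e^x - e^x in some equivalent form
```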
I think this is just one sign, of many, that calculus class will be changing. Sure, technology has been around (behind a price barrier) that will give students answers. Teachers could typically rely on "Show All Work" to hopefully get their students to not bother with the calculators. But now, perhaps, "Show All Work" is also done by the machines, and now it's free. How should I be changing the setup of my calculus class to accommodate this shift?
It seems to me that my classes should start spending less time going through the algebra and "doing integrals" (though not completely removing this from the syllabus), and spend more time finding ways to use them to solve problems. Perhaps try to work some more theory into things, besides just "Oh, look, with functions that look like blah, a substitution blah makes them easier to integrate". I need to figure out how to shift my classes from "do the algebra to work out this computation" to "set up a computation that will determine the answer to this 'interesting' question".
Wolfram Alpha, which has brought this issue up most pressingly (in my mind), might also be a useful tool in shifting how my calculus courses are set up. By the looks of things, Wolfram Alpha has access to lots and lots of data, and can do lots and lots of interesting computation with it. So perhaps it will be a great way to find and create new problems, and give students interesting opportunities to find solutions. Of course, it's too soon to say, because the service isn't up yet. But it will be soon.
So, have people already started making these changes, and I'm just behind in my teaching (as is the rest of my school)? If so, how do I get to where you are? What should I be doing? What are the "interesting" problems I should have my students thinking about, instead of the interesting (in terms of symbol pushing) problems they currently do? Perhaps the tools that I'm just starting to see available for free in Wolfram Alpha are already around (anybody have some links for us)? Or is this all a non-issue, because doing 10 steps of algebra in each of 10 problems, each with a different algebra trick, is what we want our students to be able to do after they're through a calculus class (because in the "real world" (which I'm assuming is out there) they'll have to do everything by hand, no computers)?
I know the technology in math classes debate is not a new one. But I think it is getting more pressing. Maybe I've just been reading too much online/tech news.
I also know this is not the only question that should go into changing courses (if a change is going to happen). What is the goal of a calculus course? How does it fit into the entire mathematics curriculum? And what are the answers to these questions in terms of students going into mathematics, versus science, versus the arts? What actual calculus (and other math) should they be getting out of my class? What other things should they be getting out of my class (how to read a math text? how to present a mathematical solution? how to write one?)? What other questions am I supposed to be asking?
Apparently giving a final exam today is making me philosophical.
Thursday, March 12, 2009
Math Blogroll in OPML
I don't spend a lot of time visiting the actual pages for many of the blogs I follow, since I get all (or at least most, and the main portion) of the content from their rss/atom feeds. Recently, though, one of the feeds I follow indicated that its author was quitting. For whatever reason (perhaps because it mentioned having the world's best math blogroll), I was inspired to visit the actual page, instead of just removing the feed from my list (or doing nothing).
The feed was from Vlorbik on Math Ed. Upon visiting the page, I found that Vlorbik kept a pretty substantial blogroll of math blogs. Liking to be in the know, I figured I might subscribe to some. Of course, I already probably do subscribe to some (and author some :)), but there are surely plenty there that I don't subscribe to. And perhaps some of them are ones I would like to follow. But I didn't want to click each link, load each page, find its feed, and add it to Google Reader. I'm pretty lazy, and my computer would slow down a bit and frustrate me.
This evening, though, I decided to see if I could write a script to grab the rss/atom feeds for any or all of the linked pages in the blogroll. I had a great time doing so. Remembering some fun pattern matching variables in perl (like \$' and \$& (the \ there only because of how I'm doing LaTeX in Blogger)), and using curl to grab the pages... good times. Then some reformatting of appropriate strings, and out pops an OPML file. Handy, because that's what Google Reader expects if you want to import a bunch of feeds. I've played with similar things before.
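That perl script isn't shown here; a rough Python sketch of the same idea (with placeholder blogroll URLs, and a crude pattern match in the same spirit as the perl hackery) would look something like this:

```python
# Fetch each blog's front page, look for its advertised rss/atom feed, and
# write everything out as OPML for Google Reader to import.
# The blogroll URLs here are placeholders.
import re
import urllib.request
from xml.sax.saxutils import escape

blogroll = ["http://example-math-blog-1.example/", "http://example-math-blog-2.example/"]

outlines = []
for url in blogroll:
    try:
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    except OSError:
        continue  # skip pages that won't load
    # crude match for a <link rel="alternate" ...> feed declaration
    m = re.search(r'<link[^>]*rel=["\']alternate["\'][^>]*href=["\']([^"\']+)["\']', html, re.I)
    if m:
        feed_url = escape(m.group(1), {'"': "&quot;"})
        page_url = escape(url, {'"': "&quot;"})
        outlines.append('<outline type="rss" xmlUrl="%s" htmlUrl="%s"/>' % (feed_url, page_url))

opml = ("<opml version='1.0'><head><title>vlorbik</title></head>"
        "<body><outline title='vlorbik' text='vlorbik'>%s</outline></body></opml>" % "".join(outlines))
with open("vlorbik.opml", "w") as f:
    f.write(opml)
```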
Anyway, the long and short of it is, I thought perhaps other people might find this OPML file helpful. Blogger won't let me upload anything besides pictures (where's my damn GDrive?), so the file is currently (as of this writing) on my UVA personal page, here. If you'd like to blindly add these feeds to your feed reader, and then trim them down individually based on content or whatever, I encourage you to do so. The only reader I've used is Google's, so I'll give some instructions for that.
The first step is to download my OPML file, and save it somewhere convenient (you only need it temporarily on your computer). In Reader, at the bottom of the left-hand pane is the 'Manage Subscriptions' link. Click on that, and then the 'Import/Export' link at the top of the settings page that pops up. In the file upload form where it says 'Select an OPML file to upload', pick the file out from wherever you saved it, and then click 'Upload'. Wait patiently as Google imports the new feeds (it really doesn't take that long, though it might take longer for news items to start flowing in). It'll send you back to the main settings page, so click 'Back to Google Reader' to start reading. You'll notice that the feeds all show up in a folder in your subscriptions panel, called 'vlorbik' (if you already have such a folder, you might modify my OPML file before upload... I should have told you that earlier). If you already subscribed to one of the feeds, it won't mess anything up, and they won't show up as duplicates in your news stream. Of course, when making this file I grabbed the atom files, where available, so if you are subscribed to the rss feed (as I am, in many cases), then you will have duplicates. But whatever, I'll let you sort out your own subscription list.
So, with this success, I feel like perhaps I should visit actual pages (instead of just watching the news stream go by in reader) more often. Perhaps find some other blogrolls?
Anyway, enjoy. Sorry, Vlorbik, that I only started getting to know you on your way out.
Wednesday, October 1, 2008
Homework Helper
I've been a little bit frustrated with the way the discussion session for my calculus class has been going recently. That time is set aside as a time for students to ask whatever questions they have, without me lecturing on any new content. Generally the questions they have are 'can you do this homework problem?' or 'I got stuck on this problem, can we go through it?'. Those are fine questions that I'm happy to answer. However, I'm getting the impression that many of the students have not yet looked at the assignment, and are just waiting for me to do half of it for them. Of course, this'll come back to bite them on the exam, but it's fairly frustrating all around. So I've been trying to decide what to do about it.
It occurred to me today that even just answering those questions asked by the students who have looked at the assignment isn't very efficient. If they've already started, but gotten stuck, it'd be fairly quick for me to sit down with them individually, find their error, and send them on their merry way. Even there, though, that's not what I should be doing. It's easy for me to spot errors, generally. Especially when I've already looked at the problem with several other students. But it would be hugely valuable for students to be able to find their own mistakes. It can be maddening trying to find your own mistakes, of course, but it's an important skill to have.
A good way to practice finding mistakes, even if they get all of their own problems correct, would be to help identify mistakes in other people's work. Of course, this process can be ironed out a little online. I am envisioning a system where students can go and enter the work they have on a problem, up to the point where they got stuck. Then other students could go and try to find errors in people's work. This way people that get stuck can get help whenever it's convenient for them (as opposed to waiting for office hours or something), and students can practice finding errors in work.
It seems there should be some sort of credits system involved. At the beginning of the semester, students have, say... 3 credits, or 5 or something. A credit gives you permission to ask a question. To earn credits, you submit a bug report on another person's question. Perhaps a bug report just identifies what line the error occurs on, without identifying the error. And I guess answers would need to be verified before credit is added to the person who submitted the answer. Perhaps the person asking the question verifies it?
That's about as far as I've taken the idea today. Clearly there'd have to be an easy way to enter work, perhaps with some sort of graphical formula editor. Also probably some anonymity, so you can see questions, but not who submitted them (nor who answered them?). It also seems like what might happen is that the people who have the most questions might have a hard time spotting other people's mistakes in order to earn credits to ask more questions. So perhaps there's a way to account for that. Something like... if you earn lower than an N on the exam, each point less than that gets you a free credit?
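Just to pin down the bookkeeping I'm imagining, here's a toy sketch in Python; every number and rule in it (the starting balance, the one-credit bug bounty, the exam adjustment) is a placeholder, not a worked-out policy.

```python
# A bare-bones sketch of the credit system described above.
class HomeworkHelper:
    def __init__(self, starting_credits=3):
        self.credits = {}                # student -> remaining credits
        self.starting = starting_credits

    def ask_question(self, student):
        # asking a question costs one credit
        balance = self.credits.setdefault(student, self.starting)
        if balance < 1:
            raise ValueError("no credits left; find a bug in someone else's work first")
        self.credits[student] = balance - 1

    def verified_bug_report(self, student):
        # credited only after the question's author confirms the reported error
        self.credits[student] = self.credits.setdefault(student, self.starting) + 1

    def exam_adjustment(self, student, score, threshold=70):
        # one free credit per point below the threshold, as floated above
        if score < threshold:
            self.credits[student] = self.credits.setdefault(student, self.starting) + (threshold - score)
```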
Anyway, that's a day's thought on the idea. What do you all think? Do you know of a system that does something like this already? Could something like the above idea be worthwhile and helpful? How could it fail? Where does it need improvement? What additional policies might you use?
Tuesday, August 5, 2008
Experimental Get Together
I recently decided it would be nice to have an organized get together of grad students in the math department here at UVA. Being summer, I figured plenty of people wouldn't be around, and I also figured not everybody would be interested. Even more so, I knew that I didn't want to be in charge of organizing such a thing. But if I didn't, I wasn't sure anybody else would.
To overcome this gap, I decided to try to make it self-organizing. I set up a spreadsheet on Google Docs and set it so that anybody could edit it. This was intended to be a repository of information about the outing: who was going, how they were getting there, what (if any) food they planned on bringing, and what sorts of activities they hoped to do while out and about. After entering my info, and having a friend in the department make sure he could edit it as well, I sent out an email to the math graduate students. In the email I emphasized that I didn't want to be in charge, and pointed everybody to the spreadsheet. I figured we're all supposedly smart (enough) people, we could certainly organize ourselves.
It didn't quite work out as I had envisioned. With little more than a week to go before the outing, only 1 more person had added to the spreadsheet. I had also gotten a couple of emails from people saying they would be out of town. But no word from plenty of people who I thought would be interested and around. One day I ran across one of the visiting instructors, who was teaching some of the summer classes UVA offers for incoming and rising second year grad students (it's an awesome feature of the department). Apparently he'd been thinking about having a group picnic or so for his class, and asked if he could just merge with my little experiment. I figured this was fine, and that many of the attendees would be from that class anyway (even if they weren't on the spreadsheet). I sent him a link to the spreadsheet, but apparently there were some technical troubles using it - he indicated that Google had required a login, which I hadn't expected.
So anyway, the day came, and plenty of people turned out. The majority of them were from the summer class, or had probably heard about it directly from someone who was. We had a good time sitting around talking and eating munchies, then playing some frisbee golf. It's quite fun to watch 10ish people all coming your way throwing frisbees, by the way.
But I'm still a little confused about the apparent failure of the online organizational aspect of the outing. I thought having one specific place where anybody could go to see about the day, and add their 2 cents, would be helpful. It could organize rides, and appropriate amounts of food. I wonder if perhaps a Facebook group would have had more traction with the other students, as most of them were not much more than a year out of undergrad. Perhaps such outings really do better with an overseeing individual, a role which may, this time, have been played by the other instructor. I still think a wiki-style organized outing would work. Maybe I'll try again sometime. I expect I'll be around for another summer...
Thursday, July 31, 2008
sumidiot.com
I said I'd have one more post about sumidiot.com. Well, here it is. I've now got rss feeds available on sumidiot.com (all news, math jokes news, for now). So if you're interested in the site, I'll probably just let you follow those. Then if you aren't interested, you don't have to hear too much about it here.
Logging in seems to work (as long as you have cookies enabled), though it won't get you much right now. The next thing I will write should let you rate jokes, if you are logged in, and then it'll be totally worth it.
One of the things I wanted to play with was another way to access the math jokes. If you direct your browser to sumidiot.com/jokeN, and it's a proper N (like N=1 works), you'll see a joke as part of the webpage. For example, if you follow the 'Random' joke link, it'll send you to a /jokeN. If you'd rather just get the raw joke text, you can send 'Accept: text/plain' in your HTTP header. I'm pretty sure this doesn't quite qualify as a web API, but it's the closest I'm going to get for a while. If nothing in this paragraph made any sense, just ignore it.
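For the curious, here's what that looks like from a script, sketched with the Python requests library; the URL and joke numbering just follow the description above, and whether any of it still resolves is another matter.

```python
# Same joke, two representations: HTML by default, raw text via the Accept header.
import requests

url = "http://sumidiot.com/joke1"

html_version = requests.get(url)                                     # normal browser-ish request
text_version = requests.get(url, headers={"Accept": "text/plain"})   # ask for the raw joke text

print(text_version.text)
```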
Ok, I guess I better get back to things I'm actually supposed to be doing with my time...
[Update (July 31, 2008): Changed the feed links above to use feedburner. Nice to have stats sometimes, I guess.]
Tuesday, July 29, 2008
Keeping Up
It might be a mathematician's obsessive-compulsive tendencies, or lack of proper motivation to do other things, but I like to try to keep up with all my rss/atom feeds, and largely with my twitter stream. I tried, for a while, setting up Google Reader with a folder that I didn't keep up-to-date with, and would just check randomly when I was bored, but it never quite sat right with me. I'll star items to read later, but once I get over 20 or 30, and certainly by about 40 starred items, I start getting pretty antsy. Furthermore, I check Reader's 'Discover' link every few days, because the thought of it not being an empty list of recommended feeds is a little annoying. Sometimes I wish I could turn off the recommendation engine, though I suppose not clicking 'Discover' is about the same.
This past week I moved into a new apartment, which was, itself, newly renovated with all brand new appliances. I mention this because one of my new appliances gives me something else to keep up with. My fridge has a pretty productive little ice-maker. As it is summer time, and I'm trying to do a fair bit of running, I need to be drinking lots of water anyway. And I don't like drinking water without lots of ice. So it's nice that there's always ice. At the same time, it's just one more thing to keep up with :)
Sunday, July 27, 2008
The .com(s)
For a while (longer than I'd like to admit), I've had the sumidiot.com domain. I never really got around to doing much with it, besides rarely messing about with how I wanted the menus to look and such. Then at some point I decided 'parallelality' was a cool name, and should have a cool webpage. I found parallelality.com unoccupied, so I figured I might as well keep it from being a stupid ad page or something. It's set up to just redirect to sumidiot.com, but if you can think of a cool use for it, please let me know. I'd be happy to see somebody do something worthwhile with it.
Before you head over there, you should know that making pages look nice is not something I'm good at. I'd be incredibly shocked if there weren't drastic rendering issues depending on your operating system and browser, or even just your default font settings. But I've only got so much time to work on this, and making it look pretty for everybody isn't my top priority. It looks acceptable to me, on my browser, so that'll have to be good enough for now. Don't like it? Make your own page.
Anyway, the news is that there is now, at least a little bit, something to look at there. Namely a large collection of math jokes. If you've got suggestions for how you'd like to interact with a large collection of math jokes, please leave me a comment below, and I'll see about making it happen. In the near future I hope to let you rate jokes, and star your favorites. Then you'll also be able to look at the most popular jokes. I'm also wondering about maybe putting a list of mathy comics up there. I just went through my list and found 3: abstruse goose, brown sharpie, and xkcd (does it have a high enough math/non-math ratio to count?). If you know any others, please leave a comment below.
I wanted to make this post earlier this week, but at one point I accidentally deleted all of the pages - on the server and on my own computer. So that was pretty stupid. But I guess I wanted to do a re-write anyway (that's why I was deleting things to begin with), so besides the time loss, it kinda worked out.
There's still plenty I'd like to do on the page, but there are probably more pressing priorities. I'm going to try to take at least a little time every week to mess about with it though. One of the next things I'd like to do is make a feed on that domain, so that I'm not always posting here about changes made there. Hopefully that'll cut down on the amount of posting here about changes there, though I'll probably still post about big changes here. Something for you to look forward to.
Wednesday, July 16, 2008
Debugging
Recently I have been trying to get OpenID working on a website. I want to use the JanRain OpenID PHP library, and it looked like the EasyOpenID extension to it would make things easier, so I wanted to use that as well.
My first few attempts with them seemed to have various setup issues (the blame for which rests solely with me, I concede), so I decided to remove everything and start fresh. I remembered at some point one of the issues being about getting the proper include path, so the first thing I did was put all of the .php files in the same directory. I also re-did the 'include' and 'require_once' lines to accommodate this, and removed the initial bits of the example OpenID consumer code that set up the php include path. I figured this would at least eliminate one of the problems, and if I got it all working from here, it should be feasible to undo this step.
So, the index.php loaded correctly and asked for my OpenID url. I entered one, and was sent to try_auth.php, where after a few seconds I was presented with a "500 Internal Server Error". Hmm. This is code other people have written and published. People who I expect blow me out of the water, programming-wise, so something strange is going on. Nevertheless, I tried tracking down the error. The only way I knew to do this was to put lots of error_log lines into the program to trace execution, and update my php.ini on the server to write errors to a log file. Reloading the try_auth.php file (with my url specified as a GET parameter) I'd still get the 500 error, and now I could track what was going on by also reloading the error log file. Slow and tedious, but it seemed to work.
When I finally tracked down the last line of the code that was executing, I was fairly surprised where it was. It was in JanRain's BigMath.php, and the actual line that was failing was a call to bcpowmod, a built-in PHP function (part of the encryption/decryption process, if I understand correctly). Since bcpowmod is only available in newer versions of PHP, BigMath.php has its own powmod function, which gets called if bcpowmod is not available. So I changed the code to skip bcpowmod, and use the powmod function in BigMath. Still died. The powmod function is basically a single while loop, so I stuck a counter in to see how much it was looping. Somewhere between 500 and 600 iterations of the loop, the code would fail. The precise number was different each time, but it was always somewhere in that range. In the process I found that bcpowmod was also getting called at an earlier stage, where it finished without issue.
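For context on what that while loop is doing, here's the standard square-and-multiply idea such a powmod fallback implements, sketched in Python rather than JanRain's actual PHP. The loop runs roughly once per bit of the exponent, so with the crypto-sized integers OpenID's Diffie-Hellman step throws around, that's hundreds of iterations of big-integer arithmetic, which is the neighborhood where mine was getting killed.

```python
# Generic square-and-multiply modular exponentiation, not JanRain's code.
def powmod(base, exponent, modulus):
    result = 1
    base %= modulus
    iterations = 0
    while exponent > 0:
        if exponent & 1:                      # low bit set: multiply this power in
            result = (result * base) % modulus
        base = (base * base) % modulus        # square for the next bit
        exponent >>= 1
        iterations += 1                       # one pass per bit of the exponent
    return result, iterations

# sanity check against python's built-in three-argument pow()
r, n = powmod(7, 2 ** 600 + 3, 10 ** 9 + 7)
assert r == pow(7, 2 ** 600 + 3, 10 ** 9 + 7)
print(n, "iterations")
```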
Ok, take a step back. Maybe it was something screwy with the OpenID provider I had picked. So I tried another one. No dice. When I finally decided to have some friends try, they also had the same error, and looking at the error log, it was in the same place. Ah, maybe wrap the offending code in a try/catch block. No dice. No exception thrown.
Well, maybe there was actually something strange with bcpowmod. So I had the code print out the values it was calling bcpowmod with (nice long integers) in the error log. I copied those into a little local script whose only job was to call bcpowmod. Worked fine. Ok, upload that to the server. Worked fine there. Hmm.
Perhaps the server was cutting the script off after a certain amount of time. I seem to recall, in reading Google App Engine documentation (not what I'm using, but still), that code had only a few seconds to return before it was interrupted, for performance reasons. Perhaps that's how the server I'm on is set up. So I found that php.ini contains those sorts of limiting parameters, both execution time and memory usage. Comparing with my local php.ini I noticed the memory usage allowance was a little lower on the server, so I tried temporarily upping it. Didn't work. The execution time allowance was set at the default 300 (seconds), and the script was dying after about 6, so I figured that wasn't it. All the same, to test it, I changed the allowed time to about 5 seconds, to see what would happen when the script was forced to quit early. For grins, I also told the script to pause for a few seconds using a usleep() call. This gave me a different error message, so I decided that wasn't the problem.
So, now I'm out of ideas. I contacted the hosting company by submitting a support request online. Tracking the work log, it looks like they've taken a look (at something, my notice at least, hopefully the issue itself), but it's now been 3 days (2 since they last checked it out). Makes me feel like this is at least a little bit of a justifiable puzzler.
Anybody have any ideas? If this gets resolved sometime, I'll let you know how. Perhaps it's just time to start over fresh? Maybe without the EasyOpenID bit? Just get something, anything, working?
Thursday, July 10, 2008
Invites and Elsewhere
I've recently gotten a chance to sign up at a couple of online services that are apparently in 'invite-only' beta. But they have given me invites. So I thought I'd extend, to you, beloved readers, those invites. I've got a couple of invites over at Twine, and a few at dailymile. Twine is supposedly a 'semantic web' application. As far as I can tell it's a new place to store bookmarks (though you can also post comments, notes, blog), and keep them well organized. The more info you put into Twine, the better it knows you, and the better it can recommend new items. That's my understanding, anyway. Dailymile is for people who run, bike, or swim. Some sort of social network for such people, where you can keep some information about your training as well as upcoming events. So anyway, if you'd like to try either service, leave a comment below, with your email (or, if you don't want your email showing up in public comments, you'll have to find another way to contact me - or I suppose I could delete the comment afterwards). I'll update this post, at the top, if/when I run out of invites.
So, since I've been signing up at places online and trying things out, I thought it might be a good idea to make a list of those places. And since they're all reasonably public, I went ahead and posted them on the 'contact' part of my other webpage. The list includes: this blog, twitter, reader shared items, twine, dailymile, mapmyrun, goodreads, flickr, youtube, facebook, and friendfeed. Feel free to find me there.
In the process of adding those links, I thought I should try something else out. This is the XFN (XHTML Friends Network) microformat. For the links I made to my other pages, I added rel="me" to the (x)html. Supposedly that's helpful in some sort of semantic web sense. Reading a little, though, I found that rel="me" is supposed to be symmetric. So am I now supposed to go to all of those other places and make pointers to all of the other places from them? Some of them don't have that sort of capability set up. So am I not to claim rel="me" when I link to them? I mean, I think I see the argument for rel="me" to be symmetric - if it's not, I can go around claiming any page I want.
Of course, if I'm only linking to my pages, it's not much of a 'Friends Network', huh?
I've also wondered before about linking to pages that mention me, maybe storing them all in some sort of 'mention file'. Not because I really care whether somebody who finds me online also finds all of my other mentions. What I was thinking was that if I linked to pages that mentioned me, bots might be able to distinguish between various instances of Nicholas Hamblet online (I think there is at least one other, in case you were curious). It seems like this could be helpful somehow. Like... I find an interesting mention of an 'A. Person' and wonder if it might be the 'A. Person' I went to high school with. So I go to some web service that has crawled 'mention files', and put in the page I found them on and the name (that page could, after all, mention many other people). If the web service has found a 'mention file' for that link and person, it'll let me know. Clearly this has issues: (1) several people could claim the same mention, either intentionally or not, (2) spam, as always, (3) more popular people get mentioned all the time, and keeping their 'mention file' up-to-date would be a hassle. All the same, it's something I've wondered about. I guess one way would be to set up a web service that crawls, finds names, and stores them. If it keeps track of context, it may be able to guess intelligently enough to distinguish individuals with the same name, and with the help of some 'mention files' and appropriate microformats, perhaps it could do ok? Perhaps there's already such a thing? Either the 'mention files' or the web service?
Thursday, June 26, 2008
Introducing: Reader Rater
I'm not terribly impressed with the trend reporting in Google Reader. It doesn't seem to match well with how I use Reader. I'd like to have a somewhat better idea of which of my feeds I actually read the most. Even more so, I'd like to know which feeds I don't really care for, so I can remove them, or at least reorganize my lists.
With that in mind, I sat down this morning (and into the afternoon) with a couple of existing scripts, Dive Into Greasemonkey, and firebug (awesome), and cranked out 'Reader Rater'. Clever, huh? It's a greasemonkey script, and it seems to be an ok first go at something. The script is here, and accompanying (brief) documentation here.
Let me know if you try it, and how it goes. If you find any bugs, or have any suggestions (easy to implement ones, please :)), please leave a comment below.
[While I was looking up references for this post, I found this script, and at first glance it looks pretty nice.]
Tuesday, June 3, 2008
Why Not Wikipedia?
I'd like to apologize for my previous post. Not its length, not that it was math - I'd like to apologize for its location.
With all of the links in that post being links to wikipedia articles, you may have wondered why I didn't just refer you to the wikipedia page on homotopy colimits and be done with it. Well, unless I'm missing something, there isn't a wikipedia page on homotopy colimits (or their dual, homotopy limits). Ok, fair enough. Wikipedia is extensive, but nobody expects it to be complete. So why didn't I make the homotopy colimit page?
Laziness. I like the format of my previous post, whether you do or not. But I don't believe that it is what a wikipedia article should look like. I wish I would sit down and take the time to write a proper wikipedia article. But there are lots of things I wish I were doing that I'm not, and I don't see that changing anytime soon. For now, you can join me in hating me for not correcting this gap in wikipedia.
Monday, June 2, 2008
Milestone!
I've been enjoying watching hits trickle in based on the Feedjit widget on the right of this page. Today I noticed I had hits from the 6 continents you'd expect. If anybody knows somebody in Antarctica with a few spare minutes, you could really make my day (though I don't know why you'd care to). At first it looked like that hit in Australia was from my birthplace of Alice Springs. But zooming in, apparently it's a little south of that.
Anyway, I'd like to take this chance to apologize to all of you who didn't find what you were looking for here. I know I've been even worse than usual about content with any value recently. I'd like to correct that.
Tuesday, May 27, 2008
OpenID Revisit
A while ago I wrote a joyful post about OpenID and how I thought it was great. Admittedly, since then I have hardly used my OpenID identity (any of them). I've remained positive about the idea, and keep hoping I'll find/make the time this summer to do enough webpage building sorts of things to get to a point where I would accept OpenID login.
So I've been brought down a bit by the article "The problem(s) with OpenID". It's a little lengthy, and apparently largely a wrap-up of many other posts from many other places. Still, it seems like a fairly important article to read.
Also today I listened to a podcast from April concerning DataPortability with guest Jonathan Vanasco, and found it to be pretty interesting. In particular, it made me think more about the idea of having many different faces online, e.g., something like a MySpace versus LinkedIn account. And, as coincidences go, just yesterday Mr. Vanasco wrote a post: "Data Sportability". All quite interesting, and I'm looking forward to the promised 'upcoming' posts.
I wish I were more thoughtful about security and privacy and things. The web sure is an exciting place. Articles like these tend to make me wonder what I should be doing with my life. There are smarter people than me, who work harder than me, all over the place, getting all sorts of amazing things done. What sort of contribution can I actually make? Where should I direct my energy so that a contribution actually can be made?
Monday, May 26, 2008
Feed Update
I just decided to try out FeedBurner for this blog (new feed here). I'm not sure how that works for current subscribers - will you even see this message? Anyway, if you get a minute, perhaps update to the new feed.
I decided to do this, because according to the Feedjit widget here, I'm actually getting hits from... well... around (I know, plenty of them are me, seeing how the page looks. But I know I'm not all of them). So I thought I'd get some more information about that, and FeedBurner seemed like a common way to do so.
Please bear with me.
Thursday, May 15, 2008
Global Comments
I'm part of the problem. Minutes ago, I decided to 'Share with note' an item from Google Reader instead of posting a comment on the site hosting the original article. In my defense, I didn't have a huge amount to say, I'd be surprised if it was truly valuable, and I would have had to register at the site in order to leave a comment (if they had allowed, say, logging in via openid, I would have). I want to be more a part of the online community, leaving comments on posts that strike me... but at the same time I am fairly lazy. So instead of contributing to the community, I contributed to the problem: comment tracking.
By not posting my comment somewhere associated with the original article, I have essentially forked the conversation. Somebody trying to keep up with the commentary the article generates doesn't know what I said (not a true loss, in this case, but the principle is there).
I was wondering a little about comment aggregation recently, and systems to keep all comments about an article (or links back to articles directly inspired by an article). I hope they develop more in the near future, and I vaguely wonder what, if any, will be the effect on sites like digg or slashdot (more on this below).
[Warning: this post gets a little rambly about now. If you've got better things to do, don't let me keep you.]
Another aspect of this commenting issue that has been on my mind, as of late, is the one I mentioned in a recent post: there are too many venues in which to publish thoughts. I had something I wanted to say about the article I mentioned at the start of this post. I could have left a comment on the original page. I could have thought more about what I wanted to say and made a post here (and put a link in the comments on the original page). I could have left a tweet pointing to the article (but with 140 characters, some occupied by an obscure tinyurl, could I have written a tweet that would actually inspire people to take a gamble on a url when they had no idea where it pointed?). I could have just 'shared' the item in my reader. All of these different venues are hosted by different services, and read by entirely different audiences.
What I dream about is a global interface. I don't want to have to post the same thing many times so that 'my audience' sees it. I want to post in one place, and then if I actually have an audience, have the system automatically send them a message (or wait for an rss request) that says something has been posted. And I can pick a subset of my audience for particular messages, and global reading permissions (I could choose, in fact, just a single person, with only that person allowed to read it, and have just eliminated my need for email/IM). Except my audience has the option of just being an audience when I talk about a particular topic (e.g., they could filter my posts down to those tagged with 'math' (future versions of this system handle the tagging for me, as the semantic web and natural language processing makes better progress)). And when I tell the system that my post was related to, or inspired by, some article I found, the system goes to that article and lets it know it inspired me, with a link to what I had to say.
Again, but from a reader's perspective: instead of finding a blog I want to follow, and noticing that they also twitter, and... wherever else they post, I can just click 'follow', and get their one global feed (tailored the way they want for privacy). Hopefully this also limits duplication from people whose posts get published in a few locations. And then I notice that a particular source has a hard time staying on track, and talks about a diverse collection of things. I've only got so much time in the day and I don't care about the person's fish recipes (or whatever nonsense), so I can set it up to filter incoming messages/feeds from them based on topics/tags that interest me. Better yet, the system sorts all of my incoming traffic based on my APML data and the audience of the message (if I'm the only intended recipient, and I ignore it... that's not very polite), and soon starts tossing out things I never read (or at least dropping them in some folder I rarely check). And when I do find somebody new to follow, my system asks them for recommendations, compares against my current subscriptions and interests, and leads me to new interesting material. Instead of choosing subcategories of digg to follow, my system finds pages directly.
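None of this exists, as far as I know, but the filtering half is simple enough to pin down with a toy python sketch. The posts, tags, and group names are all invented:

    # toy model: everything I publish goes into one feed, and each reader filters it
    posts = [
        {'tags': ['math'], 'audience': 'everyone', 'body': 'something about homotopy colimits'},
        {'tags': ['recipes'], 'audience': 'everyone', 'body': 'fish recipes (or whatever nonsense)'},
        {'tags': ['web'], 'audience': 'close friends', 'body': 'a note just for people who know me'},
    ]

    def visible_to(post, reader_groups):
        # a reader sees a post if it's public, or aimed at a group they belong to
        return post['audience'] == 'everyone' or post['audience'] in reader_groups

    def filtered_feed(posts, reader_groups, wanted_tags):
        # what one reader actually sees: only posts they're allowed to see,
        # and only on the topics they've asked for
        return [post for post in posts
                if visible_to(post, reader_groups) and set(post['tags']) & wanted_tags]

    # a stranger who only cares about the math
    for post in filtered_feed(posts, reader_groups=set(), wanted_tags={'math'}):
        print(post['body'])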
And we all live happily ever after...
Thursday, May 8, 2008
The Problem with Posting
There are too many venues, that's the problem. I have no audience, and I still 'publish' in 3 places: blogger, twitter, and, I'll go ahead and count, my reader shared items. Of course, with the recent updates to reader shared items, what is the point in posting to twitter anymore (was there ever a point?)? I can post random notes to my shared items without any associated link... that's what I do on twitter (I know, I know, you can't follow anybody in reader... unless you sign up for their shared feeds). On the flip side, I can post links on twitter, but why bother if they are shared through reader? And I don't do any real amount of multimedia (pictures, video) posting, but I can't imagine things would be better if I did. And this is just my own individual decentralized posting. What about the people I follow, the random networks they're all on, and the services they use? We're all on different IM/blog/microblog/social networks, and it's a mess.
Everyone (e.g., here are two) is abuzz today (I guess yesterday now, sorry) about myspace joining DataPortability. And it is good news. I think it's only a first step (one of many first steps being made), but a clearly valuable one. Nothing will apparently come out for a few weeks, but I'd still like to think about what's next (since I have been anyway). Perhaps something like the following:
From my main page I can post new material, manage my friend networks, maintain my profile, manage access to my material and profile, and read all of the material people have sent me. Let's start with friend networks. I've set up lots of little networks for myself (instead of signing up to whichever ones online) - some friends are in several of the networks, some are people I've never met (imported from my twitter/blog followers). Some are people I'll only communicate with via email, others are more instant message sorts of friends. Some are people that I'll never hear from, but who've decided to read what I've got to say. Others are the opposite, people I read but will probably never communicate to. The university has set up a network for the class I'm teaching, as well as a network for the faculty and staff. I've got lots of information about some of these people (close friends, online family), and little more than a username for plenty more (blog followers). I've got all of my contacts, at all levels, accessible in one place.
When I want to post something new, I distinguish it as a noted link, or a global tweet, or a local tweet, or a blog post, or a geophysical post (or...?), or I pick an individual out to send a message (email, IM? same thing). So my random twitter followers only get to see my global tweets (and noted links, or links to new blog posts I make, if I want to allow that), while my closer friends might see my local tweets (as well as global ones). Generally I put random 'what I'm doing' tweets in the local bit, and random 'interesting thought' (to me) tweets in the global bit. I've also set things up so that the world can see my blog posts and global tweets (though if I wanted to make an exception for any individual post, it'd be easy to do). My geophysical posts go out to people physically nearby (I've set it up to broadcast to listeners in c'ville, since that's where I almost always am. More advanced (mobile) users can hook things up with their phone to broadcast to local listeners wherever they are). I don't use this feature much, but I hear it's popular. Sometimes when I get bored I take a look at other people's local messages. Or post a message asking if anybody is up for a run this afternoon, or some frisbee.
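Again, all of this is imagined, but the routing is easy enough to sketch in python. The post types and network names come from the paragraph above; everything else is made up:

    # which of my (imagined) friend networks sees which kind of post
    ROUTING = {
        'global tweet': ['twitter followers', 'close friends'],
        'local tweet': ['close friends'],
        'blog post': ['blog followers', 'twitter followers'],
        'noted link': ['blog followers'],
        'geophysical post': ['listeners nearby'],
    }

    def deliver(post_type, body, networks):
        # send one post to everybody in every network that's supposed to see that type
        for network in ROUTING.get(post_type, []):
            for person in networks.get(network, []):
                print('to %s (via %s): %s' % (person, network, body))

    # made-up networks and a made-up local tweet
    networks = {'close friends': ['a friend'], 'twitter followers': ['a follower']}
    deliver('local tweet', 'anybody up for a run this afternoon, or some frisbee?', networks)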
So how do people see what I've posted? You must have noticed that in the above I only talk about generating content, not presentation. When somebody goes to my base address (like sumidiot.blogspot.com) my server points them to the suggested presentation means (for example, a link to a blogger template), which is something I've customized (added various widgets, changed the layout, etc). But since my data is all stored in standardized formats, it is quite likely that individuals accessing my page have set up their browsers to ignore my suggestion (actually, probably set things up to not even bother asking), and will use their own template. This is a natural extension of the sorts of post-production scripts people already run (greasemonkey, ad-block plus).
People can see my global posts without any further interaction on my part, because I've set things up to automatically accept requests to join my global feeds (there's nothing stopping people from joining the rss feed for my blog, or my tweets). When somebody signs up, they show up in my list of friends, under my global followers. This list is mostly just used as a distribution list for when I write something new, but I also feel like it's polite to then sign up for my followers' feeds (most of the time). When I meet new people, I can add them to my various friend networks and they will then be able to follow my posts, if they want. If somebody annoys me, I can remove them from the list, and they won't get my (vitally important) updates. I can even block them from signing up again.
What's great about this system is that it handles all of my online communication. Emails, IMs, tweets, blog postings, and feed subscriptions all come and go through this personal communication channel. I mean... IMs are just rapid-fire emails, emails are just individually-audienced blog posts, and RSS feeds are emails you don't respond to (but are, of course, encouraged to make insightful comments about). Now I've got one system to both send and receive all of it.
Perhaps some people will have this set up through some company. Like amazon hosts everything for you, or myspace (given recent events). These companies will provide nice ways to interface with all of your data, but the better ones also make it easy to bundle up your data and take it to a different service. Open source projects will also provide these services, but you'll still have to find a host (surely there is an analogy to setting up a wiki using twiki, or a bulletin board using phpbb).
And one day, perhaps a nice cloud will come along, and I can have my setup there, so I don't have to worry about porting my data. Semantic web technologies will determine the content of my posts, and little autonomous agents will wander around the cloud, telling people that are interested in such content that I've posted something new. And dually, I'll have little agents wandering around gathering up things they think I'll find interesting (instead of wading through rss feeds for blogs that don't have a focused topic), and sending little links back to me. They'll not stop at forwarding pages, but will send me directly to the original author (so instead of looking at the digg page for an article, I just see the article (and perhaps a note it made it to digg)). The comments generated by anybody, anywhere, will all (mod privacy) be accessible to me. My little agents are turning up more and more interesting items every day...
Monday, May 5, 2008
Notes in Reader
Fun new feature in Google Reader: add notes to shared items. I've been wanting this recently, so that when I share an item, I can attach my own little blurb. And it comes with a bookmarklet, so now I can share items without having found them in a feed. Hurray! My own little digg (minus digg's RDFa, I guess).
A few things:
- What is the keyboard shortcut? [Update 8 May 2008: Shift-D]
- In the box that pops up where I enter my note, there's a check box for 'Add to shared items'. What happens when I uncheck that? Where does my note go? It doesn't seem to go to my Google Notebook(s). [Ok, it seems to just go to 'Notes' under 'Your Stuff' in the left panel of Reader, and I guess looking at it there I could then 'Share' it]
- I've got the shared items widget on this blogger page, but it ignores my notes. That needs to not happen.
- [Added after first posting] How do I un-note something?
[Update (<5 minutes after original post): Also something relatively new (I never saw announcements for it, and it didn't seem to be there a few days ago) in reader is that you can see who your friends are that can see your shared items. Of course... I don't have many :( ]
[Update (<10 minutes after original post): Could this be a step in liberating twitter? Also I noticed my first link was a link to a feedburner page, so I updated that]