In my last two posts I talked specifically about the process. This round I’m going to discuss the tools of the trade. Note: These are my views and opinions and not those of PASS in any way, shape, or form.
This was my first year on the program committee. From talking to other people on the committee, there have been several tools and methods used in the past to do the work of selecting abstracts. I can’t speak to the previous tools, just to this year’s, and it will change again next year. PASS is a dynamic, volunteer organization, and things can change pretty quickly when they need to. I also have to say that Elena Sebastiano and Jeremiah Peschka were both very helpful and responsive to questions. Lance Harra was also on the Professional Development track team and was easy to work with. This isn’t Lance’s first time doing this, and for that I was grateful. He helped me stay focused and really helped guide the selection process. In all, the final selection process was pretty smooth in that regard.
The Tool
At first blush, the web-based tool we were to use seemed pretty simple. Jeremiah did a training session and I felt like I could use it without much fuss. The tool is integrated into the main PASS website, which is based on DotNetNuke. Since they have to work inside the DNN framework, there are some limitations. It has a limited amount of space to display a lot of information, so in the inner panel you often had to scroll both down and to the right.
The main page has all the sessions listed, but spread across multiple pages. I actually missed this my first night, going through only the first page of submissions. I thought there were only about 20 submissions because I couldn’t see the page counter until I scrolled the frame all the way to the bottom.
The detail page was laid out pretty well; the first thing I would change, if possible, would be the column names. They looked just like that: camel-case column names. Secondly, the section to enter notes and set ratings was a little slow to use. You had to click a rate button, fill out the form, and submit, which requires a server round trip every time. It does keep you from losing anything you have put into the form so far, though.
When you are done with the detail ratings, you are back on the main page for your final ratings. Again, every button push was a server call, and it got tiring at times. This is where you set your final rating and the reason for rejecting or approving an abstract. You also have to set a final reason that is ultimately used by the heads of the program committee to pick the session list.
This brings me to the selection of reasons an abstract was rejected. I have to say the list was limited, and it was difficult to choose from. There isn’t a “You were awesome, but there weren’t enough slots” option in the drop-down. We had to put in a reason, so I tried to pick the most appropriate one I could.
In all, the tool was functional and allowed us to do the work. Again, this is the first year for this tool and I’m sure it will undergo some changes.
Odds and Ends
One of the things I thought was odd was the lack of knowledge sharing. I could see my partner’s totals for each submission, but not any of the notes. Since we aren’t in the same room, let alone the same state, it pretty much means out-of-band emails or phone calls to talk about abstracts. Also, as this was my first time doing this, it would have been nice to see why Lance had rated an abstract the way he did.
After talking with some of the submitters, it appears that they don’t get any feedback on why they were or weren’t chosen, just what we picked in the reasons drop-down. I took notes on pretty much every abstract with the assumption that it would be fed back to the submitters, so if they chose to submit again next year they wouldn’t make some of the same mistakes.
Lastly, a speaker is limited in the number of sessions they can present. This guarantees that you don’t see the same three people the whole summit. The problem is we don’t know if a speaker has been chosen more than the allotted number of times. If we pick someone and they then get pulled for another track, we have to depend on our alternate selection to fill the slot. We did some second-guessing on some folks with the assumption they would be gobbled up by other tracks. In hindsight, it would have been helpful to know that a person had put in, say, five submissions, and to which tracks, so we could make our choices a little better. Possibly prioritize the tracks and publish downstream to the other tracks who is off the table. Maybe even allow the submitter to put a preference on their submissions, so we have just that little bit more information on what they would like to speak on as well.