Thursday, May 20, 2010

Success - and Learning from Failure

The greatest barrier to success is the fear of failure.


- Sven Goran Eriksson


Well, what a ride! I started out in January of this year proposing to do a Master's thesis as part of my degree in Professional and Technical Communication. As part of my thesis, I developed a web-based application for the assessment of student portfolios by a community of readers. Keep in mind that I hadn't programmed since my undergrad days at NJIT (25 years ago!) and so I was faced with the daunting task of becoming proficient at HTML, learning PHP, and learning MySQL. All this in the course of a few weeks so that I could actually develop the application and have it finished in time to perform usability testing and complete my thesis.


This wasn't just a theoretical application, mind you. I had to ensure that the application actually worked because it was going to be put to full use on May 13th of 2010 when the freshman-level writing professors would be using it to score approximately 80 student portfolios.


I was terrified the night before the actual portfolio assessment session. I had visions of the application crashing and burning in all its glory. I had tested it as much as possible during development, but that's nowhere near the same as putting it out there for 20 or more professors to use in real time.


So was it a success? Yes! Did it have problems? Yes! I realized that the way I had implemented adjudications did not work based on the way the application was actually used, so I have already started to redesign that part of the application. But in general, the application performed beautifully.


So what does this all mean? It means that perseverance and the development of knowledge are just as important to defining success as a positive outcome. The application did not work flawlessly, but there were no recriminations from the professors - there were suggestions of how to make the application better, there was increased knowledge and understanding on my part, and all those factors together point to the fact that the entire experience was a success.


In fact, it was such a success that I have now graduated from NJIT with my Master of Science in Professional and Technical Communication, and I am applying for admission to NJIT's PhD program in Information Systems. I'm sure that experience will generate even more posts to this blog, so wish me luck!

Thursday, February 18, 2010

Designing an Assessment Application

Don't make something unless it is both necessary and useful; but if it is both necessary and useful, don't hesitate to make it beautiful.


- Shaker Philosophy



I've been fairly quiet again, but it's not because I don't care about you, my readers. I've just been really busy trying to work on my thesis proposal. Now that the proposal is in good shape, I'm doing rapid prototyping for my application to work on the HTML screens. I'm going to post them here (there are quite a few) in the hopes that I can get some feedback on usability design. These are non-functioning HTML pages - there are no active links. And I know the design is very simple, but my development cycle is quite compressed so I don't have a lot of time to make it "pretty" right now.


So here are the pages - let me know what you think.


This is the Welcome Page that users will see when they start the web-based application.


They then move on to the Login page, where both raters and administrators can log into the application.


From here the screens divide into two components: screens for the raters and screens for the administrators. I'll handle the screens for the raters first.


The first thing a rater does once he or she has logged in is to select a student to rate using this page.


Depending on the section being rated, the rater will then be presented with one of three scoring rubrics: Humanities 101, Humanities 352, and graduate-level MSPTC.



The rater must submit a score for each variable on the rubric unless the rater is adjudicating, but more on that in a minute.


When the rater submits the completed rubric, she is presented with choices for the next action, as shown on this page.


If the rater chooses to log out, she is presented with a screen asking if she is sure she wants to log out. If she answers yes, she is logged out and thanked for her time and efforts on a final screen.


If the rater chooses to rate another student, he goes back to the page to select the next student to rate.


Now, from the administrator side of the application, there are more functions available. When an administrator logs on, she is presented with the administrator screen listing possible actions. I'll step through each of these actions in order.


The Begin Assessment screen lets the administrator start the "official" rating period. Prior to that, any scores entered are not stored in the database. This is useful for the purposes of training the raters prior to starting the actual assessment.
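One simple way to implement that gate is to check an "assessment started" flag before persisting any score. The actual application was built in PHP with MySQL; this is just an illustrative sketch in Python, with hypothetical names:

```python
class ScoreStore:
    """Minimal sketch of the training-versus-official gate:
    scores submitted before Begin Assessment are discarded."""

    def __init__(self):
        self.assessment_started = False
        self.scores = []

    def begin_assessment(self):
        # Triggered from the administrator's Begin Assessment screen.
        self.assessment_started = True

    def submit_score(self, rater, student, variable, score):
        # Training scores (entered before the official start)
        # are accepted by the UI but never stored.
        if not self.assessment_started:
            return False
        self.scores.append((rater, student, variable, score))
        return True
```

The nice side effect of this design is that rater training can use the exact same screens as the real assessment, with no risk of polluting the data.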


If a faculty member walks into the assessment room and asks to be a rater, the administrator can use the Add New Rater screen to add that faculty member to the list of raters. Similarly, if a student knocks on the door and hands the administrator a last-minute portfolio for scoring, the admin can use the Add New Student screen to add that student to the list to be rated.


Each portfolio is rated by two raters. But if the two scores differ by more than one point (that is, they neither match nor are adjacent), we assign a third reader, called an adjudicator, to break the tie, so to speak. The administrator can find the portfolios that need an adjudicator and then assign them using this screen. Note that the adjudicator only provides scores for the variables requiring adjudication.
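The adjacency rule itself boils down to a one-line check applied per variable. Again, the real application implemented this in PHP against MySQL; here is a sketch of the logic in Python, with hypothetical variable names:

```python
def needs_adjudication(score_a: int, score_b: int) -> bool:
    """Two raters' scores for a variable need a third reader
    when they neither match nor are adjacent, i.e. they
    differ by more than one point."""
    return abs(score_a - score_b) > 1

def variables_to_adjudicate(rubric_a: dict, rubric_b: dict) -> list:
    """Only the disputed variables go to the adjudicator;
    both dicts map variable name -> score."""
    return [var for var in rubric_a
            if needs_adjudication(rubric_a[var], rubric_b[var])]
```

So a pair of scores like 2 and 4 on one variable sends just that variable to the adjudicator, while 3 and 4 on another is close enough to stand.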


The administrator can also monitor scores to evaluate the ongoing assessment process. Using the View Reports screen, the admin can select different types of reports to view.


  • View Report by Student lets the admin look at the scores from all raters for a particular student.

  • View Report by Rater lets the admin look at the scores being assigned by a particular rater.

  • View Report by Variable lets the admin see a bar chart of the distribution of scores for any particular variable from the rubric.

  • View Records Requiring Adjudication lets the admin see all records that needed adjudication, whether the adjudication has been completed or not. This is useful for identifying any raters whose scores consistently require adjudication.



Once the assessment is over, the administrator can export the data for analysis using either the SPSS or SAS software package by making the appropriate selection on the Export Data page.


That's the design of the application as it currently stands, which is still in the early stages. I look forward to your feedback.

Friday, January 8, 2010

Stereotype Threat - What Every Minority Should Know


The emotional, sexual, and psychological stereotyping of females begins when the doctor says: "It's a girl."


- Shirley Chisholm


I was doing a literature search for a research proposal and I came across a most fascinating study. Published in 1999 in the Journal of Experimental Social Psychology (volume 35, pages 4-28), it documents research performed by Steven Spencer, Claude Steele and Diane Quinn regarding something they call stereotype threat. They define stereotype threat as "the experience of being in a situation where one faces judgment based on societal stereotypes about one's group..."


It's really a fascinating study - one that every minority should read. To summarize what was a very involved study, the researchers found that when men and women took an easy math test, they scored about the same. But when the test was difficult, women scored significantly lower than men.


This might make you think, "Aha - so it's true that men are better at math than women!" But you'd be wrong. Because in a second portion of the study, the researchers showed that stereotype threat was the real culprit behind the significant difference in performance. In this second study, all groups were given the difficult math test. But some groups were told ahead of time that previous tests had shown gender differences (thereby reinforcing the stereotype threat), and other groups were told that the test had never shown any gender differences in the past.


  • The groups in which the stereotype threat was reinforced showed a significant difference (almost 20%) in scores between men and women.

  • In the groups who were told no gender differences existed, the scores for men and women were almost identical.


Fascinating, isn't it? So the next time you hear someone tell you men are simply better at math, or some other ignorant stereotype, don't believe them, and certainly don't let it impact your own performance!