Sex, Drugs and Public Hangings
A series by Spiralbound.net on social deviance and punishment in the United States and Europe
Research Design & Methods:
Since the premise of this project is the notion that citizens of other Western, industrialized countries take on more personal responsibility for deviants than Americans do, it was important first to find a way to quantify this and, because the study is comparative in nature, to devise a method by which respondents could qualify their answers. The first question was which countries to include. Clearly, Americans needed to be a portion of the sample, but who else? Since the only requirements for the study were that the other countries be Western and industrialized, the slate of possible candidates was quite large indeed. Based partly on the fact that it is an English-speaking country, and partly on the fact that I had access to a faculty member with contacts there, I chose to direct most of my efforts toward Great Britain as a pool from which to draw my sample. Not wishing to limit my options, however, I thought it best to include more countries, and sought out other professors with contacts in Germany and France as well.
Short of actually sitting down and talking with the research subjects (which would have been impractical, at least for this study), the decision to use a survey was an obvious one. The only questions were what to include and how to deploy it. Because problems with response rate could be expected, if not counted on, the survey would need to be short enough to hold the interest of the subjects, yet complete enough to yield meaningful results. I decided on a design that would include a battery of simple Yes/No questions asking whether the subject supports various programs designed to deal with deviants, along with an opportunity to discuss the reasons for, and the limitations of, his or her answers. The quantitative Yes/No portion of the survey would yield statistical data, while the qualitative portion would reveal exactly what the subject meant by his or her answer. One person may feel, for example, that it is fine for the government to provide emergency healthcare treatment for a tax-paying citizen suffering a heart attack, but answer quite differently indeed when it comes to treating a homeless person or drug user for chronic nosebleeds. For this reason, the research subject was asked to elaborate on an answer only if that answer was in support of a given policy or program.
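The branching rule just described, in which a free-text follow-up appears only when the respondent answers “Yes,” is simple enough to sketch in PHP, the language the front end was written in. The function and field names here are illustrative assumptions, not the original code:

```php
<?php
// Illustrative sketch (not the original survey code): a "please explain"
// text box is rendered only for "Yes" answers. "No" answers received no
// follow-up at all -- an assumed hypothetical question ID is used below.
function renderFollowUp(string $questionId, string $answer): string
{
    if (strtolower($answer) === 'yes') {
        // Qualitative follow-up: ask the respondent to explain and qualify.
        return '<textarea name="explain_' . $questionId . '" rows="4"></textarea>';
    }
    // No elaboration box for "No" answers.
    return '';
}

echo renderFollowUp('q1_healthcare', 'yes');
```

A respondent answering “No” would simply move on to the next Yes/No item, which is exactly the asymmetry that causes trouble in the next paragraph.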
In retrospect, this was a mistake. I was contacted more than once during the course of the survey by people who asked why I had not provided them the opportunity to explain a “No” answer, and who stated that they would perhaps not have answered “Yes” had I given them this opportunity. For this reason, I am led to the conclusion that the overall number of “Yes” answers, particularly among American respondents, is artificially high. This cloud has a silver lining, however: those who did answer “Yes” generally did a good job of explaining their answers and provided useful qualitative input as to the limitations of their answers.
Having decided to include citizens of America, Germany, France and England, the next question became how best to select the subjects and get the survey to them. The obvious choice from my perspective was university and college students. Getting a survey out to a pseudo-random pool of Americans would be easy, but contacting a similar group from the other three countries would be nearly impossible without actually traveling there. Thus the sample was to be American, British and French college students and, for good measure, members of a union in Germany.
The surveys, distributed via e-mail and administered through a web browser, were backed by a MySQL database running on a campus UNIX server. The respondents interacted with this database through a front end written in the PHP server-side scripting language, and no personal data about the individual taking the survey was recorded. One survey was created for each of the four countries, and an e-mail containing a brief description of the project and a link to the appropriate survey was distributed in the following manner:
Please note that those contacted in Germany were NOT, to the best of my knowledge, college or university students; this sample was gathered only as a precautionary measure, to be used in the event of insufficient response rates from England and France. At this point, I still had my eyes trained on England as the best source of data.
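To make the recording step described above concrete — each submission written to MySQL with no identifying information attached — the shape of the logic might look something like the PHP below. The table layout, column names and function are my assumptions for illustration; they are a sketch, not the original front end:

```php
<?php
// Sketch of anonymous response handling (assumed schema: one row per
// answered question in a "responses" table; no name, e-mail or IP kept).
function buildRows(string $country, array $answers): array
{
    $rows = [];
    foreach ($answers as $questionId => $a) {
        $rows[] = [
            'country'     => $country,
            'question_id' => $questionId,
            'answer'      => $a['answer'],
            // Explanations were collected only for "Yes" answers.
            'explanation' => ($a['answer'] === 'yes') ? ($a['explain'] ?? '') : '',
        ];
    }
    return $rows;
}
// Each row would then be written with a parameterized INSERT, e.g.:
// INSERT INTO responses (country, question_id, answer, explanation)
// VALUES (?, ?, ?, ?)
```

Because the rows carry only a country label, a question ID and the answer itself, nothing in the database could be traced back to an individual respondent.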
As with so many things in life, the returns did not play out as I had planned. The days and weeks rolled by, and while I had fantastic results from the United States, I had received none from England or France, and only three from Germany. Two things became painfully clear at this point. First, I would need to combine the data from the European countries into one large pool; the study would now compare the United States with Europe as a whole, rather than with England, France and Germany separately. Second, I would not be able to count on using only college or university students in the study. I would have to turn to European Usenet newsgroups as a more aggressive method of increasing my response rates. The “plan B” survey distribution, while identical in form to the first wave, went out to the following postings:
Over the next few days response rates picked up, and while I did not end up with as many completed surveys as I had hoped, the number was satisfactory, and the time came to move on. By this point my methods had been compromised in several important ways, and before blazing ahead to interpret the data, it was important to step back and evaluate the resulting limitations of the study.