Jackson Mumper

GIS and Academic Portfolio



While not inherently spatial, the reading by Longley et al. made me think about my experiences with survey design, and how those data can easily be misconstrued to mean something other than what they actually say. Last spring I conducted a survey in my home region about people's level of concern about the (new at the time) pandemic. The problem was that I couldn't find a good way to draw a sample, so I relied on members of a Facebook group set up to help people during the pandemic. Even if I analyzed the data correctly, the results were skewed by sample bias. It's not hard to go from there to thinking about these issues through a more geographical lens.
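To make that bias concrete, here is a quick, purely illustrative sketch in Python (invented numbers, not my actual survey data). It compares a simple random sample of a hypothetical population against a convenience sample in which more-concerned people are assumed to be more likely to respond, the way recruiting from a pandemic-help Facebook group would tend to over-represent them.

```python
import random

random.seed(42)

# Hypothetical population: concern about the pandemic on a rough 1-5 scale.
# Assume the true average concern in the region is around 3.
population = [random.gauss(3.0, 1.0) for _ in range(100_000)]

# Simple random sample: every resident equally likely to respond.
random_sample = random.sample(population, 500)

# Convenience sample: assume higher-concern residents are much more likely
# to join a pandemic-help group and answer the survey.
weights = [max(c, 0.01) ** 3 for c in population]  # concern-proportional inclusion
convenience_sample = random.choices(population, weights=weights, k=500)

mean = lambda xs: sum(xs) / len(xs)
print(f"True population mean concern: {mean(population):.2f}")
print(f"Random sample estimate:       {mean(random_sample):.2f}")
print(f"Convenience sample estimate:  {mean(convenience_sample):.2f}")
```

Even with identical analysis, the convenience sample overstates the average level of concern, which is the same kind of distortion my Facebook-recruited respondents likely introduced.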

I really like Figure 6.1 in the Longley reading. It's a good representation of the research process, and an acknowledgment that mistakes, bias, and losses of nuance can occur at any stage. In my example above, that would be a U2 error, since my measurement technique was not ideal. But when it comes to spatial phenomena, problems can arise at any of these transitions between reality and story. It's easy to imagine a researcher creating a metric that doesn't actually reflect the real world (as is often seen with development indices), or simply making a mistake in the software.

My first thought about researchers' responsibility here is that they shouldn't be making and publishing these errors in the first place. But researchers are people too, and it's a little naive to expect anyone to have a perfect understanding of the world, perfect tools for measurement, and a perfect execution of those tools all at once. So the responsibility ultimately becomes being honest about one's own shortcomings: including notes in one's findings about weaknesses in the methodology or conception, with some thought given to how to interpret the results in light of those sources of uncertainty.

This is another area where an open-source model of research can be used for good. By letting other researchers see the complete inner workings of a project, it opens the door to critique and caveats, and people who have ideas for improving the conception of a research question can pursue them themselves.

But these are questions researchers should ask themselves at the beginning of the research process rather than at the end. If projects begin with scrutiny of existing research norms and intentionality about the metrics and tools used, then the metadata page at the end describing the data's shortcomings won't need to be as long, and the resulting analyses will be more reliable.

Sources:

Longley, P. A., M. F. Goodchild, D. J. Maguire, and D. W. Rhind. 2008. Geographical Information Systems and Science, 2nd ed. Chichester: Wiley. (Chapter 6: Uncertainty, pp. 127-153.)