Deadly mistakes

Ross Woods. Rev. 2018, 2022

It's very easy to make mistakes that affect the validity of your research. Some are procedural and some are logical. There are many more than the ones listed here, but these are a good start.

One: Unanticipated attitudes

Lots of potentially good research is spoiled by the unanticipated effects of subjects’ attitudes.

Note: Some post-modern methodologies deliberately build subjects' attitudes into the research design, so instead of spoiling the research, they become part of it.

Two: Cultural expectations

Respondents or informants might have answered your questions according to cultural values you didn’t find out about, or told you what they thought you wanted to hear. For example, they might have felt intimidated, tried to save face, or reacted to attitudes or purposes that they perceived in you, either correctly or incorrectly.

Three: Cause and effect

Let’s say that your research found that X and Y normally co-occur, and you conclude that X obviously causes Y.

But you don't really know, because your research did not explore the cause-and-effect relationship. It could be that X causes Y, that Y causes X, that both are caused by some third factor, or that the co-occurrence is mere coincidence.

In fact, if you naturally expect X to cause Y but find that Y causes X, you have found something much more interesting.
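
To see why co-occurrence alone settles nothing, here is a minimal sketch in Python (the variables and numbers are hypothetical, purely for illustration): a hidden factor Z drives both X and Y, so they correlate strongly even though neither causes the other.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                   # hidden common cause, never measured
x = z + rng.normal(scale=0.5, size=n)    # X depends only on Z
y = z + rng.normal(scale=0.5, size=n)    # Y depends only on Z, not on X
print(np.corrcoef(x, y)[0, 1])           # roughly 0.8, yet X does not cause Y

The correlation is real, but reading it as "X causes Y" would be an artifact of the unmeasured factor Z.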

Four: Change of definition during research

It sounds obvious that you shouldn't change your definitions during research. In most quantitative research, doing so simply means that the conclusions are invalid.

Qualitative researchers, however, have more freedom to make adjustments. The research can change direction quite radically part-way through, and the act of learning about something helps you define it better. Similarly, your context can evolve considerably. The point is that you need to qualify your conclusions with the new definitions and the changes in context. And once you've made the changes, you need to use the new terminology precisely and consistently.

Five: Logical flaws

At this point, we should note that professional projects are a little different from theoretical research. Supervisors tend to give some leeway for professional judgment to avoid the silliness of proving minor details.

Six: Oversimplification

Don't oversimplify anything complex. Oversimplifications are by definition inaccurate. Combat oversimplification by using precise language, which is always necessary in research. But don't let precision become an excuse for prose that is difficult to read; aim for flowing, readable writing.

Other than that, you can handle complexity in several ways:

Seven: Circular logic

Circular logic takes several forms. It looks quite silly when exposed, but it happens more often than you'd think.

In one form, the researcher starts by assuming something is true, does the research, and then concludes that the assumption is true. Of course, the conclusion only appears true because the researcher assumed it in the first place.

In a similar kind of circular logic, the researcher goes looking for something, finds it, and then draws conclusions about it. But some things are only there because you look for them. The researcher could have gone looking for something else and found it just as easily. (You'll find little green elephants and flying pigs everywhere if you believe in them hard enough.) It is actually the same error of assumption; in this case, the assumption that X exists.

The solution is to consider that X might not exist in the defined form. Some of the possibilities are then:

Eight: Self-evident truth

Nothing is self-evident, so don't treat any statement as self-evidently true. "Self-evident" is really another word for arbitrary: if you claim that a statement is self-evident, then its contradiction is equally true, because there is no evidence either way. (Of course, some things might be so clearly true that you don't need to present evidence, but that's not the same.)

Here’s a simple test: Put the "self-evident" statement into a negative form. How do you know that this statement is now untrue?

This test will encourage you to start rethinking your argument and looking for evidence one way or the other. (Of course, both forms could be wrong or unprovable if you also made another mistake, e.g. questionable assumptions or definitions.)

Nine: Overstatement

We have already seen the danger of overstatement in words like "always" or "never". But there are other kinds of overstatement.

Be careful to qualify your results. Your results are only true for your particular population, at that time, with those definitions, using those assumptions, and with that methodology.

Claiming that you have proven something is an unwise overstatement. It's always possible that more evidence will come to light showing that your definitions, methods, conclusions, or assumptions need to be modified or even replaced. As a result, it's usually better to say that the evidence supports or lends weight to your particular conclusion. Even the word "demonstrates" is often quite acceptable.

Ten: Over-complexity

A researcher who looks for complexity will probably find it. To some extent, complexity can be researcher-generated, because the academic research system depends on it. In essence, the reality might be quite simple, and the action taken in response might also be simple.