There are many ways of dividing up research, usually into two or three different kinds. The list below does not follow any particular order:
Pure, applied, and professional research
Cognate and non-cognate research
Critical review and original contribution
Critical review and empirical research
Pre-research and empirical research
Linear and global research
Relationship between conclusion and discussion
Qualitative and quantitative research
Pure, applied, and professional research
Pure research deals with theory regardless of whether it has any direct application. It can be indirectly practical in a wide range of circumstances, but there might not be any benefits at all from a particular piece of research. However, pure research is often useful in creating fundamentally new ways of looking at its subject matter. It is sometimes called strategic research for this reason.
Applied research is concerned mainly with the application of theory in the field, and it is relatively easy to produce direct practical benefits. However, it is less likely to result in fundamental changes, and findings are not necessarily useful outside the circumstances of the particular application.
Professional research refers to what practitioners do out there. Its purpose is to develop advanced professional skills for the workplace by researching the performance of tasks, and dissertations tend to be rather ambitious projects. Consequently, theses containing professional research are usually only accepted in professional degrees.
Professional research is not particularly inclined to theoretical study, and often there is no attempt to make new discoveries, as opposed to innovative professional practices. However, institutions vary, with some emphasizing theory more strongly. The methodology usually already exists and does not need any radical development.
Cognate and non-cognate research
In cognate research, the idea is that everything in a discipline works on basically one set of rules that is true in all places and at all times. For example, gravity is pretty much the same everywhere.
In fields of non-cognate research, truths are only valid for one culture or language group at one time. Cultural and literary studies are usually done this way.
Students can use this factor to their benefit when defining a research topic, because they can define the topic in terms suitable for non-cognate research. They can more easily put a boundary on their topic and don't have to make generalizations beyond their immediate context.
Critical review and original contribution
Critical reviews of the literature aren't an easy way out, especially if the material is complex, and they can result in valuable findings. Consequently, some critical reviews are suitable Ph.D. topics.
In this context, "original contribution" means that the researcher is proposing an original idea and presenting evidence to demonstrate that it is true. Original contribution research is often very challenging unless one has a really good idea to start with.
Critical review and empirical research
Critical reviews of the literature are analyses of written sources, so any evidence is quite abstract, which does not necessarily mean that it is vague. In literature analysis research, the student demonstrates the truth of his/her thesis by using written sources. Almost all religious, philosophical and historical studies use a literature analysis approach for research.
In empirical research, the researcher demonstrates the truth of the conclusion by using some kind of direct observation. Evidence in support of a conclusion is generally more concrete, and is gained through experiments, surveys, or field observation. Whenever you fly, you can appreciate that airplane engine designs were tested in experiments before they were used in your airplane.
Pre-research and empirical research
Pre-research refers to ideas formulated logically based on an analysis of written sources, which must still be tested empirically by some kind of observation or experiment. "Looks good on paper, but does it work?"
Research, in this sense, is empirical research that has been tested with real-world data. Ideas that appear very logical might not survive when actually tested. (This classification does not apply to research that can only be done by literary-analytical means. You can't put history or God in a test tube.)
Some fields and topics make the researcher's task a little easier by already having a repertoire of research methodologies. Students can more easily make progress if they choose a topic that already has suitable methodologies.
Other fields, however, either don't yet have methodologies or have only methodologies that need a great deal of adaptation. In these cases, developing methodologies might be as important as the research itself.
Linear and global research
Linear research methods go systematically through a neat set of steps, only working on one step at a time. Empirical experiments are usually very sequential.
Global research means that the researcher must conceive of the whole before being able to identify the main parts of the core idea. For example, literature analysis is more likely to require a global approach.
Some researchers get very frustrated when they have to think globally, but make good progress when using a linear style. Others are naturally global and get frustrated when using a linear approach.
Here's an example of someone who had to think globally:
Sam was doing library research. When delving further into his topic, he discovered that the information pointed toward an unexpected conclusion. As a result, he had to rearrange all his material into a different outline to reflect the newly-found relationship between the parts. It was a big job, but it made his paper much better.
Relationship between conclusion and discussion
Some research is very clearly directed toward the conclusion for which it argues and the data is only a way to reach it. This kind of research is easier in that it is clearly focused and very purposeful. Research that is designed and conducted completely to test an hypothesis belongs in this category.
Other research appears to be very focused on its discussion, and the conclusion for which it argues only summarizes or encapsulates the whole. Ethnography and literature analysis usually fit in this category.
Most students find the latter more difficult because they easily lose direction. They still need to move in a clear direction and to argue for a particular conclusion, by which one can evaluate the discussion and determine which data is relevant to the topic and which is not.
Qualitative and quantitative research
Quantitative knowledge is that which can be expressed in numbers and statistics. It often refers to the testing of an hypothesis in an experiment to produce statistical data. It starts with an hypothesis, gathers data relevant to the hypothesis, reduces data to statistical information, and confirms whether or not the hypothesis is true. Academics tend to prefer this kind of information where possible, and actively seek ways to get it. We don't recommend it here unless you have specific training in experimental procedures and statistics.
Qualitative information is that which is not represented in numerical or statistical form. Cultural description is one of the best examples.
The style of thinking is very different between quantitative and qualitative research:
Quantitative: Results tend toward supporting or not supporting a hypothesis.
Qualitative: Results are more like a one-sentence summary of the whole that is described.

Quantitative: Numerical basis for conclusions.
Qualitative: Lots of weighing up of arguments for and against a proposition.

Quantitative: Results will predictably comment on the validity of the hypothesis.
Qualitative: Results are difficult to predict.

Quantitative: Usually difficult to change direction during a research project, because you are testing a hypothesis.
Qualitative: Changing direction is a sign of progress. It means that you understand the questions better.

Quantitative: Favors closed questions (respondents must choose).
Qualitative: Favors open-ended questions.

Quantitative: Tends to use deductive logic.
Qualitative: Tends to use inductive logic.

Quantitative: A very broad literature review is necessary to develop a good hypothesis.
Qualitative: Not so dependent on a broad literature review.

Quantitative: Knowledge is artificial.
Qualitative: Knowledge is natural, reflecting knowledge as the researched people understand it.

Quantitative: Tends to treat a population as if it were homogeneous, so seeks only one kind of result.
Qualitative: Tends to treat a population as if it were heterogeneous, so many different results are possible.
The difference between quantitative and qualitative research is often a topic of interest in the humanities.
Statistical and non-statistical research methods result in research conclusions that are epistemologically different. For many decades, this distinction was considered very important, but with the fall of positivism, several weaknesses became obvious and the two kinds are now seen as a little more similar.
Students should not take on quantitative research without a satisfactory grounding in research methods that use hypotheses and statistical theory. It is very easy to make mistakes when formulating a system for gathering data to be made into statistics and when interpreting statistics.
The way of thinking is quite different from that used in writing an essay, with one important exception: the literature review. A literature review is a kind of essay, and it often depends on having access to many books in the field in order to develop the hypothesis as far as possible before going to the field.
The direction of thought cannot change during the research because the point is to prove whether or not a hypothesis is true, and anything necessary should be known when developing the methodology. The conclusion is easy to foresee: it will be that the hypothesis is either true or not true.
Research can use a questionnaire, but the answers from respondents have to be made into statistics. Consequently, researchers are trained in how to write questionnaires and how to reduce answers to statistics.
It is not "natural" knowledge. Knowledge is focused on one specific hypothesis, and does not reflect the whole of the phenomenon researched. Its conclusions depend on statistical evidence that an hypothesis is either true or not.
Hypotheses are gathered together to form an artificial theory that can only symbolize the real world, and might not be comprehensible to the people who were the original subjects of the research. The process tends to depend on deductive logic.
Facing the various patterns of thought in the population being researched, researchers try to formulate a methodology that will encompass all possibilities in one system of categories. The methodology of a piece of research tends to assume that its conclusions will be absolutely true, although they must be firmed up with further research.
Qualitative research is focused on natural knowledge; that is, it reflects knowledge as it is known by the people who are the research subjects. The researcher does not try to change it through statistics. Qualitative research tends to reflect wholes rather than single hypotheses, and can reflect a variety of different thought patterns that arise when the population of research subjects is not homogeneous. In turn, the open-ended nature of the research makes it possible to do much more further research later on, and none of its conclusions are absolute.
Qualitative research is very suitable for writing theses and dissertations. In several ways, this kind of research is more like writing an essay, where the author takes arguments from various sources and weighs up each one for and against.
This kind of information is handled similarly: the researcher must be careful when interpreting data. It differs from quantitative research in that the researcher interprets data while still in the field.
Research can use a questionnaire with open-ended questions to which respondents can freely give their own answers. The interviewer doesn't even have to follow a questionnaire that was written beforehand; they can freely ask questions according to what appears significant and relevant. This is because the researcher as a person is often the primary research instrument and is given training in free interviewing.
In this kind of research, the essence of the problem is often accurately understood only halfway through the research. Put another way, understanding the problem accurately is often a major part of the research. In some cases, the data is inadequate and the researcher goes through the cycle again, gathering new information. In other cases, the researcher needs to move into a new topic or field of knowledge if he/she finds it relevant to the research problem. Moreover, the kind of conclusion is different from those that derive from statistics. It is often difficult to forecast what kind of conclusion one will arrive at, but it will be some kind of meaning or value, and it will often be expressed as a pattern or theme that is based on descriptive evidence. Consequently, it tends to be based on inductive logic.
Validity and common mistakes
It's very easy to make mistakes that affect the validity of research. Some of these mistakes are procedural and some are logical. There are many more than those listed here, but these are a good start.
If the research purpose, specific research question, methodology, data, and conclusions don’t match, you probably have an invalid piece of research. And the problem is definitely serious if the conclusions and implications aren’t based on the data.
In other words, your research purpose should result in a specific research question. Then you have to choose a methodology that is appropriate for your specific research question. Your methodology needs to produce suitable data, from which you can then draw conclusions. And if you identify any implications, they also have to come from your data.
Research purpose
→ results in a Specific research question
→ for which you choose a suitable Methodology
→ which generates Suitable data
→ from which you can draw Conclusions
In many kinds of research, you should be able to do the research again and get the same result. For example, if you deprive trees of water, they should always die. The same kind of medication should have consistent results when administered for the same ailment.
The researcher normally describes the actual method in enough detail for someone else to run the same experiment again, and it should get the same result. If it gets a different result, then perhaps some other unknown factor is the cause of the change.
However, some research is not replicable, that is, the same methodology would produce a different result. For example:
A community development project could change the way a particular community works and how it sees itself. You couldn’t then do the same project again in the same community. Even so, the researcher needs to report enough detail for someone to adapt it for another community.
An ethnographer interviews people and asks what they believe. As a result of being asked, people might think though their beliefs and change them.
It is also an ethical issue in that one has to trust the researcher to do what he/she has reported. Some things can’t usually be checked:
Events that are lost in the historical moment as the culture or situation moves on.
Lots of potentially good research is spoiled by the unanticipated effects of people’s attitudes.
The Hawthorne effect: If people know that they are part of an experiment, they behave differently than they would if they didn’t know. They often try to make the experiment a "success."
The Pygmalion effect (also known as the Rosenthal effect) refers to the phenomenon in which people perform better when more is expected of them. It is a kind of self-fulfilling prophecy; if teachers expect children to do better, then they probably will.
The placebo effect. Sick people often get better if they believe in the medicine, even if the "medicine" is an inert placebo.
The Dunning-Kruger effect might play a role in any kind of self-assessment.
The less competent a person is, the more likely they are to be confident.
The more competent a person is, the less likely they are to be confident.
The shaman effect: Someone with specialized information on a topic can overshadow the data, whether intentionally or inadvertently.
Note: Some of the post-modern methodologies actually build subjects’ attitudes into the research methodology, so instead of spoiling the research they are part of it.
Respondents or informants might have answered your questions according to cultural values you didn’t find out about, or told you what they thought you wanted to hear. For example, they might have felt intimidated, tried to save face, or reacted to attitudes or purposes that they either correctly or incorrectly perceived in you.
Cause and effect
Let’s say that your research found that X and Y normally co-occur, and you conclude that X obviously causes Y.
But you don’t really know, because your research did not explore the cause-and-effect relationship. It could be:
Y causes X,
Y causes X and X causes Y (a vicious cycle),
some other factor (or factors) causes both X and Y, or
some other factor (or factors) causes X and another different factor (or factors) causes Y.
In fact, if you naturally expect X to cause Y but find that Y causes X, you have found something much more interesting.
Change of definition during research
It sounds obvious that you shouldn’t change your definitions during research. In most quantitative research, it simply means that the conclusions are invalid.
Qualitative researchers, however, have more freedom to make adjustments. The research can change direction very radically part-way through and the act of learning about something makes one define it better. Similarly, your context can evolve quite a lot. The point is that you need to qualify conclusions with the new definitions and the changes in context. And when you've made the changes, you need to use terminology precisely and consistently.
Words like never or always are very risky. Your statement is untrue if you find only one exception. You can use all and none a little more safely, but you still need to be careful.
The terms too much and not enough imply that a criterion is being applied; you need to say explicitly what determines how much is too much or not enough. (In fact, any use of the word too in the sense of "excessive", means you have to specify a limit of some kind.)
"Therefore" states that x must follow from y. No exceptions. It’s a high-risk claim even if it’s true. You can usually use the much safer word "consequently." In fact, I’d recommend you delete "therefore" from your research vocabulary altogether unless you’ve had specific training in some kind of formal logic (e.g. syllogistic logic).
"Sometimes" can be vague if you should specify the frequency that something occurs or the conditions under which it occurs.
At this point, we should note that professional projects are a little different from theoretical research. Supervisors tend to give some leeway for professional judgment to avoid the silliness of proving minor details.
Don’t oversimplify anything complex. Oversimplifications are by definition inaccurate. Combat oversimplification by using precise language, which is always necessary in research. But don’t let it become an excuse for being difficult to read; aim for flowing, readable prose.
Other than that, you can handle complexity in several ways:
Summarize the details if the details are not essential to your purposes but you still need to mention them.
Evade the topic if only one aspect is relevant to your particular topic. You can mention the issue, say why it is complex, then get on with the discussion.
Go through it all if you have no choice but to go carefully through the details. Make sure you get them right.
Circular logic takes several forms. It looks quite silly when exposed, but it happens more often than you'd think.
In one form, the researcher starts by assuming something is true, does the research, and then concludes that the assumption is true. Of course, the conclusion only appears true because the researcher assumed it in the first place.
In a similar kind of circular logic, the researcher goes looking for something, finds it, then draws conclusions about it. But some things are only there because you look for them. He (or she) could have gone looking for something else and found it just as easily. (You’ll find little green elephants and flying pigs everywhere if you believe in them hard enough.) It is actually the same error of assumption; it is the assumption that X exists.
The solution is to consider that X might not exist in the defined form. Some of the possibilities are then:
X exists in the defined form.
X exists, but the definition is incorrect and a new definition needs to be proposed.
X does not exist at all.
X does not exist as an entity; it is a combination of other things.
X is actually several different things, which all need to be defined and differentiated.
X exists only in perception and not in reality. (This option is important if the perception has effects of its own.)
X exists only because the perception of X exists. (This is the self-fulfilling prophecy or the placebo effect.)
X does not exist; the phenomenon can be attributed to other factors.
Nothing is self-evident, so don't treat any statement as self-evidently true. "Self-evident" is, in effect, another word for "arbitrary": if you suggest that a statement is self-evident, then its contradiction is equally true because there is no evidence either way. (Of course, some things might be so clearly true that you don't need to present evidence, but that's not the same.)
Here’s a simple test: Put the "self-evident" statement into a negative form. How do you know that this statement is now untrue?
This test will encourage you to start rethinking your argument and looking for evidence one way or the other. (Of course, both forms could be wrong or unprovable if you also made another mistake, e.g. questionable assumptions or definitions.)
We have already seen the danger of overstatement in words like always or never. But there are other kinds of overstatement.
Be careful to qualify your results. Your results are only true for your particular population, at that time, with those definitions, using those assumptions, and with that methodology.
Claiming that you have proven something is an unwise overstatement. It’s always possible that more evidence will come to light showing that your definitions, methods, conclusions, or assumptions need to be modified or even replaced. As a result, it’s usually better to say that the evidence "supports" or "lends weight to" your particular conclusion. Even the word "demonstrates" is often quite acceptable.
On being scientific
Your research needs to be scientific in some way; it is not just arbitrary values, private opinion, or vague pseudo-truths.
Being scientific means something different depending on the topic and the methods, because different fields make different assumptions and follow different philosophies. For example, only hard statistics might be adequate in some fields. In contrast, an ethnographer describes culture. If people in the target culture think that the world is flat, the ethnographer seeks to find out what they mean by that and how it affects their thinking and actions. The point is not to argue that the world is round.
While there is much to learn from postmodernism, there is still an appropriate place for objectivity. Evidence is objective in that knowledge can be true, rational, and expressible in human language, while we remain aware of subjective factors. We can identify our assumptions and defend them. We don't have to claim that our conclusions are absolutely proven, and we can allow for changes in the future.
Empiricism and "proof"
Empiricism is the idea that reality can be perceived through the senses, and hence that what one observes can be used as evidence to support a conclusion. It has been essential to modern science since the Renaissance.
In the middle ages, theologians in the Western church held that knowledge could only be deduced from given precepts, usually from Scripture and the philosophy of the ancients, especially Aristotle.
Copernicus and Galileo were the pioneers; they observed the planets and stars and drew conclusions about the way they moved. The church disagreed, and Galileo was ostracized because his views were based on observed evidence, not the teaching of the Church.
But the idea of drawing conclusions based on observations remained the basis of Renaissance science. In other words, they established that the senses could be the basis for scientific endeavor. Since then, it has generally been accepted that science needs to be grounded in the real world.
By going out and researching people, you will be doing empirical research.
Positivism (also known as logical positivism or Comtism) is the idea of a basic scientific experiment. If you have two identical trees growing in the same kind of place, and you water one but not the other, the outcome (one dead tree and one live tree) is the result of your watering. Your conclusion (water makes trees grow) is then seen to be positive knowledge.
The purpose was to establish a method of gaining certain knowledge that was completely objective. But in the twentieth century, people such as Michael Polanyi and Karl Popper showed that it was actually quite subjective. They pointed out many faults:
Scientists' personal interests and values inclined them to do some experiments rather than others.
Scientists influenced each other to work in different directions.
Basic positivism had no way to account for unknown assumptions.
It could not show how myriad atomistic conclusions could add up to an encompassing theory.
It could not show that the human senses were reliable in perceiving the world.
The basic tenet of positivism has persisted in a useful form but it has been reduced. It now only claims to produce what is probably true. While positivism is part of empiricism, lots of empirical research is not positivistic.
Physics and chemistry are presented as real science with objective truth, although the natural sciences are not the only kind of science. Do cultural studies produce similarly objective truth?
The philosophy of science has changed with the decline of positivism, which, too simply put, was the idea that knowledge could be completely objective.
In summary, the position since the rise of postmodernism is that science is not totally objective. For example, what you think a book means might say more about your ideas than about the book itself.
The idea of objective observation in a laboratory to test hypotheses is helpful in the "pure" sciences, but its limitations are now well known.
One of them is that there is always a relationship between the researcher and the research. Nobody denies the dangers of subjectivity, but total objectivity is a falsehood.
The relationship between student and what is studied is real in any field. The characteristics of this relationship vary greatly between researchers, between disciplines, and even between different research methodologies. There is no such monolithic thing as "science" but a range of disciplines, each of which is a systematic kind of thought that produces its own kind of responsible conclusions.
Each researcher's contribution tends to be unique because researchers vary in their motivations, thinking styles, creative directions, and interests.
In recent times, the not-completely-objective nature of science has spawned the study of the sociology of science as a new sub-discipline. The way scientists think is partly shaped by their patterns of social interaction.
Another effect of postmodernism
By the way, another effect of post-modernism is that scientists in any one field don't try quite as hard to develop a logically consistent theory of whatever they study. Reality is just too complex and no single theory is big enough to explain everything that happens in its field.
It's better to have a grab-bag with lots of theories. They have to fit the evidence, but it doesn't matter too much if they're not consistent. As a result, a "theory" is now more like a model, because it has less claim to encompass all reality in its field.
Several values of "science" are now less acceptable. One questionable value is the importance given to research findings as opposed to that which can be easily known. The problem is that one cannot always assume that the easily known is less important. This particularly affects people in the field; sometimes the most basic things are the most important.
Another negative value is inbuilt relativism. We might express it as follows:
Findings are never completely true; they are only well-supported and may be re-evaluated and refuted. Truth is not really expressible in final, indisputable terms.
Disciplines allow opposing theories and paradigms to co-exist.
Knowledge is complex, dynamic, and always subject to myriad qualification.
Academia has a propensity for faddishness, abandoning valuable solutions to important problems when journal editors tire of the issue or when further embellishment is no longer in demand.
Why "proof" is a difficult concept
Radical skepticism is the idea that it is extremely difficult to prove anything beyond all doubt. Here’s what usually happens ...
The first-year philosophy professor starts a class by putting a book on a desk in front of the class and asking the students to prove that it is really there.
Students try all the obvious answers:
"I can see it there."
"We can all see it there."
"It fits the definition of a book."
But the professor easily shows that their answers are not absolute proof:
How do you prove that your eyesight is infallible?
Can you prove that your deduction based on your eyesight is infallible?
Can you prove the existence of a physical realm?
Could there be other possible conclusions that you have not yet considered? How do you prove that your conclusion is the only possible conclusion?
Can you prove that your deduction is not influenced by your culture?
Could there be other definitions of book that don’t fit your preconceived idea of book?
Isn’t it simply an imprecise social convention that is vulnerable to misunderstanding?
If you define book by using other language, aren't you just compounding the problem?
Can you prove that the language with which you communicate your observations is absolutely unambiguous?
How do you know that the person is even there? Isn't that the same problem as whether or not the book exists?
If you communicate your observation, how do you know that the other person has received it?
And how could you prove that your idea is the same as the other person's idea?
If you depend on confirmation by other observers, how do you then prove absolutely that they also exist?
If they do, how do you prove that you have communicated infallibly?
And then how do you prove that your communication isn’t subject to your shared social and cultural preconceptions?
If you get that far, how does that constitute absolute proof anyway? Isn’t it just a sociological consensus?
Your field of study
The disciplines share some major assumptions: conclusions must be supported according to standards of evidence, expressible in human language, and agreeable to people with demonstrable expertise in the field.
But evidence for any kind of conclusion depends on what you are studying. This is because a researcher uses the kind of evidence that is appropriate to the topic, and the standards of evidence and argumentation depend upon the subject matter of the discipline. For example:
An historian cannot recreate the past in a laboratory to test an hypothesis; he/she depends upon documents of varying credibility and on archaeology.
Linguistic phenomena are often readily available but the descriptive linguist must document them and articulate rational models that explain them. If you are studying grammar, the evidence will be examples of language using particular grammatical structures.
If you are studying culture, your evidence will be your ethnographic observations and examples of people’s statements about their beliefs. In fact, an ethnographer using a participant observation research methodology is to some extent observing himself.
If you are studying pharmacology, your evidence might be a double-blind test of medication with a placebo control and a randomly selected group of patients.
If you are studying construction engineering, your evidence might be the results of testing various structures with different designs or materials.
Put another way, there’s no such thing as Science; there are many scientific disciplines and each has its own rules of evidence.
Each field of study has its own sociology of prominent influencers and pattern of consensus. Consequently, social forces determine what's acceptable and what's not, what's trendy and what's not, and what direction research will go.
And different fields of study don't always even agree with each other. For example, economics shows that markets work in certain ways, yet business students spend their energy trying to beat the laws of economics.
Many disciplines have broken into schools of thought that disagree on very fundamental aspects. In other words, even within one discipline, scientific endeavor is broad enough to allow for different theories to co-exist. For example, education and psychology both have behaviorist and cognitivist camps which use radically different assumptions and methods.
1. Be respectful of persons, their culture, and their institutions.
As an underlying principle, you are required to treat people as valuable and worthy of trust. This brings up issues of your personal ethnocentricity and prejudices, potential favouritism toward some individuals, and your bias toward some viewpoints.
Clearly, you do not need to agree with everything that people say and do, and you could be exposed to practices that may be seen as grossly immoral. Nevertheless, your starting point is your respect toward them.
2. Get people's permission if they are to be your informants.
You only need to ask them orally; in fact, anything more might make them suspicious or cause them to act unnaturally. For example: "I'm new here and I'm learning your culture. I don't understand some things. Could you help me please?" (Of course you'd adapt the example to your situation.) Then, as much as possible, keep interviews to friendly conversations and make notes immediately afterwards, not during the conversation. Technically these are called "free informal interviews". They also enhance your security.
Some interviewees give permission to be quoted with their names, especially on matters that are not sensitive and perhaps enhance their prestige. You can reference them fully as formal interviews.
In some countries, privacy laws require you to obtain the written consent of informants, but the general trend is that this does not apply to free informal interviews if you keep specific identities confidential.
Some topics become impractical if laws require you to disclose fully the nature of the research project for getting informants' or subjects' permission. In many cases, providing that information is just unscientific, because it predisposes people toward particular responses, making your conclusions invalid.
3. Do not make identities public without their explicit consent.
Keep people’s personal information private and confidential unless they have authorized its release. This is also a legal requirement under privacy laws.
Unless you have explicit consent, you should keep informants' quotes anonymous in the final work, and maintain the integrity of the informants' information by keeping it distinct from your analysis and comments.
Readers should not be able to identify your informants from the way you have written about them. Informants might also validly perceive audio or video recordings to be a risk. "What if someone recognizes my voice? Or sees my face?"
It is generally better simply to add a note in your introduction that informants are not identified for ethical reasons. You might find pseudonyms helpful.
However, you should record identities (as much as you know), places, and times of interviews in your field notes. These records are helpful when you need to establish the authenticity of your field information with your supervisor, but these records must then be handled according to security procedures.
Similarly, you may not mention an organization by name in a public document without its permission, unless its activities are on public record in the location of those activities. Some organizations need to operate out of the public eye, and being mentioned in a public document may endanger their personnel or their activities.
4. Protect the interests of your research subjects.
First, avoid any way in which informants' cooperation and personal information could be used against them. In some cases, people can be arrested and imprisoned based on your information. For example, Spradley's ethnography of the homeless in the US could have been used to arrest many of his informants had it been published locally. The danger is even greater in countries with oppressive regimes or persecution policies.
Your research could lead you to knowledge of illegal activities. Your commitment to your informants generally means that you should prefer to protect their interests. In some cases, you might even need to "pull the plug" on your research to protect a victim. Besides, simply being present and observing an illegal act may lead you to be deemed an accomplice.
Second, your research, including your relationships with informants and any means used to acquire information, may not be exploitative, or seen to be so. Besides the obvious problems of inappropriate relationships (e.g. romantic entanglements), your information gathering gives you the ability to become a power broker or mediator, which is a potentially exploitative position. You also have a duty to protect informants from exploitation.
Third, maintain a safe environment for yourself and others. This is fairly obvious, given the emphasis on workplace health and safety (WHS) and the current aversion to risk.
5. Your information must accurately reflect your sources.
Your reporting needs to be honest and representative of what you have observed, read, and heard. You need to protect the intellectual property of authors, informants, colleagues and research assistants.
The following list of prohibitions is illustrative of the kinds of potential problems rather than exhaustive:
You may not use fictitious information. This includes not just manufactured information, but also "bending", "adjusting" or exaggerating aspects to suit your own ends.
You may not delete information that would create an impression different from what you had observed.
Reference the source if you use other people’s ideas or data. You may not plagiarize or submit work resulting from unauthorized collusion.
6. You may not use deceptive means to obtain information.
While it is normal to select which information you disclose, you may not provide misinformation.
You can safely and ethically conduct research on sensitive topics.
You can keep quotes anonymous in the final work. To maintain the integrity of the informants' information, keep it distinct from your analysis and comments. It is generally better simply to add a note in your introduction that informants are not identified for security reasons.
The same applies to organizations; if the activities of an organization are not on public record in the location of their activities, you should not mention them by name except in in-house written work.
You will often use a networked sample (a natural network of friends, relatives, and neighbors), which will make your work more secure. You are introduced to people by others whom they trust.
Some of your research may need to be private, so keep and transmit documents as you would any other material that you wish to keep private (e.g. hand-carried or sent over a secure Internet connection).
It is sometimes to your advantage that cultural descriptions be made public in edited form. They are usually not security risks if you see a culture emically and sympathetically. The people you study tend to see such descriptions as a reflection of their ethnic pride, so that an attack on your work is an ethnic attack on them. Besides, if your work is on public record, you can easily provide evidence that your activities are not a security risk.
Making edited versions available might not be advisable when the ethnic group you study is an oppressed minority, or if you are required to have a research permit as part of your visa.
Solving ethical problems
You might encounter ethical problems in your research. In practice, most of them are fairly easily resolved by:
following your organization's statement of research ethics
asking your instructor
following your professional association's statement of ethics
doing an Internet search of the problem.
If you still can't resolve the issue, read on ...
An approach for analyzing ethical dilemmas in research methodology
State the situation clearly. Try to be objective.
Present the dilemma as a clear contradiction between principles.
Besides your college, who else needs to approve of your solution? (This might be funding bodies, the organization in which you are doing the research project, etc.)
Identify the principles involved.
Consider the logical connections:
The ethical basis of the principles (pragmatist, absolutist, etc.)
The direct and indirect implications and consequences
Any possible unintended consequences
Research any precedents.
Give your range of possible solutions, and consider the strengths and weaknesses of each one.
Choose a solution that you can implement.
Check your solution for a failsafe position. That is, what is the chance you have made a mistake, and what can you do if you have?
Present your solution in a convincing way.
This procedure might look step-by-step, but you need to check through each step afterwards to make sure you have a suitable solution. If you have to make trade-offs, then recognize that you might have to put up with some kind of risk. In your methodology, simply explain what you have done and present the information that informed your decision.
Case studies in ethics
Case study 1
The institution required that all research subjects sign a consent form at the beginning of the research stating the topic on which they would be researched. The student wanted to research a particular ethnic group that was known to be unreceptive to outside influences. They would obviously be unwilling to consent to such a research topic.
Choose another topic.
Ask for permission to waive the requirement of a written consent form, and use an oral permission for a generically named "cultural study".
Solution chosen: Choose another topic: training nationals to work with the particular ethnic group. The trainees could then sign the consent form.
Case study 2
The institution required all its researchers to get their subjects' written consent to participating in research with a specific topic. The particular target group would obviously "sanitize" their answers to avoid anything that might embarrass anyone, and would politely give the researcher the answers they thought the researcher wanted. Any resultant research findings would then be invalid.
Choose another topic.
Ask individuals for an oral permission to simply learn about their culture and write it down at the time.
Ask for an oral permission for a generically named "cultural study" and write it down at the time.
Ask for an oral permission for a research on a related topic and use the data for the real topic.
Solution chosen: Ask individuals for an oral permission to simply learn about their culture and write it down at the time.
More cases to discuss:
You are doing research on an organization. On one hand, it is afraid that you will expose its dirty washing; on the other hand, it could find new ways to improve.
You are doing an ethnography in a community. They are afraid that data will be released that could harm informants (e.g. religious persecution). However, your institution requires that your dissertation must become a public document.
Your research project has not produced enough data to support firm conclusions but your tentative conclusions would be highly beneficial in a situation where delay is dangerous.
Your informants say they believe something, but you observe a pattern of behavior that is quite inconsistent with what they say they believe. How much should you respect what people say?
You work at SmithCorp, which sponsors you to do a course at Jones University that involves doing research at work. Your academic supervisor doesn't agree with your SmithCorp boss. You face a conflict of duty.
You are doing a study of the emotional trauma of assault victims. Although interviewees agree to participate, the interviews often cause great discomfort. You are torn between their discomfort and your need for "painful" data.
You stumble onto information about illegal activities, but have to comply with a requirement to keep information on subjects confidential.
Your research uses animal subjects. When are they animals and when are you treating animals as if they were people?
You have a personal tendency to withdraw from conflict, but you also feel pressure to step in as a "rescuer".
The Allies used data that was unethically obtained by the Nazis conducting experiments on concentration camp prisoners.
The Allies used data that was unethically obtained by the Japanese Unit 731 during World War II by experimenting on prisoners sentenced to death.
Some postmodern research methods are based on the idea that a group of people can construct their own reality. In practice, this means that they share their ideas and come to a belief that they can use as a basis for action. Sounds odd. Here’s how it came about:
Before post-modernism, scientists attempted to develop theories that explained all reality in their respective disciplines. In each field of study, the goal was to build a consistent theory that accounted for all phenomena in its field.
With the advent of post-modernism, it was eventually conceded that reality is too complex to be explained by any one theory. It then became acceptable to have multiple theories, all of which were to some extent justifiable, and which were not necessarily consistent with each other. Each theory became one tool in a toolkit.
At the same time, the distinction between objective and subjective began to blur, because so-called "objective" knowledge was shown to be at least partly subjective.
One application of these ideas was that different people (or groups of people) could construct different inter-subjective realities, which are not fictions and not subject to examination from other realities. To some extent, these are much the same as worldviews.
You’ll find lots of modern methodologies based on this philosophy. They can be very useful if you are working with a group of people that needs to share a belief that it can use as a basis for action.