Université de Nice Sophia-Antipolis
Notes for courses by David Crookall


Research cycle
Essential links


From  www.pharm.chula.ac.th/research_design/sld002.htm

Qualitative and quantitative research are the two major approaches used in scientific inquiry.  A comparison of the distinguishing features of each of these approaches is presented in the following table.

• Purpose. Qualitative: gain insight into a problem through the interpretation of narrative data. Quantitative: explain a problem or predict an outcome through the interpretation of numerical data.
• Approach to inquiry. Qualitative: inductive, subjective, interested in participants. Quantitative: deductive, objective, detached from participants.
• Hypothesis. Qualitative: tentative, evolving. Quantitative: specific, testable.
• Research setting. Qualitative: as natural as possible. Quantitative: controlled to the degree possible.
• Sampling. Qualitative: selective, small samples to facilitate in-depth understanding. Quantitative: random, large samples from which generalizations are made.
• Measurement. Qualitative: non-standardized, ongoing. Quantitative: standardized, performed at the end.
• Design and methodology. Qualitative: flexible, specified in general terms in advance. Quantitative: structured, specified in detail in advance.
• Data collection. Qualitative: participant observation, taking detailed, extensive notes. Quantitative: non-participant; administration of tests, instruments, surveys and questionnaires.
• Data analysis. Qualitative: ongoing, involving information synthesis. Quantitative: performed at the end; involves statistics, graphics and measurement tools.
• Data interpretation. Qualitative: generalizations, speculations. Quantitative: formulated with a degree of certainty at the end.
• Reporting. Qualitative: raw data are words; interpretive reports. Quantitative: raw data are numbers; impersonal, objective reports.

From  http://www.dtfire.com/introduction_to_educational_research.htm
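The sampling contrast in the table above (selective and small versus random and large) can be seen in miniature. A rough sketch in Python, with an invented population frame:

```python
# Sketch: quantitative-style random sampling from a population frame.
# The population names and sizes are invented for illustration.
import random

population = [f"participant_{i:03d}" for i in range(500)]  # hypothetical sampling frame

random.seed(1)  # fixed seed only so the illustration is reproducible
sample = random.sample(population, 50)  # random draw, no repeats
print(len(sample), "drawn; first three:", sample[:3])
```

A qualitative design would instead pick a few information-rich cases on substantive grounds rather than drawing them at random.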


Dr. Robert N. Tyzzer

From http://www.humboldt.edu/~rnt7001/scientific_method_dr.htm

The Scientific Method
Validity and Confidence Levels
Hypotheses, Theories, Laws, and Scientific "Proof"
Scientific Observation and "Counter-intuitive" Conclusions

    Science and its methods result from human curiosity and from our attempt to understand ourselves and the world around us.  Science assumes that we can learn how the universe “works” – that there are consistent laws of nature and that we can discover them.  The scientific method has proven to be a powerful and reliable tool for doing so.  Science also has limitations: it cannot directly address subjects that are not part of the natural world, that are natural but beyond our observational abilities, or that occur too rarely to observe systematically.  (However, technology such as microscopes, telescopes, and many other scientific instruments can make what was once beyond observation observable, thus increasing the reach of science.  This apparently disturbs some people.)

    The scientific method is essentially an organized, systematic way to state and answer questions, and to solve problems.  There is no one single “official” scientific method, and the “steps” below usually blend into a continuous process.  The process also inevitably differs somewhat from one discipline to another.  However, outlines like the one below do help to describe how science is done. There is also a review in the text. 

    1.  State the problem.  The problem is stated clearly and specifically, based on initial observations, curiosity, or a recognized problem.  A precisely stated problem increases the odds of eventually gaining new knowledge.

    2.  Gather information.  Gather all available information that is already known about the problem.  This may involve some preliminary experiments.  (There is obvious feedback between steps 1 and 2.)

    3.  Form a hypothesis.  A good hypothesis relates and explains the known facts.  It should also predict new facts.  It must be stated in such a way that we can test it by experimentation or further observation, or it is of no scientific value.  Also, it must be stated in a way that would allow us to show if it is incorrect, i.e., it must be "falsifiable."  A scientist must be willing to accept the possibility that his or her hypothesis is incorrect, and this point often separates true science from pseudoscience.  (In fact, most scientists work hard to develop good hypotheses, and then spend a great deal of effort trying to disprove them.  Pseudoscientists tend to settle on a hypothesis that suits their needs or expectations, and then spend a great deal of effort trying to prove that it is "true."  See the discussion of scientific proof below.)

    4.  Test the hypothesis.  This is ideally done by carrying out a controlled experiment, in which all variables except the one being investigated (the variable factor) are controlled (do not vary in unknown ways).  However, in the life sciences, and especially in the social sciences, controlling all of the variables is often impossible, and the test may be a series of field observations under controlled conditions, etc.  Many of the differences between various sciences reflect differing ways in which they can or do apply the scientific method.

    5.  Evaluate the results.  The validity of the hypothesis is evaluated by examining the test results, to determine how well the hypothesis predicted the experimental/observational results.  There are three general possibilities.  The results may:

  • completely support the hypothesis (relatively rare except in instructional settings, or perhaps confirmation of established points in new ways).

  • partly or incompletely support the hypothesis (common).

  • completely fail to support the hypothesis (not very common, except when working in very new areas of science where several alternate and incompatible hypotheses are undergoing initial testing).

    Typically, the new experimental data are used to improve a hypothesis that has been supported in some ways, or to refine the experimental procedure, and the entire "hypothesize-test-evaluate" process is then repeated.

Are Our Conclusions Valid? – A Brief Look at Confidence Levels
    As the process outlined above proceeds, a steadily more accurate description of what is being investigated emerges – and in some cases earlier conclusions may be abandoned as new evidence accumulates.  Science is thus a “work in progress.”  How confident can we be of the conclusions?  When are our conclusions “good enough?”  A detailed discussion of the statistics of confidence levels isn’t necessary here, but informally there are a couple of important points to be made.
    First, you can think of confidence levels as an estimate of the odds of being right and/or wrong.  If we decide that a certain conclusion is valid at a 95% confidence level, perhaps based on a statistical analysis of the experimental outcomes, it means we accept that there is a 5% chance that the conclusion is actually incorrect.
    Second, you have to consider the consequences of being wrong.  In most essentially academic research, a 95% confidence level is typical.  Usually the consequences of being wrong might be embarrassing, but not dangerous.  Similarly, we might be satisfied with explaining 95% of the variation in a phenomenon we are investigating.  However, in medical research, where lives are at stake in our conclusions about the effectiveness and/or safety of a new medication, a 99.9% confidence level might be far too low!  And in the social and behavioral sciences, sometimes being confident that you’ve explained 75% of what is going on might be pretty good.
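Informally, the arithmetic behind a confidence level is simple. A minimal sketch in Python, where the scores are invented and the normal approximation (z = 1.96 for 95%) is assumed:

```python
# Sketch: a 95% confidence interval for a sample mean (normal approximation).
import math
import statistics

scores = [72, 75, 78, 71, 69, 74, 77, 73, 70, 76]  # hypothetical measurements

mean = statistics.mean(scores)
sem = statistics.stdev(scores) / math.sqrt(len(scores))  # standard error of the mean
z = 1.96  # multiplier for a 95% confidence level

low, high = mean - z * sem, mean + z * sem
print(f"mean = {mean:.1f}, 95% CI = ({low:.2f}, {high:.2f})")
```

Accepting the interval at the 95% level is exactly the bargain described above: we accept a 5% chance that the true value lies outside it.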

Hypotheses, Theories, Laws, and “Scientific Proof”
    Usually the scientific method results in steadily improving hypotheses, which do a better and better job of predicting the outcome of experiments, observations, and events in the world around us.  If a major hypothesis (or set of related hypotheses) survives the “test of time,” with a long pattern of repeated verification and accurate prediction by a number of scientists, it may eventually become accepted as a theory.  Good theories not only explain things – they also tend to generate new hypotheses that enable us to learn even more. 
Thus, if a scientific explanation is considered to be a theory rather than a hypothesis, it indicates a very high level of confidence.  Unfortunately, most of the public has it backwards, assuming that “theory” means “just a guess,” whereas for most scientists the term implies a very high level of confidence that the theory is valid.
    There are several important related points involved here, some of which are widely misunderstood by non-scientists.  Ironically, science cannot “prove” beyond any doubt that something is absolutely true.  We can't see the future, which could hold extremely rare but real exceptions to well-supported theories.  The best we can do is “fail to disprove.”  If a theory has never been shown to be false after repeated testing, our confidence that it is probably true increases.  

On the other hand, it is possible to disprove a hypothesis or theory with a single valid example that refutes it.  However, in this case, we must remember that (1) failing to support (or even disproving) one view does NOT mean that any given alternative or conflicting view is automatically correct.  The other view must also be adequately tested before being accepted.  (2) It is possible to refute one part of a complex theory without affecting the confidence in other parts of it.  Case in point:  Some earlier ideas in evolutionary biology that were widely held, such as the assumption that it is always “slow and steady,” have now been rejected or modified.  This does not somehow “validate” creationism, or even shake the overall confidence that evolution is a basic natural process.  It just means we learned more and cleared up some misconceptions.

What about “laws?”  In the simplest terms, a natural law is a theory, or a set of theories, that has stood the test of time so well that we think that in this instance we actually do have a basically complete and accurate understanding of some particular aspect of how the universe works.  Unfortunately, the term is often used pretty loosely, even by scientists.  Many of the “laws of physics” certainly qualify, but (in my opinion) there is still too much we don’t know to really talk about “laws of genetics.”

Scientific Observation And Counter-Intuitive Conclusions
    Finally, one of the strengths of science, and a point that many people don't appreciate, is that science can uncover patterns of nature that may be "counter-intuitive."  Common sense and intuition are NOT always reliable.  For example, science has demonstrated that the sun only seems to move and “come up in the east,” because it is the earth itself that is rotating.  Similarly, intuition and even common sense seem to suggest that heavier objects will fall more rapidly, but centuries ago Galileo showed that in fact all objects fall at the same rate regardless of mass; it is air resistance that makes light objects seem to fall more slowly.

    In my opinion, one problem that many people have with evolutionary biology is that evolution is indeed a slow process observable only when you know what to look for and how to look.  Some of evolution may indeed be counter-intuitive, but the scientific method enables us to discover what is actually going on.

Some notes on the research process

Notes taken, with some modifications, from:  http://www.i-m-c.org/imcass/VUs/IMC/content.asp?id=1585

  • Introduction
  • The reality of research
  • The research process
  • Levels of research
  • Research methods
  • Experiment
  • Survey/Field (or case) study
  • Techniques of Research
  • Observation
  • Interviews
  • Questionnaires
  • Other techniques
  • Research organizations

Many of those working towards a higher degree that involves a major research element find that little or no guidance is given on the research process itself. Frequently it seems to be expected that knowledge of research is something to be ''picked up'' as one progresses through the activity. While this is no doubt an important part of the learning that takes place, it is an approach that can lead to frustrations and unnecessary failures along the way.

In the absence of any specific instruction, the higher degree researcher may turn to the textbooks. A number now exist but many are quite detailed and very specific. Few give even a brief introduction. This chapter is intended to do that, and provide a basis for further reading and development. It looks at the meaning of research and the processes involved, and briefly describes the main methods and techniques employed. A brief glossary of basic terms helps sort out some of the semantics. Finally, some guided reading is given with a few comments on the contents.

The reality of research

It may seem irrelevant to ask what research is all about; since so many are doing it, then most people (at least those carrying it out) must know what it is all about. Some people consider it to be a cozy and personal activity that could be indulged in from time to time from the safety of an armchair - and certainly not stretching beyond a pile of books resting on the coffee-table. To others, research is a vigorous and rigorous activity aimed at developing new bodies of knowledge and is normally ''acceptable'' only in a physical laboratory situation and is seen as ''the discovery of fact through a systematic process of survey, hypothesis and experiment''.

This is somewhat closer to the more scientific approach than the ''cozy, personal activity''. For many people research is a ''careful inquiry or examination to discover new information or relationships and to expand and to verify existing knowledge''. This immediately implies a vital role for research - one of helping researchers to underline the effectiveness of their approaches. It thus seems that research is an inevitable element of the total professional process - its absence leading to obsolescence, reduced effectiveness and dissatisfaction.

The possible range of research philosophies and approaches has grown extensively over recent years. Each research organization and institution in this field has different ideas, and your understanding of these variable dynamics can be an important input in determining the design of your research process, establishing your "best research practice".

But research is not a ''careful enquiry'' for its own sake - it always starts with some sort of problem, or at least should do if it is to be of use to anyone. Whether the research is carried out for personal or practical purposes, some reason exists for it. The problem may be that little is known about some phenomenon and it would be good to have more knowledge about it (sometimes known as ''pure'' or ''fundamental'' research). On the other hand, the problem may have much more practical significance in that the research may help us to do something we could not do before (often referred to as ''applied research''). This is the essence of the American school of ''pragmatic philosophers'' for whom any theorizing or research is a waste of time unless it has ''cash value'', i.e. helps us to solve problems or understand things better than we did before. Some forms of research can be much more practical than others. But, if the problem is theoretical, then academic research is necessary. The danger of falling between two stools lies in confusing academic research with practical, i.e. operational, problems.

We have more concepts currently than we can adequately cope with. This is certainly true of the behavioral sciences. Notions such as motivation, perception and learning are highly developed and researched (if not agreed upon) at the conceptual level yet lag far behind when it comes to applying them to organizational situations. This point seems to be missed because so much literature exists. What is now required is a greater emphasis on ensuring that these concepts can be usefully employed. This is not to suggest that highly sophisticated academic research is no longer required - it is, but at a considerably reduced level. The poor image that research so often appears to have is largely the result of inappropriate approaches based on academic requirements - it would be more fruitful to adopt problem-centered approaches based on situational requirements.

It is important at this juncture to refer briefly to a form of research, known as action research. This form of research is essentially ''applied'' in nature, even more so when it is realized that in action research the researcher actually gets involved in what he is researching. It developed primarily from the need of organizational analysts to explore thoroughly the organization and at the same time to ''change'' and ''develop'' it. The researcher acts as a ''catalyst'', a ''facilitator'', or even a ''mirror'' for the organization - a very far cry from the ''objective impartiality'' of traditional research. Action research therefore requires a joint approach - a definite and agreed collaboration between organization and researcher.

The research process

Understanding research starts with knowing what, in essence, it is all about. As we have seen, the process of research starts, usually, with some form of problem or question. The problem/question may be the researcher's - he may wish to know which learning theory of several is most relevant in explaining certain levels of performance in different situations. The problem may, of course, be initiated by a manager or someone else; perhaps wanting to decide on the best technique for developing greater participation. In either case, the requirement is for some information that will shed light on the problem and help make a decision to solve it. It may be that solutions are not the end result of the research, but rather the development of a new theory or body of knowledge. Whatever the end result, the starting-point is represented by an urge to find out, to explore, to evaluate - in short, to do research. In between these end points exist a number of other steps.

Having defined, or at least acknowledged, the problem or area of interest, researchers may carry out a preliminary study. This will enable them to set out the parameters of the problem and to gain some idea of the essential information to be sought. Such exploratory studies, free from too much bias or preconceived ideas, can be of great value in setting the research in the right direction. For example, the problem being looked at may have been concerned with inadequate bonus earnings related to immediate post-training periods. The temptation here is to blame the training. An exploratory study (usually much less costly than the full treatment) might uncover poor supervision during the first weeks on the job, or lack of understanding of the bonus scheme, as possible alternative explanations. If this preliminary work is reasonably thorough, the next stages can be less embracing than might otherwise be the case. From this work the researcher may well set up a hypothesis, or a series of hypotheses, which can then be tested against reality. In simple terms a hypothesis is an imagined answer to a real question. In the example just given, the question would be ''What causes low levels of bonus earnings in immediate post-training periods?'' The answer, as we have seen, might be based on guesswork, theoretical inspiration, or an appreciation of the factors involved, or indeed a combination of all three. In our case, the hypothesis might be that, in immediate post-training periods, operators will earn low levels of bonus if inadequate supervision persists.

Having framed this hypothesis, researchers then seek information, or data, which will allow them to test its validity. They might decide to check records for low earnings, and see what situations led to this; or they could monitor earnings and performance levels in two sections, one of which had a high ratio of supervision, the other a low ratio. The data collected would then be analyzed and subjected, possibly, to several statistical tests to determine whether the proposed ''answer'' holds true or not and with what degree of confidence or faith it can be accepted. The results of this analysis and deliberation would be interpreted and communicated - via reports, seminars, planning groups or whatever - to the ''client''. This phase can be a difficult one, but it need not be as inconclusive as it so often is.
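The two-section comparison just described can be sketched numerically. Assuming invented weekly bonus figures, Welch's t statistic (a standard two-sample test) can be computed with the standard library alone:

```python
# Sketch: comparing bonus earnings in a high- vs. low-supervision section.
# The earnings figures are invented for illustration.
import math
import statistics

high_supervision = [104, 110, 98, 115, 107, 112, 101, 109]
low_supervision = [92, 88, 101, 85, 95, 90, 97, 86]

def welch_t(a, b):
    """t statistic for two independent samples with unequal variances."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(high_supervision, low_supervision)
print(f"t = {t:.2f}")  # the larger |t|, the less plausible "chance" as an explanation
```

In practice the t value would be compared against a critical value (or converted to a p-value) at the chosen confidence level before the hypothesis is accepted or rejected.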

It should be stressed that the research process may not necessarily be geared to the testing of hypotheses. Often a researcher will be more interested in the exploratory stage, with a view to developing a number of alternative hypotheses for later testing. If this proves successful, a useful contribution will have been made to knowledge.

Levels of research

Not all research takes place at the same level of scientific sophistication. The reason for this hinges on the state of knowledge of the subject under investigation and the hoped-for outcomes (and uses) of the research. In general, most sciences follow a similar pattern of development and progression, and the social and behavioral sciences are no exception. In some disciplines - such as, perhaps, biology and botany - the emphasis is on one level rather than another.

Perhaps the most basic level of research is that connected with describing what exists around us. For example, we may not have enough knowledge about the different types of training procedures in use - the first step in knowing about them must be to describe them. Thus, job descriptions are quite useful in telling us something about the work of managers. Having obtained a description of these phenomena, the researcher may be interested in comparing them for differences or similarities, as we would with job descriptions, in order to establish some form of job evaluation framework, or training characteristics. This process of comparing and grouping is known as classification (or categorization).

The next level of research, that of explanation, then becomes possible. We can start to ask questions such as why? and how? Our interest is in understanding what is happening and seeking ways of representing this through theoretical development, models, propositions and so on. You may want to know, for example, why one student progresses more quickly under the same conditions as someone else. Hopefully, all this knowledge will lead to a stage of development where prediction of events, circumstances, behavior, etc. is possible. In the physical and advanced sciences, this is the level at which most researchers are now operating. None of the space programs would have been possible if this were not so. In those disciplines concerned with human behavior, it is exceptional to find some truly predictive theory based on adequate research. While the testing of hypotheses may take on this predictive form, we are still very much concerned with understanding and explaining human behavior. In the field of training and education, this must be so - with exceptions - until the disciplines associated with our efforts (psychology, sociology, neurology and so on) become more precise and predictive themselves.

Research methods

A number of quite different methods can be employed in establishing the acceptability or otherwise of a hypothesis, or helping solve a problem, and in some cases these can be used to complement each other. Each has its advantages and drawbacks, knowledge of which can aid you in assessing the feasibility of achieving your objectives.

Experiment
The classical method, used in the physical sciences for many years, is the experiment. In most physical sciences, if not all, the researcher aims to set up a situation in which all variables can be controlled or varied at will. The usual approach is to hold all variables constant except one. By varying this one and monitoring changes in the ''output'', the relationship between variables can be carefully studied and documented. In essence, the researcher seeks to vary one of several independent (or input) variables while measuring the effects on the dependent (or output) variable(s), keeping intervening variables constant. For example, it would be possible to vary the petrol mixture fed to an internal combustion engine and note the difference in speed or power achieved while, at the same time, keeping (say) pressure or load constant and controlling room temperature in the laboratory. When dealing with human behavior, it is not possible strictly to adhere to this approach, although sometimes one can get reasonably close. It might be possible to vary the instructional techniques used for training managers and to measure their achievements. Here, however, control over intervening variables such as ability, intelligence, attitude and the like would be complex, but the use of matched groups (e.g. different groups of managers who had roughly the same IQ, etc.) undergoing different approaches would take us a step nearer to the ''scientific'' method. We must not delude ourselves, however, into thinking that this approach is ''foolproof'' - it is not. We cannot control, for example, the activities of people outside work - their love-lives, drinking habits, arguments with spouses - which may well affect their performance. We can, nonetheless, attempt to recognize and account for these factors.
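The logic of holding all variables constant except one can be mimicked with a toy model. The function below is invented purely for illustration:

```python
# Sketch: the structure of a controlled experiment on a made-up "engine".
def engine_power(mixture, load, temperature):
    """Toy model of how output power responds to three inputs."""
    return 50 * mixture - 2 * load + 0.1 * temperature

LOAD, TEMP = 10.0, 20.0          # intervening variables, held constant
for mixture in (0.8, 1.0, 1.2):  # the one variable factor
    power = engine_power(mixture, LOAD, TEMP)
    print(f"mixture {mixture:.1f} -> power {power:.1f}")
```

Because load and temperature never change, any difference in output can be attributed to the mixture alone, which is precisely what cannot be guaranteed with human subjects.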
Experiments can broadly be considered to be of two types - the laboratory experiment, where the problem to be studied is divorced from the other facets of the real world surrounding it; and the field experiment, where attempts are made to study the problem in its real setting and to minimize the influences of seemingly unconnected factors or variables. Most experiments in training and education are likely to be field experiments, although the existence of training schools, simulators and so on makes laboratory experiments quite attractive - even though the results may not have much significance in the ''real'' setting.

Survey
This is almost certainly the most widely adopted method in the social sciences - and most aspects of training and education are of a virtually ''social scientific'' nature. Surveys are usually cheaper, quicker and broader in coverage than any experiment can hope to be but, on the other hand, very often lack the control and in-depth exploration of the experiment. Relying in the main on the techniques of sampling, interviewing and/or the questionnaire, a survey can provide useful information on many problems or issues faced by the trainer or educator. For example, you may have wondered how people feel about the training provided; what subject-matters people think should be given priority treatment on courses; if members of your organization think participation is a good thing; or maybe what young managers think about their career prospects. These and other issues can be explored using survey research methods involving research instruments (e.g. questionnaires, checklists) which, if constructed and tested adequately, can produce useful information. By their very nature, surveys produce a lot of information - or data, as researchers tend to call the basic responses to questions. Thought must therefore be given to how it can be analyzed, preferably before the data are collected. If this is not done, severe problems can arise causing frustration, and even the abandonment of the project. Many excellent techniques of analysis exist - from slogging it out by hand to computer processing, and can be found described in a number of sources.
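Planning the analysis before collecting the data can be as simple as deciding on a tally. A sketch with invented Likert-scale responses:

```python
# Sketch: tallying survey responses (invented five-point Likert data).
from collections import Counter

# "The training provided met my needs": 1 = strongly disagree ... 5 = strongly agree
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4]

counts = Counter(responses)
n = len(responses)
for rating in sorted(counts):
    print(f"rating {rating}: {counts[rating]:2d} ({100 * counts[rating] / n:.0f}%)")
print(f"mean rating: {sum(responses) / n:.2f}")
```

Even this much, decided in advance, forces the questionnaire to produce answers that can actually be counted.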

A survey, of course, is not the answer to all research requirements. Used wisely, it can produce useful information in a short time, but it may suffer from problems associated with people not wanting or bothering to respond to questions; giving false answers where they do; treating it as a joke; misunderstanding its purpose; and a host of others. Many of these problems can be avoided, or their impact on the results certainly reduced, but only if care and attention are applied throughout. Carrying out a survey is not so simple as some people would have us believe, nor is it so difficult and scientifically immoral as others obviously do believe. As with all things in life, it has its place - as a planned collection of information: no more, no less!

Field (or case) study

Probably falling between the experiment and the survey in terms of scientific acceptability, usefulness to the practitioner, and capacity to produce theoretical advances, the field study (of which the case study is a particular example) has considerable utility. While the techniques adopted (e.g. interviews, observations, questionnaires) are similar to survey research techniques, breadth of coverage is sacrificed for depth of probing and understanding. Unlike the experiment, a field study does not normally involve manipulating independent (or input, or causal) variables, except possibly through statistical means. Rather, the study involves measuring, looking at - studying! - what is there, and how it got there, i.e. it is historical. Two types of study can be carried out. Exploratory studies seek to establish ''what is''; to discover significant variables and relations between them and to lay the foundations for perhaps more scientific work aimed at testing hypotheses. For example, you may have wondered what variables have the greatest influence in on-the-job learning: a field study, probing through discussions with, and observations of, the people involved, might throw some light on this question. You might then be in a position to predict how the variables would be related to each other in certain situations, and set out to test this prediction. This would be a different form of field study: hypothesis testing rather than hypothesis generating. The point to note about field studies is that they do not attempt rigorous control - both a strength and a weakness. The strength is that we obtain greater realism in the research; the weakness is that things may get out of hand (sudden incidents erupting) destroying the validity of the research. Field studies are often costly and time-consuming, and may, of course, not produce much in the way of earth-shattering conclusions. For most of our requirements, however, the results can be rewarding. 
In a more specific sense, studies can be confined to particular persons, units or organizations, and such case studies can produce illuminating information. It must be recognized, though, that a single case may have little value in explaining events outside its own confines; it thus lacks ''generalizability''.

Techniques of research

While many texts refer to instrumentation, measurement devices, methods of data collection and the like to mean the way in which the researcher goes about acquiring information within one of the frameworks just described, it is best to use the term ''technique''. This is because some of the other terms are too precise (such as ''instrumentation'') or involve the use of terms applied elsewhere (such as in ''data collection method''). In essence, we are talking about ''how'' we do it as opposed to ''what'' we do or ''why'' we do it. Only a brief description of the most general techniques is given here - most are well discussed (if not always jargon-free) in texts on research methods.

Observation
This is the most classical and natural of techniques. It simply involves looking at what is going on - watching and listening. We all do it, most of us badly because we do not know what to look for or how to record it. Work study practitioners are probably the most competent of observers - after all, they have been trained to do it. So, too, are most researchers and teachers. To be a good observer it is important to have a wide scope, a great capacity for being alert, and the ability to pick up significant events. Here, technology can aid us, offering services ranging from simple pen and paper through tape recorders and cameras to videotapes. If carried out quietly, unobtrusively, and shrewdly, observation can be a useful, if not powerful, technique. It does not allow much scope for probing or exploring relationships further, unless used in conjunction with other techniques. The combinational use of techniques is now quite widespread and has much to commend it. Since, however, observation is ''simple'' (if time-consuming) and opportunities for using it often present themselves, it can be used quite effectively for its purpose - enabling a general picture to be built up.


Interviews

It is quite tempting to suppose that the interview was first ''created'' by early observers who could not resist asking people why they were doing what they were doing. Whatever its origin, the interview has a fundamental role in social and behavioral research. It allows for exploration and probing in depth and, if you have the money and the time, in breadth as well. The questions asked might stem from periods of general observation - and this is to be preferred to just dreaming up questions in the bath! Interviews can be unstructured and free-ranging - a general discussion, picking up points and issues as they emerge and pursuing them in some depth - or they can be structured around questions and issues determined in advance, based on a literature search, preconceived ideas or prior investigation. If the questioning is non-directive and free from biased or loaded questions; if the interviewer is a good, attentive listener (and adept recorder); and if the interviewee is of a mind to ''tell it like it is'', the results can be very effective. However, problems of time, cost and sampling related to your research objectives may mean that a full-scale interview program is not possible or necessary. For example, you may wish to gain ideas for the development of a job appraisal form - for this, a small number of ''pilot'' interviews would be quite effective.

If you wanted detailed views on the attitudes of people on your courses, a wider program of in-depth interviews could be of use. Remember, too, that for some purposes (e.g. where a ''testing of views'' is required), group interviews have a role to play. While they can be a bit more difficult to handle, the overall end results may provide more insights than would the same people interviewed separately. Whatever sort of interview is relevant, the means of recording information must be thought through in advance: whether to tape record unstructured group interviews or take notes; how to design an interview schedule (a ''questionnaire'' completed by the interviewer) for structured interviews with maximum ease of recording and information capture but minimum effect on interviewees - e.g. a feeling of ''not being listened to'' as you write copious notes. As with all research matters, a little advance thinking and planning can save a lot of later difficulties.


Questionnaires

While undoubtedly the most used technique - or, more correctly, instrument - of researchers in the behavioral and social sciences, questionnaires do pose problems. The major difficulties are associated with response rates, bias and flexibility. Since questionnaires are important to the survey researcher (as are interviews), the effect of non-response on the results must be considered. Who are the non-respondents, what are their characteristics, and would they share the views of those who did respond? These questions have to be faced. Even when reasonable response rates are achieved (more than 40 per cent), the problem still exists, and in any case the resulting data may be biased. Bias might be due to respondents anticipating the answers they think the researcher wants, or putting down ''socially expected'' answers (on the basis of what is ''good'', or would be the ''right sort of thing to say''), or simply finding some form of pattern in, say, the first ten questions and assuming the pattern must be repeated. These and other difficulties can be minimized, if not overcome, by careful design and piloting of the questionnaire. Flexibility, however, is not so much a design problem (although it can be considerably reduced by poor design) - it is much more a function of the nature of the research questions being asked. Answers might range from factual information (e.g. date of birth), through simple ''yes''/''no'' replies (e.g. do you smoke?), to scale-type responses of the agree/disagree form (e.g. training is a waste of time!), with a number of possible responses in between. Often, however, the person filling in the questionnaire would like to say ''yes - but!'' and has no opportunity to do so. It is the qualifying ''but'' that may be important, and an interview would allow it to be explored.
For information of a somewhat broad and superficial nature (detail can be obtained, of course, but mostly factual), involving large numbers of people, the questionnaire is a useful technique and is relatively easy and cheap to use. If thought is given to the major drawbacks and to the way in which the data are to be analyzed, there is every reason to expect fairly reliable and valid results. If preceded or backed up by interviews or observations, many additional benefits can be derived and difficulties minimized.
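As a minimal illustration of the response-rate and scale-type issues discussed above, the following Python sketch tallies questionnaire returns. The sample sizes, answers and function names are invented for illustration; a real analysis would of course work from the actual returns.

```python
# Hypothetical sketch: summarizing questionnaire returns.
from collections import Counter

def response_rate(sent, returned):
    """Proportion of questionnaires returned, as a percentage."""
    return 100.0 * returned / sent

def tally_scale(answers):
    """Count responses to one item on an agree/disagree scale."""
    return Counter(answers)

sent, returned = 200, 86
rate = response_rate(sent, returned)   # 43.0 - just above the 40 per cent level mentioned above
item = ["agree", "agree", "neutral", "disagree", "agree"]
counts = tally_scale(item)             # 3 agree, 1 neutral, 1 disagree
```

Even a simple summary like this makes the non-response question visible: the 57 per cent who did not reply may or may not share the views of the 43 per cent who did.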

Other techniques

Many other techniques exist, some of them variations on those briefly described here, others developed for specific purposes. They will not be discussed here since most of them require considerable experience in their design and use. They can be found in many of the early texts, for example Helmstadter (1970), from which further references (e.g. to sociometry, testing, scaling and projective techniques in psychology) can be obtained. Often such techniques are limited to quite specific applications.

Some research terms

  • Model: A pictorial representation of concepts and relations between concepts, e.g. graph or flow diagram. Not to be confused with the use of ''model'' which implies ''perfect'' - as in ''a model job''!
  • Paradigm: Another word for model, but without the latter's value connotations.
  • Proposition: A statement or assertion concerning the problem or topic being researched: origins and use mainly in philosophy, logic and mathematics.
  • Reliability: A term used mainly in connection with measurements (as via a questionnaire, or test) and refers to repeatability, i.e. getting the same results on different occasions when measuring the same entity which has not changed in dimensions since it was first measured.
  • Sample: A number of people, objects or events chosen from a larger ''population'' on the basis of representing (or being representative of) that population. Sampling, and sampling theory, are important facets of survey research.
  • Theory: A set of general laws (interrelated concepts) that specifies relations among variables. A theory thus represents, in a systematic way, the phenomena in the world around us, explaining them and allowing predictions to be made or, to borrow a phrase, ''there's nothing so practical as a good theory''!
  • Validity: A partner of ''reliability'', expressing the extent to which a test, say, actually measures what it is supposed to measure, i.e. does it do the job for which it was designed? Various types of validity are looked for as evidence of this.
  • Variable: In the strictest sense, a variable is a symbol to which a number is assigned. Constructs such as intelligence are also referred to as variables. The terms ''factor'' and ''variable'' are sometimes used interchangeably. Variables may be continuous (time, age) or dichotomous (sex, marital status).
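The notion of reliability as repeatability can be made concrete with a short sketch: scores for the same unchanged group, measured on two occasions, should correlate highly. The data below are invented, and Pearson's r is computed from first principles.

```python
# Illustrative sketch of test-retest reliability: the same (unchanged)
# people measured twice should yield highly correlated scores.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

first = [12, 15, 11, 18, 14]    # test scores, occasion 1 (invented)
second = [13, 15, 10, 19, 14]   # same people, occasion 2 (invented)
r = pearson_r(first, second)    # close to 1 => high test-retest reliability
```

Note that a test can be highly reliable in this sense and still lack validity: it may measure something consistently without measuring what it was designed to measure.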

Sources of information for researchers


In all research a most important requirement, even before the research proposal is developed, is to find out what relevant research has been (or is being) carried out by others. Searching for and reviewing the research in a field effectively is often a time-consuming, if not tedious and frustrating, activity - largely because no single central and comprehensive source is available. A number of reports, registers, bibliographies and other sources provide useful information on what is happening, but not all research gets reported in any case. While it is impossible to be sure that every piece of relevant research has been uncovered, a very large proportion can be identified through the sources listed here.

Personal contact

An important channel of communication is with other researchers, supervisors (if you are doing research for a higher degree) and others in the research field who may be known to you. Get in touch with them and ask for their help/advice - at worst they will say ''sorry, can't help''. You may find some researchers reluctant to give out too much information, especially if the research is in its early stages. Obviously, such privacy/secrecy has to be respected, although it can hold back general development of the field of study.

Journals, reports, bibliographies

These represent a good source of information on completed or partially completed research. Unfortunately, not all research gets written up at the results stage, although some eminent researchers hold the view that there is at least a moral obligation to let the scientific/academic community know what has been achieved. Information contained in these sources is usually quite up-to-date, but it must be remembered that many articles may have waited for up to two years before appearing in the journals. Most bodies that fund research expect a written report on completion of the work. Lists of available reports are usually obtainable from the funding body. It is a good idea to spend time in the library.

Ask what services they have - such as indexes, abstracts, bibliographies and the like, and go through these. Of considerable use are sources such as our Electronic Library Service, Contents Pages in Management, etc. Look through recent and past issues of journals, and at the bookshelves - and take a note of what you find. Further personal contacts can be developed with authors of highly relevant articles and/or reports. They may even be able to send you copies of working papers - usually very up-to-date.

Research reports and registers

Very good sources of information are the reports/registers published (often on an annual basis) by research councils, government departments/agencies, foundations and other grant-awarding bodies. These sources usually list and describe projects recently completed, under way or about to be started, and provide names/addresses of research workers and their institutions.

Science Citation Index

Most scientific and scholarly writings include references to earlier papers on the same subject. The references listed in this way are cited references, or citations; the paper which cites them is a citing paper. Through these references an author identifies subject relationships between his or her current article and the cited documents. In addition, newer articles that cite the same older documents usually have subject relationships with each other. The Science Citation Index comprises three parts: the Citation Index, the Source Index, and the Permuterm Subject Index.
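The citation relationships described here can be sketched as a small data structure. The paper names and reference lists below are invented; the point the code illustrates is that two citing papers sharing cited references are likely to be subject-related (sometimes called bibliographic coupling), and that a citation index supports the reverse lookup from a cited document to its citing papers.

```python
# Invented reference lists: each citing paper maps to the set of
# older documents it cites.
references = {
    "paper_A": {"smith_1970", "jones_1972", "brown_1968"},
    "paper_B": {"smith_1970", "jones_1972", "green_1975"},
    "paper_C": {"white_1969"},
}

def coupling_strength(p, q):
    """Number of cited documents two citing papers share."""
    return len(references[p] & references[q])

def citing_papers(cited):
    """Citation-index style lookup: which papers cite this document?"""
    return sorted(p for p, refs in references.items() if cited in refs)

coupling_strength("paper_A", "paper_B")   # 2 shared references -> related
citing_papers("smith_1970")               # ['paper_A', 'paper_B']
```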

Keyword searching

Given the sophistication of traditional libraries and other information bases, the availability of courseware resources, and the opportunity to access international data sources through the internet, it is essential to be able to capture the critical mass of related knowledge in training and development cleanly and efficiently. Keyword searching has proved to be an important device for achieving this.

The importance of a particular 'keyword' varies over time due to the rapidly changing business environment and the resource state within organizations. The keyword groups shown below indicate what are considered to be the main search trajectories for training and development at this time. The development and maintenance of these lists is an important part of operating an effective research process.
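A minimal sketch of keyword searching over a couple of invented abstracts, using keywords drawn from the kinds of groups listed below; the document texts and keyword set are hypothetical, and a real search service would add indexing, stemming and ranking.

```python
# Hypothetical keyword search over a small set of abstracts.
KEYWORDS = {"action research", "change management", "learning styles"}

abstracts = {
    "doc1": "An action research study of change management in a bank",
    "doc2": "Survey methods for market analysis",
}

def matches(text, keywords):
    """Return the keywords found in a piece of text (case-insensitive)."""
    low = text.lower()
    return {kw for kw in keywords if kw in low}

hits = {doc: matches(text, KEYWORDS) for doc, text in abstracts.items()}
# doc1 matches two keyword phrases; doc2 matches none
```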

Research Methods

Action research
Field research
Market research
Multidisciplinary research
Operational research
Project management
Time management


Action learning
Career development
Group work
Learning organization
Learning sets
Learning styles
Self-managed learning
Workplace learning

Management of Change

Behavioral change
Change agents
Change management
Corporate Culture
Employee involvement
Human resource management
Information technology
Learning organizations
Management Styles
Organizational change
Organizational development
Organizational structure
Technological change


