Introduction

Psychological research involves the empirical pursuit of exploring and explaining phenomena. We might ask questions such as: Does X vary with Y? Does X cause Y? What is the strength of the relationship between X and Y? Within the context of clinical psychological research, we might ask questions such as: Can we better test for anxiety and maladaptive thinking patterns? Which treatment is better: pharmacological or psychological?

To answer such questions, we need to acquire some knowledge of a given phenomenon, as well as the scientific method that enables us to systematically evaluate observations of the phenomenon. This topic discusses these concepts as well as outlines in broad terms the research approach by which phenomena might be explained.

Acquiring knowledge

There are a number of ways to acquire knowledge. The traditional folk methods include intuition, authority and rationalism.

INTUITION: This might be defined as coming to know things without substantive reasoning. Therefore, the knowledge acquired is not always accurate.

AUTHORITY: We are more likely to accept information from an authority structure or a person in authority. However, this information may not be accurate and may amount to little more than propaganda espoused by a structure, system or person with vested interests in presenting the information in that way (e.g. political self-interests).

RATIONALISM: This approach assumes that reasoning and logic will lead to correct interpretations. However, such common-sense approaches may also lead to erroneous conclusions. With rationalism, a conclusion is often logically derived from an incorrect premise, as when stereotypes (e.g. 'all poor people are lazy') lead to incorrect interpretations of people and situations. Rationalism has its place (even in science, in the form of empirical reasoning); however, it is a biased approach to knowledge acquisition when used in isolation.

EMPIRICISM: Empiricism involves direct observation. Systematic observation underpins the scientific method of knowledge acquisition. In order to begin to understand things objectively, we turn to the empirical approach.

Science and the scientific method

Science is the objective, systematic process of describing, analysing and explaining phenomena (entities and events) and the relationships between them. Science is not a single entity. Instead, it is a set of best-practice techniques used to accumulate knowledge about what is real. This is based on the premise that there is some order and structure within the universe and that things can potentially influence one another. If there were complete randomness, there would be no way to predict occurrences. We know there is not complete randomness, so we can draw upon various scientific methods in research.

There are three main goals associated with the scientific pursuit in research:

  • To describe phenomena
  • To explain phenomena
  • To predict the existence of a relationship or forecast an outcome.

You are likely to come across varying typologies of the scientific method, but the main aspects we consider in this subject are described below.

Scientific method: The main aspects

Observation, systematic and objective methodology

The scientific method involves observing, systematically collecting and evaluating evidence so as to be able to test ideas and answer questions like those above. Systematic empiricism is key here. This involves carefully structured observations and objective measurement (i.e. without bias). As psychology researchers, we may merely observe and measure without interfering in the order of things (an observational research approach such as a survey), or conversely, constrain or manufacture particular conditions or circumstances in order to see the effects (these are quasi-experimental or full experimental approaches where we seek to manipulate the way people think or behave). Regardless of the general approach taken, careful planning, recording and analysing are key. Ideally, though, we conduct our empirical investigations under controlled conditions in a systematic manner so as to minimise researcher bias and maximise objectivity.

At times we might dive into an investigation without a solid preconception of what we expect to find (a data-fishing expedition), but more often the approach is theoretically based. At its simplest, a theory is a conceptual framework that tentatively links and/or explains relationships among data. A theory can come about before data acquisition and guide the data acquisition process, or it can be constructed after data acquisition in order to explain the observed data.

Falsifiability

Perhaps the most central aspect of the scientific approach is that its claims must be falsifiable (i.e. there must be a way to systematically observe and measure the aspect of interest in such a way that claims about it can be refuted). For example, the premise or hypothesis that women are more emotionally labile than men is falsifiable (e.g. we can measure and assess differences in physiological responses), whereas the argument that an all-powerful entity gave rise to the universe is unfalsifiable.

Scepticism

Another central aspect of the scientific approach that is aligned with falsifiability is scepticism. Scepticism is also closely aligned with critical thinking. Science should reject notions of folk psychology because, time and time again, so-called common-sense explanations of human behaviour have been shown to be wrong.

Openness

There are two aspects to openness:

  • We need to be open to alternative explanations of phenomena before locking in final conclusions (e.g. asking oneself whether the research outcomes might actually be due to another factor(s) that we did not think to investigate).
  • Science also needs to be open to public scrutiny, and findings need to be disseminated appropriately. This is so that findings can be verified and replicated.

In sum, we have identified the central aspects of the scientific method: observation, systematic and objective methodology, falsifiability, scepticism and openness (including verification and replicability).

No introduction on the scientific method would be complete without a brief note on pseudoscience. This is false science wrapped up and presented as science. Here we see new age treatments such as crystal therapy and homeopathy, miracle diets and the like touted as being scientifically proven (often as a profit-seeking venture). Underpinning the belief in these types of treatments is often a penchant for magical thinking or a need to believe in something (e.g. a miracle cancer cure).

Research approaches

This topic identifies and differentiates the main typologies of research approaches.

Introduction

Research approaches can be differentiated as follows. The first typology relates to the research focus, while the second relates more to the nature of data acquired:

  1. Basic versus applied
  2. Quantitative versus qualitative.

Basic vs applied research approaches

There are two main research approaches that differ in focus and setting. Psychological research is often classified as being either basic or applied in focus.

Basic research: This type of research is generally thought of as knowledge-advancing. Basic research is often very specialised and nuanced, and is commonly undertaken in university settings. In this context basic does not mean elementary or simple; indeed, basic research might involve complex aims and operations, such as establishing and refining theory.

Applied research: This type of research focusses more on solving practical and community-based issues (e.g. modifying attitudes and behaviours). The findings from basic research may help inform applied research directions (e.g. leading to intervention-based applied research strategies to solve real-world problems).

Quantitative vs qualitative approaches

There are two main research approaches, which differ based on the nature of data.

Quantitative approaches: These approaches use data with numerical values, counts or groupings assigned to them, measuring how many, how much, how often, how fast/slow and so on. Stated another way, quantitative data provide information about values or quantities; therefore, quantitative data can be counted, measured and expressed in numbers. Quantitative approaches assume a fixed and measurable reality. Data can be evaluated by making numerical comparisons and applying relevant mathematical and statistical procedures. Depending upon methodology and design, quantitative approaches have the capacity to describe phenomena, showcase relationships and, in some cases, establish evidence of cause and effect.

Qualitative approaches: These approaches do not have numerical values assigned, and the data are descriptive. Methodologically, qualitative data are grouped according to themes or like-minded descriptions (e.g. extracted from participant narratives such as opinions or views on things). Qualitative approaches are generally descriptive in nature and have no capacity to establish cause-and-effect relationships.

Qualitative approaches assume a dynamic, socially constructed reality. The approach involves collecting and analysing narrative, observational and pictorial data (e.g. from interviews, focus groups, open-ended surveys, archival footage and recordings). Thematic analysis is common in qualitative research. The analysis involves identifying ideas or groups of meanings. For example, assume a researcher is interested in the felt experience of residents in an aged-care facility. By asking carefully selected questions to encourage resident elaboration on key topics (e.g. the propensity for engagement with other residents, thoughts as to the user-friendliness of the layout of the facility), the researcher can acquire a rich tapestry of information. Thematic analysis of this narrative information might then identify key recurring themes. These extractions subsequently form the basis of an interpretation and ultimately a report about what was identified. No numerical values are assigned to anything beyond simple counts of recurring themes and the percentages of people who expressed a particular view or thought.
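To make such counting concrete, here is a minimal Python sketch. The theme labels and coded data are hypothetical, and the coding of each narrative is assumed to have already been done manually by the researcher.

```python
from collections import Counter

# Hypothetical theme codes, one list per resident, produced by manual coding
# of the interview narratives.
coded_narratives = [
    ["social engagement", "layout concerns"],
    ["social engagement"],
    ["layout concerns", "staff rapport"],
    ["social engagement", "staff rapport"],
]

n_participants = len(coded_narratives)
theme_counts = Counter(theme for narrative in coded_narratives for theme in narrative)

for theme, count in theme_counts.most_common():
    print(f"{theme}: raised by {count} of {n_participants} participants "
          f"({100 * count / n_participants:.0f}%)")
```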

In short, qualitative research is interpretive (i.e. the researcher attempts to understand data from the participants' subjective experience) as opposed to data driven. Consequently, different researchers may derive different interpretations from the same data. Qualitative research is often undertaken in the field, such as within the context of an aged-care facility, so as to retain naturalism.

Also note that a mixed-methods approach might be used by some researchers, leveraging the advantages of both quantitative and qualitative methods to understand the subject better than any individual approach could offer on its own. Some researchers may also adopt an approach whereby they quantify qualitative data (e.g. assign numerical codes to themes or key words extracted from participant narratives, for use in quantitative analyses).

In this subject we deal almost exclusively with quantitative data and approaches. The term statistics, as used in this subject, applies to quantitative rather than qualitative data.

See the following figure for a summary of qualitative and quantitative approaches.

[Figure: a summary of qualitative and quantitative approaches]

Variables in research

This topic provides an introduction to variables and their measurement properties.

Introduction

In this section we define a variety of common variables seen in research, and describe the type of measurement properties that might be assigned to them.

Variable types

Variable: These are characteristics or conditions that can assume more than one value (i.e. they can vary). In quantitative research, variables are assigned values, groupings or levels of some sort. Example variables include extraversion, intelligence quotient (IQ), age, gender, height, weight, self-esteem, perceived control and political orientation.

Variables are meant to represent some hypothetical construct or concept. A hypothetical construct is a theoretical concept that we operationalise (turn into a variable). In order to be accurately operationalised, the construct needs to be clearly defined as to what it represents and what it does not. An operational definition specifies exactly how we define and measure a variable. Note that a variable like IQ is only a putative index of the underlying construct of intelligence, whereas the operationalisation of height is comparatively direct.

To put it another way, variables are the operationalisation of constructs. Operationalisation is the procedure used to turn a construct into a measurable variable. An operational definition gets this process started by clearly defining what it is we want to measure and operationalise.

There are a variety of different types of variables based upon their construction and application. We now turn to discussion on these.

Dependent variable (DV): Also known as an outcome variable, response variable or criterion variable, the DV is assumed to be dependent upon the independent variable; therefore, change to the DV is the outcome being measured (e.g. reaction time, level of prejudice). A research design may include a number of DVs. The researcher does not interfere with or manipulate the DV, as it is meant to be impacted by the independent variable (IV).

Independent variable (IV): Also known as a predictor variable or explanatory variable, the IV is the variable assumed to influence the DV. A research design may include a number of IVs. Sometimes the IV is manipulated (e.g. prescribing different amounts of sleep to participants). In observational studies the IV is not manipulated.

Manipulated variable: The manipulated variable is a special type of IV. A manipulated variable in experimental research is an experimenter-imposed condition, grouping or influence of some type (e.g. the researcher deprives one group of participants of sleep while the other group sleeps normally, and then all participants are assessed on a reaction time task). Here the experimenter has manipulated the variable of sleep by allocating participants differing levels of sleep. Some variables cannot be manipulated, such as IQ, and some manipulations are not permissible, such as inflicting physical injury or any other action that may harm a participant.

Extraneous variable: This is a variable that is not an IV in the study but still influences the DV. The extraneous variable has a relationship (i.e. it is correlated) with both the IV and DV, but may not differ systematically with the IV. In reality, there are probably a great number of variables we do not measure or cannot measure and take into account that may have an impact on the DV. This is evident in research studies where we see a large amount of unmeasured or error variance in our statistical outcomes (which we will discuss later in the subject).

An extraneous variable might occur in the following way. A workplace study seeks to establish whether the addition of background music increases employee productivity in a packing plant, based on the premise (hypothesis) that music will be motivating. The DV is the number of items packed in a given work shift. The IV comprises two levels: no music and music. Assume the results show the average number of items packed by our employees to be greater in the music condition than in the no-music condition. An extraneous variable might be another variable that actually accounts for the difference; for example, the music may induce a state of relaxation that enables performance, rather than a motivational state. In that case a third variable (relaxation) underlies the observed difference instead of the hypothesised one. If we have not considered the potential influence of this third variable in our design, for example by measuring it, then we incorrectly assume that employee motivation was influenced by the music.

Confound variable: A confound is an extraneous variable that moves systematically with the IV (i.e. the mean value of the extraneous variable differs between IV levels) to influence the DV. Therefore, we do not know whether the IV, confound or both are impacting scores on the DV. Using our packing plant study example above (and assuming the same outcome difference), a confound might be introduced to the study if, for example, the workers in the music condition were those who worked later in the day, while those not exposed to music worked an earlier shift. There might be something about the time of shift that moves systematically with the IV (i.e. differs between the levels of the IV) such as level of concentration, tiredness, or perhaps age of employees who work later in the day versus earlier in the day. We do not know whether it was the IV, some aspect related to time of shift, or the influence of both that resulted in the observed outcomes.
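A small simulated sketch can make the confound concrete. The numbers below are entirely hypothetical: music is given no true effect, yet because shift time moves with the IV, the comparison shows an apparent music advantage.

```python
import numpy as np

rng = np.random.default_rng(42)
n_per_group = 100

# Assumed (hypothetical) data-generating process: music itself has NO effect,
# but the music condition was run on the later shift, and in this scenario the
# later shift happens to pack more items on average (shift time is the confound).
true_music_effect = 0      # items gained from music: none
late_shift_effect = 8      # later shift packs 8 more items on average

# The no-music group worked the early shift; the music group worked the late
# shift, so shift time moves systematically with the IV.
no_music = rng.normal(loc=120, scale=10, size=n_per_group)
music = rng.normal(loc=120 + true_music_effect + late_shift_effect,
                   scale=10, size=n_per_group)

difference = music.mean() - no_music.mean()
print(f"Observed music minus no-music difference: {difference:.1f} items")
# A naive reading attributes this difference to music, when it is entirely
# driven by the shift-time confound.
```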

As we can see, both an extraneous variable and a confound muddy the interpretation of results because they impact DV scores. In other words, we cannot be sure the IV was solely responsible for changes in the DV when these additional variables are potential contributors to outcome differences. It is expected that we cannot account for every potential extraneous variable; building a confound into a design without realising it, however, is a far more serious error and an embarrassment to a researcher. Also note that the terms extraneous variable and confound variable are often used interchangeably in research, and in practice it can be quite difficult to distinguish between them.

There are two main ways to deal with suspected or known problematic variables.

  • Create a design that excludes differential impact of unwanted variables. We can do this by holding procedures constant. For example, returning to our packing plant study, the time of shift could be held constant (i.e. test both conditions at the same time of day).

Some extraneous variables, such as those involving individual differences among people (e.g. personality variables, IQ), can be evened out through random allocation to conditions (e.g. drawing straws to allocate participants in the sample to treatment and control conditions). Although there are no guarantees, individual differences among people should now be fairly balanced between the conditions (only by chance should differences remain). These are examples of designing problematic variables out of the study; random allocation also features in the sketch that follows this list.

  • Include the problematic variable in the design. This is the approach to use when the problematic variable can be identified and measured. It is also the approach to use when the variable cannot be designed out of the study, or when we actually want to know its effect. For example, let us assume we want to manipulate participants' perceived personal control by creating a control-loss state for half of our participants and a control-bolstering state for the remaining participants. The hypothesis is that control loss, in contrast to control bolstering, is more likely to motivate compensatory control-seeking behaviour (i.e. control loss is hypothesised to activate an attempt to restore the sense of lost control).

Therefore, we randomly allocate our participants (so that individual differences are largely balanced out) to either a low control condition or a high control condition (this is a between-subjects design, which means no participant is in both conditions). Our participants are then manipulated into a low control or high control state by recalling and writing about a time they had no control over an outcome or a time they had complete control over the outcome. This type of manipulation has been shown to invoke the relevant control states (Knight et al., 2014).

Let us assume for the moment that statistical analysis shows our hypothesis to be supported. Does this mean that relative levels of control were responsible for the outcomes? Perhaps not. It is possible that causing participants to think about a low control situation might make them feel bad (i.e. create negative affect), while those thinking about a high control situation may feel much better (i.e. they experience relatively less negative affect, if not positive affect). Thus, it may be that differences in affect or mood are impacting the outcomes alongside levels of control. Affect, then, is an example of a confounding variable in this design because it is expected to have both an influence on the DV and vary systematically with the IV of control.

As it is not possible to prevent such changes in affect (if they occur), the only option open to the researcher is to measure affect in order to determine or statistically control its impact. This is easily done with a scale that measures affect, such as the 20-item Positive and Negative Affect Schedule (PANAS). Participants respond to PANAS items such as interested, distressed, upset and irritable based on how they are feeling right now, using a scale from 1, very slightly or not at all, to 7, extremely. Note that we would measure affect very soon after participants complete the control manipulations so as to best capture the potential effect of the manipulation tasks on affect. Once affect is measured, we can assess its relative influence on the DV by factoring it in as another IV, or we can control for affect by averaging out its influence across both of the manipulated control conditions in our statistical analysis (more on control variables below).
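As an illustration only, the following Python sketch simulates hypothetical data for this design and fits an ordinary least squares model using statsmodels (an assumed, commonly available library). The manipulated condition is entered as the IV and negative affect as a statistical control; the variable names and effect sizes are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120

# Randomly allocate participants to the two manipulated conditions
# (a between-subjects design), so individual differences are balanced by chance.
condition = rng.permutation(np.repeat(["low_control", "high_control"], n // 2))

# Hypothetical measures: negative affect (the potential confound) and the DV,
# a score representing compensatory control-seeking behaviour.
negative_affect = (rng.normal(loc=3.0, scale=1.0, size=n)
                   + np.where(condition == "low_control", 0.5, 0.0))
control_seeking = (10
                   + 2.0 * (condition == "low_control")
                   + 1.5 * negative_affect
                   + rng.normal(scale=2.0, size=n))

data = pd.DataFrame({"condition": condition,
                     "negative_affect": negative_affect,
                     "control_seeking": control_seeking})

# Model the DV from the manipulated IV while statistically controlling for
# affect by entering it as a covariate alongside the condition term.
model = smf.ols("control_seeking ~ C(condition) + negative_affect", data=data).fit()
print(model.summary())
```

In this sketch the coefficient on the condition term reflects the effect of the manipulation after the influence of affect has been statistically removed.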

Control variable: These can take a variety of forms. Some variables are controlled in the sense that they are held constant by the researcher. More often, a control variable is something a researcher includes in a study that is not of key interest, but that nevertheless should be measured and accounted for in analysis and results interpretation (it might be a well-considered extraneous variable or a potential confound, as we saw above). This is a statistical control variable. For example, suppose a researcher is investigating the effect of different levels of sleep (IV) on a reaction time task (DV). If the researcher suspects that females might perform better under conditions of less sleep than males, but is not actually interested in this nuance (instead preferring to investigate the effect without regard to gender), gender could be included statistically as a control variable (i.e. the effects of gender could be averaged out across the conditions of eight hours' sleep versus four hours' sleep). Stated another way, this allows statistical interpretation of the effect of differing levels of sleep on reaction time after removing the differences in this relationship that are associated with gender. If, however, the researcher was actually interested in this gender difference, then gender might be included as a variable in the study in a more complex factorial design (i.e. two IVs, gender and sleep, each with two levels).

A general note: Control in research can mean different things.

  1. A verification of an effect via comparison, such as using a control group in conjunction with a treatment group for the purposes of comparison
  2. Restriction through holding aspects or variables constant, or to eliminate the influence of problematic variables, as in the control variable example above
  3. Manipulating conditions so that their effects are both maximised and targeted, such as using a lab situation to generate greater control over the manipulations participants receive.

Person variable (PV) and person attribute variable (PAV): A PV is some aspect or difference tied to the participants, such as personality traits, attitudes, views, cognitive functioning, IQ, reading speed, employed/unemployed or male/female, that participants bring to the study (an individual difference).

A PAV is a person variable put to use by the researcher as an IV in the design (e.g. the case of dividing participants into groups based on gender, with the purpose of exploring potential gender differences). Allocation to condition (i.e. to the levels of the IV) on the basis of a PAV means that full random assignment to condition is not possible. Not all PVs are PAVs, but all PAVs are PVs.

Manipulation check item: Manipulation check items are not technically variables, but have been included in this section given their variable-related focus. A manipulation check is included in designs where the researcher has manipulated the IV so as to determine whether or not the manipulation has worked. Manipulation checks take various forms, and may be as simple as subtly or indirectly asking participants what effect the manipulation had on them.

Returning to our study about feelings of control for a moment, we might include a question in the design, positioned after the manipulation, to gauge participants' perceptions of control. For example, we might ask participants to think back to the writing task and indicate how much control they had over the outcome they wrote about, recording their responses on a scale from 1, absolutely no control, to 7, complete control. If we find that responses to this item correspond to the manipulation each participant received, then we can assume the manipulation was at least sufficiently understood. This does not necessarily mean that participants actually internalised the manipulation, but it is about as close as we can get to knowing whether or not the manipulation worked as intended. This outcome, combined with the measurement and control of the potential confound of affect (explained earlier), provides us with a body of evidence that the study was internally valid (i.e. that it was the intended manipulation that led to changes in DV scores).

Note, though, that the positioning of the manipulation check item is crucial to maintaining the integrity of the study. Positioning the check item immediately after the manipulation might illuminate the focus of the study to participants (i.e. the realisation it is about personal control). A manipulation check such as this is better positioned towards the end of the study, after the DV has been administered.
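For illustration, here is a minimal sketch of tabulating hypothetical manipulation-check responses by condition (using the pandas library, an assumption; the values are invented).

```python
import pandas as pd

# Hypothetical manipulation-check responses (1 = absolutely no control,
# 7 = complete control), recorded towards the end of the study.
checks = pd.DataFrame({
    "condition": ["low_control"] * 5 + ["high_control"] * 5,
    "check_item": [2, 1, 3, 2, 2, 6, 7, 6, 5, 7],
})

# If the manipulation was understood, mean check scores should be clearly lower
# in the low-control condition than in the high-control condition.
print(checks.groupby("condition")["check_item"].mean())
```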

Variable measurement

In this section we look more deeply at the makeup of research variables and their measurement.

  • Quantitative variables: These are variables expressed in some numerical fashion, and are classified as either ratio or interval variables.
  • RATIO VARIABLES: have an absolute zero point and equal intervals, and assess magnitude (e.g. reaction time, height).
  • INTERVAL VARIABLES: have no absolute zero point, but have equal intervals, and assess magnitude (e.g. degrees Celsius, IQ).
  • Categorical variable: This is a variable with either natural or manipulated groupings, such as with/without depression, female/male, low/medium/high level of some attribute, or a low/high control state imposed by the experimenter. These variables are assumed to comprise discrete categories. A categorical variable can be either nominal (without rank order) or ordinal (with a rank order).
  • Ordinal variables: Ordinal variables assess magnitude by virtue of being rank ordered, but have no set intervals or zero point (e.g. the ordering of a list of preferences or severity of illness ranked as mild, moderate and severe). Consider, for example, the finishing order of runners in a foot race. If we only look at the difference in finishing order, then this is an ordinal categorisation. The finishing order in this example tells us nothing about the distance or time interval between each finisher. However, the first runner over the line is assumed to have performed better than the second finisher, who has in turn performed better than the third finisher, and so on.
  • Nominal variables: Nominal variables have no level of magnitude; no one category is seen to be more important than another (e.g. male/female, place of birth).

Note that the nature of any given variable and its scaling need to be considered when selecting a suitable statistical analysis. Measures of IQ, aptitude and personality are technically ordinal in nature. In many cases in psychology, however, such strictly ordinal measures are treated as though they were interval in nature, because doing so justifies the use of more powerful tests (i.e. those that potentially improve the chances of finding important effects). This is because interval and ratio data are more open to statistical flexibility and manipulation, something we will explore with parametric and nonparametric tests later in the subject. For the moment, recognise that average scores (means) can be computed for interval and ratio data, but technically not for ordinal or nominal data.
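The following sketch, using hypothetical values and the pandas library (an assumption), illustrates which summaries are conventionally computed at each level of measurement.

```python
import pandas as pd

# Hypothetical data illustrating the four levels of measurement.
df = pd.DataFrame({
    "birthplace": ["AU", "NZ", "AU", "UK", "AU"],                    # nominal
    "severity": ["mild", "severe", "moderate", "mild", "moderate"],  # ordinal
    "iq": [98, 112, 105, 90, 120],                                   # treated as interval
    "reaction_ms": [250.3, 310.9, 275.0, 298.4, 260.1],              # ratio
})

# Nominal: only counts and the mode are meaningful summaries.
print(df["birthplace"].value_counts().to_dict())

# Ordinal: rank-based summaries (e.g. the median category) are defensible.
severity = pd.Categorical(df["severity"],
                          categories=["mild", "moderate", "severe"],
                          ordered=True)
print(severity.codes)  # ranks: 0 = mild, 1 = moderate, 2 = severe

# Interval and ratio: means (and related parametric statistics) are permitted.
print(df["iq"].mean(), df["reaction_ms"].mean())
```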

Describing data patterns

This topic introduces ways by which collections of data might be summarised and understood.

Introduction

Collections of data form patterns that can be understood in terms of measurement of central tendency (how data are grouped around some central value) as well as in terms of spread and dispersion. We look to these and related concepts in this topic.

Introduction to the normal curve

The arrangement of the values of a variable is called the distribution of the variable. The normal curve or normal distribution (also known as the bell curve or the Gaussian distribution) reflects the observation that many continuous variables in nature form a bell-shaped curve when graphed. The normal distribution is the most useful distribution in statistics: many datasets can be reasonably approximated by it, and the sampling distributions of many statistics approach a normal distribution as sample size increases.

For example, if we sample a number of people who are representative of the population and we measure attributes such as IQ, height, weight, personality, driving ability or athletic prowess, we would see a bell-shaped curve emerge as we plot all those values. The midpoint reflects the point where most people's scores will reside (e.g. the average). If, for example, we have plotted scores for athletic ability in such a way that higher scores reflecting greater athletic ability are to the right of the midpoint of the distribution, then we can see that as scores increase, the number of people who are so gifted decreases. Similarly, those persons who are less gifted will be seen in the tail of the left side of the distribution.

These are the core properties of a normal distribution:

  • The mean, mode and median are all equal.
  • The curve is symmetrical with half the values lying to the left of the centre and half to the right of the centre.
  • The total area under the curve is 100 per cent, expressed as 1.
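These properties can be checked numerically. The sketch below uses scipy.stats (an assumed library) with hypothetical parameters.

```python
from scipy.stats import norm

mu, sigma = 100, 15          # hypothetical IQ-like scale
dist = norm(loc=mu, scale=sigma)

# The mean and median coincide at the centre of a normal distribution.
print(dist.mean(), dist.median())                              # both 100.0

# Symmetry: half the area under the curve lies on each side of the centre.
print(dist.cdf(mu))                                            # 0.5

# The total area under the curve is 1 (approximated here over a wide range).
print(dist.cdf(mu + 10 * sigma) - dist.cdf(mu - 10 * sigma))   # ~1.0
```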
