Week 1 – Assignment: Examine Statistical Analysis Within the Context of a Dissertation Topic

This assignment will be submitted to Turnitin®.

Instructions

For this assignment, you first will identify a topic of interest that you might want to pursue in your research. You are not tied to this topic when you reach the dissertation sequence, but it should be a topic that you find interesting now and that relates to your program and specialization.

Next, conduct a literature search using the NCU library to locate two quantitative studies that examine your selected topic and in which the authors present statistical findings.

Once you have located your articles, you will prepare a short paper using the following format:

Introduction to the selected topic of interest

Brief summary of first article

Include research question(s) and hypotheses, and general findings.

Brief summary of second article

Include research question(s) and hypotheses, and general findings.

Include statistical tests used.

Synthesis

Specifically, compare and contrast the two articles, assessing the types of statistical methods and analysis used.

Conclusion

Assess what approach you might take if you were to conduct a study in this topic area.

Length: 3 to 5 pages, not including title and reference pages.

References: Include a minimum of 3 scholarly resources.

Your paper should demonstrate thoughtful consideration of the ideas and concepts that are presented in the course and provide new thoughts and insights relating directly to this topic. Your response should reflect graduate-level writing and APA standards. Be sure to adhere to Northcentral University’s Academic Integrity Policy.

Introduction to Business Statistics (7th ed.)

NCU School of Business Best Practice Guide for Quantitative Research Design and Methods in Dissertations


5/31/22, 12:44 PM BUS-7105 v3: Statistics I (7103872203) – BUS-7105 v3: Statistics I (7103872203)

https://ncuone.ncu.edu/d2l/le/content/258948/printsyllabus/PrintSyllabus 1/3

Books and Resources for this Week

Week 1

BUS-7105 v3: Statistics I (7103872203)

Introduction to Statistics and Relevance to the Dissertation

In this course, you will develop skills to help you make informed decisions in the business world. In particular, you will focus on the collection, analysis, interpretation, and reporting of data relevant to a business decision. But even more relevant to your immediate future is a consideration of how statistics can be used in dissertation research. Both perspectives are explored in this course.

In this course we will discuss basic research: traditional hypothesis testing through gathering, organizing, and analyzing data.

Heads up: Signature Assignment

Your culminating Signature Assignment (due in Week 8) will be a reflection of all that you have learned within the course, in the form of a presentation in which you display and explain a series of data analyses. While this assignment does not require that you complete work ahead of time, such as collecting data, you will want to look ahead to Week 8 in order to prepare. You can contact your professor if you have questions. The assignment includes many aspects of the course, including analyzing data, creating graphs using SPSS, analyzing relationships between variables, explaining results from the analysis, and more.

Be sure to review this week’s resources carefully. You are expected to apply the information from these resources when you prepare your assignments.



Introduction to Business Statistics (7th ed.) (External Learning Tool)

NCU School of Business Best Practice Guide for Quantitative Research Design and Methods in Dissertations (Link)

Week 1 – Assignment: Examine Statistical Analysis Within the Context of a Dissertation Topic

Assignment

Due June 5 at 11:59 PM


Upload your document and click the Submit to Dropbox button.

School of Business

First Edition. Published by the Center for Teaching and Learning, Northcentral University, 2020

Contributors: John Bennett, Mary Dereshiwsky, Robert Dodd, David Fogarty, John Frame, Raymie Grundhoefer, Larry Hughes, Sharon Kimmel, Vicki Lindsay, Edward Maggio, Gordon McClung, NCU Library Team, Susan Petroshius, Lonnie K. Stevans, Gergana Velkova, Steve Ziemba

In addition to the collaborative process that engendered this guide, it was also informed by the quantitative methods and statistics courses in the School of Business.

For comments or suggestions for the next edition, please contact John Frame: [email protected]


TABLE OF CONTENTS

Foreword

Introduction

Research Ethics and the IRB

Research Questions

Four Main Designs

Population and Sample

Sampling Method, Sample Design, and Sample Size

Surveys and Questionnaire Design

Pilot Study

Datasets

Analyzing Secondary Data

Observational Research

Multivariate vs. Univariate Analysis

Measurement of Variables

Descriptive Statistics and Exploratory Data Analysis (EDA)

Inferential Statistics

Alpha Level (level of significance, or p-value)

Hypotheses

Hypothesis Diagrams

Hypothesis Testing

T-Test

Analysis of Variance (ANOVA)

ANOVA Examples

Correlation

Regression Analysis

Factor Analysis

Power (Statistical Power)

Power Analysis

Measuring Validity and Reliability

Internal/External Validity

Selection of Parametric vs. Nonparametric Techniques

Presentation of Statistical Results and Explaining Quantitative Findings in a Narrative Report


Foreword

Dear School of Business Community,

Welcome to the Best Practice Guide for Quantitative Research Design and Methods in Dissertations!

With well over 700 doctoral students in the School of Business working on their dissertation this year, this guide serves as an important resource in helping us shape and implement quality doctoral-level research. Its primary purpose is to offer direction on quantitative research in School of Business dissertations, serving students as they craft and implement their research plans, and serving faculty as they mentor students and evaluate research design and methods in dissertations.

We encourage you to explore this guide. It is filled with details on important topics that will help ensure quality and consistency in quantitative research in the School of Business.

Thank you to the faculty and staff of the School of Business and wider NCU community that worked to create this guide. It is a great contribution to our School, and each of these individuals played an important role in its development.

– School of Business Leadership Team


Introduction

As an accredited university, NCU aims to have robust expectations and standards for dissertations produced by its students. This guide, developed collaboratively by NCU School of Business (SB) faculty, aims to provide guidance on best practice in quantitative research design and methods for SB dissertations.

While this guide can serve as a refresher to those less familiar with quantitative methods, it will also help ensure good practice and rigor across committees and students. To that end, this document is a guide to help students, as well as faculty, when judging the merits of student dissertation prospectuses, proposals, and manuscripts. Students should be familiar with the best practices in this guide and apply them to their dissertation.

Additional supports related to quantitative research design and methods are available in the NCU Dissertation Center (including several webinars), and statistics experts are available for 1-1 coaching through the NCU Academic Success Center.

Importantly, before students plan to embark on a quantitative research design, they need to be comfortable with quantitative analysis, including data analysis computer software, such as SPSS. If students are not comfortable with their level of skill in quantitative analysis, it is recommended that they consider how qualitative methods could be used to explore their research interests. Students interested in qualitative methods should consult the SB’s Best Practice Guide for Qualitative Research Design and Methods in Dissertations, published in 2019, and available in the Dissertation Center.

Research Ethics and the IRB

Research involving human participants involves certain ethical responsibilities on the part of the student and dissertation Chair. These responsibilities are an important part of the overall educational experience for the student, in that they learn that obtaining data and other information from participants needs to be done in a manner that respects the rights of the participant and the wishes of other organizations that might become involved in the research. As part of the research ethics review process, the Institutional Review Board (IRB) at NCU serves as a resource to provide guidance to students and faculty to ensure the ethical principles of Respect for Persons, Beneficence, and Justice are incorporated into the research design. The IRB review process is as much a part of students’ doctoral education as any other part of the dissertation process. The intention is not only to ensure studies are conducted ethically, but also that students understand the importance of ethics in research and how to design and conduct research that is consistent with federal regulations.

It is important to keep in mind that recruitment and data collection can only occur after receiving NCU IRB approval. The IRB process starts with IRB Manager, an online system that facilitates the submission and management of studies for review. Students should plan ahead and be sure to leave time for the IRB review to take place, as it may take up to 15 business days after submission of the IRB application to receive notification of the IRB’s determination. Also, it is possible that the application will not be approved the first time through, due to the need for additional information or clarification. These factors need to be kept in mind when constructing the study timeline. Additional variables that can impact the timeline include: securing site permission, site IRB approval (if applicable), international research, research involving sensitive topics or vulnerable populations, research in one’s place of employment, research involving the Department of Defense or Veterans’ Affairs, and the development of appropriate recruitment materials and an informed consent form, or, for studies involving minors, child assent and parental consent forms. These and other items need to be submitted as part of the IRB application and can significantly delay the review process if not present. For example, the inclusion of an informed consent form that does not use (or where the researcher has altered) the NCU informed consent form template will result in the application being returned to the student. As indicated in the Student-Chair Engagement section of this guide, it is important for students to work closely with their Chair in the lead-up to the IRB approval process.

A variety of resources are available for students and faculty as they navigate the IRB process. Guidance materials are available directly within IRB Manager and are easily accessed from within the application. Resources are also available via the NCU Dissertation Center. Finally, when questions come up, the IRB can be contacted at [email protected]. When doing so, be sure to include the name of the student in the subject line.

Please review the IRB website for further information and resources: https://ncu.libguides.com/irb/home

Research Questions

Research questions outline the problem to be investigated in a study, stated in the form of a question.

Research questions that describe data are called descriptive. Descriptive research questions typically ask “How” or “What.” (As explained elsewhere in this guide, research designs and methods that are solely descriptive are not sufficient for a doctoral-level dissertation. Thus, rigorous research questions that go beyond descriptive research need to be included in a dissertation, as explored below.)

Research questions that compare one or more groups are called comparative. Comparative research questions typically ask, “What is/are the difference(s)?” (for example, what are the differences between X and Y?).

Research questions that examine relationships are called correlational, or relationship, questions. More specifically, these questions typically ask, “What is the strength and direction of the linear relationship between the two variables in question?”

Research questions that consider predictions are called predictive research questions. These types of questions typically ask, “To what extent does X predict Y?” Predictive analysis may have one or many independent variable(s), which may be expressed as predictor variables. The dependent variable may be expressed as the outcome variable.

Each research question, with the exception of descriptive research questions, contains a minimum of two hypotheses: the null hypothesis and the alternative hypothesis.

Students are encouraged to get 1-1 coaching at the NCU Academic Success Center on their research questions and/or sign up for live group sessions offered weekly by the NCU Academic Success Center. More information can be found at: https://vac.ncu.edu/resources-for-statistics/
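The null/alternative pair for a comparative research question can be made concrete with a small worked example. The sketch below is illustrative only: the data are invented, and it uses plain Python rather than the SPSS workflow the guide assumes.

```python
from statistics import mean, stdev
from math import sqrt

# Comparative RQ (hypothetical data): do two customer groups differ
# in the mean number of units of Product X purchased?
# H0: mu_A = mu_B    (null hypothesis: no difference)
# Ha: mu_A != mu_B   (alternative hypothesis: a difference exists)
group_a = [12, 15, 14, 10, 13, 16, 11, 14]
group_b = [9, 11, 10, 8, 12, 10, 9, 11]

n_a, n_b = len(group_a), len(group_b)

# Pooled variance for an independent-samples t-test
pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
              (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
t_stat = (mean(group_a) - mean(group_b)) / sqrt(pooled_var * (1 / n_a + 1 / n_b))

# Two-tailed critical value for alpha = .05 with df = 14 is about 2.145
decision = "reject H0" if abs(t_stat) > 2.145 else "fail to reject H0"
print(f"t = {t_stat:.2f} -> {decision}")
```

Here the observed t exceeds the critical value, so the null hypothesis of no group difference is rejected; in practice SPSS reports the exact p-value rather than requiring a critical-value lookup.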

References and/or Suggested Reading:

Cramer, D., & Howitt, D. (2004). The SAGE dictionary of statistics. London: SAGE Publications, Ltd.

http://dissertation.laerd.com/how-to-structure-quantitative-research-questions-p2.php

Four Main Designs

There are four main designs that can be used with a quantitative methodology: experimental, quasi-experimental, correlational, and descriptive. Students need to look at their research study to figure out which design will be most appropriate to answer their research questions (but, as indicated elsewhere in this guide, a descriptive design is insufficient for a doctoral-level dissertation). The Methods Map Online Tool (see link below) is a fun and interesting interactive website that provides an overview of a number of methodological procedures.

Before researchers can begin to think about their research design, it is essential for them to begin at the foundation of the business research process: defining the problem. It is extremely important to define the problem carefully because this will determine the purpose of the research and the research design.

A brief introduction to the four main research designs follows:

Experimental Research

Experimentation is conducted in order to test a causal hypothesis (that is, if a researcher wants to determine if an independent variable (X) is the sole cause of any change in the dependent variable (Y)). In an experiment, a researcher manipulates the independent variable and measures its impact on the dependent variable while, at the same time, controlling for all other variables that may have influenced the dependent variable. These are referred to as extraneous or potentially confounding variables. An experiment is internally valid if it can be shown that the independent variable is the sole cause of any change in the dependent variable. In order to do so, three pieces of evidence are needed: (1) for X to be a cause of Y, X must precede Y in time; (2) X and Y must vary together; (3) for X to be a cause of Y, other possible causes of Y (alternative explanations) must be eliminated. In contrast to internal validity, external validity refers to whether the results of the experiment can be generalized to other populations, settings, etc. For instance, with respect to generalizing to the population, there would be better external validity if the sample was selected randomly from the population. This would have no impact on internal validity, however. Note that there is often a tradeoff between internal and external validity and the experimental setting (a lab vs. field experiment). A laboratory experiment is an artificial setting that allows the researcher better control over extraneous/potentially confounding variables. However, the artificiality of a lab experiment tends to lessen the external validity since a researcher will want to be able to generalize to a more realistic setting. Essentially, laboratory vs. field experiments represent opposite ends of a continuum having to do with the artificiality of the setting.

Quasi-Experimental Research

Quasi-experimental designs are used when it is not viable to randomly assign participants to treatment groups. In many real-life social situations, groups of interest may be naturally occurring or pre-existing. There may also be ethical reasons why randomization to groups is not practical. Manipulation of an independent variable (also referred to as a treatment variable), comparison groups (also referred to as experimental units), and outcome measures are present in quasi-experimental designs. Nonequivalent groups are also present because of the inability to randomly assign participants to comparison groups. Because of the inability, or the decision not, to use random assignment to groups, it is difficult to compare groups and infer treatment-caused changes. Quasi-experimental designs are used by researchers in these situations. Common non-time-series quasi-experimental designs include Cohort Designs, Counterbalanced Designs, Non-equivalent Control Group Designs, Regression-Discontinuity Designs, Separate-Sample Pretest-Posttest Designs, and Separate-Sample Pretest-Posttest Control Group Designs.

Correlational Research

If the research questions focus on a relationship between multiple variables, a correlational design will likely be used. Research is correlational when at least two, and often more, variables/conditions are observed and measured and the extent of the relationship is estimated based on tools such as the Pearson Product Moment Correlation, the Spearman Rank Correlation Coefficient, or even Kendall’s Rank Correlation Coefficient. In fact, correlational research is often descriptive in that the associations are reported to the reader, often in the same table as the means and standard deviations. In a published research study, a reader can use correlations and the other descriptive statistics to get a sense of the data before reading about t-tests, ANOVA, or multiple regression, whichever the author(s) used in their analysis. Causal inference is distinct from prediction or forecasting, and conflating them is a common error made by students and novice researchers (Cook & Campbell, 1979). A caution in correlational research is that, as the famous phrase goes, “correlation does not imply causality.” There are no “dependent” or “independent” variables in correlational research – we’re simply comparing the variables on the basis of association and cannot assert that one causes another.
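Two of the coefficients named above can be illustrated in a few lines of plain Python (the guide itself assumes SPSS for such computations). The data are invented; the sketch also shows that Spearman’s rho is simply a Pearson correlation computed on ranks.

```python
from statistics import mean, stdev

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

def ranks(values):
    """Rank values from 1..n (no tie handling; for illustration only)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

# Hypothetical data: hours of sales training vs. units sold
hours = [1, 2, 3, 4, 5, 6]
sales = [2, 4, 5, 7, 6, 8]

r = pearson(hours, sales)                  # linear association
rho = pearson(ranks(hours), ranks(sales))  # Spearman = Pearson on ranks
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```

Both coefficients fall between -1 and +1, reporting the strength and direction of the association without implying that training hours cause sales.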

Descriptive Research

Descriptive research (see the section “Descriptive Statistics and Exploratory Data Analysis (EDA)” later in this guide) describes individuals in a study, typically conducted in one of three ways: (a) observational – viewing and recording participants (see “Observational Research” in this guide); (b) case study – in-depth study of an individual or group of individuals; and (c) survey – a brief interview or discussion with an individual about a specific topic. Descriptive research designs are common in fields related to the behavioral and social sciences to observe phenomena such as natural behavior, consumer habits, individual morality, and ethical climate. The observations of the subject should occur in an unchanged natural environment. A weakness of descriptive research designs is that observational studies are not repeatable and not replicable. Descriptive research designs are often designed in a manner that allows them to be a precursor to quantitative research. Descriptive research does not involve statistical testing; thus it is considered to lack reliability, validity, and scientific rigor. As discussed elsewhere in this guide, a descriptive research design alone is insufficient for a doctoral-level dissertation at NCU.

References and/or Suggested Reading:

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston, MA: Houghton Mifflin.

Research Methods Knowledge Base website: https://socialresearchmethods.net/kb/quasiexp.php

The SAGE handbook of social research methods (2008). London, United Kingdom: SAGE Publications, Ltd. https://doi-org.proxy1.ncu.edu/10.4135/9781446212165

Sage Methods Map Online Tool: http://methods.sagepub.com.proxy1.ncu.edu/methods-map

Population and Sample

The population represents the totality of units under study, or to whom we wish to generalize or project the results of statistical research. These are usually, but not always, people.

Usually, it is not practical to do a census of an entire population in a single research study, due to time and cost factors. For this reason, it is necessary to select a sample from that population.

A sample in a dissertation needs to be a substantial number (see “Power Analysis” in this guide), and should be determined based on best practices in quantitative research. Students should be aware that quantitative research demands a suitable amount of data, and that the response rate from samples (such as the response rate for surveys) will typically be very low. Thus, a large number of persons will need to be surveyed in order to obtain an adequate amount of data.

After sampling, it is possible to generalize the sample results to the population from which the sample was selected. Here are some commonly applied ways to select a sample in quantitative research:

Simple random sample:
Every element of the target population has an equal chance of being selected for the sample. This is especially valuable when doing experimental studies.

Stratified sample:
In this sampling method, it is recognized that there is not one overall homogeneous population but, instead, subpopulations where the subgroups differ from one another. For example, a researcher may want to see if there is a significant difference in the average number of units of Product X purchased by men and women. The researcher would subdivide, or ‘stratify,’ the overall population into men vs. women (strata or subgroups), and randomly select a sample from each gender to ensure each is adequately represented in the overall sample.

Cluster sample:
In this sampling method, the researcher randomly draws intact groups (‘clusters’) instead of individuals for the study. For instance, a cluster could be an entire division or department of an organization. The researcher then includes all sampling units (e.g., persons, employees) in that randomly drawn cluster in the study. The idea is to simulate the randomness of a true random sample, but without having to select individuals one by one.

Systematic sample:
In this sampling method, the researcher lists the elements of the target population, makes a random start in the list (‘sampling frame’), and then systematically cycles through the list in a predictable pattern (e.g., every third sampling unit; every fifth sampling unit) to select subjects. This pattern of cycling through the list is known as the sampling fraction. The items in the list should be scrambled in random order before beginning to cycle through that list.
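Given an in-memory list of population members (a sampling frame), the four methods above can be sketched in a few lines of Python. The frame, group sizes, and field names here are invented for illustration.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical sampling frame: 100 employees, half men, half women
frame = [{"id": i, "gender": "M" if i % 2 == 0 else "F"} for i in range(100)]

# Simple random sample: every element has an equal chance of selection
simple = random.sample(frame, k=10)

# Stratified sample: random selection within each stratum (here, gender)
men = [p for p in frame if p["gender"] == "M"]
women = [p for p in frame if p["gender"] == "F"]
stratified = random.sample(men, k=5) + random.sample(women, k=5)

# Cluster sample: draw an intact group (here, departments of 10) at random,
# then include every sampling unit in the drawn cluster
clusters = [frame[i:i + 10] for i in range(0, 100, 10)]
drawn_cluster = random.choice(clusters)

# Systematic sample: scramble the frame, take a random start, then every k-th
shuffled = random.sample(frame, k=len(frame))
k = 10
start = random.randrange(k)
systematic = shuffled[start::k]

print(len(simple), len(stratified), len(drawn_cluster), len(systematic))
```

Each approach yields ten units here, but only the stratified draw guarantees equal representation of men and women.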

Sampling Method, Sample Design, and Sample Size

In most cases, a researcher will not contact everyone in the population (a census), but rather take a subset of the population, a sample. While the selection of a sample can involve a non-random (non-probability) procedure, in most quantitative studies researchers strive to use a probability procedure in which every unit in the population has a known chance of being included in the sample. There are a number of alternative ways to generate a random sample that may vary over time, with respect to cost, and in the amount of information needed to draw the sample. The researcher often needs a list of the population in order to select the sampling elements (a sampling frame) and has to determine the size of the sample as well.

Students should be aware that, because response rates to surveys are usually very low, a very large number of surveys will need to be sent out in order to obtain the sample size expected (see “Power Analysis” in this guide). Students should discuss the number of surveys they will need to send out with their Chair, and should also obtain guidance from the statistics coaches at the NCU Academic Success Center.

In addition to sampling, a critical issue that is unique to quantitative studies is the measurement of variables. A variable is what a researcher calls the construct that is identified in the research question and hypotheses. Examples would include gender, job satisfaction, behavioral intention, attitudes, etc. Often these concepts are abstract and not directly related to physical reality, such as a person’s intelligence. Before a concept can be measured it needs to be defined, both conceptually and operationally. An operational definition specifies the operations necessary to measure the construct. For instance, intelligence may be conceptually defined as the ability to think abstractly. It may be operationalized as the score on an IQ test. Measurement is much more complex than emerging scholars believe. This is in part due to the complexity of operationalizing an abstract object that is not related to physical reality. In addition, many constructs are multi-dimensional. The good news is that scholars build on one another’s work, and this includes the measurement of particular constructs. While researchers may not agree on how a construct is to be operationalized, this information is typically shared in the publication of the research. In fact, there are entire books and websites devoted to providing measurement scales. This is another reason why it is so important to know the literature in the discipline and the specific topic of interest.
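An operational definition often takes the form of a composite scale score. The sketch below is a hypothetical illustration (invented items and responses, not a validated instrument) of scoring one respondent on a multi-item construct.

```python
from statistics import mean

# Hypothetical operationalization: 'job satisfaction' measured as the
# mean of five 5-point Likert items (1 = strongly disagree ... 5 = strongly agree)
likert_items = [4, 5, 3, 4, 4]  # one respondent's answers

# Negatively worded items must be reverse-scored before averaging;
# suppose item 3 is negatively worded, so flip it on the 1-5 scale
likert_items[2] = 6 - likert_items[2]  # 3 is the scale midpoint, so unchanged

job_satisfaction = mean(likert_items)
print(f"{job_satisfaction:.1f}")  # -> 4.0
```

Published scales typically also report a reliability coefficient (e.g., Cronbach’s alpha) for such composites, which is one reason to reuse established instruments from the literature.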

Sample Design

The two major decisions in designing a sampling plan are the sampling method and the sample size. Given the desire to generalize the results of a quantitative study, researchers will use a probability procedure if at all possible. Options include simple random sampling, systematic sampling, stratified sampling, and cluster sampling (see "Population and Sample" in this guide). A common form of cluster sampling is area sampling, where the clusters are geographical areas. The choice of method depends on a number of factors, including cost, information about and knowledge of the population, accuracy, and the time required. The factors that must be specified to determine the appropriate sample size are the variability in the population, the degree of acceptable error, and the desired level of confidence. Note that it is not the size of the population that matters but the degree of heterogeneity within it. While a researcher can determine the necessary sample size statistically, this figure may have to be modified for practical reasons. For instance, to obtain the desired sample size at the chosen levels of precision and confidence, the initial sample may have to be larger. This may be due to the completion rate (the proportion of selected respondents who actually complete the interview or questionnaire, which, as stated above, is typically very low), as well as the incidence rate (the percentage of people in the population eligible to participate). A study that is not designed adequately may be largely a waste of time: a researcher must have a sample large enough to ensure that all statistical assumptions are met, while also considering cost and the feasibility of collecting the desired number of sample elements.

Sample Size

There are four factors involved in calculating sample size:

Statistical test: the sample size is partly a function of the statistical test used. Some tests (e.g., chi-squared) require larger samples to detect a difference than others (e.g., ANCOVA).

Expected/estimated effect size: the effect size expresses the strength of the relationship being investigated. In the language of statistics, a common effect size is the difference between the mean scores of two groups divided by the pooled standard deviation; this is called Cohen's d. A researcher will calculate an effect size as part of the analysis of the data in order to determine whether something meaningful has been found (not merely statistically significant). In advance of doing a study, however, a researcher must estimate the expected effect size.
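As an illustration of the definition above, Cohen's d can be computed with Python's standard library; the two short score lists are invented for the example.

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Difference between group means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation from the two sample variances.
    pooled = sqrt(((n_a - 1) * stdev(group_a) ** 2 +
                   (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2))
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical test scores for two groups:
d = cohens_d([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
```

By Cohen's widely used conventions, d of roughly .2 is a small effect, .5 medium, and .8 large; the invented data above yield d of about 1.26, a large effect.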

Alpha (α): the alpha level is the probability of a Type I error, that is, of rejecting the null hypothesis when it is true. By convention, this is set at .05. It is best to use the literature, as well as good judgment, to justify an alpha level that makes sense for a study. This justification will involve weighing the danger of a Type I error against the cost in resources of avoiding it. The most commonly used α levels are .01, .05, and .10.

Beta (β): the beta level is the probability of a Type II error, of accepting the null hypothesis when it is false; in other words, of failing to detect a difference when one exists. As with alpha, a researcher sets beta based upon judgment. The convention is .2, which yields an acceptable power of .8 (1 − β) (see "Power Analysis" in this guide).
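The four factors above combine in an a priori power analysis. A minimal sketch, using the common normal-approximation formula for a two-sided, two-sample comparison of means; the expected effect size of 0.5 is an assumed, hypothetical input, and dedicated software (e.g., G*Power) would refine the result slightly with an exact t-based calculation.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided two-sample test.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the expected Cohen's d.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to power (1 - beta)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A medium expected effect (d = 0.5) with the conventional alpha = .05, power = .8:
required = n_per_group(0.5)
```

With these inputs the approximation calls for 63 participants per group; note how the required n grows rapidly as the expected effect size shrinks, which is why the effect-size estimate matters so much.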


Surveys and Questionnaire Design

The term 'questionnaire' is often confused with 'survey,' but they are actually quite different. A questionnaire is a measuring instrument used in conjunction with a survey: basically, a list of questions used to gather information from respondents.

A survey is a research method involving communication with respondents. In quantitative research, asking questions of many people at once, whether by questionnaire, phone, or interview, is called a survey. The survey is the distribution of the questions and the creation of the data.

If it is difficult to remember the difference between a questionnaire and a survey, think of the survey as the process of disseminating the research items, collecting responses, and entering them into the computer. Surveys can be conducted on paper, by telephone, face-to-face, or on the web.

A questionnaire is used with all surveys. In a telephone or personal interview, the interviewer uses the questionnaire to ask the questions and record responses. In this way, all respondents are asked the same questions.

The design of questionnaires is much more complex than one would think; many researchers believe it to be more art than science. A researcher needs to avoid leading or double-barreled questions, as explored further below. In addition, all respondents should be given the same questions in the same order.

In terms of administering a survey, an internet survey, while low-cost and quick, lacks control of the sample and tends to have low response rates. A researcher needs to carefully consider the objectives of the study, the questions that need to be asked, and the target respondents, in addition to the pros and cons of the alternatives. A combination of methods is also possible. For instance, self-administered questionnaires could be hand-delivered to encourage participation, but left with envelopes to be returned via mail.

Once data is collected, it is too late to make changes, so it is critical that sufficient time and effort be invested in developing the questionnaire. This involves careful articulation of conceptual and operational definitions, as well as of the measurement scales used, in light of the intended analysis. In fact, a researcher should have a plan for analysis before any data is collected.

Thorough communication between the student and Chair is imperative when designing a questionnaire, as well as during data collection.

Types of Questions

What are good questions, and what are bad questions? Developing an instrument is difficult, and a researcher should consider whether another researcher's questionnaire can be used. Questions must be worded carefully to ensure consistency between the sample and the target population, and to ensure that the items truly measure what they are supposed to measure. If the items on a questionnaire are neither valid nor reliable, the wrong questions will be asked and the responses will not relate to the research questions. In that case, the questionnaire did not measure what it was intended to measure, and the data collected will not answer the research questions. Therefore, the items on the questionnaire must ask exactly what is meant to be asked. A student must communicate closely with his or her dissertation Chair about this issue.

Many methodological textbooks describe numerous ways to construct a suitable questionnaire. However, every questionnaire changes with its topic, so this is a skill best learned through repetition. The work by Krosnick and Presser (2009), referenced below, is highly encouraged, as the authors discuss a number of ideas for writing a sound questionnaire. Some of these are:

* The questions need to be at a basic educational level. Questions must not use colloquialisms, jargon, or slang unfamiliar to the participants. The simplest wording is the best wording for a question, and shorter questions are better.

* The response options for each question should be exclusive and exhaustive. This means that for a closed-ended question, all of the possible answers must appear among the options, and the options should not overlap. For example, if a question asks for someone's age, and the participants range from 18 through 90, the student should not offer these as answers:

a. 18-29

b. 29-39

c. 39-49

d. 49-79

These options are neither exclusive nor exhaustive. If someone were age 29, which answer would that person pick? If the person were age 82, which answer would that person pick?

* Do not ask leading or loaded questions. A leading question contains non-neutral wording: it suggests that something is good or bad, or steers the respondent toward an answer. An example is: Do you like your Honda because of the comfortable seating?

* Give respondents a way out of answering a question. If a respondent does not want to answer a question, what should he or she do? Unless a question must have a "yes" or "no" response, a researcher should allow a way out, for example, by offering the option "I don't know" or "I don't wish to answer." Otherwise, the respondent may stop answering the questionnaire.

* Do not ask "double-barreled" questions, which ask two questions instead of one. An example would be: When was the last time that you updated your computer and your printer?

Finally, remember that actual behavior cannot be measured via a survey; a survey only measures reports of behavior.
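The exclusive-and-exhaustive rule above can even be checked mechanically before a questionnaire is fielded. A small sketch in Python; the bracket boundaries are hypothetical, chosen only to cover ages 18 through 90 without gaps or overlaps.

```python
# Hypothetical, non-overlapping age brackets covering the 18-90 target population.
BRACKETS = [(18, 29), (30, 44), (45, 59), (60, 90)]

def age_bracket(age):
    """Return the single response option that contains `age`."""
    matches = [f"{lo}-{hi}" for lo, hi in BRACKETS if lo <= age <= hi]
    if len(matches) != 1:
        # Zero matches: not exhaustive. Two or more: not exclusive.
        raise ValueError(f"options are not exclusive/exhaustive for age {age}")
    return matches[0]

# Every age in the target population maps to exactly one option:
labels = {age: age_bracket(age) for age in range(18, 91)}
```

Unlike the flawed example above (where a 29-year-old matches two options and an 82-year-old matches none), every eligible age here maps to exactly one response.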

References and/or Suggested Reading:

Bryman, A. (2006). Integrating quantitative and qualitative research: How is it done? Qualitative Research, 6(1), 97-113.

Heale, R., & Twycross, A. (2015). Validity and reliability in quantitative studies. Evidence-Based Nursing, 18(3), 66-67.

Krosnick, J. A., & Presser, S. (2009). Question and questionnaire design. In J. D. Wright & P. V. Marsden (Eds.), Handbook of survey research (2nd ed.). San Diego, CA: Elsevier. Retrieved from https://web.stanford.edu/dept/communication/faculty/krosnick/docs/2009/2009_handbook_krosnick.pdf

SurveyMonkey (2019). 5 common survey question mistakes that'll ruin your data. Retrieved from https://www.surveymonkey.com/mp/5-common-survey-mistakes-ruin-your-data/

Avoiding poor survey questions. Available at: http://faculty.nps.edu/mjdixon/resources/Stats-Class/Bad-Questions-Lecture-Examples.doc

Pilot Study

A pilot study is a preliminary small-scale study. Its purpose is to test certain aspects of what will become the main research study. For example, a newly developed survey may undergo testing through a pilot study for refinement purposes. This is just one example, as a pilot study can be applied in many different situations. Pilot studies are useful for determining the best research methodology to use, troubleshooting a research instrument, collecting preliminary data for a grant application, or determining whether a research study is even feasible.

A pilot study is still research. If it is determined that a pilot study is needed, it must undergo the same approval processes as any other research study, including review by the IRB. As a result, including a pilot study in one's research will add time, and this added time can be substantial. Generally speaking, a pilot study is discouraged for a doctoral dissertation because of the time involved. If one is using a preexisting survey or other research instrument that has already been vetted and accepted, a pilot study is not necessary. However, a pilot study may be needed when an instrument is being developed for the first time; in such cases, establishing validity and reliability may require one.

References and/or Suggested Reading:

NCU IRB (2019). Pilot studies and field tests. Available at: https://commons.ncu.edu/sites/default/files/entity/paragraph/2019/Pilot%20Studies%20and%20Field%20Tests%2004092019.pdf

Datasets

A dataset (also spelled 'data set') is a collection of raw statistics and information generated by a research study. Datasets produced by government agencies or nonprofit organizations can usually be downloaded free of charge, although some nonprofit organizations may charge a fee for access to their datasets or restrict access. Datasets developed by for-profit companies are often available for a fee.

Most datasets can be located by identifying the agency or organization that focuses on the specific research area of interest. For example, if one is interested in public opinion on social issues, the Pew Research Center would be a good place to look. For data about population, the U.S. government's Population Estimates Program from American FactFinder would be a good source.

An "open data" philosophy, holding that data should be freely accessible, is becoming more common among governments and organizations around the world. Open data efforts have been led by both governments and non-governmental organizations, such as the Open Knowledge Foundation. Learn more by exploring The Open Data Handbook.

One factor to consider when utilizing a dataset that is not publicly available is the presence of confidential information. For example, patient disease registries are an increasingly common way to conduct medical research: datasets of patients with a common diagnosis are constructed to assess disease progress and the best treatment over time. Such datasets would often include protected health information, requiring additional safeguards for their use.

When submitting a study that involves the use of a dataset to the IRB, be prepared to indicate whether the dataset is publicly available or whether permission is needed to access it. If permission is needed, documentation of having received it must be part of the IRB application. If protected or confidential information is present in the dataset, a description of how that information will be safeguarded is also required. See "Analyzing Secondary Data" in this guide for further information about datasets.

Some links to business datasets include:

• Damodaran Online: Corporate Finance and Valuation – NYU Stern School of Business, Dr. Aswath Damodaran

• International Monetary Fund Data & Statistics – The IMF publishes a range of time-series data on IMF lending, exchange rates, and other economic and financial indicators.

• IMF DataMapper

• IMF Fiscal Rules Dataset (1985-2013)

• Mergent Online – An NCU Library database providing detailed financial records for company research, including up to 15 years of historical data.

• National Longitudinal Surveys – Bureau of Labor Statistics

• Organization for Economic Co-operation and Development Data

• Quandl – Time-series, numerical-only data for economics, finance, markets, and energy; features a step-by-step wizard for finding and compiling data.

• SAGE Edge Datasets – Click on Links to Business Datasets to download a Word file containing links to business datasets available online.

• Statistical Abstract of the United States (2012): Banking, Finance, & Insurance

• Statistical Abstract of the United States (2012): Business Enterprise

• Surveys of Consumers – Thomson Reuters & University of Michigan

• U.S. Bureau of Economic Analysis

For more information on datasets, please see the NCU Library's Datasets LibGuide.

Analyzing Secondary Data

Secondary data is data that was collected by someone else; thus, the data in a dataset is secondary data. It is important to ensure the accuracy of secondary data, because poor data will result in a poor study. With secondary data, the manner in which the data was collected, and consequently its quality, are beyond one's control. Therefore, it is important to carefully review how a dataset was constructed. Some factors to consider are:

• The purpose for which the data was originally collected

• The specific methods used in the data collection

• The population the data was collected from and the validity of the sample used

• The credibility of the individual or organization who collected the data

• The limits of the dataset

Another factor to consider is how the data was categorized or coded in the dataset, as this may influence how the analysis can proceed. For example, the data may have been modified in some manner, or the full range of data needed for a study may be spread across different categories of the dataset. Measurement error, whether or not any bias was intentional (e.g., a dataset put out by a political party), may also be present and needs to be considered (see "Datasets" in this guide for additional information about datasets and secondary data).

Descriptive statistics (see "Descriptive Statistics and Exploratory Data Analysis (EDA)" in this guide), as the term suggests, involve the summarization of data. For example, this may include the range of the data, the number of data points, the mean, median, and mode, the standard deviation, and a 95% confidence interval. As explained in that section of this guide, this type of analysis, while important, is not sufficient for a doctoral-level dissertation.

Inferential statistics involve subjecting the data to statistical tests, such as tests of significance. A common inferential test involves detecting a statistically significant correlation between two sets of data from a dataset. Other statistical techniques often applied to datasets include analysis of variance, regression analysis, and logistic regression. The type of statistical test applied depends on the research question and the nature of the study (see "Inferential Statistics" in this guide).

References and/or Suggested Reading:

https://www.managementstudyguide.com/secondary_data.htm

Observational Research

Observational research is a form of non-experimental research in which a researcher observes ongoing behavior in a chosen setting (Sauro, 2015). Three types of observational research are discussed below.

Naturalistic Observation

This form of research occurs in the everyday setting of the participants, as they live life normally. Thus, there is no intervention by the researcher to influence the environment (McLeod, 2015).

Participant Observation

In participant observation, the researcher enters the environment of the participants, for example, as a member of a group. The purpose is to observe participant behaviors that might not otherwise be discoverable by the researcher. Such observation can be either covert or overt in terms of the participants' awareness of the researcher. The advantage of this form of observational research is that it yields greater insight into the participants (McLeod, 2015).

Controlled Observation

This form of observational research is often used by universities or labs and is carried out under specifically designed conditions. Such conditions are described by the researcher in detail and designed with close attention to detail. Participants experience the same situation so that their reactions can be monitored. A key advantage of this method is that the study is reproducible (McLeod, 2015).

References and/or Suggested Reading:

CIRT (Center for Innovation in Research and Teaching), Grand Canyon University (n.d.). Observational method. Retrieved from https://cirt.gcu.edu/research/developmentresources/research_ready/descriptive/observational

McLeod, S. (2015). Observation methods. SimplyPsychology. Retrieved from https://www.simplypsychology.org/observation.html

Sauro, J. (2015). 4 types of observational research. MeasuringU. Retrieved from https://measuringu.com/observation-role/

Multivariate vs. Univariate Analysis

When a statistical analysis is performed on a single variable, it is known as Univariate Analysis. Statistical methods such as one-sample and two-sample t-tests and ANOVA are examples of univariate statistical procedures.

Consider the population consisting of all U.S. firms with more than 100 employees. A researcher may wish to analyze the CEO compensation of these firms in a particular year. She may treat CEO compensation as the criterion, or dependent, variable (generically known as Y) and examine three other factors as predictors, or explainers, of CEO compensation: experience, gender, and corporate assets.

The following figure depicts this:

[Figure: CEO Compensation ($) as the dependent variable, with CEO Experience, CEO Gender, and Corporate Assets ($) as predictors, for the population of all firms with more than 100 employees.]

Thus, in Multivariate Analysis, a researcher studies many characteristics of a population, whereas in Univariate Analysis, a researcher analyzes a single characteristic. The most fundamental difference between the two is that multivariate statistical techniques take into account the inter-relationships among variables, while univariate statistical methods do not.

Some widely used multivariate statistical methods include:

• Regression and Correlation Analysis

• Canonical Correlation Analysis

• Principal Components and Factor Analysis

• Linear Structural Relations Models (LISREL)

• Multivariate Analysis of Variance Models (MANOVA)

• Cluster Analysis
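The defining feature noted above, that multivariate techniques use the inter-relationships among variables, can be illustrated by computing a correlation matrix, a usual first step before methods such as regression or factor analysis. A sketch in standard-library Python; the three short data columns (echoing the CEO example) are invented for illustration.

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical data: experience (years), assets ($M), compensation ($M)
data = {
    "experience": [3, 5, 8, 12, 20],
    "assets": [10, 12, 25, 30, 55],
    "compensation": [1.0, 1.4, 2.2, 2.9, 4.8],
}

# The correlation matrix captures the pairwise relationships that a
# univariate summary of each column alone would never reveal.
names = list(data)
matrix = {a: {b: round(pearson_r(data[a], data[b]), 3) for b in names}
          for a in names}
```

Multivariate procedures such as multiple regression or factor analysis operate on exactly this kind of inter-variable structure rather than on one column at a time.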


For tutorials on SPSS and Multivariate Analysis, please see: https://www.youtube.com/results?search_query=spss+multivariate+analysis

Measurement of Variables

A central concept in statistics is the level of measurement of variables. It is so important to everything a researcher does with data that it is usually taught within the first week of every introductory statistics course. The four levels of measurement are (in hierarchical order): nominal, ordinal, interval, and ratio.

Nominal: These are unordered categorical variables. As Grace-Martin (n.d.) states, "These can be either binary (only two categories, like gender: male or female) or multinomial (more than two categories, like marital status: married, divorced, never married, widowed, separated)." There is no logical order to the categories, and it makes no sense to rank, add, or subtract nominally measured data.

Relevant statistical methods: Since arithmetic operations are not relevant for nominal variables, the only descriptive measure that can be used is the calculation of frequencies. In order to use statistical inference methods on nominal data, it must first be converted to the interval scale.

Ordinal: These are ordered categories (still categorical, but in an order), such as Likert items with responses like "Never, Sometimes, Often, Always" (Grace-Martin, n.d.). There is a logical order to ordinal data, but since the differences between ordinal values are not meaningful, it makes no sense to perform addition or subtraction on ordinal data.

Relevant statistical methods: Of the measures of central tendency, only the median can be meaningfully calculated using ordinal data. Parametric statistical inference procedures should not be used with ordinal data.

Interval: As Grace-Martin (n.d.) states, these are "numerical values without a true zero point. The idea here is the intervals between the values are equal and meaningful, but the numbers themselves are arbitrary. 0 (zero) does not indicate a complete lack of the quantity being measured. IQ and degrees Celsius or Fahrenheit are both interval." Measurements belonging to this category can be counted, ranked, added, or subtracted.

Relevant statistical methods: All descriptive measures of central tendency and dispersion can be calculated using interval data. In addition, all of the most powerful parametric statistical methods (e.g., t-tests, ANOVA, regression/correlation, factor analysis) can meaningfully use data measured at the interval scale.

Ratio: Ratio data are numerical values having the same properties as interval data, but with a true zero point. Most measurements in the physical sciences, engineering, and economics are done on ratio scales.

Relevant statistical methods: All descriptive measures of central tendency and dispersion can be calculated using ratio data. In addition, all of the most powerful parametric statistical methods (e.g., t-tests, ANOVA, regression/correlation, factor analysis) can meaningfully use data measured at the ratio scale.


It is important to remember that the most powerful statistical techniques only yield meaningful results when the variables used are measured at either the interval or the ratio scale.

As Grace-Martin (n.d.) states, "Interval and Ratio variables can be further split into two types: discrete and continuous. Discrete variables, like counts, can only take on whole numbers: number of children in a family, number of days missed from work. Continuous variables can take on any number, even beyond the decimal point. Not always obvious is that these levels of measurement are not only about the variable itself. Also important are the meaning of the variable within the research context and how it was measured."

Discrete interval or ratio variables can be analyzed using statistical procedures such as chi-squared tests, Poisson regression, and negative binomial regression.

References and/or Suggested Reading:

Grace-Martin, K. (n.d.). When a variable's level of measurement isn't obvious. The Analysis Factor. Retrieved from https://www.theanalysisfactor.com/level-of-measurement-not-obvious/

Descriptive Statistics and Exploratory Data Analysis (EDA)

Various kinds of statistical methods may be utilized in any study (including a dissertation), and some of these techniques prove more challenging than others because of the concepts and mathematics involved. Descriptive statistics are numerical summaries used to describe or explain empirical information or data, and they sit on the less difficult side of the spectrum. Descriptive statistics, by themselves, are not rigorous enough to be used exclusively for data analysis in a doctoral-level dissertation. Nevertheless, exploratory data analysis (EDA) should always be performed and presented at the beginning of any analysis. More rigorous and diverse statistical procedures include inferential statistics (see "Inferential Statistics" in this guide).

EDA should include both numerical summaries and visual displays (e.g., figures, tables, graphs, and charts). Numerical summaries are used to help understand:

• the central tendency of the data (mean, median, mode);

• the dispersion of the data (variance, standard deviation, range); and

• the shape and type of the frequency distribution of the data (Normal Distribution or other).

Please note that, above, "data" refers to a hypothetical two-dimensional spreadsheet in which the rows represent the subjects (or time periods) and the columns represent the variables. For example, a researcher might collect responses to 25 rating items on an employee satisfaction survey from 100 subjects; the columns for each subject would contain that subject's ratings on each of the 25 survey questions.


Given a sample of n observations from a population represented by a single random variable X, the sample measures of central tendency are the mean, the median, and the mode; the key sample measures of dispersion are the variance and the standard deviation.
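For reference, the standard sample formulas, writing the n observed values as x_1, …, x_n, are (the median is the middle value of the ordered sample, and the mode the most frequently occurring value):

```latex
\bar{x} \;=\; \frac{1}{n}\sum_{i=1}^{n} x_i ,
\qquad
s^{2} \;=\; \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2} ,
\qquad
s \;=\; \sqrt{s^{2}}
```

The n − 1 divisor in s² is the usual unbiased sample form; its square root s is the standard deviation discussed next.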

The variance is not easily interpretable, since its formula consists of sums of squares. A more interpretable measure of dispersion is the standard deviation, which is calculated by taking the square root of the variance. Simply put, the standard deviation tells one how far, on average, each data value lies from the mean. In finance and economics, the standard deviation is used as a measure of error or "risk": a larger standard deviation is associated with more error or higher risk (e.g., stock market volatility).

It is also important to know what the overall population (or sample) looks like (e.g., is it Normally Distributed or something else?). The best way to see this is to plot a histogram and examine its shape. This link can be helpful for creating a histogram in SPSS: https://www.youtube.com/results?search_query=SPSS+25+descriptive+statistics+histogram.

If the histogram looks bell-shaped, then the data probably came from a Normal Distribution. However, looks can be deceiving, so it is more precise to formally test whether the data come from (or look like) a Normal Distribution. Tutorials on testing for the presence or absence of a Normal Distribution are available at this link: https://www.youtube.com/results?search_query=SPSS+25+normal+distribution+testing.

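As a rough equivalent of the SPSS tutorials, a formal normality check can be sketched in Python with SciPy’s Shapiro-Wilk test, one common test of normality (the data below are simulated for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=100, scale=15, size=200)  # simulated Normal data

# Shapiro-Wilk test; H0: the data come from a Normal Distribution.
# A p-value above alpha (e.g., .05) means we fail to reject normality.
stat, p = stats.shapiro(sample)
print(f"W = {stat:.3f}, p = {p:.3f}")
```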


It is important to understand why a researcher would want to know whether or not the population is Normally Distributed. When the data are Normally Distributed, things are better defined, because the sampling distribution of the statistics used for estimation and testing is also Normal. However, it is not critical if the data are found to deviate substantially from a Normal Distribution. There are two reasons why:

• one can always use certain mathematical transformations of the data to try to induce a Normal Distribution (e.g., natural log, square root, etc.); and

• if the sample is large enough (n > 100), it can be assumed that, while the population may not be Normal, the sampling distribution of the statistic used to test the hypothesis will be Normal, regardless of the actual shape of the population distribution. (This is known as the Central Limit Theorem.) In other words, as long as the sample is large (n > 100), one does not have to worry about whether or not the data follow a Normal Distribution.
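The Central Limit Theorem can be demonstrated with a small simulation: even when the population is clearly skewed, the sampling distribution of the mean is approximately Normal for large samples. A sketch in Python (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# A clearly non-Normal population: exponential (right-skewed), mean = 2.0
population = rng.exponential(scale=2.0, size=100_000)

# Draw many samples of n > 100 and record each sample mean
sample_means = [rng.choice(population, size=150).mean() for _ in range(2000)]

# The sampling distribution of the mean clusters tightly around the
# population mean (2.0), and its histogram looks bell-shaped even
# though the population itself is strongly skewed.
print(round(np.mean(sample_means), 2))
```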

Inferential Statistics

Inferential statistics is about taking samples and using the sample results to make inferences about population parameters, such as the mean and the standard deviation. In other words, sample data are used to gain insight into a whole population’s characteristics. This makes sense because it is rare to be able to take a complete census of a population. Because of expense, time, and a variety of other factors, it makes much more sense to draw a random sample from the population.

An example of using inferential statistics is a researcher who is interested in studying the personality traits of accountants vs. sales representatives. She cannot survey or interview all accountants and sales representatives that exist. Instead, the researcher can observe a smaller segment, or sample, of people who work in these fields, and ensure that this sample is representative of the population under study.

There are two (2) requirements necessary for establishing that a sample is representative of the larger population:

• when the sample is chosen, every element in the population must have an equal chance of being selected; and

• each sample selection is independent of all other sample selections.

Ensuring independence in the sampling process is necessary, and not that difficult, when one is able to run one’s own experiment and collect data. However, if a researcher is using data that was collected by another entity, then he or she must make sure that the samples used are representative before drawing any conclusions. The “equally-likely” issue is not a problem when the population size is large.

As stated above, the purpose of taking a sample from a larger population is to make inferences about that population by using the information in the sample. The sample mean (X̄) is used to estimate the population mean (μ), and the sample standard deviation (S) is an estimate of the population standard deviation (σ). The sample histogram (see “Descriptive Statistics and Exploratory Data Analysis (EDA)” in this guide) can be used to estimate the population histogram (or distribution). The sample statistics, X̄ and S, may be used to test hypotheses about the population parameters, μ and σ.

Alpha Level (Level of Significance)

The alpha level (level of significance) is the


probability of rejecting the null hypothesis when the correct decision should be to fail to reject it (a Type I error). The alpha level is set by the researcher before performing the statistical analysis, and its choice is influenced by the consequences of Type I and Type II errors and by repeated testing.

An alpha level of .05, or lower, is considered an acceptable level of significance for a statistical test. The alpha level drives the decision to reject, or fail to reject, the null hypothesis: the p-value computed from the data is compared against it. The lower the alpha level, the smaller the chance of a Type I error, although this also makes it harder to detect a true difference or relationship. The alpha levels most often used are .01, .05, and .10.

References and/or Suggested Reading:

Cramer, D., & Howitt, D. (2004). The SAGE dictionary of statistics. London: SAGE Publications, Ltd.

Hypotheses

Hypotheses are analytical statements (using population parameters) about the relationships outlined in the research question. Hypotheses should be closely related to, and well aligned with, the research questions guiding the study.

Each research question, with the exception of descriptive research questions, contains a minimum of two hypotheses: the null hypothesis and the alternative hypothesis (sometimes referred to as a research hypothesis).

The null hypothesis is the hypothesis that the researcher would like to disprove. This is the hypothesis to be rejected, or nullified. Null hypotheses for comparative research questions typically state that the population means of two or more groups are the same. Null hypotheses for correlational research questions typically state that there is zero (or no) correlation between the variables or constructs of interest. The alternative hypothesis is the logical opposite of the null hypothesis, and it is most often the hypothesis that the researcher would like to prove. It is important to note that the null and alternative hypotheses are mathematical statements about population parameters (e.g., population means, standard deviations, correlations, etc.) and never contain sample statistics.

References and/or Suggested Reading:

Cramer, D., & Howitt, D. (2004). The SAGE dictionary of statistics. London: SAGE Publications, Ltd.

Hypothesis Diagrams

One of the overall goals of quantitative research is to seek theories that focus on possible relationships among variables. A diagram is an effective method to demonstrate the hypothetical pathways (relationships) involved in a research project. It can assist both the author/researcher and reader to follow the intended suppositions.

Variables can have numerous different types of relationships. For example, variable relationships can be causal, conditional, reciprocal, symmetrical, spurious, or controlling. When presenting a visual representation of these relationships, pathways are typically diagrammed, such as moderating, mediating, and confounding variable connections.

Example of diagramming a mediating relationship:

[Diagram: INDEPENDENT VARIABLE → MEDIATING VARIABLE → DEPENDENT VARIABLE]


Example of diagramming a moderating relationship:

[Diagram: INDEPENDENT VARIABLE → DEPENDENT VARIABLE, with the MODERATING VARIABLE acting on that pathway]

References and/or Suggested Reading:

Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. London: Sage Publications, Ltd.

Greenland, S., & Pearl, J. (2006). Causal diagrams. Encyclopedia of epidemiology. Retrieved from https://ftp.cs.ucla.edu/pub/stat_ser/r332.pdf

Hypothesis Testing

Hypothesis testing is the primary means for making decisions based on statistical testing. Hypotheses are declarative statements that outline relationships or comparisons to be tested in a research study. The null hypothesis is the core idea in hypothesis testing: it is the hypothesis to be rejected, or nullified. The steps in hypothesis testing are:

1. State the null and alternative hypotheses (also referred to as a research hypothesis), using only population parameters.
2. Determine if a one-tailed or two-tailed test should be used. Note: if the alternative hypothesis contains a greater than (>) or less than (<) sign, then a one-tailed test should be used. Conversely, if the alternative hypothesis has a “not equal to” (≠) sign, then a two-tailed test should be used.
3. Determine the alpha level (see “Alpha Level” in this guide) to use in decision making to reject, or fail to reject, the null hypothesis.
4. Select the appropriate statistical test to use.
5. Perform the statistical analysis. If a parametric test is selected, be sure to test for assumptions.
6. Compare the probability value (p-value) from the statistical analysis with the pre-determined alpha level. If the p-value is at, or less than, the selected alpha level, then reject the null hypothesis in favor of the alternative hypothesis. Conversely, if the p-value is greater than the selected alpha level, then fail to reject the null hypothesis.
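These steps can be sketched end-to-end in Python with SciPy; here a hypothetical one-sample t-test of H0: μ = 50 against a two-tailed alternative at α = .05 (the sample values are invented for illustration):

```python
from scipy import stats

# Steps 1-3: H0: mu = 50, Ha: mu != 50 (two-tailed), alpha = .05
alpha = 0.05
sample = [52, 55, 48, 51, 57, 54, 53, 50, 56, 49]

# Steps 4-5: a one-sample t-test is the appropriate parametric test here
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

# Step 6: compare the p-value with the pre-determined alpha level.
# With these values, p is about .03, so H0 is rejected at the .05 level.
if p_value <= alpha:
    print(f"p = {p_value:.3f} <= {alpha}: reject H0")
else:
    print(f"p = {p_value:.3f} > {alpha}: fail to reject H0")
```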
References and/or Suggested Reading:
Cramer, D., & Howitt, D. (2004). The SAGE dictionary of statistics. London: SAGE Publications, Ltd.
T-Test

The t-test is used to analyze whether or not there is a significant difference between the means of two groups. T-tests are hypothesis testing tools that give the researcher the ability to test an assumption about a population. The differences between the two sets of data are analyzed through the use of the t-statistic, the t-distribution, and degrees of freedom.

The goal is to test the null hypothesis that there is no statistically significant difference between the means of the two groups. If the null hypothesis is rejected, then there is a statistically meaningful difference between whatever is being compared.
One version of the t-test is used to compare independent groups. In other words, each group contains unique membership, and no member may belong to both groups.

Another version is known as the paired-samples t-test. This version is used to compare a single group at two different time periods (before and after). For example, say a group’s job satisfaction was surveyed, and then a new incentive schedule is announced. This incentive is an intervention that may impact one’s satisfaction with their work. The group will be surveyed again after the incentive is announced to see if the potential for a reward ties the group members closer to their jobs.

Here is a link to several video tutorials on how to do independent and paired-samples t-tests in SPSS: https://www.youtube.com/results?search_query=SPSS+tutorial+t+tests
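The tutorials above use SPSS; as an illustration, both versions of the t-test can also be run in Python with SciPy (all data below are made up):

```python
from scipy import stats

# Independent-samples t-test: two separate groups, no shared members
group_a = [72, 75, 78, 71, 74, 77, 73, 76]
group_b = [68, 70, 66, 71, 69, 67, 72, 70]
t_ind, p_ind = stats.ttest_ind(group_a, group_b)

# Paired-samples t-test: the same group before and after an intervention
before = [3.1, 2.8, 3.5, 3.0, 2.9, 3.2, 3.4, 2.7]
after = [3.6, 3.1, 3.8, 3.4, 3.0, 3.7, 3.9, 3.2]
t_pair, p_pair = stats.ttest_rel(before, after)

print(f"independent: t = {t_ind:.2f}, p = {p_ind:.4f}")
print(f"paired:      t = {t_pair:.2f}, p = {p_pair:.4f}")
```

In both hypothetical cases the p-value would fall well below .05, so the null hypothesis of equal means would be rejected.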
Analysis of Variance (ANOVA)

Analysis of Variance (ANOVA) is a statistical technique which allows a researcher to investigate whether or not there are differences between group mean scores. ANOVA essentially finds these differences by “splitting” the variability found inside a data set into two parts: systematic and random. The systematic component is due to the variation between groups, while the random component is due to the variation among the experimental units or data observations. ANOVA is an “omnibus” test. This means that the test will indicate whether or not there are statistically significant mean differences across the comparison groups, but it will not, necessarily, identify which groups are different.

If the null hypothesis of no mean differences is rejected in an ANOVA (i.e., there is a statistically significant difference between two or more groups), a researcher needs to dig deeper to find out exactly where those differences lie. To do that, the researcher has two options: conduct a Tukey HSD test, or conduct Dunnett’s C test. Tukey’s HSD is used when assumptions are not violated, and Dunnett’s C test is used when assumptions are violated. If the null hypothesis is not rejected, then there is no need to do a post hoc test. A post hoc test is conducted only when a researcher has found a statistically significant difference between group means and wants to discover which groups have different, and which groups have equivalent, means.
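A minimal One-Way ANOVA sketch in Python with SciPy, using hypothetical satisfaction scores for three departments (the guide’s own examples use SPSS):

```python
from scipy import stats

# Hypothetical job-satisfaction scores for three departments
er = [3.2, 3.5, 3.1, 3.8, 3.4, 3.6]
icu = [2.8, 2.9, 3.1, 2.7, 3.0, 2.6]
outpatient = [3.9, 4.1, 3.8, 4.0, 4.2, 3.7]

# Omnibus F-test; H0: all group means are equal
f_stat, p_value = stats.f_oneway(er, icu, outpatient)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A significant result says only that at least one mean differs;
# a post hoc test (e.g., Tukey's HSD) is needed to locate the differences.
```

Recent SciPy versions also provide `scipy.stats.tukey_hsd` for the post hoc step when the omnibus test is significant.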
ANOVA Examples

A One-Way ANOVA is used when there is a single variable which is measured across two (2) or more groups. For example, a researcher interested in technology use across age groups observed a large group of people and how they use their devices. The people were grouped by age: 19-25, 26-35, and 36-45. In this situation, ANOVA may be used to see if the different age groups use technology differently, by examining the mean differences of technology use across these age groups. For a more detailed example, consider a healthcare manager who is interested in studying employee engagement in a hospital system. As client and patient satisfaction surveys play a role in the level of insurance reimbursement a hospital will receive, these are important data to investigate. The manager consults a researcher who explains that job satisfaction is a predictor of patient satisfaction (in other words, the worker’s emotional ties to the job will rub off, so to speak, on the patients they care for). The manager then administers a job satisfaction survey to three different departments in the hospital: emergency room, intensive care unit, and out-patient services. A fourth group, administrative support staff, who have little patient interaction,
is used as a control group. The manager could then use a One-Way ANOVA to determine whether or not job satisfaction differs across the four departments (to see if one department is more satisfied than another, or if the three patient-facing units are more satisfied than the control group). The results will help the manager to triage the emotional attachment of the workers.
A Two (or More)-Way ANOVA is a more complex design in which there are two or more independent variables measured over two or more groups. Consider an operations manager of a financial services firm who is interested in surveying how committed the firm’s workers are to their jobs. The manager would like to test whether men and women have different levels of commitment, and also whether people over 40 years of age have a different level of commitment than those under 40. The manager may use a Two-Way ANOVA in order to compare two independent variables: gender and over-under 40. The result of this analysis will lend empirical support for or against the manager’s belief.

In the above example, since both gender and over-under 40 contain two (2) groups, there are four (4) means (2 x 2) to consider and test for differences. However, this testing is confounded when the presence of “interaction” is considered. To keep it simple, in the two-variable case (the Two-Way ANOVA considered above), interaction would exist if the effect that being male or female had on commitment depended upon whether a worker was over 40 or not (or vice versa). A full factorial ANOVA includes both main effects (gender and over-under 40) and interaction effects (the interaction between gender and over-under 40).
It is important to note that in the absence of interaction, there are only two mean differences to test for the dependent variable, commitment: one due to gender and the other due to being over-under 40. However, when interaction is present, there are six (6) such mean differences to consider (in our example; see if you can work it out). So interaction, while being more realistic from a research perspective, is also more complex. That is why the first step in a Two (or More)-Way ANOVA is to test the null hypothesis of the absence of interaction, which, if not rejected, would make things a lot easier.

Here is a link to several video tutorials on how to do One (or More)-Way ANOVAs in SPSS: https://www.youtube.com/results?search_query=SPSS+anova
Another common variant of ANOVA, known as Analysis of Covariance (ANCOVA), is used to control for, or “hold constant,” interval- or ratio-scale variables. For example, continuing with the gender/age example, say the manager thinks that the level of education might matter in this analysis. He/she could use ANCOVA techniques to compare women and men and both age groups, controlling for their level of education. In this way, any statistical effects of educational level are removed from the analysis, and the manager gets a “purer” (so to speak) indication of the effects of gender and age on commitment to the employer.

Here is a link to several video tutorials on how to do ANCOVA in SPSS: https://www.youtube.com/results?search_query=SPSS+ancova
References and/or Suggested Reading:

Kerlinger, F., & Lee, H. B. (2000). Foundations of behavioral research (4th ed.). Orlando, FL: Harcourt College Publishers.

Saunders, M., Lewis, P., & Thornhill, A. (2015). Research methods for business students (7th ed.). Essex, England: Pearson Education Limited.
Correlation

Correlation is a measure of the degree of “linear” relationship between (or among) variables. The most common is known as the Pearson Product Moment Correlation, noted in the literature as “r.” Correlation ranges from -1 (perfectly inverse/negative correlation) to +1 (perfectly direct/positive correlation). A correlation that hovers around zero indicates that the variables are not linearly related.

Correlation is only a measure of the “linear,” or straight-line, relationship between variables. Correlation does not measure the degree of a non-linear relationship. Moreover, researchers must be careful when drawing conclusions using correlation, as correlation does not imply causation.

There are three types of correlation:

1. simple correlation (between one “dependent” variable, Y, and one “explanatory” variable, X);
2. multiple correlation (between one “dependent” variable, Y, and many “explanatory” variables X1, X2, X3, …);
3. canonical correlation (between many “dependent” variables and many “explanatory” variables).

Simple correlation is calculated between two variables. Multiple correlation is computed between
https://www.youtube.com/results?search_query=SPSS+anova
https://www.youtube.com/results?search_query=SPSS+anova
https://www.youtube.com/results?search_query=SPSS+ancova
https://www.youtube.com/results?search_query=SPSS+ancova
one variable, on one hand, and two or more variables, on the other (e.g., think of the relationship between the weight of an individual as a function of height and average daily calories). So, it involves using many variables.

It is important to note that correlation tells us not only the strength of a linear relationship (close to -1 or +1), but also the direction. In other words, a positive simple correlation indicates that increases in the X variable are associated with increases in the Y variable (and vice versa). A negative simple correlation tells us that increases in the X variable are associated with decreases in the Y variable (and vice versa).

The scatterplots (below) show, graphically, the different strengths and directions of linear relationships that can exist between two variables. The straight blue line going through the points depicts the “linear” relationship.
[Figure: six example scatterplots, titled “Correlation (indicates the relationship between two sets of data),” showing strong positive, weak positive, strong negative, weak negative, and moderate negative correlation, and no correlation.]
The source for this graphic: https://www.bing.com/images/search?view=detailV2&id=ED8CCC8ACCEE3BCF1A591F4191064BD2D13AEE95&thid=OIP.u8vsEyGv4ZtA5_pmPD-BJQHaE4&mediaurl=http%3A%2F%2Fcdn.pythagorasandthat.co.uk%2Fwp-content%2Fuploads%2F2014%2F07%2Fcorrelation-1.jpg&exph=1000&expw=1518&q=scatter+plot+examples&selectedindex=45&ajaxhist=0&vt=0&ccid=u8vsEyGv&simid=608014540731908841&sim=11
A correlation matrix is used when a researcher wants to display many different simple correlations for multiple variables. The correlations are all assembled into a table with the number one (1) always going down the main diagonal of the table. For instance, with four variables, there would be six different simple correlations computed; the matrix (table) would consist of these correlations in the off-diagonal elements (each appearing twice), and the number one (1) in the main diagonal of the table.

Here is a link to several video tutorials on how to run simple correlations in SPSS: https://www.youtube.com/results?search_query=SPSS+correlation
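A correlation matrix of this kind can be sketched in Python with NumPy; here three simulated variables, with the ones running down the main diagonal as described (the data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.8 * x + rng.normal(scale=0.5, size=100)  # positively related to x
z = rng.normal(size=100)                        # unrelated noise

# Pairwise Pearson correlations; ones run down the main diagonal,
# and each off-diagonal correlation appears twice (r_xy = r_yx)
matrix = np.corrcoef([x, y, z])
print(np.round(matrix, 2))
```

In this simulation the x-y correlation comes out strongly positive, while the correlations involving z hover around zero.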
Spurious correlations can also occur, when two variables seem to be correlated (numerically) but are actually not related. Often, their correlation is really driven by a third, hidden variable.

Correlation and regression analysis go hand in hand. While correlation measures the strength and direction of a relationship, it does not give the actual linear relationship. Regression analysis will yield the equation of the straight line (the blue line in the plots above) going through any scatter of points, Ŷ = b0 + b1X, where b0 is the “Y” intercept and b1 is the slope. The correlation coefficient and the slope will always have the same mathematical sign.
References and/or Suggested Reading:

Rodgers, J. L., & Nicewander, A. W. (1988). Thirteen ways to look at the correlation coefficient. The American Statistician, 42(1), 59-66.
Regression Analysis

Regression analysis is used to make predictions about how one variable may influence another. The concept of regression analysis was first conceived by Sir Francis Galton, a cousin of Charles Darwin. Galton was studying the theory of evolution and observed the concept of “regression toward the mean” when studying sweet peas. This led to predictable measurement outcomes and, eventually, some of the early concepts of regression analysis.

Regression analysis can be used to determine the strength and direction of a relationship between variables (similar to correlation analysis). However, regression analysis differs from correlation analysis in being able to predict the levels of the dependent variable from the values of the independent variable.

Regression is used for many applications in industry, from sales forecasting to credit scoring. It is also used extensively in government applications for estimating budgets, economic forecasting, and improving the provision of public services to citizens.

Simple linear regression (also known as bivariate regression) is the prediction of a dependent variable using a single independent (or explanatory) variable. Multiple regression is the prediction of a single dependent variable using two or more independent (or explanatory) variables. Data are collected on an independent variable (X) and a dependent, or criterion, variable (Y) for each individual, and an equation is computed that depicts the linear relationship between the two variables.
https://www.youtube.com/results?search_query=SPSS+correlation
https://www.youtube.com/results?search_query=SPSS+correlation
Here is a link to several video tutorials on how to do a regression analysis in SPSS: https://www.youtube.com/results?search_query=SPSS+regression+example

Please review this YouTube video for information about fitting a non-linear regression in SPSS: https://www.youtube.com/results?search_query=SPSS+nonlinear+regression+example

SPSS is offered at no cost to NCU students through the university. It is available through the University Services Module in NCUOne.
In simple linear regression, the research question posits the relationship between two variables. For example: Does the transformational leadership style (independent variable) have a direct effect on worker productivity (dependent variable)? A researcher can explore this using simple regression. Another example of a research question is: What is the linear relationship that would predict the extent of physical injury from body strength for elderly women, and how accurately does this equation predict the extent of physical injuries?
An index known as “r-squared” is obtained in a simple regression analysis by squaring the correlation coefficient. R-squared directly tells us how well we can predict Y from X. It is also referred to in the literature as the coefficient of determination, and is formally defined as the proportion of variation in the dependent variable, Y, explained by the explanatory variable, X. R-squared is a measure of the “goodness of fit.”
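A simple linear regression, and the r-squared obtained by squaring the correlation coefficient, can be sketched with SciPy (the X and Y values below are hypothetical):

```python
from scipy import stats

# Hypothetical data: hours of training (X) and productivity score (Y)
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [52, 55, 59, 60, 64, 66, 69, 73]

# Fits the line Y-hat = b0 + b1*X and also returns the correlation r
result = stats.linregress(x, y)
print(f"Y-hat = {result.intercept:.2f} + {result.slope:.2f}*X")
print(f"r = {result.rvalue:.3f}, r-squared = {result.rvalue**2:.3f}")
```

Note that the slope and the correlation coefficient carry the same sign, as the correlation section above points out.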
Multiple (linear) regression analysis refers to when there are two or more independent variables used to predict a dependent variable. An example of multiple regression might be suggesting that a leader’s behavioral transparency (X1) and sense of humor (X2) will lead workers to experience a higher level of positive emotions (Y). Here are a number of tutorials on how to fit multiple regression analyses in SPSS: https://www.youtube.com/results?search_query=SPSS+multiple+regression+example
Building a linear regression model is only part of the process. When using the model in a real-world application, one should take steps to ensure the model conforms to the assumptions of linear regression. There are nine (9) key assumptions of regression analysis:

1. The regression model is linear in parameters;
2. The mean of the residuals is zero;
3. Homoscedasticity of residuals (equal variance);
4. Zero (0) correlation among residuals;
5. The X variables and residuals are uncorrelated;
6. The number of observations must be greater than the number of Xs;
7. The regression model is correctly specified;
8. The independent variables are not highly correlated with each other; and
9. Normality of residuals.

These assumptions vary in importance depending on whether one intends to make predictions for individual data points, or whether the coefficients are to be given a causal interpretation. One of the most important assumptions, which is often overlooked, is that of validity. This means that the data used should address the research question being answered (Gelman & Hill, 2007).
References and/or Suggested Reading:

Gelman, A., & Hill, J. (2007). Data analysis using regression and multilevel/hierarchical models. Cambridge: Cambridge University Press.

Saunders, M., Lewis, P., & Thornhill, A. (2015). Research methods for business students (7th ed.). Essex: Pearson Education Limited.

Stanton, J. M. (2017). Galton, Pearson, and the peas: A brief history of linear regression for statistics instructors. Journal of Statistics Education, 9(3), 10-23.

Warner, R. M. (2008). Applied statistics: From bivariate to multivariate techniques. Thousand Oaks, CA: Sage Publications.
Factor Analysis

Factor analysis may be used to analyze the structure of interrelations, or correlations, across a large set of variables in a dataset (e.g., test scores, test questions, or questionnaire responses). The procedure derives a smaller set of uncorrelated variables, known as factors. These factors must be interpreted in order to give meaning to the new composite measures.

Factor analysis techniques are either exploratory or confirmatory. Exploratory factor analysis (EFA) is useful for analyzing structure in the original set of variables and is used as a variable reduction technique. This method is appropriate for reducing the size of datasets with many variables. Confirmatory factor analysis (CFA) is useful when researchers have conceptual theories, or prior research, which support preconceived ideas on the actual structure of the data. In the situation where the researcher wishes to test hypotheses about how variables should be grouped into factors, or about the number of factors, a confirmatory approach must be taken to assess the degree to which the data meet the expected structure. However, in order to be able to undertake statistical testing in CFA, all of the variables and factors must have a Multivariate Normal Distribution. This is a significant limitation of this approach.
Once the research problem is defined adequately, the researcher must decide whether the factor analysis will be exploratory, for identifying structures through data reduction, or confirmatory, for data summarization. If confirmatory, the researcher should also decide if structural equation modeling might be appropriate, where it is hypothesized that a tight fit (or close relationship) may exist in the data. If exploratory, the researcher should select the type of factor analysis according to whether variables or cases are being analyzed. Analyzing cases calls for Q-type factor analysis or cluster analysis, while analyzing variables calls for R-type factor analysis.
Factor analysis is typically conducted using interval- or ratio-measured variables, and incorporates some assumptions regarding testing. A structure should not be imposed prior to conducting factor analysis. Bartlett’s test of sphericity (sig. < .05), when statistically significant, shows that enough correlations exist among the variables to proceed. Measures of sampling adequacy must exceed .50 for both the overall test and each individual variable. Next, the factor matrix is specified to determine the number of factors to be retained. Afterwards, a rotational method is chosen, with consideration of whether the factors should be correlated (oblique) or uncorrelated (orthogonal). Orthogonal methods include VARIMAX, EQUIMAX, and QUARTIMAX. Oblique methods include Oblimin, Promax, and Orthoblique. The factor model respecification will consider whether any variables were deleted, changing the number of factors. The factor matrix then undergoes validation with consideration of split/multiple samples, separate analyses for subgroups, and identification of influential cases. Once all of this is completed, the researcher can then select surrogate variables, compute factor scores, and create summated scales.
When selecting factor models, and the number of factors, some best practices may be helpful. Component analysis models are appropriate when the aim is data reduction. The common factor model is best when there are highly specified theoretical applications.
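As an illustrative sketch of the variable-reduction idea (not a full EFA as run in SPSS), the eigenvalues of the correlation matrix can suggest how many factors to retain. The eigenvalue-greater-than-one rule used below is the common Kaiser criterion, which this guide does not itself name, and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
# Six observed variables driven by two underlying factors (plus noise)
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
data = np.column_stack([
    f1 + 0.3 * rng.normal(size=n),
    f1 + 0.3 * rng.normal(size=n),
    f1 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
])

# Eigenvalues of the correlation matrix, in descending order;
# the Kaiser rule retains factors with eigenvalues greater than 1
eigenvalues = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
n_factors = int(np.sum(eigenvalues > 1))
print(n_factors)  # prints 2 for this simulated two-factor structure
```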
This is a YouTube tutorial on how to do an EFA in SPSS: https://www.youtube.com/results?search_query=exploratory+factor+analysis+in+spss+step+by+step

This is a tutorial on how to do a CFA in SPSS: https://www.youtube.com/results?search_query=confirmatory+factor+analysis+in+spss+step+by+step
References and/or Suggested Reading:

Hair, J. F., Black, B., Babin, B., & Anderson, R. E. (2010). Multivariate data analysis: A global perspective (7th ed.). Upper Saddle River, NJ: Pearson.
Power (Statistical Power)
Power (statistical power) is the ability of a statistical test to detect and reject a null hypothesis when the null hypothesis is false (and should be rejected). The power of a statistical test is reported as a probability with values ranging between zero and 1.0. For example, suppose a null hypothesis of no difference between two groups is being tested. A statistical power of 0.8 means there is an 80% probability that the test will reject the null hypothesis if the null hypothesis is, in fact, false. A statistical power between 0.80 and one (1) is considered acceptable for a statistical test.
References and/or Suggested Reading:
Hedberg, E. (2018). The what, why, and when of
power analysis. In Hedberg, E. Introduction to pow-
er analysis: Two-group studies (pp. 1-9). Thousand
Oaks, CA: SAGE Publications, Inc.
Power Analysis
Power Analysis is, technically, the computing of
statistical power (see “Power (Statistical Power)” in
this guide). There are two occasions when a power
analysis should be performed:
1. During research design (a priori power
analysis).
2. After the statistical test has been run (post hoc
power analysis).
There are statistical packages available to perform
these types of power analyses. The most common
statistical package used by NCU dissertation can-
didates is G*Power because it is available over the
internet free of charge, and it is user friendly.
In a dissertation, a student needs to state that the
minimum required sample size has been reached,
and plan early about how to reach this sample size
(the percentage of a sample that will actually re-
spond to a survey or questionnaire is very small).
A student should thus discuss how participants will
be recruited and/or how the data will be obtained
with sufficient detail.
The purpose of an a priori power analysis is to
determine the minimal sample size needed to detect
the relationship of interest and the probability of
rejecting a null hypothesis when the null hypothesis
is false. This helps the researcher determine if their
sampling frame (the group the researcher will be
recruiting from) is large enough so that the research-
er might conceivably recruit enough participants
and support hypothesis testing. An a priori power
analysis does not compute power of the statistical
test because data has not yet been collected and
the statistical test has not yet been run. Instead, the
researcher selects the statistical test to be performed
and enters the pre-determined power, alpha level
(see alpha level), and estimated effect size. Most
software packages include standardized effect size
values based on small, medium, and large catego-
ries. Power values should be set at, or greater than,
0.80, and alpha levels should be set at .05 or lower.
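G*Power is a point-and-click tool, but the same a priori calculation can be sketched in code. The example below uses the statsmodels package (an assumption of this example, not something the guide prescribes) to solve for the minimum sample size per group for a two-sample t-test with the conventional inputs discussed above.

```python
from statsmodels.stats.power import TTestIndPower

# A priori power analysis for an independent-samples t-test: solve for the
# minimum sample size per group given a medium effect size (Cohen's d = 0.5),
# alpha = .05, and desired power = .80. These are the conventional defaults
# discussed above, not fixed rules.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                   power=0.80, alternative="two-sided")
print(round(n_per_group))  # → 64 participants per group
```

The result matches Cohen's (1988) well-known figure of 64 participants per group for a medium effect at these settings; entering the same values in G*Power should yield the same answer.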
The purpose of a post hoc power analysis (also
referred to as observed power in the literature) is to
determine power of the statistical test based on the
known sample size, known effect size, and known
alpha level. A statistical power of 0.80 or greater
is considered acceptable power for a statistical
test. Some researchers purport that post hoc power
values are inflated (so it is best to interpret this value
conservatively).
References and/or Suggested Reading:
Cohen, J. (1988). Statistical power analysis for the
behavioral sciences (2nd ed.). New Jersey: Lawrence
Erlbaum.
Gogtay, N. (2010). Principles of sample size calcula-
tion. Indian Journal of Ophthalmology, 58(6), 517-
518.
Universität Düsseldorf. (2014). G*Power 3.1 manual [PDF]. Retrieved from: http://www.gpower.hhu.de/fileadmin/redaktion/Fakultaeten/Mathematisch-Naturwissenschaftliche_Fakultaet/Psychologie/AAP/gpower/GPowerManual.pdf
G*Power [computer software] available at: http://www.psychologie.hhu.de/arbeitsgruppen/allgemeine-psychologie-und-arbeitspsychologie/gpower.html
Measuring Validity and Reliability
Validity can be defined as the degree to which
instruments and tools in a research study measure
what they are intended to measure. If study findings
are determined not to be valid, then the results are
essentially meaningless. The tools must measure
what they are intended to measure. Otherwise,
the results will not allow the investigator to answer
the research question(s). In other words, without
validity, the study’s purpose is missed. Validity is
sometimes contextual in that a valid research study
in one circumstance does not necessarily mean that
it is valid in another.
Validity and reliability are two unique concepts.
While validity, defined above, basically notes
whether or not an instrument measures what it
is intended to (e.g., assessing the efficacy of a
teaching style versus a student liking an instructor
or a course), reliability is measured in terms of
the consistency of the results. Three types of consistency are typically considered: over time, across items, and across researchers. Reliability over time is assessed using a test-retest approach, where the same respondents are measured at one point in time and then again at another. If the two sets of results are strongly correlated (such as above .80), the measure is considered reliable. Internal consistency measures the consistency of items in a multi-item measure; the most common measure of internal consistency is a statistic known as Cronbach's coefficient alpha. Finally, inter-rater reliability assesses the consistency in the judgments of observers or raters.
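Cronbach's coefficient alpha is simple enough to compute directly from its formula. The sketch below implements it in plain NumPy on an invented 5-respondent, 3-item scale (illustrative data only).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's coefficient alpha for a respondents-by-items score matrix.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                           # number of items
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-respondent, 3-item Likert scale (invented for illustration).
scores = np.array([[4, 5, 4],
                   [2, 3, 2],
                   [5, 5, 4],
                   [3, 3, 3],
                   [4, 4, 5]])
print(round(cronbach_alpha(scores), 2))  # → 0.91
```

An alpha above the commonly cited .70 threshold, as here, would usually be taken as evidence of acceptable internal consistency.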
In relation to measurement validity, there are four
primary types: face validity, content validity, crite-
rion validity, and construct validity. Each of these
types defines validity from a unique perspective
and evaluates it differently. Face validity is like a
“gut check,” the weakest assessment of validity.
Does the measurement look like it should yield
results as intended? For example, if a researcher in-
tends to study positive emotions, an instrument that
appears to measure positive emotions makes sense.
Content validity is the degree the measure covers
the entire scope of the concept that is being mea-
sured. For instance, if brand loyalty is considered to
be both a behavioral and cognitive phenomenon,
both of these dimensions need to be included in the
measurement. Criterion validity refers to how well the measure relates to an outcome criterion; it may be classified as concurrent or predictive, depending on the time sequence. For instance, criterion validity assesses whether the measure is correlated with the outcome it is intended to capture, such as a pregnancy test indicating pregnancy (concurrent) or SAT scores predicting later college performance (predictive). Last, construct
validity is a determination of whether or not the
measurement is actually measuring what it is intend-
ed to measure. For instance, does an IQ test really
measure intelligence? Or might it be measuring
educational level?
It is important to ensure there is no confusion be-
tween validity and reliability. It is possible for a
study to be reliable, but not valid. In other words,
reliability is a necessary but not sufficient condition for validity.
To conclude, validity is essential to attain in con-
ducting research, especially in the social sciences.
Validity should be considered by researchers as
early as the development of the research questions,
and certainly through study design and implementation. To produce results capable of answering a research question, validity must be safeguarded as much as is feasible.
References and/or Suggested Reading:
Petty, R. E, Briñol, P., Loersch, C., & McCaslin, M. J.
(2009). The need for cognition. In M. R. Leary & R.
H. Hoyle (Eds.), Handbook of individual differences
in social behaviour. New York, NY: Guilford Press.
Kerlinger, F. N., & Lee, H. B. (2000). Foundations of behavioral research (4th ed.). Belmont: Wadsworth/Thomson Learning.
Trochim, W. M., Donnelly, J. P., & Arora, K. (2016). Research methods: The essential knowledge base. Boston: Cengage Learning.
Internal/External Validity
There are two main types of validity that are consid-
ered in the design and evaluation of experimental
designs. Internal validity refers to whether or not an experiment can demonstrate that changes in the dependent variable can be clearly attributed to the effect of the independent variable. For example, if
we explore the impact of role mentorship on resil-
ience, we must rule out (as best as possible) the fact
that other variables may influence (i.e., moderate
and/or mediate) this association. External validity
refers to the generalizability of the study results. For
instance, with respect to generalizing to the popula-
tion, a researcher would have better external validity
if the sample was taken randomly from the popula-
tion. In fact, a primary challenge in all research is to
suggest that research findings are generalizable to
populations, settings, products, time periods, etc.
Threats to Validity
Internal validity has been widely written about, and
a number of factors have been identified that can in-
fluence it (Campbell & Stanley, 1963). Investigators
attempt to control for these factors as much as pos-
sible during research to attempt to achieve internal
validity. However, the need for this control can also
impact generalizability.
These factors include experimental mortality, or the
loss of participants in the comparison groups. This is
especially true during longitudinal experiments. His-
tory is another threat and refers to events that occur
outside of the experiment but during the same mea-
surement periods. Similarly, maturation refers to changes that occur in subjects during the course of the experiment. Subjects may age, or may simply become
hungry or bored during the course of an experiment.
The threat of testing indicates a practice effect in that
repeated applications of a measurement may impact
subsequent data collection.
There are also threats to external validity, or the gen-
eralizing of findings. These include interaction effects
where a pretest might decrease a participant’s sen-
sitivity to an experimental variable. Another threat
is multiple-treatment interference, where participants
are exposed to a series of treatment conditions and
the effects of prior conditions are not ‘erasable.’
Note that there is often a tradeoff between internal
and external validity and the experimental setting (a
lab vs. field experiment). A laboratory experiment is
an artificial setting that allows the researcher better
control over extraneous/potentially confounding
variables. However, the artificiality of an experiment
tends to lessen the external validity since a research-
er wants to be able to generalize to a more realistic
setting. Essentially laboratory vs. field experiments
represent opposite ends of a continuum having to do
with the artificiality of the setting.
References and/or Suggested Reading:
Campbell, D.T., & Stanley, J.C. (1963). Experimental
and quasi-experimental designs for research. Boston:
Houghton Mifflin.
Zikmund, W. G., Babin, B. J., Carr, J. C., & Griffin, M. (2013). Business research methods (9th ed.). Mason: South-Western, Cengage Learning.
Selection of Parametric vs.
Nonparametric Techniques
Most researchers prefer to use parametric statistics to analyze their data whenever possible, given that most consumers of their research are familiar with these techniques and, in general, these tests are more powerful.
Many statistical methods (e.g., t-test, correlation, and regression) are referred to as 'parametric' because they assume the data come from a distribution that is described by a small set of parameters. In the t-test, for example, these parameters are the mean and standard deviation, and the sample is assumed to have come from a univariate Normal distribution. There are various
statistical tests which can be used to assess whether
data are likely to have come from a Normal distri-
bution. These include the Anderson-Darling test, the
Kolmogorov-Smirnov test, and the Shapiro-Wilk test.
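As a brief sketch of one of these checks, the example below runs the Shapiro-Wilk test in SciPy on two simulated samples (the data are invented for illustration): one roughly Normal, one clearly skewed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
normal_data = rng.normal(loc=50, scale=10, size=200)   # simulated Normal data
skewed_data = rng.exponential(scale=10, size=200)      # clearly non-Normal data

# Shapiro-Wilk: H0 is that the sample came from a Normal distribution,
# so a small p-value (< .05) is evidence *against* normality.
for name, data in [("normal", normal_data), ("skewed", skewed_data)]:
    w, p = stats.shapiro(data)
    print(f"{name}: W = {w:.3f}, p = {p:.4f}")
```

For the exponential sample the p-value is far below .05, so normality is rejected and a nonparametric test (or a transformation) would be indicated.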
Parametric statistics require assumptions to be made about the format of the data to be analyzed. These assumptions are most likely to be violated when the data are not normally distributed, such as when the outcome is an ordinal variable or a rank, when there are extreme outliers, or when the outcome has clear limits of detection. Parametric tests also involve estimation
of the key parameters of that distribution (e.g., the
mean or difference in means) from the sample data.
In many cases in the social sciences (including
business), these assumptions hold, and even where
they do not, the data can often be transformed by
researchers in order to meet the required assump-
tions. However, there are cases where the assump-
tions, even with transformed data, do not support
the use of parametric statistical techniques.
Nonparametric tests are sometimes referred to as
being distribution-free tests because they are based
on fewer assumptions (e.g., they do not assume an
approximate univariate normally distributed vari-
able). However, nonparametric methods are “less
powerful” than parametric methods. The probability
that the null hypothesis will be rejected when it is
false, is less for nonparametric tests as compared
with parametric tests.
It should be remembered that, with parametric tests,
the hypotheses are about population parameters
(e.g. μ = 50 or μ1 = μ2 ). With nonparametric tests,
the null hypothesis is more generalized. For exam-
ple, in a parametric test the null hypothesis may be that two population means are equal, whereas in a nonparametric test the null hypothesis is that the two populations are equal in terms of their central tendency (which could involve medians).
Nonparametric tests have some definite advantag-
es when analyzing variables which are ordinal,
contain outliers, or are measured imprecisely. If
one wanted to still analyze with parametric meth-
ods, then major assumptions would have to be
made about distributions, as well as difficult and
error-prone decisions about coding values. Interestingly enough, many parametric tests perform well with non-Normal and skewed distributions, as long as the sample size is large enough.
Researchers should always take this into consider-
ation before assuming they need to choose a non-
parametric test as their only option. However, when
sample sizes are relatively small, many statisticians
choose to conduct nonparametric tests which are simpler to conduct and easier to interpret. Below is a
table of parametric tests and their nonparametric counterparts:
Parametric tests (means)                                   Nonparametric tests (medians)
1-sample t-test                                            1-sample sign test; 1-sample Wilcoxon test
2-sample t-test                                            Mann-Whitney test
One-way ANOVA                                              Kruskal-Wallis test; Mood's median test
Factorial DOE with one factor and one blocking variable    Friedman test
Source: https://blog.minitab.com
Below are some general guidelines for applying nonparametric statistical tests to data:
If one’s analysis includes two independent samples,
and the data are:
Nominal: consider Chi-square test or Fisher
exact test.
Ordinal: consider Wilcoxon-Mann-Whitney test
or Kolmogorov-Smirnov two-sample test.
If one’s analysis includes matched (or related) sam-
ples, and the data are:
Nominal: consider McNemar change test.
Ordinal: consider Wilcoxon signed ranks test.
If one’s analysis includes three or more independent
samples, and the data are:
Nominal: consider Chi-square test.
Ordinal: consider Kruskal-Wallis one-way
analysis of variance.
If one’s analysis includes measuring relationships,
and the data are:
Nominal: consider a Phi coefficient or kappa
coefficient.
Ordinal: consider a Spearman correlation coef-
ficient or Kendall’s Tau.
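To make one of these guidelines concrete, the sketch below applies the Wilcoxon-Mann-Whitney test in SciPy to two independent samples of ordinal ratings. The ratings themselves are invented for illustration.

```python
from scipy import stats

# Hypothetical ordinal satisfaction ratings (1-7) from two independent groups.
group_a = [5, 6, 4, 7, 5, 6, 5, 4]
group_b = [3, 4, 2, 5, 3, 4, 3, 2]

# Mann-Whitney U: the nonparametric alternative to the 2-sample t-test,
# comparing the two groups without assuming a Normal distribution.
u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```

Because ratings in group A are systematically higher, the test returns a small p-value, leading to rejection of the null hypothesis that the two groups' distributions are equal in central tendency.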
Here are some YouTube tutorials for learning how
to work with nonparametric statistics:
https://www.youtube.com/results?search_query=SPSS+tutorial+nonparametric+statistics
References and/or Suggested Reading:
Box, G. E. (2013). An accidental statistician: The life
and memories of George E. P. Box. Hoboken: Wiley
and Sons.
LaMorte, W. W. (2017). When to use a nonparametric test. Boston University School of Public Health
Best Practice Module, Retrieved from: http://sphweb.
bumc.bu.edu/otlt/mph-modules/bs/bs704_non-
parametric/BS704_Nonparametric2.html
Riegelman, R. (2013). Studying a study and testing a test (6th ed.). Baltimore: Lippincott Williams & Wilkins.
Whitley E., & Ball, J. (2002). Statistics review 6:
Nonparametric methods. Critical Care, 6, 509-512.
Retrieved from: https://doi.org/10.1186/cc1820
Presentation of Statistical Results
and Explaining Quantitative
Findings in a Narrative Report
What a researcher discovers is just as important as
how they communicate it to readers. Communicat-
ing data and statistical findings is an essential skill
and an important element in presenting research
findings. If miscommunicated, an audience may be
lost or, at a minimum, bored. Proper presentation,
done correctly, can have an enduring impact on
audiences—both readers and those who attend
presentations or doctoral dissertation defenses.
Data and statistics are left-brained material, in
that they tap into logical and rational information
processing. However, readers and audiences are
more likely to retain right-brain presentation ele-
ments, such as demonstrations, examples, stories,
and analogies. In order to be enduring, statistical
findings should be logical and rational, as well as
memorable.
For example, a study finding that job performance
and job satisfaction share a positive correlation of
r = .31 is rational evidence. Such data could be
bolstered, for example, with other data or research
(e.g., a story of a qualitative prediction by a busi-
ness guru stating that to impact job performance, a
manager might influence job satisfaction).
The 6th edition of the Publication Manual of the
American Psychological Association (APA) devotes
an entire chapter to “Displaying Results” through
proper design and placement of tables and figures
to illustrate findings. In other words, a picture truly
is worth a thousand words if the narrative augments
the displayed evidence. This is true both in a man-
uscript and in a presentation. While in a presenta-
tion, voice may be used to help illustrate the impact
of research findings. The APA manual provides
important formatting and presentation guidelines
for tables and figures so that they are not cluttered
and have the greatest potential for impact. Exam-
ples are also provided for enhancing the visual aids
with a well-composed narrative. An emphasis is
placed on conciseness of the content of each visual,
as well as standard form.
Because the APA understands that most quantitative results are generated by a statistical package, charts and tables are usually rebuilt within the Microsoft Word document, following the normal APA rules of formatting, such as font, size, and spacing. These rules can be found in chapter 5 of the APA 6th edition manual. Most statistical packages can export tables or figures toward APA style. Even so, make sure that everything within a chart or figure matches the requirements of the correct font, size, and color for APA when used within the dissertation process.
The student must make sure that everything has con-
verted correctly and all numbers have been copied
over properly. Everything should also look concise
and clear as it is retrieved from one chart and
re-entered into the dissertation. This may include using the same number of decimal places and similar cell formatting within a table. Students should also realize that some statistical software packages may not present a demographic variable in the same table layout as its frequency counts. Therefore, a student must have enough forethought to work within the Microsoft Word program to add columns and rows within table properties.
The most needed explanations for quantitative
methodology consist of: 1) the explanation of the
sample data (descriptive analysis including inde-
pendent and dependent variables); 2) the statistics,
both in text and in charts, tables, or graphs (using
the correct APA recommended spacing, alignment,
and punctuation marks for those particular statis-
tical tests); and 3) final results, including the null
hypothesis testing with the probability of occur-
rence (p-value). These explanations are normally
displayed within the results section (Chapter 4) of
the dissertation, but they may also cross over into
Chapter 5.
During the explanation of the sample, the population, sample size, type of sampling, power analysis, instrumentation, and variables should be explained using appropriate summary statistics and tables, as needed (Creswell & Creswell, 2018). These explanations of the sampling data should include measures of central tendency and dispersion that are appropriate to the data: means, medians, modes, ranges, standard deviations, etc. This can exist in
table format to help readers understand the many
variables that are being used. A best practice for
what a researcher enters into a frequency or distri-
bution table would be to add any typology of the
variables. This means that most of the questions on
a questionnaire should exist in a table format.
A researcher should also explain any categories of
numerals listed within a table that are being used
to explain the sample (Elliott & Woodward, 2020).
If a researcher decides to use categories of ages
instead of an open question of age for the sample,
the category and the demographics of that cate-
gory need to be explained, using the frequency or
description for each (e.g., years of age: “young”,
< 18 = 0, 0%; “middle age”, 18.5 - 35.0 = 50,
60%, etc.). Describing tables and charts in words
is important. Many of these tables will occur in the
appendices and be alluded to within the in-text
explanation. An example is: “A total of 188 people
answered the questionnaire. It consisted of only
people over 18. However, these people were load-
ed into two groupings, ‘young’ and ‘middle-aged,’
created by the researcher. Sixty percent of the
sample was… There were more males (100) than
females (88) within the dataset.” Without the expla-
nation of the dataset, the reader would not be able
to understand what happened within the sample,
nor understand the subsequent findings or results
(Adams & Lawrence, 2019).
To learn how to write a clear and concise results section, it is best to read how other researchers have constructed their own. Explaining the findings of the
study is imperative. The researcher should cover the
null hypothesis, explain the type of statistical test,
the statistical significance of the testing, confidence
intervals, and the effect size. During this portion,
the written in-text wording of an APA formatted
write-up is necessary. This type of formatted write-
up will include all the steps that are needed for a
correct, finalized explanation of the findings. Nor-
mally, within this section, a table may be alluded to
and used in an appendix.
Do not forget to state what the hypothesis testing
found, including the effect, and if it was the direc-
tion that was expected (if that was stated in the
hypothesis testing). Remember that a test with an unwelcome outcome still has an outcome, and not finding the answer that was desired or expected is still a result (e.g., 'The one-way ANOVA, F(2, 112) = 2.414, p = .101, did not show the significant differences between the age groups that the theory predicted.'). With no
significant finding, the effect size or confidence in-
terval does not need to be reported. Depending on
the tests, it may be that a researcher decides to use
effect size over confidence intervals, or confidence
intervals over effect size, or both. If a researcher
does not find what was expected, the discussion
section is a good place to explain this.
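As a minimal sketch of this kind of reporting, the example below runs a one-way ANOVA in SciPy on invented scores from three hypothetical age groups and prints the result as an APA-style in-text string.

```python
from scipy import stats

# Hypothetical scores from three age groups (illustrative data only).
young  = [72, 75, 71, 78, 74, 73]
middle = [74, 77, 76, 80, 75, 78]
older  = [79, 81, 78, 83, 80, 82]

f, p = stats.f_oneway(young, middle, older)

# APA-style in-text report: F(df_between, df_within) = value, p = value.
df_between = 3 - 1                                    # groups minus 1
df_within = len(young) + len(middle) + len(older) - 3  # N minus groups
print(f"F({df_between}, {df_within}) = {f:.3f}, p = {p:.3f}")
```

In a dissertation write-up, this string would be accompanied by a statement of the null hypothesis, the effect size, and a reference to the full table in the appendix, as described above.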
Finally, it is important for a dissertation student to
stay in close communication with his or her Chair as
the results are analyzed and the findings are being
reported.
References and/or Suggested Reading:
Adams, K. A. & Lawrence, E. K. (2019). Student
study guide with IBM SPSS Workbook for research
methods, statistics, and applications (2nd ed.). Thou-
sand Oaks, CA: SAGE Publications.
American Psychological Association (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: American Psychological Association.
Creswell, J. W. & Creswell, J. D. (2018). Research
design: Qualitative, quantitative, and mixed methods
approaches (5th ed.). Thousand Oaks, CA: SAGE
Publications.
Elliott, A. C. & Woodward, W. A. (2020). Quick
guide to IBM SPSS: Statistical analysis with step-by-
step examples (3rd ed.). Thousand Oaks, CA: SAGE
Publications.