
1. Download the code book Excel file (this is where you will do all of your work, and it is the only file you will submit for this assignment).

Code Book – Data Analysis Example-1.xlsx

2. Download the #19712 Survey Questions Word document and, based on that survey, create a code book in the Excel file.

#19712_SurveyQuestions.docx


3. Once you are done, download these two surveys, which I completed with random answers.

#19712_SurveyQuestions 1.docx


#19712_SurveyQuestions 2.docx


Look through those surveys and enter the data into the Excel file using the codebook you created.

At the end, you should have an Excel file with a completed code book and the two sets of survey data entered in the second tab.
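If it helps to picture the finished product, here is a minimal sketch in Python/pandas of the two-tab structure described above. Every variable name, question, and code in it is hypothetical (a stand-in, not the actual #19712 survey content), and it assumes the openpyxl package is installed for writing .xlsx files.

    # Sketch of the two-tab workbook: a Codebook tab defining each variable
    # and its numeric codes, and a Data tab with one coded row per survey.
    import pandas as pd

    # Hypothetical codebook entries -- replace with the real survey items.
    codebook = pd.DataFrame({
        "Variable": ["Q1_gender", "Q2_age", "Q3_satisfaction"],
        "Question": ["What is your gender?",
                     "What is your age?",
                     "How satisfied are you overall?"],
        "Codes": ["1=Male, 2=Female, 3=Other",
                  "age in years (as written)",
                  "1=Very dissatisfied ... 5=Very satisfied"],
    })

    # Hypothetical coded responses for the two completed surveys.
    data = pd.DataFrame({
        "Respondent": [1, 2],
        "Q1_gender": [2, 1],
        "Q2_age": [34, 27],
        "Q3_satisfaction": [4, 2],
    })

    with pd.ExcelWriter("Code Book - Data Analysis Example-1.xlsx") as writer:
        codebook.to_excel(writer, sheet_name="Codebook", index=False)
        data.to_excel(writer, sheet_name="Data", index=False)

The point to copy is the separation: the Data tab holds only numeric codes, one row per completed survey, while the Codebook tab is the single place that defines what each code means.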


B. RELIABILITY AND VALIDITY IN QUALITATIVE RESEARCH

Discussions of the terms “reliability” and “validity” have traditionally been attached to quantitative research. Reactions by qualitative researchers have been mixed regarding whether these concepts should be applied to qualitative research. At the extreme, some qualitative researchers have suggested that the traditional quantitative criteria of reliability and validity are simply not relevant to qualitative research. Most qualitative researchers, however, argue that some qualitative studies are better than others, and they frequently use the term validity to refer to this difference. When qualitative researchers speak of “validity” they are usually referring to qualitative research that is credible and trustworthy, and therefore defensible.

The trustworthiness of a study asks the reader to assess the extent to which the research findings are believable – was the study sufficiently well done that we can believe what its outcomes purport to tell us? When questions are raised about the credibility of a study, they ask whether the researchers understand their theoretical framework and base it on the data generated. This is somewhat comparable to assessing the internal validity of a quantitative study, where the question is “is the researcher measuring what he thinks he is measuring?” The term transferability focuses our attention on the application of the findings to other settings and populations, roughly comparable to the external validity of the quantitative approach. Confirmability in qualitative studies asks about any biases inherent in the data collector, seeking to establish the neutrality or auditability of the findings. The concept of reliability in quantitative studies is applied to qualitative research through its more general interpretation as the dependability of the observations obtained; in qualitative research it refers to the ability of others to follow the methods used to collect the data (rather than to replicate the findings).

A number of specific techniques have been offered to enhance the trustworthiness and credibility of a qualitative research study.

A) Triangulation is the process of corroborating evidence from different individuals (e.g., employee and employer), types of data (e.g., observational field notes and interviews), or methods of data collection (e.g., documents and interviews). The researcher examines each information source and finds evidence to support a particular theme or pattern. This helps ensure that the study is accurate because the information is not drawn from a single source, individual, or process of data collection.

B) Researchers also check their findings with participants in the study to determine if those findings are accurate. Member checking is a process in which the researcher asks one or more participants in the study to check the accuracy of the account. This check involves taking the study back to participants and asking them (in writing or in an interview) about the accuracy of the report. Participants are asked about many aspects of the study, such as whether the description is complete and realistic, whether the themes are accurate, and whether the interpretations are fair and representative of those that can be made.

C) Peer debriefing is a process similar to member checking, although here the researcher’s colleagues are brought in to assist. Every effort is made to include both colleagues who are experts in the field of study and those who are not. The researcher enlists the help of colleagues by asking them to code a few of the transcripts so that they are familiar with the process and some sample responses. These peers are also asked to listen critically to the analysis being developed and to offer their feedback and suggestions.

D) The analysis and interpretation of the data should be “thick” in that all of the complexities in the data should be included. This means that the commonalities among the participants, as well as their variability, should be captured and adequately described. Thick description reveals the multiplicity of perspectives among the research participants, often leading to an interpretation that includes discussion of their diversity under varying contexts and circumstances.

E) In negative case analysis, the researcher focuses on the responses of those who appear to diverge from the more frequent responses. These are not treated as “outliers,” as they are in quantitative research; rather, variability among the responses from the study participants is embraced. The qualitative investigator scrutinizes these cases in an attempt to learn from them: why is this case/person different from the others? This results in a more complex, dense, thick analysis and level of description and, ultimately, understanding.

F) Qualitative researchers continue their data collection and analysis until saturation has been reached: they continue collecting and analyzing data until no new questions are being asked and no new observations are being obtained. While quantitative methods typically determine a priori what the sample size should be to obtain a certain level of confidence in the findings, qualitative researchers must rely on the data to reveal itself as “saturated” so that they do not prematurely end data collection or analysis. Qualitative reports almost always indicate questions for further study that have arisen during the course of the study, indicating what areas have not yet been adequately addressed.

G) While one of the goals of conducting quantitative research is to strive for objectivity, qualitative researchers recognize and herald the subjective role of the researcher throughout the process. In accordance with this, qualitative researchers typically keep a journal throughout their research, delineating their own ideas, reactions, and “biases” in order to try to separate their own responses from those of the participants. They address and acknowledge their own biases by locating themselves in the data, maintaining a self-reflective position throughout the study. The researcher’s acknowledgement of what they bring to the study and the extent of their involvement is important information for the reader to consider in assessing the trustworthiness and credibility of the study.

H) Researchers may also ask a person outside the study to conduct a thorough review of the study and report back, in writing, the strengths and weaknesses of the project. This is the process of conducting an external audit, in which a researcher hires or obtains the services of an individual outside the study to review different aspects of the research. This auditor reviews the project and writes or communicates an evaluation of the study. The audit may occur both during and at the conclusion of the study, and auditors typically ask questions such as: “Are the findings grounded in the data?”, “Are inferences logical?”, “Are the themes appropriate?”, “Can inquiry decisions and methodological shifts be justified?”, “What is the degree of researcher bias?”, and “What strategies are used for increasing credibility?”


A. RELIABILITY AND VALIDITY IN QUANTITATIVE RESEARCH

Reliability

What is reliability? If you think about how we use the word reliable in everyday language, you might get a hint about its meaning. For instance, we often speak of a machine as reliable, as in “I have a reliable car,” or news people talk about a “reliable source.” In both cases, the word reliable usually means “dependable” or “trustworthy.” In research, the term reliable also means dependable in a general sense, but there is more to the definition: in research, reliability means repeatability or consistency. A measure is considered reliable if it would give you the same result over and over again (assuming that what you are measuring isn’t expected to change). You cannot use a measurement in your research without first showing that it is reliable. There are four ways of assessing reliability. Each is used in a different circumstance, but they are all ways of estimating the reliability of the measures you are using in your research study.

Test-Retest Reliability

You assess test-retest reliability when you administer the same test to the same sample on two different occasions. You would use this procedure if you were measuring something that you were not expecting to change between the two testing occasions. The amount of time allowed between the measurements is critical: if you measure the same thing twice, the consistency between the two measurements will depend in part on how much time has elapsed between the two measurement occasions. The shorter the time gap, the more similar the scores will be; the longer the time gap, the more the scores might differ, because there might be changes to the individual during the interval.
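In practice, the test-retest estimate is usually computed as the correlation between the two administrations. A minimal sketch follows; the scores are invented for illustration.

    # Test-retest reliability: correlate the same test given twice to the
    # same people (invented scores for eight respondents).
    import numpy as np

    time1 = np.array([12, 18, 9, 22, 15, 20, 11, 17])   # first occasion
    time2 = np.array([13, 17, 10, 21, 14, 21, 12, 16])  # second occasion

    r = np.corrcoef(time1, time2)[0, 1]  # Pearson correlation
    print(f"test-retest reliability: r = {r:.2f}")

A value of r near 1 indicates that respondents kept roughly the same standing from one occasion to the next.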

Parallel Forms (Alternate Forms) Reliability

In parallel forms reliability, you first have to create alternate forms of the same measure. One way to accomplish this is to start with a large set of questions that address the same thing and then randomly divide the questions into two sets. You then administer both instruments to the same sample of people; the correspondence between the two parallel forms is the estimate of reliability. One major problem with this approach is that you have to be able to generate lots of items that reflect what you want to measure, which is no easy feat. Further, this approach assumes that the randomly divided halves are equivalent, or parallel; even by chance, this sometimes will not be the case.
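A minimal sketch of the random-split step just described: a simulated item pool is divided at random into two forms, both forms are scored for the same sample, and the correlation between the form totals serves as the reliability estimate. The data are simulated so the items share a common trait; nothing here comes from a real instrument.

    # Parallel-forms reliability via a random split of a 20-item pool.
    import numpy as np

    rng = np.random.default_rng(0)
    ability = rng.normal(size=(50, 1))             # latent trait, 50 people
    noise = rng.normal(size=(50, 20))
    responses = (ability + noise > 0).astype(int)  # 50 x 20 binary item matrix

    order = rng.permutation(20)                    # shuffle the item pool
    form_a = responses[:, order[:10]].sum(axis=1)  # total score, form A
    form_b = responses[:, order[10:]].sum(axis=1)  # total score, form B

    r = np.corrcoef(form_a, form_b)[0, 1]
    print(f"parallel-forms reliability estimate: r = {r:.2f}")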

Internal Consistency Reliability

When you assess reliability using the internal consistency method, you use your single measurement instrument and administer it to a group of people on one occasion. What you are doing with this method of estimating reliability is assessing how well the items on the instrument reflect the same thing. You are, in essence, looking at how consistent the results are across the different items within the measurement instrument that measure the same thing.
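The passage does not name a statistic, but the most widely used index of internal consistency is Cronbach’s alpha, which compares the variability of the individual items to the variability of the total score. A minimal sketch with invented data:

    # Cronbach's alpha: internal consistency from one administration.
    import numpy as np

    def cronbach_alpha(items):
        """items: a people-by-items matrix of scores."""
        k = items.shape[1]
        sum_item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)        # variance of totals
        return k / (k - 1) * (1 - sum_item_vars / total_var)

    # Invented scores: six people answering five 1-5 items.
    scores = np.array([
        [4, 5, 4, 4, 5],
        [2, 2, 3, 2, 2],
        [3, 4, 3, 3, 4],
        [5, 5, 5, 4, 5],
        [1, 2, 1, 2, 1],
        [3, 3, 4, 3, 3],
    ])
    print(f"alpha = {cronbach_alpha(scores):.2f}")

Alpha is high when the items rise and fall together across people, i.e., when they appear to be measuring the same thing.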

Inter-observer (Inter-rater) Reliability

Whenever you use humans to conduct observations of others as part of your measurement procedure, you have to worry about whether the results you get are reliable or consistent. People are notorious for their inconsistency. We are easily distracted. We get tired of doing repetitive tasks. We daydream. We misinterpret. So how do you determine whether two observers are being consistent in their observations? You should establish inter-observer (inter-rater) reliability in a pilot study before you start your actual study. That way, if you find your observers don’t agree with each other (which means the reliability will be low), you can conduct additional training with them rather than having to discard all of the data they have collected.

There are two ways to go about estimating inter-observer reliability. If the measurement consists of categories – the raters are checking off which category each observation falls in – you can calculate the percent of agreement between the raters. For example, suppose you had 100 observations that were being assessed by two raters, and for each observation a rater could check one of three categories. Imagine that on 86 of the observations the raters checked the same category; the percent of agreement would then be 86%. This might seem like a crude measure, but it does give an idea of how much agreement exists, and it works no matter how many categories are used for each observation.

The other major way to calculate inter-observer reliability is appropriate when the measurement is not categorical but a numerically continuous score. In that case, you calculate a statistic (called a correlation) that tells you the exact correspondence between the two observers, and thus provides a precise assessment of inter-observer reliability.
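Both estimation routes are easy to compute. The sketch below uses invented ratings; with 100 observations and 86 identical codes, the first calculation would print 86%, matching the example above.

    # Two ways to estimate inter-observer reliability (invented ratings).
    import numpy as np

    # Way 1: categorical codes -- percent agreement between two raters.
    rater1 = np.array([1, 3, 2, 2, 1, 3, 1, 2, 3, 2])
    rater2 = np.array([1, 3, 2, 1, 1, 3, 1, 2, 3, 2])
    pct_agree = (rater1 == rater2).mean() * 100
    print(f"percent agreement: {pct_agree:.0f}%")

    # Way 2: continuous scores -- correlation between the two raters.
    scores1 = np.array([4.5, 7.0, 6.2, 8.1, 5.5, 6.8])
    scores2 = np.array([4.8, 6.9, 6.0, 8.3, 5.2, 7.0])
    r = np.corrcoef(scores1, scores2)[0, 1]
    print(f"inter-rater correlation: r = {r:.2f}")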

Validity

When examining a measurement instrument, the first question you must ask yourself is whether the scores it produces are stable and consistent – that is, reliable. If your answer is “yes,” the second question is whether you can draw meaningful and useful inferences from the measuring instrument – you are asking about its validity. Seen in this way, reliability is an antecedent to validity: a measure cannot be valid unless it is first reliable. Reliability is a necessary but not sufficient condition for validity. This perspective also characterizes validity as the larger, more encompassing issue when you make a decision about the choice of an existing instrument or design your own.

The clearest way to define validity is to say that you are asking, “Is this instrument measuring what it is supposed to be measuring?” You might have an instrument that repeatedly yields the same scores for the same sample members (that is, it is reliable), but those scores aren’t a measure of what you think they are (that is, the instrument is NOT valid). Validity inquires into the meaning of the instrument – whether it is actually measuring what it was designed to measure. There are several different types of validity, but for our purposes we will address only the most frequently encountered ones.

Types of Validity

Criterion Validity

An existing measure that we accept as the best indicator of the target concept or behavior we are trying to study is called a criterion. Criterion validity involves assessing the correspondence (correlation) between this criterion measure and the new measure that we are trying to devise. If the correspondence between the two is very high, we can conclude that our new measurement instrument is high in criterion validity.

Criterion validity has a number of variants, one of which is of particular interest. Suppose we were trying to design a new measurement instrument that could predict future behavior, such as the tests that high school juniors take to determine how well they will perform in college, or those tests taken a year or more in advance by those aspiring to medical or law school. We would need to know that the scores on our test do a good job of predicting future behavior – that they are high in predictive validity. To assess this, we would give a large sample of high school students our test and use their scores to predict how well they are likely to do in their chosen schools several years later. If there is a strong correspondence between the criterion of later success and the earlier test scores, we can conclude that our measurement is high in predictive validity.
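A minimal sketch of this predictive-validity check, with invented test scores and an invented later criterion (say, first-year college GPA):

    # Predictive validity: earlier test scores vs. a later outcome.
    import numpy as np

    test_scores = np.array([1150, 1320, 980, 1410, 1220, 1050, 1380, 1100])
    later_gpa = np.array([3.1, 3.6, 2.5, 3.8, 3.2, 2.8, 3.5, 3.0])

    # Strength of the test-to-criterion correspondence:
    r = np.corrcoef(test_scores, later_gpa)[0, 1]
    print(f"predictive validity: r = {r:.2f}")

    # The same correspondence put to work as an actual prediction:
    slope, intercept = np.polyfit(test_scores, later_gpa, deg=1)
    print(f"predicted GPA for a score of 1250: {slope * 1250 + intercept:.2f}")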

Content Validity

There are some concepts that we choose to study that have many component parts. For example, job performance for the Director of a park and recreation agency consists of many domains. If we developed an assessment that included only budgetary skills, for example, we would be missing the majority of what that job entails; that performance assessment instrument would not be content valid because it did not cover all of the areas of the job. If we are trying to design a measurement instrument for a concept that is comprised of multiple components, we must assess its content validity – that is, we must be certain that the instrument covers the entire domain of content and asks questions about all of the components that make up the concept. Content validity is assessed by inviting a team of experts in the subject matter of the concept we are researching and asking whether we have included all parts of it in the measuring instrument. A unanimous affirmative opinion from the experts would allow us to conclude that the measurement is high in content validity.

Construct Validity

Construct validation asks how well the test reflects the concept (or more abstract “construct”) we are trying to measure. Because of its abstract nature, this is a difficult type of validity to assess. At best we can gather evidence that would strengthen or weaken our confidence in the construct validity of the measure, but oftentimes it is difficult to find a perfect measure to which we can compare the one we are designing. To assess construct validity we need to find some objective, valid measure of what we are trying to measure, and administer both measures (the known valid one and ours, which is under development) to members of a large sample. If the correspondence between the measures is very high, then we conclude that our new measure is high in construct validity. Because it is frequently difficult to find another measurement instrument high in validity that we can use for comparison, other types of criterion measures can be used if they are valid, such as certain behaviors. For example, if we return to our desire to design a measure of job performance for the position of Director of a parks and recreation agency, we might use measures such as increases in revenue or participation over previous years, evaluations from department heads, input from the public, etc. The important and challenging issue is to be able to find that objective, valid criterion measure against which to compare our developing measure in an effort to assess its construct validity.
