
CHAPTER 3

METHODOLOGY

This chapter outlines the research methods used in testing the hypotheses presented in Chapter 2.  It describes the development of the instrument, an assessment of the instrument's reliability and validity, and the procedures used to collect and analyze the data.  The chapter has the following sections: 1) population and sample, 2) unit of analysis, 3) instrument design and development, 4) survey administration, and 5) data analysis strategy.

Population and Sample

Research in the area of ISS has drawn from broad samples including Network Administrators, Information Security Officers (Keller et al., 2005), Chief Information Officers, Chief Security Officers (Kotulic and Clark, 2004), and so on.  Because this research measures perceptions of threats, countermeasures, and ISS effectiveness, the sample was drawn from a similar population.  The Association of Information Technology Professionals (AITP) was used because of its size, the variety of IS professionals it includes, and its use in prior studies.  Formerly known as the Data Processing Management Association, the organization was used as a population by Nance and Straub (1988) and Straub and Nance (1990).  AITP members work at various levels of Information Technology (IT), from mainframe systems to LAN and WAN systems to microcomputers, and span various sectors including universities, banking, industry, retail, hospitals, the military, and local, state, and federal governments.  Membership is approximately 3,600.  With members holding such diverse positions, it was hoped that the heterogeneity of the population would increase the external validity of the study.  Additionally, because the mission of the AITP is to bring technology professionals together, the sample has a more technical understanding of threats, countermeasures, and the risks involved in protecting an information system than general managerial personnel or end users.

Unit of Analysis

The unit of analysis for this study was the organization.  The questionnaire asked individuals about their organization's perceived security posture with respect to threats, countermeasures, and ISS effectiveness within the context of their organizational size and industry affiliation.  The goal of using the organization as the unit of analysis was to provide findings useful to organizations in assessing their current state of ISS effectiveness, to provide a metric with which to compare organizations of similar characteristics, and to provide a benchmarking tool with which organizations could attempt to achieve predetermined levels of ISS effectiveness by adjusting their countermeasure arsenal.

Instrument Design and Development

Burns and Grove (2004) identified three sources of content validity: (1) literature, (2) representativeness of the relevant population, and (3) experts.  Ultimately, the determination of whether an instrument possesses content validity rests subjectively on the opinions of experts (Nunnally, 1978).  In addition to the ability of a questionnaire's content to measure the trait of interest, the effectiveness of such an instrument is also affected by factors such as the wording and ordering of questions.  Numerous techniques have been shown to improve an instrument's ability to accurately capture the intended data.  One such approach is the use of brief, concise questions (Armstrong and Overton, 1977), which reduces the likelihood of ambiguity being “read into” a question.  Along the same line of thinking, Mangione (1995) suggested the use of clearly understood terminology.  Schuman and Presser (1981) showed that the ordering of questions can also play a role in the effectiveness of a questionnaire.

The instrument for this study was developed in several stages.  First, in order to better understand the relevant countermeasures and threats, a grounded theory approach was used to further develop the countermeasures identified by Whitman (2004) and Loch et al. (1992) as well as the threats identified by Whitman (2004).  The goal was to identify current, relevant countermeasures and classify each into its respective dimension of GDT.  In the initial stage of the study, structured interviews probing with open-ended questions were conducted to gather data.  The open-ended questions were guided by the GDT framework: each dimension was first defined, and the interviewee was then asked about the types of countermeasures he or she had used or was familiar with that would be classified as that type of countermeasure.  After the open-ended questions had been answered, each interviewee was presented with a table listing each countermeasure identified by Whitman (2004).  The respondent was then asked to assess the list, to identify any countermeasures that he or she felt might be missing, and then to classify each countermeasure technique into the GDT framework.

The list of threats was developed similarly.  The initial set of threats used as a starting point was that identified by Whitman (2004), due to the recentness of his findings.  The classification scheme developed by Loch et al. (1992) in a similar study was used to categorize each threat; their categories included internal-external, intentional-unintentional, and human-nonhuman.  The interview asked respondents to identify the threats they face, to classify each threat using the Loch et al. (1992) scheme, and to indicate whether the initial set of threats should be modified to exclude some of the previously identified threats or to include threats not previously identified.
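To make the two classification schemes concrete, the short Python sketch below encodes a threat along the three binary dimensions of Loch et al. (1992) and tags a countermeasure with a GDT dimension.  It is illustrative only: the dimension labels shown (deterrence, prevention, detection, remedy) are a commonly used reading of GDT assumed here for exposition, and the example threats and countermeasures are hypothetical rather than drawn from the interviews.

```python
from dataclasses import dataclass

# Assumed GDT dimension labels, used here only for illustration.
GDT_DIMENSIONS = {"deterrence", "prevention", "detection", "remedy"}

@dataclass
class Threat:
    """A threat classified on the three Loch et al. (1992) dimensions."""
    name: str
    source: str  # "internal" or "external"
    intent: str  # "intentional" or "unintentional"
    agent: str   # "human" or "nonhuman"

@dataclass
class Countermeasure:
    """A countermeasure assigned to one GDT dimension."""
    name: str
    gdt_dimension: str

# Hypothetical classifications of the kind an interviewee might produce.
threats = [
    Threat("virus infection", source="external", intent="intentional", agent="nonhuman"),
    Threat("data entry error", source="internal", intent="unintentional", agent="human"),
]
countermeasures = [
    Countermeasure("acceptable-use policy", gdt_dimension="deterrence"),
    Countermeasure("firewall", gdt_dimension="prevention"),
]

# Every countermeasure must land in exactly one GDT dimension.
for c in countermeasures:
    assert c.gdt_dimension in GDT_DIMENSIONS
```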

A total of six interviews were conducted with individuals holding titles such as “Computer Systems Manager,” “Information Systems Technical Manager,” and “Network and Systems Manager.”  Total interview time was 337 minutes and 59 seconds.  Each interview was recorded using a digital recorder and included some written responses in order to obtain the classification information.  The transcriptions averaged over 7,200 words each, with a total of 43,696 words.  This equated to 96 pages of transcription, an average of 16 pages per interview.  Each interviewee described his role as more managerial than technical, though two specifically mentioned a balance between the two.  Further demographics of the interviewees appear in Table 4 below.

Table 4. Interviewee Demographics

Interviewee                                  1         2           3        4         5        6
Years with organization?                     9-16      17-24       25+      9-16      9-16     9-16
Years in current position?                   9-16      9-16        25+      9-16      9-16     9-16
Managerial versus technical in nature?       Mgr.      Mgr./Tech.  Mgr.     Mgr.      Mgr.     Mgr./Tech.
Years of experience in IS?                   20        16          32       19        9        30
Years of experience where security issues
were a main component of that experience?    12        6           32       3         9        25
Interview length                             1:10:02   53:36       49:16    1:08:30   54:50    41:45
Word count                                   9,859     7,062       5,877    9,372     6,751    4,775
Pages                                        20        16          13       20        16       11

Note: Mgr. = managerial; Mgr./Tech. = a balance of managerial and technical.

To analyze the data collected in this process, the digital recording of each interview was transcribed to a rich text file.  Each file was then imported into Max QDA, a Qualitative Data Analysis software package, which was used to code and interpret the data.  Max QDA has been successfully used for data analysis in the social sciences (Randall, 2007; Sharp, 2008) and is an accepted analysis tool.  Again using the GDT framework and the threat framework of Loch et al. (1992), text segments were coded for each interview.  This allowed for a comprehensive identification of the relevant threats and countermeasures faced by the group of respondents.
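While the coding itself was performed in Max QDA, the end product of this step is essentially a set of coded text segments whose frequencies indicate which threats and countermeasures are most salient.  A minimal Python sketch of that tallying idea follows; the segment codes and counts shown are hypothetical, not actual interview data.

```python
from collections import Counter

# Hypothetical coded segments in (interview_id, code) form, similar to
# what a QDA package can export.  Codes follow the GDT framework for
# countermeasures and the Loch et al. (1992) framework for threats.
coded_segments = [
    (1, "countermeasure/prevention"),
    (1, "threat/external-intentional-human"),
    (2, "countermeasure/deterrence"),
    (2, "countermeasure/prevention"),
    (3, "threat/internal-unintentional-human"),
]

# Tally how often each code appears across interviews; high frequencies
# suggest items salient enough to carry into the survey instrument.
code_frequencies = Counter(code for _, code in coded_segments)
for code, count in code_frequencies.most_common():
    print(f"{code}: {count} segment(s)")
```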

With the relevant list of threats and countermeasures determined, development of the survey instrument commenced.  This consisted of developing brief, concise questions, as per Armstrong and Overton (1977), that addressed the research questions outlined in Chapter 2.  Where terminology could have been misunderstood by or foreign to respondents, brief definitions were included as part of the item in order to clarify and reduce the potential for ambiguity (Mangione, 1995).  Due to the potential for Common Method Bias (CMB), questions measuring the dependent variable and independent variables were separated using other relevant questions not specific to the study (Podsakoff, 2003).  Additionally, to further address the potential for CMB, the instructions were intentionally vague about the goal of the study.

With the preliminary survey instrument developed, the next stage was to have two information systems security experts examine the survey for any ambiguous, misleading, or otherwise unclear terminology.  The first expert was an academic managing security at a large Southern university; he holds the Certified Information Systems Security Professional (CISSP) certification.  The second expert was a security architect for a large international airline.  Following the same format as Phelps (2005), feedback from both experts was sought using the criteria in Table 5.  Based on their feedback against these criteria, modifications were made to clarify items and make the directions more concise.

Table 5. Qualitative Pre-Test Instrument Review Criteria

 

Directions

  • Are the directions concise? If no, please explain.
  • Are the directions clear? If no, please explain.
  • Are the directions complete? If no, please explain.

Instrument Items

  • Are the items appropriate?  If no, please explain.
  • Are the items clear?  If no, please explain.
  • Would you revise any item(s)?  If yes, please explain.
  • Do you recommend deleting any item(s)?  If yes, please explain.
  • Do you recommend adding an item(s)?  If yes, please explain.
  • Other comments?

 

Finally, a pilot test was conducted to determine the approximate length of the survey in terms of time, as well as to further refine the instrument.  The pilot test was conducted using doctoral students, due to both the convenience of their availability and their expertise in survey instrument development.  Again following Phelps (2005), the pilot test included opportunities for comments on the clarity and content of the instrument.  The instrument was finalized after making the changes indicated by the responses to the questions in Table 6.

Table 6. Pilot Test Instrument Review Criteria

Directions: For the following items, please indicate Yes or No about whether you found them effective, clear, and easy to understand.  If you answer No for any item, please specify the reason.

Instructions?  Yes  No
If no, please specify the reason:

Test Question?  Yes  No
If no, please specify the reason:

Format?  Yes  No
If no, please specify the reason:

How long, in minutes, did it take you to complete this survey?

Other comments?

Survey Administration

An online survey was used to obtain the sample data.  Surveys have been successfully used to investigate numerous ISS-related questions (Hitchings, 1995; Nance and Straub, 1988; Kankanhalli et al., 2003).  The sensitive nature of security makes research into this area difficult at best, so several techniques were used to encourage participation.  First, it was explicitly stated in the instructions that participation was voluntary and that no identifying information would be gathered.  Additionally, upon completion of the survey, respondents were directed to a password-protected site which showed their responses in aggregate with those of others who had responded.  Also, an executive summary of the findings was provided to AITP for distribution to its membership at its discretion, as yet another incentive to participate (Kotulic and Clark, 2004).

Creswell (1994) detailed a three-step procedure for the administration of a questionnaire.  The first step is the initial mailing of the instrument along with a cover letter explaining the purpose of the study.  The second step is a postcard mailing two weeks after the initial mailing, thanking those who have already participated and encouraging those who have not to do so.  Finally, two weeks after the second mailing, another cover letter asking for participation is sent along with another copy of the instrument.

In this research, a similar approach was followed, but adapted to take advantage of the Internet for data collection and to conform to the requirements set forth by the AITP leadership.  Of particular note, the AITP leadership was concerned about spamming its membership.  As a result, instead of three separate email contacts with the population, email contact was limited to two, and a link to the survey was also provided in the AITP online monthly newsletter.  While this modified approach almost certainly impacted the response rate, it was necessary in order to gain access to the population.

Because the AITP membership represents a population intimately familiar with web technologies, administering the instrument via a web-based survey was appropriate.  A web-based instrument allowed for faster data collection, the ability to control input, and reduced expense compared to a paper-based instrument.  Using Creswell's approach as a guide, an initial email was sent to the 1,000 professional AITP members explaining the purpose of the study, stating that participation was voluntary and that no personally identifiable information was being gathered, and providing a link to the survey.  Approximately three weeks after the initial email, a follow-up email was sent thanking those who had already responded and encouraging those who had not to do so.  Approximately three weeks later, the monthly online AITP newsletter included a brief description of the survey and a link to it.  Data collection ceased approximately two weeks later, after additional responses stopped arriving.

Data Analysis Strategy

 

As with the instrument development, data analysis was conducted in two stages.  During the instrument development stage, a Qualitative Data Analysis (QDA) approach was used to develop items for the survey, and Max QDA was used to analyze the qualitative responses to several open-ended questions.  See Appendix 1 for the Interview Protocol Template used to guide the interview process.

The research model, as specified, required a structural technique to analyze the relationships.  Partial Least Squares (PLS) was used to analyze the data and was chosen for two fundamental reasons.  First, PLS's ability to handle small sample sizes makes it an appropriate choice given the characteristically small samples gathered in security-related surveys.  Second, PLS does not impose homogeneity or normality requirements on the data (Chin et al., 2003).  Nominal data, such as industry, and ordinal data, such as organizational size, are unlikely to meet the homogeneity and normality requirements of other techniques (Hair et al., 1998).  The same technique was used by Kankanhalli et al. (2003) for similar reasons.
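As a rough illustration of the PLS idea (not the PLS path-modeling procedure used in this study), the sketch below applies scikit-learn's PLSRegression to synthetic survey-style data: correlated indicators are projected onto a small number of latent components without distributional assumptions.  All names, sizes, and values here are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic survey-style data: 60 respondents (a deliberately small
# sample), six 7-point countermeasure indicators (X) and three
# ISS-effectiveness indicators (Y) that partly depend on X.
X = rng.integers(1, 8, size=(60, 6)).astype(float)
Y = X[:, :3] * 0.5 + rng.normal(scale=1.0, size=(60, 3))

# PLS extracts components that maximize covariance between X and Y;
# no normality or homogeneity assumptions are placed on the data.
pls = PLSRegression(n_components=2)
pls.fit(X, Y)

# Coefficient of determination of the fitted relationship, computable
# even with small n and collinear indicators.
print("R^2:", round(pls.score(X, Y), 3))
```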

Before moving on to the discussion of results and analyses, the unique problems posed by the proposed non-recursive relationship warrant discussion.  Such a relationship implies temporal precedence: a threat occurs, a countermeasure technique follows at a later time, and a change in threats follows later still.  Measuring such a process typically requires longitudinal data.  However, a longitudinal approach creates problems with respect to resource availability and cooperation among sample organizations, and it increases other threats to validity, including history effects.  Also, because there are no clear guidelines on how long longitudinal data should be gathered to sufficiently establish the proposed relationships, cross-sectional data was used.  Paswan et al. (1998) note that considerable precedent has been established for “testing processual hypotheses using cross-sectional data in the social sciences provided appropriate analytical tools are used” (p. 131).  Paswan et al. (1998) tested their non-recursive research model using cross-sectional data and Structural Equation Modeling (SEM).  This research followed a similar path.
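In structural-equation form, the non-recursive (reciprocal) relationship can be written as a pair of simultaneous equations.  The notation below is a generic sketch for exposition; the coefficients and exogenous terms are placeholders, not estimates or variables from this study:

```latex
\begin{aligned}
  \text{Threats} &= \beta_{1}\,\text{Countermeasures} + \gamma_{1}\,x_{1} + \zeta_{1}\\
  \text{Countermeasures} &= \beta_{2}\,\text{Threats} + \gamma_{2}\,x_{2} + \zeta_{2}
\end{aligned}
```

Here \(\zeta_{1}\) and \(\zeta_{2}\) are structural disturbances, and \(x_{1}\) and \(x_{2}\) stand for exogenous predictors unique to each equation.  Because each construct appears on both sides of the system, a model of this form is generally identified with cross-sectional data only when each equation contains exogenous variables excluded from the other, which is one concrete reading of the “appropriate analytical tools” caveat noted by Paswan et al. (1998).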
