THE UNIVERSITY OF MARYLAND

UNIVERSITY COLLEGE

GRADUATE SCHOOL



A STUDENT RESEARCH REPORT


for


CSMN 601-1111: Issues, Trends, and
Strategies for Computer Systems
Management


ESTABLISHING AND EVALUATING AN

EMPLOYEE DEVELOPMENT TRAINING

PROGRAM IN A SOFTWARE

DEVELOPMENT AND SUPPORT

ORGANIZATION


by

David R. Mapes


11/27/1995


Introduction

Training is a key factor in maintaining and enhancing a software development and support organization's (SDSO) competitiveness. In business, as in any facet of modern life, change is the only constant. This is especially so in the software industry, where new tools for, and approaches to, dealing with the complexities of software development and support appear almost daily. Additionally, the cost and uncertainty of hiring staff to apply these new languages, techniques, and tools provide a strong incentive to develop current staff through training rather than to acquire new capability by hiring it. Top SDSOs must therefore not only seek to identify useful new tools and techniques, but also maintain a well-organized and effective means of integrating them into the SDSO's way of doing business. This means instituting an effective and efficient training program. This paper deals with the initial establishment of a training program and its evaluation for the purpose of continual improvement. In the course of this investigation four additional questions arise:

        a)     Is there a pay-off for the organization in
               terms of competitive advantage from
               supporting an on-going training/education
               program, and can it be measured?

        b)     What types of training should be supported
               and what are the best ways to provide them
               (in-house, third party, trade/technical
               schools, colleges and universities)?

        c)     What are the costs in terms of technical/
               administrative staff time and cash outlay?

        d)     What can the organization do to reduce the
               chance that newly trained staff members will
               leave, taking their new/updated skills with
               them?



Answering these questions should help to define an outline for an affordable program of technical/managerial staff development that is scalable to organizations of varying size and type.

Analyzing The Current Situation: Defining Training

SDSOs range in size from single-person consulting ventures to corporations with many thousands of employees. Existing training efforts tend to be scaled to the size of the business they support, with large companies having dedicated human resource development (HRD) departments and smaller organizations having smaller, less formal training programs. Indeed, some smaller businesses may not even be aware that they have a training program. To develop a training program (or any other system), the first step is to identify what is already in place, and that requires some definition of exactly what a training program is.

In general, training can be defined as any instruction, formal or informal, provided to individuals to impart "skill, knowledge or attitude (SKA)" (Brinkerhoff, 1987, p. 10). The SKAs may be new or refreshed. The instruction may take the form of interpersonal communication, computer-assisted instruction, reading books and manuals, or other practice activities. Interpersonal communication might range from something as formal as a lecture in a classroom setting to something as informal as a brief exchange at the water cooler. Computer-assisted instruction can vary from elaborate instruction systems, like those on the Department of Health and Human Services' Parklawn Computer Center (PCC) mainframe or various compact-disc-driven PC systems, to the simple use (for practice and experimentation) of available development tools. Finally, written material can range from custom-purpose training manuals to a quick note of instruction in answer to a question or in response to a perceived need.

Given this broad definition of training, it becomes obvious that all software organizations, from the smallest and most static to the largest and most dynamic, have some form of training program. The effectiveness of these programs, however, is difficult to demonstrate. Unless considerable thought and effort are expended on identifying training needs, designing the program, and assessing results, the organization may not be realizing any benefit from the training effort. Nor will the program have any empirical data upon which to base and track improvement efforts and benefits.

Identifying Training Requirements: Needs Assessment

The purpose of any training program is to answer some need or exploit some opportunity within an organization and its people. Examples of needs include "reduced waste, improved morale, greater productivity" (Brinkerhoff, 1987, p. 17), and, ultimately, increased competitiveness. Opportunities might include such things as "a training grant, a resource newly available, or a new program" (Brinkerhoff, 1987, p. 17). Other opportunities might be that "a new retail outlet is to be opened, or a new product or service is about to be launched" (Robinson & Robinson, 1989, p. 15). Within the milieu of an SDSO, opportunities could include the integration of new software development tools, like compilers or CASE tools, or new development management paradigms, like the Software Engineering Institute's Capability Maturity Model (CMM) or Total Quality Management (TQM).

Each of these needs or opportunities may come to light in any of several ways, depending upon the size of the organization and the position, within that organization, of the person responsible for developing training programs. In a very small organization, the person who must identify and act on these needs and opportunities is likely to be the business owner or chief executive who, in addition to other duties having to do with running the business and doing development work, must keep an eye open for new needs within the organization and new sources of opportunity wherever they arise. While a manager in this position is likely to have an intimate knowledge of the organization's strengths and weaknesses, he or she is less likely to have the leisure to act upon that knowledge. Because of this limited attention from upper management, small firms are apt to have very limited and simple training programs, consisting largely of a shelf full of manuals and a knowledgeable staff member or two to ask questions of over lunch. Organizations of this size would benefit from a more proactive approach: applying what the owner/CEO already knows and making a more active and formal effort to identify and act upon business training needs and opportunities.

Within a larger concern, where one or more people have as their primary responsibility the ongoing development of staff capabilities, these needs and opportunities may be identified through a wider array of channels, and additional incentives exist for recognizing them. On the one hand, full-time HRD professionals are more likely to be skilled at gauging an organization's needs; on the other, they may be less attuned to the business activities and competitive environment of the organization. For the HRD professional, identifying training needs and opportunities is key both to enhancing the competitiveness of the organization and to continued employment. The channels through which an HRD professional may identify these needs and opportunities include requests from management for training programs, direct observation of ongoing work, interviews with staff at all levels, study of industry and market trends, and analysis of other organizations' training programs.

Whatever the size of an organization, or the position of training staff within it, the key is to identify and assess the needs and opportunities for training. To aid in focusing this process, Jack Phillips (1991, p. 64) poses a number of questions:

        -  Is there a performance problem?

        -  What is the gap between desired and actual
           performance?

        -  How important is the problem?

        -  Is there a lack of skill contributing to
           the problem?

        -  Do employee attitudes need to be improved?

        -  Is this program designed to meet some
           outside requirement or satisfy an external
           influence? If so, what is it/are they?

        -  Which employees will need the training?

        -  Are there alternative ways to satisfy the
           need?

        -  To what extent will other departments be
           involved?

        -  Are there natural barriers to accomplishing
           the program?

        -  Who supports this new program?


The answers to these questions will help to determine which needs and opportunities are addressed by training, how the training is administered, and whether it succeeds in imparting useful new SKAs. The last of the above questions is the most important: if a program does not have the support of management, it is likely to fail.

Developing the Training Program: A Structured Approach

Developing a training program is much like creating a computer software system or a new car. First, a need or opportunity is identified: "we need a new inventory control system," "if we build a better minivan for less money, we'll clean up," or "if we train our staff to be expert in this new integrated CASE tool, we can eat the competition's lunch." Next, the situation is analyzed to study the feasibility, verify the approach, and identify the requirements. Then, if the project is deemed feasible, the approach is found to be correct, and most of the requirements have been identified and described, the project can proceed to design and implementation.

Once a need or opportunity for improvement is established, an analysis that will help answer some of the questions listed in the previous section should be performed. This process breaks into three steps: organizational analysis, task analysis, and trainee analysis (Gordon, 1994, pp. 11-13). Organizational analysis looks at the target organization of a possible training program to ensure that instilling new or better SKAs in the personnel is the correct approach, rather than providing, say, better equipment or more window offices. Task analysis looks at the specific need or opportunity to identify what, exactly, is to be taught and whether, within the context of the organization, it is adequately defined. Trainee analysis seeks to describe the target population for a training program with respect to age, gender, numbers, professional experience, relevant level of knowledge and experience in the area to be covered, what the trainees themselves consider important, and what their attitude is toward training programs in the subject area. These analyses should confirm the need for the training program, identify and define many of its requirements, describe its audience, and provide some indication of its probable success.

With the initial analysis complete, a few questions can be answered. Should we proceed with developing a training program? How should the training be provided? How can the results of the training program be assessed? These are the first steps in the design process. The first question is a simple go/no-go decision. If the perceived benefit to the organization from the new/improved SKAs of its members is outweighed by the cost of training them, then why proceed? If, however, the benefits are sufficiently greater than the required expenditure of time, effort, and money, implementing the training program can become an organizational priority. Alternatively, a training program may be mandated by the nature of the business or by some external entity (e.g., government), in which case cost/benefit calculations become irrelevant because the training must be undertaken regardless of estimated costs and benefits (Phillips, 1991, p. 70). A rough numerical sketch of the go/no-go arithmetic follows.
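
As a minimal illustration, the following Python sketch compares an estimated training cost against an estimated benefit. All of the figures (trainee count, rates, and the benefit estimate) are hypothetical placeholders, not data drawn from the sources cited in this paper.

        # Hypothetical go/no-go check for a proposed training program.
        trainees           = 12
        cost_per_trainee   = 1500.00   # tuition and materials per person
        hours_off_the_job  = 40        # work hours lost per trainee
        loaded_hourly_rate = 45.00     # fully loaded cost of one staff hour
        expected_benefit   = 95000.00  # estimated annual value of new SKAs

        total_cost = trainees * (cost_per_trainee
                                 + hours_off_the_job * loaded_hourly_rate)

        print(f"Estimated cost:    ${total_cost:,.2f}")
        print(f"Estimated benefit: ${expected_benefit:,.2f}")
        print("Decision:", "go" if expected_benefit > total_cost else "no-go")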

Deciding how to provide the training depends upon a number of factors: the type of SKA to be imparted, the audience to be trained, the resources (time, money, equipment, and personnel) available to the organization, the cultural biases of the organization for and against providing training, and various outside influences. The training may be provided informally, through books and other reading material, on-line help/instruction systems, or ready access to expert advice and mentoring; or more formally, in a classroom/"instructor-led" (Kimmerling, 1993, p. 5) setting provided by the company internally, through a manufacturer or "third party" training firm, from a trade/technical school, or from a college/university. One way to make these trade-offs explicit is a simple weighted-criteria comparison, sketched below.
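
The following Python sketch weighs delivery options against selection criteria. The criteria, weights, options, and scores are invented for illustration only; in practice the organization would supply its own, based on the analyses described above.

        # Illustrative weighted-criteria comparison of delivery options.
        criteria = {"fit to SKA": 0.35, "cost": 0.25,
                    "audience reach": 0.20, "schedule impact": 0.20}

        # Each option is scored 1 (poor) to 5 (excellent) per criterion.
        options = {
            "in-house":       {"fit to SKA": 3, "cost": 5,
                               "audience reach": 4, "schedule impact": 4},
            "third party":    {"fit to SKA": 5, "cost": 2,
                               "audience reach": 3, "schedule impact": 3},
            "college course": {"fit to SKA": 4, "cost": 3,
                               "audience reach": 2, "schedule impact": 2},
        }

        for name, scores in options.items():
            total = sum(w * scores[c] for c, w in criteria.items())
            print(f"{name:15s} weighted score: {total:.2f}")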

If the SKA to be imparted is in a specific area for which the organization has existing, available expertise, then producing the program internally may make sense. If, however, the SKA is totally new to the organization (a new CASE tool, perhaps), or beyond the scope of most internal training departments (instilling the knowledge and skills required for analysts to become program managers, for example), then finding an outside provider of training or education is indicated. Assuming that resources are available, and that the scale and content of the program fit the organization's training ability, the training program design and implementation process can continue. Note, however, that even if training is to be outsourced, an abbreviated design and implementation process needs to take place so that the external assets used may be evaluated and their future use/selection improved.

Finally, evaluation of training program results should be geared to measure the impact of the training program on those areas of the business for which the instigating need or opportunity was identified. This evaluation should be designed to focus on the specific objectives to be met by the training effort. Measures should be defined that assess both the overall objectives for the program and specific milestones along the way. Keeping in mind that the ultimate goal of all training programs is to improve an organization's competitiveness, the evaluation should be able to show a positive tie-in between the effort and the business factors that top management identifies as important to competitiveness.

Whether the training program is to be developed internally or purchased from an outside supplier, the program design process should continue with the statement of clear, testable objectives that can be related to the identified need or opportunity. Some method for obtaining a baseline assessment of these objectives should be established. In some cases this might take the form of productivity or profitability measurements; in others, a simple pretest of the material to be taught. The key is to get the baseline so that the impact of the training program can be gauged. Once objectives are established and a method of baselining determined, the process of selecting a vendor whose program best meets the needs expressed in the objectives, or of designing a custom training program, can continue. The process of selecting an outside vendor and/or designing a training program is beyond the scope of this paper. Suffice it to say that the primary concern in selecting a vendor is that the program do a quality job of meeting the training objectives within the resource limitations of the organization. In the case of developing the training program in-house, the process entails identifying the skills, information, and/or attitudes required to meet the established objectives; arranging the material and/or practice in a logical, understandable order at an appropriate level of detail; and creating presentation materials, lesson plans, and exercises to teach the SKAs. Finally, it is important that the training program, even a one-time effort, be evaluated to see that it has performed as intended.

Evaluation: Assessing and Improving Training

Given that we are to undertake a training program, the profit motive in business, the push toward efficiency in government, and the desire on the part of the trainer that the program be effective all mandate the need for evaluation. The purpose of evaluation, in this context, is to develop a measure of the quality of the training program with respect to training objective accomplishment, cost-efficiency, and program improvement/update. To meet these evaluation goals, four criterion areas should be investigated: trainee reaction; learning of principles, facts, techniques, and attitudes; trainee behavior relevant to job performance; and results of the training program related to organizational objectives (Gordon, 1994, pp. 375-377).

Trainee reaction is primarily a measure of whether participants liked the program (Phillips, 1991, p. 47). These data can be key for HRD because how much the trainees liked the program may influence the degree to which they accept the material, and whether it will continue to be offered. Trainees may also provide suggestions on how the program can be improved.

Was there a successful transfer of knowledge? Did trainees learn the principles, facts, techniques, and attitudes that comprised the material and objectives of the program? Measuring this is key to understanding whether the program is providing the needed information in the right format for learning to occur. If not, trainees will never get to apply the SKAs on the job (Robinson & Robinson, 1989, p. 187).

The observation of behavior relevant to job performance is key to assessing whether SKAs taught in the training program are being successfully transferred to the work environment. Essentially, can the trainees be observed to use new skills and knowledge acquired from training, or can their work products and performance be shown to have been impacted by the training (Gordon, 1994, p. 376)?

Does the training program result in improvement in, or progress toward, organizational objectives? Over time, do measures of productivity, morale, and bottom-line performance show a positive correlation with the level and type of training efforts undertaken? To measure these criteria there must be some baseline against which to compare post-training values over time (Phillips, 1991, p. 44).

How, when, and where should these criteria be measured? Trainee reaction and initial learning should be measured at the site of instruction with tests and evaluations. Later, after a period of time suitable to the frequency with which the SKA is used, additional checks of reaction and learning should (ideally) be carried out on the job (Gordon, 1994, pp. 377-378).

Post-training checks on job behavior and business results take place at the work site. Ideally these measures should be of an ongoing nature, both preceding the training event and following it. Observation and interviews should be used prior to training to help determine how SKAs may best be added to or enhanced, and after training to see whether the program has had an impact on behavior (Phillips, 1991, pp. 106-107). Similarly, business performance and process metrics should be collected and analyzed over time to identify how, and whether, the organization is benefiting from training programs (Gordon, 1994, p. 377).

Two key points to keep in mind when evaluating training program results are the validity of the experiment design and the degree to which the individuals taking part in the program are likely to benefit. In the case of experiment design, training programs should, ideally, have experimental and control groups for comparison, so that the effects of on-the-job learning can be separated from the impact of training (Gordon, 1994, pp. 373-375). Three experimental models are diagrammed below; each uses an experimental group and a control group selected at random from among potential trainees. The models A)-C) show progressively greater levels of criterion evaluation over time: A) provides only a single post-event check of the criteria; B) looks at how the criteria vary over some period following the training; and C) adds the gathering of pre-training data about the criteria. A small numerical sketch of model C follows the diagram.

        Experiment design, best models from behavioral science:

               (R)     X      T
        A)     ----------------
               (R)            T


               (R)     X      T1      T2... Tn
        B)     -------------------------------
               (R)            T1      T2... Tn


               (R)     T1     X       T2     T3... Tn
        C)     --------------------------------------
               (R)     T1             T2     T3... Tn


               (R)            = random personnel assignment
               X              = training event
               T1, T2,... Tn  = criterion measures taken

               (Gordon, 1994, pp. 373-374)
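
To make model C concrete, the following Python sketch applies it to invented criterion scores (they correspond to nothing in the sources cited here): an experimental group and a control group are measured before (T1) and after (T2) the training event X, and the control group's gain, which reflects on-the-job learning alone, is subtracted out.

        # Minimal sketch of evaluation model C with invented scores.
        experimental_t1 = [62, 58, 65, 60, 63]   # before training event X
        experimental_t2 = [78, 74, 81, 75, 79]   # after training
        control_t1      = [61, 59, 64, 62, 60]   # no training
        control_t2      = [64, 60, 66, 63, 62]   # on-the-job learning only

        def mean(xs):
            return sum(xs) / len(xs)

        trained_gain = mean(experimental_t2) - mean(experimental_t1)
        control_gain = mean(control_t2) - mean(control_t1)

        # Subtracting the control group's gain removes the portion of the
        # improvement that would have occurred without the program.
        print(f"Gain attributable to training: "
              f"{trained_gain - control_gain:.1f} points")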


The other factor that may confound results is the case where individuals within the study group have reached what W. Edwards Deming termed a "state of statistical control" (1986, pp. 250-251). Deming used two golfers as an example: one has little experience, and thus few ingrained bad habits, but lacks any consistency in his scores; the other has been playing for years and, while he is not satisfied with his performance, his scores tend to be grouped fairly tightly around his average. In Deming's terms, the experienced golfer has reached a state of statistical control. After training, the beginner shows improvement as he incorporates what he has been taught into his game, while the experienced golfer's scores actually show greater variance as a result of making changes to his technique. Over time the experienced golfer may eventually benefit from lessons but, in general, Deming thought not.

        Importance of selecting the right people to train:

        Deming on golf lessons:

              Beginning Golfer       Experienced Golfer
              BT       AT            BT        AT

        Scores:
                * 
              *
        ULC   -----*------------     -------------*---------
                 *   *                              *
                       *               *       *        *
        Mean  ------------*---*-     ----*---*--------*---*-
                            *              *     * 
                *       * 
        LLC   ------*-----------     -----------------------


                       BT   =  Before Training
                       AT   =  After Training
                       ULC  =  Upper Control Limit
                       LLC  =  Lower Control Limit

                   (Deming, 1986, pp. 252, 254)



Deming's view is that training will have little value for workers whose behavior, within the area addressed by the training, is already consistent. This puts a premium on using an experimental model like C) above, which provides for pre-training checks, and on checking whether candidate trainees' performance is already in statistical control, as sketched below.
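
As a minimal sketch of such a check, the following Python fragment tests whether a set of invented pre-training scores falls within control limits. For simplicity it places the limits at three sample standard deviations around the mean, rather than using the moving-range method an individuals control chart would normally employ.

        # Is a worker's pre-training performance in statistical control?
        scores = [82, 85, 80, 84, 83, 81, 86, 82]   # repeated measures

        m = sum(scores) / len(scores)
        variance = sum((s - m) ** 2 for s in scores) / (len(scores) - 1)
        sigma = variance ** 0.5

        ucl = m + 3 * sigma   # upper control limit
        lcl = m - 3 * sigma   # lower control limit

        print(f"mean={m:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
        # All points inside the limits suggests a state of statistical
        # control; per Deming, more of the same training is unlikely to
        # shift this worker's performance.
        print("In statistical control:",
              all(lcl <= s <= ucl for s in scores))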

Conclusions and Recommendations: Questions Answered

In the introduction of this paper four questions were posed:

        a)     Is there a pay-off for the organization in
               terms of competitive advantage from
               supporting an on-going training/education
               program, and can it be measured?

        b)     What types of training should be supported
               and what are the best ways to provide them
               (in-house, third party, trade/technical
               schools, colleges and universities)?

        c)     What are the costs in terms of technical/
               administrative staff time and cash outlay?

        d)     What can the organization do to reduce the
               chance that newly trained staff members will
               leave, taking their new/updated skills with
               them?


Providing generalizable answers to these questions, at least in part, is now possible.

Does training pay off in terms of competitive advantage, and can it be measured? Yes: a well-planned and well-executed training and education program, aimed at specific business needs and opportunities, can lead to an advantage in the marketplace as it expands overall corporate capability. And yes, with good evaluation design, it can be measured.

What types of training should be supported? The answer to this depends upon the priorities set for the needs and opportunities identified as potential areas for training. Each method of training is best suited to a different set of circumstances. Material to cover, resources available, training audience, and numerous other factors need to be considered when selecting from among these options.

What are the costs? While a detailed investigation of training costs has proven to be beyond the scope of this paper, among organizations reporting to the American Society for Training and Development's Benchmarking Forum, Kimmerling reports a "typical value of 3.2 percent" (1993, p. 8) of payroll spent on training; for a firm with a $10 million payroll, for example, that would amount to roughly $320,000 per year. One should note, however, that these are companies that place enough of an emphasis on training and educating their staff to have made a two-year, $15,000 (Kimmerling, 1993, p. 1) commitment to the Forum.

How does one retain staff once they have been trained? This is another question beyond the scope of this paper, but it seems that an organization need only provide fair and competitive compensation, appreciation of hard work, interesting work tasks, and a good work environment to keep most employees happy.

To initiate a successful training program it is recommended that HRD staff do more than just provide training on demand. Training programs must be focused on the real training needs and opportunities of the organization. Emphasis must be placed on analysis of the program's objectives, audience, resources, and evaluation so that training efforts can be mapped to business results.



References:

Brinkerhoff, R. O. (1987).  Achieving results from
        training: How to evaluate human resource
        development to strengthen programs and increase
        impact.  San Francisco, CA: Jossey-Bass.

Deming, W. E. (1986).  Out of the crisis.  Cambridge,
        MA: Massachusetts Institute of Technology,
        Center for Advanced Engineering Study.

Gordon, S. E. (1994).  Systematic training program
        design: Maximizing effectiveness and minimizing
        liability.  Englewood Cliffs, NJ: PTR Prentice Hall.

Kimmerling, G. (1993, September).  Gathering best
        practices.  Training & Development, 28-?
        (8 pages; 10 in on-line text format).

Phillips, J. J. (1991).  Handbook of training
        evaluation and measurement methods.
        Houston, TX: Gulf Publishing Company.

Robinson, D. G., & Robinson, J. C. (1989).  Training
        for impact.  San Francisco, CA: Jossey-Bass.