New tricks for an old dog: visualizing job analysis results

Citation metadata

Date: Spring 2009
From: Public Personnel Management(Vol. 38, Issue 1)
Publisher: Sage Publications, Inc.
Document Type: Report
Length: 3,634 words
Lexile Measure: 1380L



For the past 30 years, job analysis researchers and practitioners have relied on the same job analysis methods for their work. While these methods have proven to be effective for describing jobs, they do not take advantage of advances in technology and data analysis. The present study applies visualization and network analysis techniques to job analysis data. The result is a simple graphical presentation that facilitates effective and efficient communication of results to individuals who are not job analysis experts, such as organization managers.

Full Text: 

For the past 30 years researchers and practitioners have relied on the same job analysis methods for their work. Most often, the projects begin with interviews of subject matter expert (SME) groups and conclude with task- or competency-based surveys. Regularly, the resulting product is a report consisting of 20 pages of text and 200 pages of tables. With the exception of two noteworthy advances, little has changed in job analysis since the 1970s.

The first advance has been competency-based job analysis. (1) There is some controversy, however, over how competencies and the traditionally used knowledge, skills, abilities and other characteristics differ. (2) The second important advance has been conducting Web-based surveys.

Even though there has been little change in job analysis, the tried-and-true methods have proven effective for characterizing jobs. Some would say, "If it ain't broke, don't fix it." However, advances in data analysis and visualization techniques offer opportunities to incorporate new tools into job analysis and to build upon the success of past practices. Data analysis techniques that incorporate visualization offer an opportunity to better communicate job analysis results, thus making study results more influential (and probably improving the efficiency with which study results are communicated). It is time for job analysts to look outside their insular world, adopt technological advances, and embrace information visualization and knowledge visualization techniques.

Information visualization and knowledge visualization are young interdisciplinary fields that draw heavily from cognitive science, visual perception, and computer science. Information visualization is the representation of selected features or elements of abstract and complex data. Whereas information visualization requires the use of computer-supported tools to analyze large amounts of data, knowledge visualization involves the transfer of knowledge among persons. (3) Both methods allow researchers to present data or information in nontraditional forms, using, for example, 2-D or 3-D color graphics or animation to show the structure of information. Information visualization and knowledge visualization also allow data users to navigate through the collected data and modify its presentation to explore, discover, and learn. Although the disciplines are in their relative infancy, information visualization and knowledge visualization each offer tools and methodologies that may be well suited to job analysis research.

Network analysis is another tool that can be useful for job analysis. This is a burgeoning field of study, and its methods have been applied to such diverse topics as the analysis of the U.S. power grid, the relationships among movie actors, and neurobiology. (4) Network analysis is important because job analysts often want to examine the relationships among a number of jobs (or a network of jobs) and network techniques emphasize the relational aspects of data.

Network analysis, when used in conjunction with visualization techniques, gives a job analyst a visually efficient way to present complex data to end users. However, behind the simplicity lie highly sophisticated tools that facilitate the elegant presentation of data. Thus, the complexity of job analysis data is made understandable to the end users.

"A picture is worth a thousand words" may be a cliché, but it is absolutely true. The human eye and mind are particularly well suited to interpreting images, forms, and patterns. Simply put, presenting data visually allows viewers to grasp the interrelationships of the data points without performing or understanding complex mathematics. (5) In other words, it allows data users to sort through and understand large amounts of data quickly. While the physical and engineering sciences have dealt with increasing data complexity by using visualization techniques, the behavioral sciences have been slower to adopt such tools. (6) In this article, we describe how human resources and training data collected as part of a job analysis project were presented using information visualization and knowledge visualization techniques.


Job Analysis Data

The data described in this article were collected as part of an agencywide job analysis for a large federal government organization. To present a concise and understandable visualization example, we focus here on human resources (HR) and training and development (TD) jobs.

About a year before the job analysis, the agency's HR and TD departments, which had been two separate organizational units, merged into a single business unit named Human Development (HD). Our analysis focuses on the 10 jobs from the formerly separate HR and TD units that were placed under one leadership structure.

To begin the job analysis, a group of SMEs identified skill and knowledge requirements for each job, using prior data and expert opinion. The requirements served as the basis for a survey that was completed by a representative sample of job incumbents. Only knowledge and skills that were supported by the survey results were retained as components of the official knowledge and skill sets for each job.

Next, the similarity of the 10 jobs was determined. To do this, we used each job's skill and knowledge sets and performed a Jaccard analysis. (7) A Jaccard analysis determines the degree of similarity between two jobs, using the formula SJ = a/(a + b + c), where SJ = Jaccard similarity coefficient, a = number of elements shared by both jobs, b = number of elements unique to the first job, and c = number of elements unique to the second job. A Jaccard analysis, performed with binary variables, excludes joint absences from both the numerator and the denominator and weights matches and nonmatches equally.
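The Jaccard coefficient described above can be sketched in a few lines of code. The skill sets below are hypothetical, invented only to illustrate the calculation; note that elements absent from both jobs (joint absences) never enter the formula:

```python
def jaccard_similarity(set_a, set_b):
    """Jaccard similarity: shared elements over all distinct elements.

    Joint absences are excluded automatically, because only elements
    present in at least one of the two sets enter the calculation.
    """
    shared = len(set_a & set_b)    # a: elements required by both jobs
    unique_a = len(set_a - set_b)  # b: elements unique to the first job
    unique_b = len(set_b - set_a)  # c: elements unique to the second job
    return shared / (shared + unique_a + unique_b)

# Hypothetical skill sets for two jobs
hr_specialist = {"staffing", "classification", "benefits", "consulting"}
trainer = {"instruction", "curriculum design", "consulting"}

print(round(jaccard_similarity(hr_specialist, trainer), 2))  # prints 0.17
```

Here only "consulting" is shared, so the coefficient is 1 shared element divided by 6 distinct elements overall.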

The similarity scores among jobs from the two groups ranged from 0.00 to 1.00, with smaller values indicating that the compared jobs have less similar skill and knowledge requirements. The result of the analysis was a 10 × 10 symmetric similarity matrix. Because the lower half of the matrix mirrors the upper half, only the 45 unique pairwise similarities in one half of the matrix were needed, and because each job is perfectly similar to itself, the diagonal values of 1.00 could be ignored.
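The full set of unique pairwise similarities can be generated directly from the skill sets; for n jobs there are n(n - 1)/2 unordered pairs, which for 10 jobs gives the 45 values mentioned above. The job names and skills below are hypothetical placeholders:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two nonempty sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

# Hypothetical job -> skill-set mapping; the actual study used 10 jobs,
# yielding 10 * 9 / 2 = 45 unique pairwise similarities.
jobs = {
    "HD Consultant": {"consulting", "staffing", "instruction"},
    "HR Specialist": {"consulting", "staffing", "benefits"},
    "Trainer":       {"instruction", "curriculum design"},
}

# One similarity per unordered pair: the upper half of the matrix.
similarities = {
    (a, b): jaccard(jobs[a], jobs[b])
    for a, b in combinations(sorted(jobs), 2)
}
print(len(similarities))  # prints 3 (three jobs -> three unique pairs)
```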

We used two additional pieces of information to produce our visualizations (i.e., graphical visualizations). First, we obtained information on the number of incumbents in each job. Second, we determined each job's main organizational designation within the HD organization. Technically, a job could be located in any HD organization, but the jobs tended to be associated with specific organization units.

Visualization of Job Analysis Results

The similarity data, incumbent data, and organizational designation data were imported into the network analysis and graphing program Pajek. (8) The program was designed to allow researchers to analyze large data sets (up to a million vertices). Pajek has the added advantages of including sophisticated network graphing tools and powerful mathematical tools for performing statistical analyses. Pajek, along with information about the program, can be downloaded online.

Once HR and TD job analysis data were imported into Pajek, we could explore the data relationships visually. Even though we used an automatic graphing algorithm, graphing a network is often an iterative process that requires several passes until a meaningful and visually appealing representation emerges. (9)

The first graph we generated showed all of the similarity data, meaning that 45 lines, or links, connected the 10 jobs. The result was a dense graph in which clutter hid the important links, and the underlying structural relationships of the jobs were not readily apparent. Therefore, we had to systematically reduce the number of links in the graph.

Link Reduction

There are two basic ways to prune the number of links in a graph and make a graph more understandable--the threshold approach and topology-based approaches. (10) The threshold approach is the simplest. Using this approach, all links that fail to reach a specific level of similarity are removed from the graph. Thus, only the strongest links remain in the graph, highlighting the most important relationships.

We wanted to have the smallest number of links that still resulted in a meaningful graph showing the interrelationships of all HR and TD jobs. Given that all the jobs we were analyzing were within a single organization, we felt it was important to have a completely connected graph. Our goal was that an agency manager or leader who looked at our graph could identify the similarities between an HR job and a TD job by tracing the lines on the graph; no job or group of jobs would stand alone. To produce such a "user-friendly" graph, we began by removing the weakest links in the data matrix. After several passes, we found that 25 links resulted in a completely connected graph. Any fewer links, and the graph had stand-alone jobs. Any more links added clutter, obfuscating the most important job similarities.
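The iterative threshold search described above can be automated: add links from strongest to weakest and stop as soon as the graph becomes fully connected. This is a minimal sketch, not the authors' actual procedure; it uses a small union-find structure to test connectivity:

```python
def connected(n, links):
    """Union-find check that n nodes joined by links form one component."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in links:
        parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

def prune_by_threshold(n, weighted_links):
    """Keep the strongest links, lowering the similarity cutoff just
    until the graph is fully connected (no stand-alone jobs)."""
    ranked = sorted(weighted_links, key=lambda t: t[2], reverse=True)
    kept = []
    for i, j, w in ranked:
        kept.append((i, j))
        if connected(n, kept):
            break
    return kept

# Hypothetical similarities among four jobs: (job_i, job_j, similarity)
links = [(0, 1, 0.9), (1, 2, 0.8), (2, 3, 0.7), (0, 3, 0.2), (0, 2, 0.1)]
print(prune_by_threshold(4, links))  # prints [(0, 1), (1, 2), (2, 3)]
```

The two weakest links are dropped because the three strongest already connect every job, mirroring the passes that led the authors to settle on 25 links for their 10 jobs.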

In comparison to threshold approaches, topological approaches to culling network links rely on the identification of deeper intrinsic properties of data. The topological approach we used was Pathfinder network scaling, which was originally developed by cognitive psychologists. Specifically, we used similarity data (i.e., the Jaccard similarity scores) to identify the most efficient connections between HR and TD jobs. (11)

The Pathfinder algorithm is fairly complex, requiring the user to specify two parameters, q and r. When q equals the number of nodes minus 1 and r is set to infinity, the result is a completely connected graph with a small number of links. For our graph we used q = 9 and r = ∞.
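For the special case used here, PFNET(q = n - 1, r = ∞), the algorithm has a compact form: a link survives only if no alternative path between its endpoints has a smaller maximum single-leg distance (the "minimax" criterion). The sketch below is this special case only, not the general algorithm, and it assumes distances are computed as 1 minus the Jaccard similarity:

```python
def pathfinder_inf(dist):
    """PFNET(q = n-1, r = infinity): keep link (i, j) only if no other
    path has a smaller maximum-leg distance (minimax criterion).

    `dist` is a symmetric n x n distance matrix with a zero diagonal,
    e.g. dist[i][j] = 1 - jaccard_similarity(job_i, job_j).
    """
    n = len(dist)
    mm = [row[:] for row in dist]  # minimax distances, refined in place
    # Floyd-Warshall variant: path "length" is its longest single leg.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                alt = max(mm[i][k], mm[k][j])
                if alt < mm[i][j]:
                    mm[i][j] = alt
    # A direct link survives only if it matches the minimax distance.
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i][j] <= mm[i][j]]

# Hypothetical three-job example: similarities 0.8, 0.7, 0.3
# give distances 0.2, 0.3, 0.7.
dist = [[0.0, 0.2, 0.7],
        [0.2, 0.0, 0.3],
        [0.7, 0.3, 0.0]]
print(pathfinder_inf(dist))  # prints [(0, 1), (1, 2)]
```

The weak direct link between jobs 0 and 2 (distance 0.7) is pruned because the two-leg path through job 1 never exceeds 0.3.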


To determine the placement of the HR and TD jobs on our graph, we used the Fruchterman-Reingold approach because it tends to separate parts of a data network better than other approaches. (12)

Automated graphing procedures depend less on the preconceived notions of the researcher than do manual graphing procedures. Because the eye and mind are so good at recognizing patterns, researchers must be careful not to create relationships where there are none. Since the purpose of displaying data point relationships visually is to facilitate the exploration and understanding of the structure of the data, it is all too easy for a researcher to inadvertently construct a graph that demonstrates exactly what he or she had previously believed.

To avoid this mistake, we used a spring algorithm to determine the placement of jobs on our graph. A spring algorithm minimizes the variation in line length by pulling and pushing the job vertices until they are in a state of equilibrium--just as springs would do. Imagine that the lines between jobs in Figure 1 and Figure 2 are springs that exert attractive and repulsive forces. The force between any two jobs is represented by the similarity line between the jobs.
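A bare-bones version of such a spring (force-directed) layout, in the spirit of Fruchterman-Reingold, can be sketched as follows. This is an illustrative toy, not the layout engine Pajek uses: every pair of nodes repels, linked nodes attract, and a shrinking "temperature" caps each move so the layout settles into equilibrium:

```python
import math
import random

def spring_layout(n, links, iterations=200, width=1.0):
    """Minimal force-directed layout sketch (Fruchterman-Reingold style)."""
    k = width / math.sqrt(n)  # ideal edge length
    random.seed(0)            # fixed seed for a reproducible layout
    pos = [(random.random(), random.random()) for _ in range(n)]
    for step in range(iterations):
        disp = [[0.0, 0.0] for _ in range(n)]
        # Repulsion between every pair of nodes.
        for i in range(n):
            for j in range(i + 1, n):
                dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
                d = max(math.hypot(dx, dy), 1e-6)
                f = k * k / d
                disp[i][0] += dx / d * f
                disp[i][1] += dy / d * f
                disp[j][0] -= dx / d * f
                disp[j][1] -= dy / d * f
        # Attraction along links only.
        for i, j in links:
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            d = max(math.hypot(dx, dy), 1e-6)
            f = d * d / k
            disp[i][0] -= dx / d * f
            disp[i][1] -= dy / d * f
            disp[j][0] += dx / d * f
            disp[j][1] += dy / d * f
        # Move each node, capped by a cooling temperature t.
        t = width * (1.0 - step / iterations) * 0.1
        for i in range(n):
            dx, dy = disp[i]
            d = max(math.hypot(dx, dy), 1e-6)
            pos[i] = (pos[i][0] + dx / d * min(d, t),
                      pos[i][1] + dy / d * min(d, t))
    return pos

# Three jobs linked in a chain settle into roughly a line.
positions = spring_layout(3, [(0, 1), (1, 2)])
```

Because the forces, not the analyst, determine where each job lands, the final picture reflects the structure of the similarity data rather than the researcher's expectations.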

Visualization Interpretation

Figure 1 and Figure 2 show the final visualizations of the similarities among the HR and TD jobs. Figure 1 is the result of our use of the threshold link reduction method. Figure 2 is the result of our use of Pathfinder analysis for link reduction. Each figure was generated using the same spring graphing algorithm.

Each circle represents a job. A circle's size was determined by the number of job incumbents, and its shade was determined by the organizational component under which it fell. The thickness of the line connecting two jobs was determined by the skill and knowledge similarities. A thicker line indicates greater similarities. The absence of a line between two circles indicates that there was little overlap between the skill and knowledge requirements for the two jobs.

Traditional presentations of the data shown in Figure 1 and Figure 2 would require a user to sort through several tables. Typically, there would be a table showing the relative size of each job, another table showing the organizational designation, and a third table showing similarity data. The similarity data would typically be presented as a lower-half matrix. With 10 jobs, the user would need to view and understand the interrelationships of 45 cells of similarity data simultaneously. Clearly, the simple pictures of Figure 1 and Figure 2 present a wealth of easily interpreted information to the end user.

The figures highlight important information that could easily be missed in traditional data tables. First, the figures show that there seem to be two main groups of jobs--training and human resources. On the left sides of both graphs, Academic Administrator and Training and Education Instructor are clearly separated from the HR jobs that are clustered on the right sides. This is not surprising, given the agency's recent merger of its HR and TD units. This suggests either that integration has not taken place or that the jobs are truly different and require very different skills and knowledge.

Another important point illustrated by the graphs is that the job HD Consultant seems to be the bread-and-butter position in the organization. It is the largest job in terms of incumbents, and it has the most lines attached to it. In network terms, this means that HD Consultant has high centrality. If desired, we could have fixed the HD Consultant job in the center of the graph and allowed the graphing algorithm to balance the rest of the jobs around it. This may have been a good visualization option if the centrality of the job was the cornerstone of the organization's strategy.

To an individual HD Consultant, high centrality means that he or she has a lot of options for moving into other HD jobs. The job's centrality also suggests that HD Consultant may be a generalist role and that incumbents need to have knowledge of all surrounding jobs. Knowing that HD Consultant is a central job within the organization is important for the development of career paths and for developmental rotations. For instance, the figures show that it is easier for an HD Consultant to become a Strategic Workforce Planner than for a Training and Education Instructor to become a Strategic Workforce Planner, because the skills and knowledge requirements for HD Consultant are more similar to those required for Strategic Workforce Planner. This does not mean, however, that a trainer could not be a good planner. It just means that the skills and knowledge requirements for those jobs are different. Thus, the graphs can also be powerful tools for restructuring, suggesting what jobs people can easily be moved into and how jobs can be grouped organizationally.

Figure 1 and Figure 2 also show that jobs in the same organizational units tend to have the same skills and knowledge requirements, illustrated by the fact that same-shaded jobs tend to be close to each other. This clustering could be good or bad depending on an organization's goals for human resource management. Some organizations set up their operating units as cross-disciplinary teams. When this is the case, jobs within the same unit can be substantially different and will not necessarily be close to each other on a job analysis graph. Another type of organization may require workers to act on a product and then pass the product on to other workers in a different unit. The two individuals may not need to know or understand each other's contribution. In this situation, jobs coded with the same shade will be clustered together.

Our visualizations are just a couple of simple examples of how the relationships between jobs can be shown. Representations of other important job information could be included, and interactive components could be added to give the end user control over what information to display. For example, with a little bit of programming, a graph could allow the user to click on two jobs to show the shortest path between them. Alternatively, a graph could be augmented with jobs from other organizational units. Interactivity could allow a user to display only jobs from a single unit or from multiple units. In this way, management could gain an understanding of how all of the jobs in an agency are related.
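The shortest-path interaction suggested above amounts to a standard graph search over the pruned network. This sketch uses Dijkstra's algorithm with link distances taken as 1 minus the similarity; the job names and weights are hypothetical, and the goal is assumed to be reachable:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's shortest path over weighted links.

    `graph` maps each job to a list of (neighbor, distance) pairs,
    where distance = 1 - similarity; `goal` must be reachable.
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk the predecessor chain back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical pruned network with symmetric links.
graph = {
    "Trainer": [("HD Consultant", 0.5)],
    "HD Consultant": [("Trainer", 0.5),
                      ("Strategic Workforce Planner", 0.3)],
    "Strategic Workforce Planner": [("HD Consultant", 0.3)],
}
print(shortest_path(graph, "Trainer", "Strategic Workforce Planner"))
```

Such a routine, wired to mouse clicks in an interactive display, would let a manager see at a glance which intermediate jobs connect any two positions.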

Jobs need not be the focus of the visualization. Skills or competencies could also be displayed with a graph. A graph could show how all of an organization's or occupation's skills are related. Interactivity could allow a user to select a job in order to highlight the skills that are important for performing that job. This type of display could be useful for illuminating professional development and training paths.

These are just a few of the possibilities. Each job analysis is unique, with different goals, requirements, constraints, and populations. Thus, a job analyst must work with organizational representatives to produce a data set and report that meets the organization's needs. The approach, look, feel, and information contained in the job analyst's report could be substantially different from those shown in this article. In fact, even auditory and other sensory representations are possible.

Clearly, visualization techniques offer promising methods for communicating complex data such as job analysis results. And when it comes to job analysis results, communication must flow in two directions: from SMEs to job analysts (data collection) and from job analysts to end users (presentation of results). Traditionally, the main way SMEs have communicated with job analysts is through surveys. Web-based surveys have been a step forward, but they still do not take full advantage of the possibilities that today's technology offers. Interactive graphical data collection tools could be designed that allow SMEs to cluster jobs or skills based on perceived conceptual similarity: jobs or skills believed to be highly similar would be placed closer together, and those believed to be distinct would be positioned further apart. This is an entirely new means of collecting job relationship data that, to our knowledge, has not been used. The exciting thing about capturing job relationship data in this manner is that it allows for network data analysis, something that has not previously been applied to job analysis. Network analyses could reveal job clusters, serve as the basis of skill maps used for training, and help managers understand how work is structured in their organizations.

In summary, visualization of job analysis information is an effective and efficient method for collecting and presenting data. Unfortunately, little work has been done in this area. Instead, researchers and practitioners have used the same methods for the past 30 years. Although those traditional methods have proven effective for understanding jobs, visualization techniques may prove to be more effective and more efficient for conducting and communicating job analyses. It is time for job analysts to look outside their insular world by adopting technological advances and embracing information visualization, knowledge visualization, and network analysis techniques.


(1) Spencer, L. M., & Spencer, S. M. (1993). Competence at work: Models for superior performance. New York: John Wiley & Sons, Inc.

(2) Schippmann, J. S., Ash, R. A., Battista, M., Carr, L., Eyde, L. D., Hesketh, B., et al. (2000). The practice of competency modeling. Personnel Psychology, 53, 703-740.

(3) Burkhard, R. (2004, July). Learning from architects: The difference between knowledge visualization and information visualization. Paper presented at the 8th International Conference on Information Visualization (IV04), London.

(4) Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of "small world" networks. Nature, 393, 440-442.

(5) Cleveland, W. S. (1993). Visualizing data. Murray Hill, NJ: AT&T Bell Lab.

(6) Butler, D. L. (1993). Graphics in psychology: Pictures, data, and especially concepts. Behavior Research Methods, Instruments, and Computers, 25, 81-92.

(7) Jaccard, P. (1912). The distribution of the flora in the alpine zone. New Phytologist, 11(2), 37-50.

(8) de Nooy, W., Mrvar, A., & Batagelj, V. (2005). Exploratory social network analysis with Pajek. New York: Cambridge University Press.

(9) Ibid.

(10) Chen, C. (2004). Searching for intellectual turning points: Progressive knowledge domain visualization. Proceedings of the National Academy of Sciences, 101, 5303-5310.

(11) Schvaneveldt, R. W. (1990). Pathfinder associative networks: Studies in knowledge organization. Norwood, NJ: Ablex Publishing Company.

(12) Fruchterman, T. M. J., & Reingold, E. M. (1991). Graph drawing by force-directed placement. Software--Practice and Experience, 21, 1129-1164.

Thomas A. Stetz, PhD

National Geospatial-Intelligence Agency

287 Hibiscus Street #102

Honolulu, HI 96818

Scott B. Button, PhD

C2 Technologies, Inc.

1921 Gallows Road, Suite 1000

Vienna, VA 22182-3900

(703) 448-7945

W. Benjamin Porr, PhD

Federal Management Partners, Inc.

1500 North Beauregard Street, Suite 103

Alexandria, VA 22311-1715

(703) 671-6600

Dr. Thomas A. Stetz is currently employed with the National Geospatial-Intelligence Agency. He received his PhD in industrial and organizational psychology from Central Michigan University. He also has an MS in management from the Walsh College of Accountancy and Business Administration. He has more than 10 years of professional experience and has written technical reports on applied projects, published peer-reviewed journal articles, and presented research at numerous professional conferences.

Dr. Scott Button is a principal scientist and leads the Human Capital Consulting Practice for C2 Technologies. He obtained his PhD from The Pennsylvania State University. Button has 15 years of experience in applied research and human capital consulting with public- and private-sector clients. During his career, Button has written detailed technical reports on a range of applied projects, published peer-reviewed journal articles, and presented his work on numerous occasions at national conferences.

Dr. Benjamin Porr has seven years of professional and educational experience in personnel assessment and strategic human resource management. He is currently employed by Federal Management Partners, Inc. He received his PhD in industrial and organizational psychology from George Mason University. Porr has presented research findings at various events, such as the American Psychological Association (APA) and Society for Industrial and Organizational Psychology (SIOP) annual conferences.


Please note: Illustration(s) are not available due to copyright restrictions.
