Developing an Evaluative Instrument for E-learning Presentation

Richard Kordel

Abstract

For better or worse, computers have become a ubiquitous part of the learning environment. Whether the topic is course management, information presentation or class-related research, the computer has assumed, and been granted, a privileged position in the processes of thought. The venue may be an institution of higher learning, a high school or a grammar school, or it may be located in a corporate training environment. The one commonality that crosses all these boundaries is that far too often the computer environment “is” what it is, and people learn to adapt themselves to how the computer works, rather than using it as a tool and making it conform to their work processes. Too often no one questions what the computer does, how well it does it, or what it should do instead. In discussions of “power” there seems to be some reluctance to acknowledge the power the computer, essentially a media delivery mechanism, has to shape the message it should simply be transmitting.

The underlying assumption of this article is that web pages that are created to teach something can and should be evaluated on their ability to present clear and unambiguous content structured to facilitate comprehension and retention. That ability is dependent on how well several previously unassociated and underlying components of educational presentation are woven into the page. The purpose of this article is to look at the details of the computerized environment with an eye toward evaluating its effectiveness as a teaching and learning tool. To that end, it will provide a prototype evaluative instrument which may provide some criteria for criticism and, if necessary, modification.

The purpose of this paper is to initiate a discussion of how media can exert unconscious control of the message being transmitted in classrooms, and to look at the details of the computerized environment with an eye toward evaluating its effectiveness as a teaching and learning tool. Since my specific interest is adult education, the focus here will be on the uses of technology as it relates to adults, specifically in a higher education setting. However, the general principles to be discussed can be applied to any educational setting from primary school to corporate training. As an entry point into that larger discussion, the article will present a draft methodology for evaluating learning based web pages using four criteria, and will consider an appropriate evaluative model that can be used when considering the effectiveness of existing learning based web pages, or planning for new designs. To that end, it will provide a prototype evaluative instrument which may supply some criteria for criticism of existing web pages and, possibly, for their modification. The need to look at the presentation of information on the computer was inspired by the need to evaluate web pages as part of ongoing research into the effects of computerized education. While the details of that overall project continue to evolve, this exercise, as a component of that research, can stand alone.

Reach and Integration

A popular and widely distributed learning management system in use in a number of large universities, including my own, provides facilities for taking part in on-line discussions. As a topic is scanned, windows open to reveal the general topic, and then the more specific subtopic within that heading. When a student or teacher desires to post a comment, a small link is clicked and the comment screen is opened. However, as you can see from this example, there is a lot more happening on the screen than the relatively simple act of posting a comment.

The box provided for the comment itself is small: approximately 8% of the total screen area. The other 92% of the screen is used for other purposes. The content of the original post that inspired the comment is not visible, and context has been reduced to a string of “re:’s” in the middle pane. In fact, the general topic located in the far left pane seems to concern itself with getting started, while the more specific discussion thread in the center pane concerns the final exam. This is an unfortunate byproduct of the fact that the scrollbars work independently, but whatever the explanation, the link between the general and the specific has been lost.

But there is still more happening, or not happening, than what is in the center of the screen. One line along the bottom of the screen is devoted to the state of the computer itself: a list of running programs, a button to access other programs, and the current time, among other items. At the top of the screen five lines are devoted to navigation or information about the state of the web browser. There are the standard and generic forward, backward and refresh buttons (although these navigational aids at the top of the screen are not guaranteed to work within the learning environment below them). There is also a link to favorite sites, three different search engines, and two tabs that can display totally different and unrelated web pages.

Further down the screen, inside the actual learning system, additional navigation is available. Some of it runs horizontally across the top of the window; the rest runs vertically along the left. As an example, there are two links to the course syllabus (presumably these links go to the same syllabus). Below the space allotted for adding to the discussion are buttons relevant to the task of writing and posting a comment: a spell check, the ability to edit the text in HTML, and a button to submit it. Although it is not a visible element, one quickly becomes aware that if one takes too long composing a comment, the system times out and the entire comment is lost. There is no warning that this is about to occur.

It is important to note that I did not deliberately attempt to create a messy interface to illustrate my points; rather, this screen is the end result of several competing programs and vendors installing software on my system, and of the simple act of accessing the Penn State learning system to obtain an example. When viewed in the context of the stated purpose of adding a comment to a class discussion, it seems not only ill-suited to the task, but perversely obtuse and actually opposed to this function.

What may be more interesting than the poor design of the screen or the poor interaction of the computer system is the curious reaction of students and teachers. Users who are intelligent and otherwise demanding within their fields of expertise simply shrug in acceptance when poor screen design is pointed out. It is as if this were the one area of life that allows no alternative or protest. Simply put, there is an attitude that says nothing can be done about it, so why even discuss it.

It is astonishing that in fields of study in which relationships of power are subjected to the most detailed scrutiny, the significant power of media to shape and control communication is accepted without question. This system for posting comments to a class is just one very small example of the many computerized functions a student will now encounter in higher education. If we start with registration, many institutions now offer on-line course catalogs and schedules, and provide for on-line registration. Once registered, students will find their class materials distributed via a centralized course web page. Typical features of this page include the syllabus, selected on-line articles, locations to “hand in” assignments, and some version of the previously noted interface for conducting discussions with the instructor or other students. If the class is held on-campus the instructor may use PowerPoint to prepare lecture notes and provide students with copies of those notes to facilitate the class and so that students will not need to spend class time taking notes. In a fully on-line class all instruction may take place via some on-line presentation method. The instructor may distribute presentations, direct students to an alternate instructional web page, or provide links to instructional web pages prepared by some third party. In some cases video-taped lectures are included as part of the course materials linked to these sites.

When the library is considered, the standard procedure in most libraries is to research topics and titles in the library database system, with results returned that offer the locations of books. In the case of periodicals, a large number provide links to full-text page views of the relevant articles. Frequently this research is conducted remotely through an electronic portal into the library database. While beyond the immediate scope of this article, it is of note that this access to educational databases and search capabilities usually associated with membership in university communities is not mentioned in discussions of the “digital divide,” a term otherwise used to discuss power issues with computer use.

While many of these individual learning applications and transactions have been studied, the actual details of the presentation of information through the medium of the computer as an element of the environment are treated as too small for study. It is beyond the scope of this article to examine all of these educational applications; however, there are some common points that do admit entry to a discussion that is narrower in scope, yet larger in its ultimate reach. It is a discussion that can encompass many areas of how we use computers in education and how we learn with computers, and which can contribute meaningfully to a consideration of the real digital divide. (While “digital divide” is often used as a term to discuss access to the internet, the digital divide here will be based on a discussion that reexamines the idea of literacy, that is, what does it mean to be “literate” when text moves from paper to the computer screen?) As we will see as this examination evolves, the need for a definition of digital literacy as something separate from traditional paper literacy becomes apparent. That definition, the implied hierarchy of abilities, how that hierarchy addresses the ability to decode information on the computer screen, and how each level of increasing ability within that hierarchy may grant power to the individual should be of concern to adult educators.

The Nature of the Evaluation

There have been a number of criteria proposed for the evaluation of web pages. A good summary of the generic criteria is provided by Kapoun (1998), who suggests evaluating each page against five criteria: accuracy, authority, objectivity, currency and coverage. While these are good suggestions for anonymous web pages that might be accessed by students doing open-ended research on the internet, they do not address the more restricted environment that students find within their learning management system. It should be safe to assume that the criteria of accuracy, authority, objectivity, currency and coverage have already been addressed by a page created by a member of the faculty and accessed only by a registered student in a class. How then should such a limited access web page be evaluated?

While there are many different criteria that might be used to evaluate web pages on their ability to teach, I propose organizing those criteria into four general areas that seem to encompass the necessary elements of the discussion: Graphic Clarity, Readability, Usability, and Learnability. Each of these general areas touches a different aspect of the page to be evaluated, but all need to be considered before coming to a conclusion.

Graphic clarity is the term used to encompass how the page presents information. It should not be confused with a consideration of whether the page looks good, but rather, whether the page conveys information in a useful and elegant way. This discussion is based on the work of Tufte (1990, 1997, 2001), Card, Shneiderman and Mackinlay (1999) and other design-specific authors. It will seek to derive general principles regarding good design and apply them to the illustrations used by the course designer to communicate information.

Digital Readability is a term that encompasses traditional measures of readability such as the Flesch Reading Ease and the Flesch-Kincaid grade level (now available as tools within many word processors), along with an acknowledgement of the “new” readability, a phrase that describes the need for a user to decode and understand the various alternate text presentations and text links, the number of those links, the use of graphic hot spots, and other non-traditional methods of encoding information on the page. This section has been inspired by Heath (2000) and others who note this new literacy without giving it a name.
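Both traditional measures are simple functions of average sentence length and average syllables per word: Reading Ease is 206.835 − 1.015(words/sentence) − 84.6(syllables/word), and the Flesch-Kincaid grade level is 0.39(words/sentence) + 11.8(syllables/word) − 15.59. As a rough sketch of how a reviewer might compute them for a block of instructional text, the following uses a deliberately crude vowel-group syllable counter; the heuristic is my own simplification, as production readability tools rely on pronunciation dictionaries:

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, dropping one for a trailing
    silent 'e'.  Real readability tools use pronunciation lexicons."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_scores(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid grade level)."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0, 0.0
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Short, monosyllabic sentences score near the top of the Reading Ease scale, while dense academic prose falls well below 50; running each block of page text through such a function would flag material pitched above the intended audience.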

Usability is a term that is much used in computer circles, and while it can be a broad term concerned with the usability of web pages in general, in this context it will evaluate the learning system interface on general usability principles, as well as educationally specific principles. It will consider how well the interface promotes the task of learning, and will consider such things as how easily a student can perform standard learning-related tasks. Some of the usability proponents who have influenced this work are Flanders (1998), Nielsen (1999) and Watson (2001).

Learnability is a new term, one based on a variety of works by non-educational authors, including McLuhan (1964), Arnheim (1969) and Elkins (2003). It is an attempt to consider how well the page promotes thinking and learning. It concerns such areas as how people relate affectively to the content and presentation of a page, and contains elements of media and art studies.

Graphic Clarity

The inspiration for looking at the graphic design of learning materials is in the work of Edward Tufte. In his books on information design (1990, 1997, 2001) he uses the illustrative graphics of the past 500 years as his canvas, arguing passionately that clarity of thought and clarity of presentation are related. The timelessness, importance and media independence of this idea is demonstrated by the examples that Tufte cites. When logical thought and clear presentation are combined, ideas can be grasped instantly and intuitively.

Tufte cites numerous examples to illustrate this point; perhaps the one most notably associated with him is Minard’s graphic illustration of the ineffectiveness of Napoleon’s attack on Russia, described as “the best statistical graphic ever drawn” (Tufte, 2001, p. 40). It might be argued that “clarity” is simply a personal preference for one method of data presentation over another; Elting, Martin, Cantor and Rubenstein (1999) make this point in their study of clinical trials. It is when one looks at Minard’s graphic, however, that the power of the image to communicate complex ideas immediately, and better than any other method of presentation, becomes evident. There are six data elements communicated in the graphic. Tufte (2001) spends time examining each of them, but consider only two: the size of the army, depicted by the large tan line moving toward Moscow and the far narrower black line moving away, and the temperature during the retreat, depicted by the descending line at the bottom of the graphic, which matches the location of the army and bottoms out at 30 degrees below zero.

The actual design principles that Tufte cites as characteristic of clear graphics might be ascribed to common sense, if only they were not so uncommon. While there are many individual components that can add to or subtract from a graphic, the one theme that remains consistent across all his observations and suggestions is that graphic design is a form of communication; anything that promotes better communication is good, and anything that detracts from it is bad. A number of general principles are proposed, derived from observed and demonstrated characteristics that either illustrate and clarify information, or obscure it. We should eliminate “chart junk,” the decorative but meaningless detritus of graphic illustration, often composed of unrelated and un-illustrative noise in a graphic design. We should maintain a consistent scale in statistical graphics. Perhaps the single most important lesson from Tufte is simply the idea of keeping the intelligence of the audience foremost in the mind of the designer. The consistent goal is to promote graphics that communicate accurate, precise and concise information.

Tufte’s books contain numerous examples of excellence in graphic design culled from illustrations that span centuries. It is not the purpose here to attempt to extract the concentrated essence of the books, but to direct the reader to them, and to use them as a guide to looking very carefully at what is being communicated through the use of graphics. The question for those creating and evaluating learning-centric graphics remains: does there exist on the page a clear presentation of information that will provide the learner with a clear understanding of that information, or is the information muddy, and the graphics mere decoration?

Readability

It is within the context of instructional software and message design that Heath (2000) makes the point that the concept of literacy is expanding. The idea is worthy of extended consideration when looking at how web pages are used for teaching and learning. Reading was once limited in scope to the text on a page, but on the web it is evolving into something else. Information is coded in multiple layers, with content distributed across the standard and familiar text, but also now including concepts of hypertext: text that provides links to other documents or to other locations within the same document, and hover-text: text that appears in a floating box on the page when the mouse is “hovered” over a particular location. These links and annotations can provide deeper, more robust explanations, alternate points of view on a particular topic, or links to the personal web pages of cited authors. These links sometimes navigate away from the original page, sometimes open a completely new browser window, and other times create a new tab in the current browser. There is no standard. It only takes a cursory exploration of current web pages to see the wide variety of citation styles and text usage currently available.

On the page, graphic images may exist in their traditional form, as illustrations of the text, or they may serve as hyperlinks to graphic content that expands the linked idea with a deeper or more robust explanation. They may also employ the previously mentioned idea of hover-text, here bringing forward graphics to further explain concepts in the text. One additional form not usually considered in terms of literacy is the use of graphics as icons, single buttons that provide a specific function within the page or, more commonly, within the software viewing the page. While ostensibly striving to be clear and to provide a simple shortcut to content or capability, they are often exactly the opposite, and have the unintended effect of obscuring the very functions they were added to expose. It is analogous to adding unexplained ideograms to a page and expecting everyone to immediately understand their meaning.

Another software capability that has not been considered within an educational literacy context is the “state” of an element. A book exists as a static entity; the text and graphics wait to be read and understood. Once it is printed and bound, page 27 will always follow page 26 and precede page 28. On a computer screen, however, there are a number of informative state changes that occur while a page is being viewed. Pressing the refresh button may update the content of the page. The image of an arrow may change to the image of a hand, which may change to the image of a cursor, all of which are linked to the action of an external pointing device and may have any of several meanings depending on which button is pushed. Despite the claims of the software developers, the comprehension of these state changes is not intuitive, and the unannounced use of such devices may compromise the learning environment.

Within this already confusing array of symbols, text variations, actions and states, we must add the idea of media elements. Animations, moving images, and audio can all be embedded within a page and invoked actively, by explicit action taken by the reader, or passively, simply because someone entered a page, or an area within a page. Navigation may be signaled by an audible click, which becomes part of the reading experience when waiting for a page to load.

Within this context, how do we define functional literacy? Is it the ability to use all the elements on the page, or simply the ability to recognize them? It would seem to benefit an instructional designer little to place information on a page that could be accessed by only a small percentage of the people who viewed it. Information placed on an instructional page can fit in any number of categories (required information, background information, examples or explication), but it should not be there if it can be accessed only by the small percentage of the target population who possess superior computer navigational skills. The challenge in the context of this article is to find a way to enumerate and evaluate the multiple forms in which information may be accessed from a web page, and to determine how clear that access is within a learning context.
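As one hypothetical way to make that enumeration concrete, a reviewer could tally the distinct access points a page offers before judging how clear each one is. The sketch below (the class name and the four categories are my own invention, not an established taxonomy) uses Python's standard-library HTML parser to count plain hyperlinks, images, embedded media elements, and hover-text, the last detected via the `title` attribute that most browsers render as a floating tip:

```python
from html.parser import HTMLParser

class AccessPointCounter(HTMLParser):
    """Tallies the distinct routes by which a page exposes information:
    plain hyperlinks, images, embedded media, and hover-text."""
    def __init__(self):
        super().__init__()
        self.counts = {"links": 0, "images": 0, "media": 0, "hover_text": 0}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.counts["links"] += 1
        elif tag == "img":
            self.counts["images"] += 1
        elif tag in ("audio", "video", "embed", "object"):
            self.counts["media"] += 1
        # a title attribute produces hover-text in most browsers
        if "title" in attrs:
            self.counts["hover_text"] += 1

def count_access_points(html):
    parser = AccessPointCounter()
    parser.feed(html)
    return parser.counts
```

A page whose tally is dominated by hover-text and icon-only links buries its content behind exactly the navigational skills discussed above; the counts give the reviewer a starting inventory to evaluate.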

Usability

There are numerous ideas of general web usability (Nielsen, 2000; Flanders and Willis, 1996; Lynch and Horton, 2001) and of specific learning-centered usability (Horton, 2000). While the study of usability, often considered under the umbrella of human factors engineering, can grow quite large and extensive, the purpose of this section is narrower and more focused. When a student views a web page, the primary question is whether that page is usable for the student’s purpose. Can students, without too much trouble and effort, get to the pages they need to get to? Can they tell which pages they need to get to? Can they do what they need to do once they get there?

Several notable red flags exist to warn of bad usability decisions. Among them is the inappropriate use of color: does the web page commit the classic violation of using light text on a dark background? Is the font appropriate to the content? While the concern here is primarily legibility, there is also an element of matching the font to the message; consider the effect of combining block lettering in simulated crayon with any academic topic.

Is the navigation clear and consistent? Poorly designed navigation is so commonplace Flanders and Willis (1996) have coined the phrase “mystery meat navigation” to describe web pages that seem to provide no clear method or purpose for moving through a site. Does the page load in a reasonable amount of time? Students may not be accessing the page over a broadband connection and the inclusion of complicated graphics may slow the page unacceptably.

The overall purpose here is to view how well the site can be used by students for the purpose of learning. The purpose of the navigation and content on an educational web site is to allow students to learn, not to demonstrate the misplaced creativity of the web designer.

Learnability

The final element that will be considered as part of the evaluation will also be the one most difficult to isolate. The computer as a learning platform should be more than a media-rich environment that presents information and hopes for the best. The idea of “learnability” is significantly different from the ideas covered by the topics of usability, readability or graphic clarity. It is instead an attempt to consider how having various types of media simultaneously presenting information to the learner aids in the process of creating a viable and complete mental model of the subject of study. In many ways the ability to judge this category will depend on the subject matter expertise of the reviewer.

The consideration starts with the work of McLuhan (1964) and his work analyzing media. The core idea is that the act of moving content into a context with the capabilities of a computer screen changes the content, and the perception of the content in the mind of the viewer. In reviewing work on learning with computers we start to find such terms as “external cognition” (Scaife & Rogers, 1996) and “distributed cognition” (Hewitt & Scardamalia, 1998). A brief consideration reveals that neither of these terms is revolutionary – a pencil could reasonably be considered an aid to external cognition – but when they are used in an examination of the problems and potentials of a computerized learning environment we need to open consideration into what might be the most important concept behind the technology, that “the medium is the message.” McLuhan writes, “… the message of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs” (p. 8). It is difficult to overstate the changes web presentation has introduced into the reading and viewing habits of the general public. Moving the focus to current educational processes can only magnify those changes.

Among the many places that consideration can extend, there are two threads that interweave and which are of interest to this exercise. The first is how people interact with other people across the net and the second is how people interact with information when it is presented within the confines of a computer screen. Lin, Cranton and Bridglall (2005) explore this first idea with an eye toward how the computerized environment interacts with personality type. The key finding of their study was not that different personality types interact better or worse within a computerized environment, but rather that within that computerized environment different personality types will process and react to information differently. Those differences seem to remain constant within type.

Consideration of the second begins with McLuhan and extends forward into a consideration of the effect of the computer on the conceptualization and presentation of knowledge and information. While the study of these elements could easily become the exclusive focus of an analysis of web based learning, the purpose here is to examine the environment to see how it promotes learning. The challenge will be to look at web pages and determine whether they simply move previous pedagogical practice and materials from one medium to the next, without alteration, or if they take advantage of the unique features of the computer environment to encourage student participation and learning.

While much of this is valuable background, it may not help in the actual assessment of a learning web page; something more concrete is needed. One such concrete approach is to judge the page on its ability to create a mental model that helps the student learn. Mayer (1998) lists the qualities of such a model as:

Complete. Good models contain all of the essential parts, states, or actions of the system as well as the essential relations among them, so that the learner can be able to see how the system works.

Concise. Good models are presented at a level of detail that is appropriate for the learner. Rather than provide so much detail that the student is overwhelmed, good models summarize and epitomize the system they seek to explain…

Coherent. Good models make intuitive sense to the learner so that the operation is transparent; the model or analogy used is a logical system that contains parts and rules for how the parts interact.

Concrete. Good models are presented at a level of familiarity that is appropriate for the learner, including physical models or visual models.

Conceptual. Good models are based on material that is potentially meaningful, that is, on material that explains how some system operates.

Correct. Good models correspond at some level to the actual events or objects they represent. The major parts and relationships in the model correspond to the major parts and relationships in the actual object or event.

Considerate. Good models are presented in a manner that is appropriate to the learner, using learner-appropriate vocabulary and organization. (pp. 59-60)

The Nature of This Evaluation

One stated goal of this article is to consider appropriate evaluative criteria for existing or proposed learning based web pages. In order to do this properly, the first thing that should be determined is what type of evaluation is under consideration. Reeves and Hedberg (2003) provide a good summary of the various types of evaluation, along with their relationship to various aspects of the development process. While their analysis considers the full spectrum of evaluation, including components that occur during planning and other project phases beyond the reach of a simple page analysis, in this exercise it will only be necessary to consider those parts that can be applied to existing learning-centered web pages. Of the types of evaluation they reference, there are three that might be considered relevant to the current discussion: formative evaluation, effectiveness evaluation and impact evaluation. Unfortunately, none of these is a perfect fit for the current purpose. I therefore propose to use certain aspects of each to develop a new evaluative model suitable for that purpose.

Reeves and Hedberg (2003) describe formative evaluation as occurring during the development phase of a project and serving the purpose of providing “information to guide decisions about creating, debugging, and enhancing an interactive learning system at various stages of its development” (pp. 60-61). The evaluation that I am proposing may be considered formative as it relates to using web pages to develop curriculum, and to future projects that will be developed by those who use the tool. Most of the time, however, the pages, or significant portions of the pages, we will be looking at are already finished and static, so the description “formative” is not quite accurate.

Effectiveness evaluation occurs in the implementation phase of a project. Reeves and Hedberg describe its purpose as determining “whether the interactive learning system accomplishes its objectives within the immediate or short-term context of its implementation” (p. 61). On the surface, this is closest to the purpose we are seeking. One caution is that the descriptions they use place effectiveness evaluation within the purview of the program developers, and address questions more appropriately situated, in purpose and time, between formative and impact evaluation.

The final category of evaluation described by Reeves and Hedberg is impact evaluation, which occurs during what they describe as the institutionalization phase of a project, when the learning is examined to see if it results in any long-term change in behavior. While this is probably the furthest from the mark in terms of the stated purpose here, the description does serve to keep the focus on the fact that learning web pages should have some focus on learning. As they state, the purpose is “to determine whether the knowledge, skills and attitudes learned in the context of instruction transfer to the intended context of use” (p. 62).

The purpose of including these descriptions is not to try to force a fit between existing categories of evaluation as much as it is to demonstrate that the currently proposed evaluation does not fit neatly into any existing category. Those that were noted, the formative, effectiveness and impact evaluations described by Reeves and Hedberg, provide a “best fit” within existing methods, but do not provide a suitable framework to allow existing web pages to be viewed as learning tools. It is hoped that the proposed instrument will enable the users of electronic learning tools to look at those tools as they are and to determine how best to use the things that cannot be changed, to adapt those that can, and to fit instruction within the environment provided.

The Instrument

The instrument presented here has been developed as part of an ongoing research project considering how students and faculty interact and learn from materials presented on a computer. It attempts to incorporate each of the criteria discussed in this article and to use them to assess the overall ability of a page to teach.

It should be stressed at the outset that this instrument is subjective in nature, and there may be wide areas of disagreement. To produce a number that is intuitively understandable, I used a scale based on 100: each of the five scored categories is worth up to 20 points, divided into 5-point bands to allow for degrees of applicability. It may take some time to learn to step back and review pages against a single idea; “usability,” for example, may not be something a reviewer can immediately isolate from the background noise of yet-another-web-page. By focusing on just one element at a time, however, errors and omissions on a page will loom larger and become easier to recognize. With that said, it is hoped that, in much the same way that judging wine against set criteria is a skill that can be learned, this tool can help instructors develop the skills needed to judge learning-based web pages. This first iteration is presented with an explanation of each entry and criterion, then as an appendix in a simpler and more usable form.
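The arithmetic of the scale can be stated in code. The sketch below is illustrative only: it assumes the five scored categories described in the sections that follow, each worth 0 to 20 points and summing to 100 (the readability section records separate measures and is not scored).

```python
# Illustrative sketch of the 100-point scale: five categories, 0-20 each.
# Category names follow the sections of the instrument; the sixth section
# (readability) is recorded separately and does not contribute points.
CATEGORIES = [
    "Overall graphic impression",
    "Data integrity",
    "Specific use of page elements",
    "Usability",
    "Learn-ability",
]

def total_score(scores: dict) -> int:
    """Validate each category score (0-20) and return the 100-point total."""
    for name in CATEGORIES:
        points = scores[name]
        if not 0 <= points <= 20:
            raise ValueError(f"{name}: score {points} is outside the 0-20 range")
    return sum(scores[name] for name in CATEGORIES)
```

A page rated 16 in every category, for example, totals 80 of a possible 100.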


1. Overall Graphic Impression

When you view the page as an integrated unit what is your first impression? Do the graphic elements provide relevant information in a manner that clearly and unambiguously advances the educational purpose of the page? Do they assist in the creation of a mental model of the proposed topic, or are they “just there” to fill otherwise empty space?

Points and rationale:

16-20: Graphics are informational and necessary. The page is well structured and balanced; graphics are well executed and integrated into the text.

11-15: Graphics are informational, but some items appear out of place or detract from the overall purpose of the page.

6-10: Graphics are informational, but most items appear out of place. Graphic elements are clearly losing purpose; execution is shoddy.

1-5: Graphic elements provide information, but their context is no longer clear or meaningful.

0: Graphics are non-informational and unnecessary. Graphic elements detract from the ability of the page to present information.

Points: ______

2. Data Integrity

The focus here is on the specific information being communicated. Is the user treated as intelligent and able to draw conclusions from the information, or is the information “dumbed-down?” One criterion here is the use of what Tufte calls “data ink,” that is, what percentage of the characters, lines, charts and graphics are used to convey information? Is the author of the page using page elements to obscure the message rather than reveal it?
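Tufte’s data-ink idea can be expressed as a simple proportion. The sketch below is a rough page-level proxy rather than Tufte’s exact definition (which concerns ink within a single graphic): the evaluator tags each page element as informational or not and computes the share that carries information.

```python
# Rough proxy for Tufte's data-ink ratio at page level: the share of
# page elements (text blocks, chart marks, graphics) that actually
# convey information. Element names and tags are the evaluator's own.
def data_ink_ratio(elements):
    """elements: list of (name, is_informational) pairs."""
    if not elements:
        raise ValueError("page has no elements to evaluate")
    informational = sum(1 for _, informative in elements if informative)
    return informational / len(elements)

page = [
    ("enrollment chart", True),
    ("clip-art banner", False),
    ("chart caption", True),
    ("background texture", False),
]
ratio = data_ink_ratio(page)  # half of these elements carry information
```

A low ratio suggests the author is using page elements to obscure the message rather than reveal it.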


Points and rationale:

16-20: Information is presented in a manner that facilitates the creation of a mental model. The user is treated as an intelligent co-creator of the knowledge on the page.

11-15: Information is presented in a manner that facilitates the creation of a mental model; however, some elements of the page seem misplaced or unrelated to the topic.

6-10: Information is presented in a manner that does not facilitate the creation of a mental model, although one is still possible; most elements of the page seem misplaced or unrelated to the topic.

1-5: Information is aggregated on the page in a manner that impedes the creation of a mental model of the subject. Lots of data, but little information.

0: Information is presented in a manner that impedes the creation of a mental model. The user is treated as an unintelligent consumer of incomplete or biased information.

Points: ______

3. Specific Use of Page Elements

Consider how various specific elements on the page are used. Is there an appropriate use of font, line, space? Is the background in the background, or does a bold primary color make it the most important part of the picture?

Points and rationale:

16-20: Page elements work together to present useful and valid information. Font, line, and color all mesh to present information transparently, without drawing attention to themselves.

11-15: Most page elements work together, but some imbalance is present, drawing attention away from the content or educational purpose of the page.

6-10: Imbalance in the use of page elements is now a prominent feature; the user must work past it to extract information from the page.

1-5: Imbalance is now the most prominent feature of the page; most users will need to work to get past the poor design.

0: Page elements are the most prominent feature of the page, at the expense of content. The font is inappropriate to the content, color is misused, and the background is noticeable. Content is lost in the background.

Points: ______

4. Usability

In addition to general usability issues (i.e., do the buttons all work in the anticipated way?), there is a further element of educational usability: is this page “usable” for the purpose of education? Consider the graphic of the discussion page at the beginning of this article, with its mismatched topic panes. Questions to consider here: Does the arrangement of navigational elements on the page facilitate the educational purposes of the page? Are they neutral? Do they get in the way of the educational purpose of the page? Do you need to scroll down to reach necessary navigation or to see critical elements?

Points

Rationale

16-20

The page works well. All necessary functions are well thought out, clearly marked and easy to use. Response times are fast and there are no odd or unanticipated results.

11-15

Most of the page works well, although some elements are not well thought out. Response times may begin to slow, and there are occasional odd or unanticipated results.

6-10

Page displays some usability problems to the general user. Odd or unanticipated results are noted by some users

1-5

Page displays usability problems. Links are broken, buttons go nowhere. Odd or unanticipated results are noted by most users

0

The page does not work well. Necessary functions are obscured or broken and difficult to use. Response times can be slow and there are odd or unanticipated results.

Points: ______

5. Learn-ability

The issue to be addressed here is: does this page help people learn? Relevant criteria include whether the page presents new material in ways that take advantage of technology and encourage students to learn, or simply moves previously developed materials “as-is” to a new delivery format (think of dry, informationally-thin PowerPoint slides moved to web presentation without the benefit of instructor expansion or explication). Does the page assist in the development of a mental model (or, alternately, present a valid challenge to an existing one), or is it simply an aggregation of facts that leaves any hoped-for organization to the student? This analysis will be informed by the analyst’s knowledge of the subject matter.


Points and rationale:

16-20: The page creates a robust and transferable mental model of the subject matter. All previously considered aspects are melded together for the purpose of instruction.

11-15: The page creates an adequate mental model of the subject matter. Some previously considered aspects are melded together for the purpose of instruction, but more could be done on the page.

6-10: The page does not create a useful or unique view of the subject matter, but is adequate for the purpose of instruction. Previously considered elements are used haphazardly; there is no unified theme on the page.

1-5: The page does not create a useful or unique view of the subject matter; for example, it uses embedded PowerPoint as the presentation tool for the subject matter.

0: The page does not create a useful view of the subject matter. The presentation is confusing and counterintuitive and does not lead to a clear and usable model of the subject.

Points: ______

6. Readability

This combines traditional readability indexes with web-based measures of a newer kind of readability. Only the learning content of the page should be considered. To obtain the reading level, import the text into a word processor that supports grade-level analysis, such as Microsoft Word. The various links on the page are then counted to gauge ease of navigation.
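Rather than round-tripping through a word processor, the two indexes can also be computed directly. The formulas below are the standard published Flesch formulas (the same ones Microsoft Word reports); a syllable counter is assumed and not shown.

```python
# Standard Flesch formulas, computed from plain counts taken from the
# page's learning content. The syllable count is assumed to come from
# a separate counter (not shown).
def flesch_scores(sentences: int, words: int, syllables: int):
    """Return (Flesch Reading Ease, Flesch-Kincaid grade level)."""
    asl = words / sentences      # average sentence length
    asw = syllables / words      # average syllables per word
    reading_ease = 206.835 - 1.015 * asl - 84.6 * asw
    grade_level = 0.39 * asl + 11.8 * asw - 15.59
    return reading_ease, grade_level
```

For example, a passage of 10 sentences, 150 words, and 200 syllables yields a Reading Ease of about 79 and a grade level of about 6.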

Readability:

Flesch Reading Ease: ______

Flesch-Kincaid Grade Level: ______

Text-based links on page: ______

Graphic-based links on page: ______

Total links on page: ______
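The link counts can likewise be automated. The sketch below uses Python’s standard html.parser and assumes a simple heuristic: an anchor containing an <img> counts as a graphic-based link, and any other anchor counts as a text-based link.

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Count text-based vs. graphic-based links on a page.

    Heuristic (an assumption of this sketch, not a full audit): an <a>
    element with an href that contains an <img> is a graphic link;
    any other <a> with an href is a text link.
    """
    def __init__(self):
        super().__init__()
        self.text_links = 0
        self.graphic_links = 0
        self._in_anchor = False
        self._anchor_has_img = False

    def handle_starttag(self, tag, attrs):
        if tag == "a" and dict(attrs).get("href"):
            self._in_anchor = True
            self._anchor_has_img = False
        elif tag == "img" and self._in_anchor:
            self._anchor_has_img = True

    def handle_endtag(self, tag):
        if tag == "a" and self._in_anchor:
            if self._anchor_has_img:
                self.graphic_links += 1
            else:
                self.text_links += 1
            self._in_anchor = False
```

Feeding the page source to a LinkCounter instance with counter.feed(html) fills in the text, graphic, and total link counts for the checklist above.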


References

Elting, L. S., Martin, C. G., Cantor, S. B., & Rubenstein, E. B. (1999). Influence of data display formats on physician investigators' decisions to stop clinical trials: Prospective trial with repeated measures [Electronic version]. British Medical Journal, 318(7197), 1527.

Flanders, V., & Willis, M. (1996). Web pages that suck: Learn good design by looking at bad design. San Francisco: Sybex.

Heath, S. B. (2000). Seeing our way into learning [Electronic version]. Cambridge Journal of Education, 30(1), 121-132.

Hewitt, J., & Scardamalia, M. (1998). Design principles for distributed knowledge building processes [Electronic version]. Educational Psychology Review, 10(1), 75-96.

Horton, S. (2000). Web teaching guide: A practical approach to creating course web sites. New Haven, CT: Yale University Press.

Kapoun, J. (1998). Teaching undergrads WEB evaluation: A guide for library instruction. C&RL News, 59(7).

Lynch, P. J., & Horton, S. (2001). Web style guide: Basic design principles for creating web sites. New Haven, CT: Yale University Press.

Nielsen, J. (2000). Designing web usability. Indianapolis, IN: New Riders Publishing.

Reeves, T. C., & Hedberg, J. (2003). Interactive learning systems evaluation. Englewood Cliffs, NJ: Educational Technology Publications.

Reiser, R. A. (2001). A history of instructional design and technology: Part I: A history of instructional media [Electronic version]. Educational Technology Research and Development, 49(1), 53-64.

Reiser, R. A. (2001). A history of instructional design and technology: Part II: A history of instructional design [Electronic version]. Educational Technology Research and Development, 49(2), 57-67.

Tufte, E. R. (1990). Envisioning information. Cheshire, CT: Graphics Press.

Tufte, E. R. (1997). Visual explanations. Cheshire, CT: Graphics Press.

Tufte, E. R. (2001). The visual display of quantitative information (2nd ed.). Cheshire, CT: Graphics Press.

Tufte, E. R. (2003). The cognitive style of PowerPoint. Cheshire, CT: Graphics Press.



* It should be noted that the concept of mental models is a subject worthy of further consideration. Such consideration would expand this effort well beyond the ability of any journal to contain it and has been deferred until a later date.


Copyright 2003-2007. https://www.rcet.org