Simulation & Gaming:
An Interdisciplinary Journal of Theory, Practice and Research
http://sag.sagepub.com Structured abstracts
For many years, we have adopted structured abstracts for S&G's Ready-To-Use Games (RTUGs). It is time to extend this to (almost) all articles. Please use a structured abstract if at all possible. In many cases, an ordinary abstract can be made into, or written directly as, a structured abstract. Structured abstracts have a number of advantages over traditional abstracts, as the articles reproduced below explain.
Below, you will find a number of articles, from various sources, that outline the why, what, and how of writing structured abstracts. Permission to reproduce some items here is pending, but their source is indicated. For S&G, please use a structured abstract whenever possible, which will be the case for the majority of articles. Adapt the structure of the abstract to your type of article and to your particular article: an experimental article will have a structure different from that of a conceptual article, and one conceptual article may be structured differently from another. Use logic, common sense, and clarity. Think of the reader; you, the writer, are unimportant! The abstract is designed to do at least two main things: help readers find the article when searching, and let them extract the essential information quickly.
The format of your structured abstract should follow a pattern suited to (a) the objectives and (b) the type of article. Below you will find extracts from various web sites that give advice on structured abstracts. Select the structure and items from among those below, or supply your own, in a manner that makes the content immediately accessible to readers. You will need to strike a happy balance between succinctness and completeness. Remember that you, the writer, are not important; the reader is! Italicize headings (not bold), and embolden key words. Indent as below. (Text adapted from JEP.)
You will find some useful guidance in the following documents, available for download:
https://www.hfes.org/web/pubpages/Abstract_ReviewArticle.pdf
http://www.elsevier.com/__data/promis_misc/apmr_inststrabs.doc
http://www.audiology.org/resources/journal/Documents/structuredAbstract.pdf
You are also encouraged to provide a graphical abstract. Also consult the documents reproduced below.
http://hlwiki.slais.ubc.ca/index.php/Structured_abstract

A structured abstract is a commonly used form of scientific communication for reporting research findings to the scientific community, and it can also be used in librarianship. The format is widely used in biomedicine, allied health, and health librarianship to present research quickly and concisely. Structured abstracts typically follow a standard boilerplate design: the goals and objectives of the research are listed first, then the methods used to carry out the investigation, followed by brief sections discussing the findings of the research and, in a conclusion, their implications. Structured abstracts help authors to organize their ideas and to present them clearly and in an organized manner.

Parts of a structured abstract
Here are nine possible sections of a structured abstract which can be adapted for your purposes and research:
Health librarians' research
A summary of the advantages of structured abstracts appeared in a Summer 2001 issue of Hypothesis, and in Bayley and Eldredge (2003). The evidence demonstrating the value of structured abstracts clearly points to advantages for searching and for quickly extracting needed information from summaries, regardless of the exact headings used by an abstracting and indexing service. The 2003 MLA Annual Meeting strongly recommended the use of structured abstracts for members wishing to present papers or posters at MLA Annual Meetings. In their most basic form, structured abstracts organize and summarize a paper; here are some of the more commonly used sections:
http://msc.emeraldinsight.com/Authors/guidelines_page_5.html

Use of structured abstracts ensures that better information is supplied and that there is more consistency across the journals and database. Ultimately, readers and researchers searching the database are more likely to access the paper when the abstract provides useful information. In the past, author-written abstracts were very variable in terms of both content and quality. Structured abstracts ensure we no longer have this problem.
A sample structured abstract

Purpose: The purpose of this article is to assess the concept of grade inflation in higher education institutions in an effort to determine its prevalence, causes, and strategies which can be implemented to curtail it.

Design/methodology/approach: A literature review of the problem is presented along with several strategies as possible solutions to restraining the problem of escalating grades in the college classroom.

Findings: The problem of grade inflation has been a topic of concern for over a century, and there are no quick fixes or simple methods of reversing this trend, but several alternatives are presented which could help curtail it.

Research limitations/implications: Most of the research is based on anecdotal evidence. Very little has been written on how to fix this problem.

Practical implications: This paper brings this issue to the forefront in an effort to engage the reader, college administrators, and educators.

Social implications: This paper is the building block for future research on this topic. The culture of the college classroom, teaching, and learning could be affected by this issue. The hiring, training, and evaluation of college instructors could be impacted if colleges and universities choose to investigate the issue of grade inflation at their institutions.

Originality/value: The paper begins with an overview of previous research in this area and then moves on to what is currently being implemented to curb grade inflation. The authors then propose several methods and possible solutions which could be implemented to deal with this problem.

Why write a structured abstract?
from http://informationr.net/ir/hartley2.html
Measures
Two sets of objective computer-based measures and two different subjective reader-based measures were then made using these two sets of abstracts. The two sets of computer-based measures were derived from (i) Microsoft's Office 97 package, and (ii) Pennebaker's Linguistic Inquiry and Word Count (LIWC) (Pennebaker, Francis and Booth, 2001). Office 97 provides a number of statistics on various aspects of written text. LIWC counts the percentage of words in 71 different categories (e.g., cognitive, social, personal). (Note: when making these computer-based measures, the sub-headings were removed from the structured versions of the abstracts.) The two reader-based measures were (i) the average scores on ratings of the presence or absence of information in the abstracts; and (ii) the average scores on ratings of the clarity of the abstracts, given by authors of other articles in the JEP. The items used for rating the information content are shown in Appendix 1. It can be seen that respondents have to record a 'Yes' response (or not) to each of 14 questions. Each abstract was awarded a total score based on the number of 'Yes' decisions recorded. In this study, two raters independently made these ratings for the traditional abstracts and then met to agree their scores. The ratings for the structured abstracts were then made by adding in points for the extra information used in their creation. The ratings of abstract clarity were made independently by 46 authors of articles in the JEP from the year 2000 (and by 2 more authors of articles in other educational journals). Each author was asked (by letter or e-mail) to rate one traditional and one structured abstract for clarity (on a scale of 0-10, where 10 was the highest score possible). To avoid bias, none of these authors was personally known to the investigator, and none was an author of the abstracts used in this enquiry.
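LIWC itself is a proprietary tool with 71 built-in category dictionaries, so the following is only a rough, hypothetical sketch of the kind of measure it produces: the percentage of words in a text that fall into each of a set of word-category lists. The two toy dictionaries here are invented stand-ins, not LIWC's real categories:

```python
import re

def category_percentages(text, categories):
    """Return the percentage of words in `text` that fall into each
    word-category list (a crude, LIWC-style measure)."""
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    if not words:
        raise ValueError("text contains no words")
    return {name: 100.0 * sum(w in vocab for w in words) / len(words)
            for name, vocab in categories.items()}

# Invented stand-in dictionaries, NOT LIWC's actual category lists:
toy_categories = {
    "social": {"we", "us", "friend", "reader"},
    "cognitive": {"think", "know", "consider", "because"},
}
```

For example, applied to the four-word text "We think we know", this sketch reports 50% of words in each toy category.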
Forty-eight separate pairs of abstracts were created, each with a traditional version of one abstract and a structured version of a different one. Twenty-four of these pairs had the traditional abstract first, and 24 the structured one. The fact that the abstracts in each pair were on different topics was deliberate: this ensured that no order effects would arise from reading different versions of the same abstract (as has been reported in previous studies, e.g., Hartley and Ganier, 2000). The 48 pairs of abstracts were created by pairing each one in turn with the next one in the list, with the exception of the ones for the two research reviews, which were paired together.

Results
Table 1 shows the main results of this enquiry. It can be seen that, except for the average number of passives used, the structured abstracts were significantly different from the traditional ones on all of the measures reported here.
Discussion
To some extent these results speak for themselves and, in terms of this paper, provide strong support for structured abstracts. But there are some qualifications to consider.

Abstract length
The structured abstracts were, as expected, longer than the traditional ones. Indeed, they were approximately 30% longer, which is 10% more than the average 20% increase in length reported by Hartley (2002) for nine studies. It is interesting to note, however, that the average length of the traditional abstracts was also longer than the 120 words specified by the APA. Eighteen (i.e., 75%) of the 24 authors of the traditional abstracts exceeded the stipulated length. Hartley (2002) argued that the extra space required by introducing structured abstracts was a trivial amount for most journals, amounting at most to three or four lines of text. In many journals, new articles begin on right-hand pages, and few articles finish exactly at the bottom of the previous left-hand one. In other journals, such as Science Communication, new articles begin on the first left- or right-hand page available, but even here articles rarely finish at the bottom of the previous page. (Indeed, inspecting the pages in this issue of this journal will probably show that the few extra lines required by structured abstracts can be easily accommodated.) Such concerns, of course, do not arise for electronic journals and databases. More importantly, in this section, we need to consider cost-effectiveness rather than just cost. With the extra lines comes extra information. It may be that more informative abstracts encourage wider readership, greater citation rates, and higher journal impact factors - all of which authors and editors might think desirable. Interestingly enough, McIntosh et al.
(1999) suggest that both the information content and the clarity of structured abstracts can still be higher than that obtained in traditional abstracts even if they are restricted to the length of traditional ones.

Abstract readability
Table 1 shows the Flesch Reading Ease scores for the traditional and the structured abstracts obtained in this enquiry. Readers unfamiliar with Flesch scores might like to note that they range from 0-100 and are sub-divided as follows: 0-29, college graduate level; 30-49, 13th-16th grade (i.e., 18 years +); 50-59, 10th-12th grade (i.e., 15-17 years); etc. They are based on a formula that combines, with a constant, measures of sentence length and of the number of syllables per word (Flesch, 1948; Klare, 1963). Of course, it is possible that the finding of a significant difference in favour of the Flesch scores for the structured abstracts in this study reflects the fact that the present author wrote all of the structured abstracts. However, since this finding has also occurred in other studies where the abstracts were written by different authors (e.g., see Hartley and Sydes, 1997; Hartley and Benjamin, 1998), it is a relatively stable one. The Flesch Reading Ease score is of course a crude - as well as dated - measure, and it ignores factors affecting readability such as type-size, type-face, line-length, and the effects of sub-headings and paragraphs, as well as readers' prior knowledge. Nonetheless, it is a useful measure for comparing different versions of the same texts, and Flesch scores have been quite widely used - along with other measures - for assessing the readability of journal abstracts (e.g., see Dronberger and Kowitz, 1975; Hartley, 1994; Hartley and Benjamin, 1998; Roberts, Fletcher and Fletcher, 1994; Tenopir and Jacso, 1993).
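As a minimal sketch (my own illustration, not code from the study), the Flesch Reading Ease formula just described can be computed as follows. The published formula is 206.835 - 1.015 x (words per sentence) - 84.6 x (syllables per word); the syllable counter here is a naive vowel-group approximation, so scores will differ slightly from those produced by tools such as Office 97:

```python
import re

def flesch_reading_ease(text):
    """Flesch (1948) Reading Ease: higher scores mean easier text.
    Syllables are estimated by counting vowel groups, so the result
    is only an approximation of the published measure."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        raise ValueError("text must contain at least one word and sentence")

    def count_syllables(word):
        # Naive estimate: each run of vowels counts as one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (total_syllables / len(words)))
```

Short sentences of short words score high; long, polysyllabic prose scores low or even negative, which is why dense traditional abstracts can score around 19-26, as reported below.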
The gain in readability scores found for the structured abstracts in this study came, no doubt, from the fact that the abstracts had significantly shorter sentences and, as the LIWC data showed, made a greater use of shorter words. The LIWC data also showed that the structured abstracts contained significantly more common words and made a significantly greater use of the present tense. These findings seem to suggest that it is easier to provide information when writing under sub-headings than it is when writing in a continuous paragraph. Such gains in readability should not be dismissed lightly, for a number of studies have shown that traditional abstracts are difficult to read. Tenopir and Jacso (1993), for instance, reported a mean Flesch score of 19 for over 300 abstracts published in APA journals. (The abstract to this article has a Flesch score of 26 when the sub-headings are excluded.) Interestingly enough, there were no significant differences in the percentage of passives used in the two forms of abstracts studied in this paper. This finding is similar to one that we found when looking at the readability of well-known and less well-known articles in psychology (Hartley, Sotto and Pennebaker, 2002). The view that scientific writing involves a greater use of passives, the third person, and the past tense is perhaps more of a myth than many people suspect (see, e.g., Kirkman, 2001; Riggle, 1998; Swales and Feak, 1994). Indeed, the APA Publication Manual (2001) states, "Verbs are vigorous, direct communicators. Use the active rather than the passive voice, and select tense or mood carefully" (5th edition, p. 41).

Information content
The scores on the information checklist showed that the structured abstracts contained significantly more information than did the traditional ones. This is hardly surprising, given the nature of structured abstracts, but it is important.
Analyses of the information gains showed that most of the increases occurred on questions 1 (50%), 3 (83%), 5 (63%) and 12 (63%). Thus it appears that in these abstracts more information was given on the reasons for making the study, where the participants came from, the sex distributions of these participants, and on the final conclusions drawn. These findings reflect the fact that few authors in American journals seem to realise that not all of their readers will be American, and that all readers need to know the general context in which a study takes place in order to assess its relevance for their needs. Stating the actual age group of participants is also helpful because different countries use different conventions for describing people of different ages. The word 'student', for instance, usually refers to someone studying in tertiary education in the UK, whereas the same word is used for very young children in the USA. Although the checklist is a simple measure (it gives equal weight to each item, and it is inappropriate for review papers), it is nonetheless clear from the results that the structured abstracts contained significantly more information than the original ones, and this can be regarded as an advantage for such abstracts. Advances in 'text mining', 'research profiling', and computer-based document retrieval will be assisted by the use of more informative abstracts of this kind (Blair and Kimbrough, 2002; Pinto and Lancaster, 1999; Porter, Kongthon and Lu, 2002; Wilczynski, Walker, McKibbon and Haynes, 1995).

Abstract clarity
In previous studies of the clarity of abstracts (e.g., Hartley, 1999a; Hartley and Ganier, 2000) the word 'clarity' was not defined, and respondents were allowed to respond as they thought fit. In the present study the participants were asked to 'rate each of these abstracts out of 10 for clarity (with a higher score meaning greater clarity)'.
This was followed by the explanation: 'If you have difficulty with what I mean by 'clarity', the kinds of words I have in mind are: 'readable', 'well-organized', 'clear', and 'informative'. (This phraseology was based on wording used by a respondent in a previous study who had explained what she had meant by 'clarity' in her ratings.) Also in this present study - as noted above - the participants were asked to rate different abstracts rather than the same abstract in the different formats. However, the mean ratings obtained here of 6.2 and 7.4 for the traditional abstracts and the structured ones respectively closely match the results of 6.0 and 8.0 obtained in the previous studies. Nonetheless, because the current results are based on abstracts in general rather than on different versions of the same abstract, these findings offer more convincing evidence for the superiority of structured abstracts in this respect. Finally, in this section, we should note that several of the respondents took the opportunity to comment on the abstracts that they were asked to judge. Table 2 contains a selection from these remarks.
Concluding remarks
Abstracts in journal articles are an intriguing genre. They encapsulate, in a brief text, the essence of the article that follows. And, according to the APA Publication Manual (2001), "A well-prepared abstract can be the most important paragraph in your article ... The abstract needs to be dense with information but also readable, well organized, brief and self-contained" (p. 12). In point of fact, the nature of abstracts in scientific journals has been changing over the years as more and more research articles compete for their readers' attention. Berkenkotter and Huckin (1995) have described how the physical format of journal papers has altered in order to facilitate searching and reading, and how abstracts in scientific journal articles have been getting both longer and more informative (pp. 34-35). The current move towards adopting structured abstracts might thus be seen as part of a more general move towards the use of more clearly defined structures in academic writing. Indeed, whilst preparing this paper, I have come across references to structured content pages (as in Contemporary Psychology and the Journal of Social Psychology and Personality), structured literature reviews (Ottenbacher, 1983; Sugarman, McCrory, and Hubal, 1998), structured articles (Goldmann, 1997; Hartley, 1999b; Kircz, 1998) and even structured book reviews (in the Medical Education Review). These wider issues, however, are beyond the scope of this particular paper. Here I have merely reported the findings from comparing traditional abstracts with their equivalent structured versions in one particular context. My aim, however, has been to illustrate in general how structured abstracts might make a positive contribution to scientific communication.

Notes
James Hartley is Research Professor in the Department of Psychology at the University of Keele in Staffordshire, England. His main interests lie in written communication and in teaching and learning in higher education.
He is the author of Designing Instructional Text (3rd ed., 1994) and Learning and Studying: A Research Perspective (1998). Originally published in Science Communication, 2003, Vol. 24, 3, 366-379, copyright: Sage Publications. I am grateful to Geoff Luck for scoring the abstract checklist, James Pennebaker for the LIWC data, and colleagues from the Journal of Educational Psychology who either gave permission for me to use their abstracts, or took part in this enquiry. Professor James Hartley, Department of Psychology, Keele University, Staffordshire, ST5 5BG, UK; phone: 011 44 1782 583383; fax: 011 44 1782 583387; e-mail: j.hartley@psy.keele.ac.uk; Web site: http://www.keele.ac.uk/depts/ps/jhabiog.htm
Appendix 1
The abstract evaluation checklist used in the present study

Abstract No. ________
1. _____ Is anything said about previous research or research findings on the topic?
2. _____ Is there an indication of what the aims/purposes of this study were?
3. _____ Is there information on where the participants came from?
4. _____ Is there information on the numbers of participants?
5. _____ Is there information on the sex distribution of the participants?
6. _____ Is there information on the ages of the participants?
7. _____ Is there information on how the participants were placed in different groups (if appropriate)?
8. _____ Is there information on the measures used in the study?
9. _____ Are the main results presented in prose in the abstract?
10. _____ Are the results said to be (or not to be) statistically significant, or is a p value given?
11. _____ Are actual numbers (e.g., means/correlation coefficients/t values) given in the abstract?
12. _____ Are any conclusions/implications drawn?
13. _____ Are any limitations of the study mentioned?
14. _____ Are suggestions for further research mentioned?

Note: this checklist is not suitable for theoretical or review papers but can be adapted to make it so. It would also be interesting to ask for an overall evaluation score (say out of 10) which could be related to the individual items.
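As a minimal sketch (my own illustration, not code from the study), scoring an abstract against this checklist is simply a count of 'Yes' decisions across the 14 items, as described in the Measures section above. The item wordings below are abbreviated paraphrases of the Appendix 1 questions:

```python
# Abbreviated paraphrases of the 14 Appendix 1 questions, for illustration.
CHECKLIST_ITEMS = [
    "previous research mentioned",
    "aims/purposes stated",
    "where participants came from",
    "number of participants",
    "sex distribution of participants",
    "ages of participants",
    "group placement described (if appropriate)",
    "measures used described",
    "main results given in prose",
    "statistical significance or p value reported",
    "actual numbers given",
    "conclusions/implications drawn",
    "limitations mentioned",
    "further research suggested",
]

def checklist_score(yes_answers):
    """Score an abstract as the number of 'Yes' decisions (0-14),
    with each item weighted equally, as in the study."""
    if len(yes_answers) != len(CHECKLIST_ITEMS):
        raise ValueError("expected one Yes/No answer per checklist item")
    return sum(bool(a) for a in yes_answers)
```

An abstract answering 'Yes' to eight of the fourteen questions would thus score 8; the structured abstracts in the study scored significantly higher on this measure than the traditional ones.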