Université de Nice Sophia-Antipolis
Notes for courses by David Crookall


Some articles & reports on student satisfaction
These are the kinds of documents that you will find in your literature review.

To find more articles, use the following search terms:

  • "student satisfaction" + university
  • "student satisfaction survey" + report
  • ...

The Student Satisfaction Survey - University of Greenwich
http://www.gre.ac.uk/students/affairs/sss/


UCE Student Satisfaction Surveys - The University of Central England in Birmingham
http://www.uce.ac.uk/crq/ucestudentsat.htm


San Francisco State University - SFSU Graduate Exit Survey
http://www.sfsu.edu/~acadplan/gradexits02-03.htm



From: www.ingenta.com/isis/searching/Expand/ingenta?pub=infobike://csshe/cjhe/1997/00000027/00000002/art00006

Good Teaching and Satisfied University Graduates

The Canadian Journal of Higher Education, 1 June 1997, vol. 27, no. 2, pp. 157-179.

Krahn H.; Bowlby J. W.

Abstract:

This paper examines the extent to which students' evaluations of university teaching and classroom dynamics contribute to overall satisfaction with their university experience. Data were collected from 1453 graduates of the University of Alberta who completed questionnaires following the 1993 spring Convocation. A multi-item index measuring students' evaluations of university teaching and classroom experiences was employed as the central predictor in a multiple regression analysis of overall satisfaction with the university experience. Positive perceptions of teaching had a strong impact on satisfaction, controlling for gender, age, faculty of enrollment, GPA, prior postsecondary experience, assessments of skill development, satisfaction with university learning-related resources, and several other control variables. The findings highlight the continued importance of efforts to encourage good teaching in universities.


Language: English. Document type: Research article. ISSN: 0316-1218.
SICI (online): 0316-1218(19970601)27:2L.157;1-
Publisher: Canadian Society for the Study of Higher Education


From: www.csse.monash.edu.au/~smarkham/techreports/sat_1202.htm

Student satisfaction - CSE 1202

Selby Markham

Dianne Hagan

 Introduction

The analysis of students' satisfaction with their course of study is an important research area within educational evaluation. With the growing concern for accountability in educational outcomes, the need for meaningful and stable measures has grown.

The conventional analysis of satisfaction has been based on the assumption that satisfaction is best seen in terms of student response to course components and the methods used by teaching staff. Much of this analysis has been focussed upon comparing mean trends in these components.

Some work has been done to define fitted models for student satisfaction. Malley (1998) has extensively reviewed this area and has shown a need for more research into structural models that can help explain the complexities of student satisfaction. The research reported here applies an alternative approach to satisfaction, derived from work on customer satisfaction with products and services developed by Fornell and others at the University of Michigan and extended into commercial applications through the work of the CFI Group.

Definition of Satisfaction

Satisfaction is defined as being a consequence of the expectations and experiences of the subject and/or course. The general schematic of the approach is shown in Figure 1.

Teacher performance, in this model, is seen as only one of a number of antecedents of satisfaction. In fact, it is seen as contributing only when students perceive that teacher performance has dropped below a critical level, or when it surpasses student expectations. That is, the teacher's performance will reduce satisfaction when students feel that they are not being given enough information on how to pass the subject, but will only increase satisfaction when that performance stimulates students well beyond their personal, arbitrary standards of "interesting teaching". The complexity of this relationship is shown in the likelihood that where a teacher performs brilliantly but fails to give students a sense of what is formally needed, the overall effect on satisfaction will be negative.

An important point about this approach is that it is not a simple linear model running from expectations to outcomes. It assumes, in common with most expectancy-value models of behaviour, that outcome perceptions have an implicit feedback loop back to expectations.

Methodology

The research to be reported here is a part of a wider study evaluating the impact of an educational support tool (called BlueJ) in a first year programming subject, CSE1202, in the Bachelor of Computing at Monash University. A longitudinal study was designed to collect data from students at various points during the academic year. Students were asked to identify themselves so that a continuous record could be created. The Monash University Ethics Committee approved the design under the conditions that there was voluntary participation with informed consent and that identifiable data was not available to teaching staff.

Web-based surveys were carried out at weeks 3, 6, 8 and 12 of Semester 1. The first survey collected a range of demographic data as well as data on the expectations of students about the subject and its outcomes. The second and third surveys were aimed at monitoring the impact of the educational support tool with both quantitative and qualitative data while the final survey was concerned with experiences and outcome evaluation for both the course and BlueJ, including the satisfaction measures.

From the definition of satisfaction it was hypothesised that satisfaction would be significantly related to students' experiences of the course, including their experience in using BlueJ. Furthermore, satisfaction would be related to outcome measures such as recommendation of the course and the university.
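
Testing this hypothesis depends on linking the survey waves through the students' self-identification described above. The report does not say what software was used for this, so the following is a minimal pandas sketch; the file and column names are hypothetical.

    # Sketch only (not the authors' code): linking two survey waves on a
    # self-reported student identifier. File and column names are hypothetical.
    import pandas as pd

    wave1 = pd.read_csv("survey_week03.csv")   # demographics and expectations
    wave4 = pd.read_csv("survey_week12.csv")   # experiences, outcomes, satisfaction

    # An inner join keeps only students who answered both the first and the
    # final surveys, giving the kind of matched file used in the analyses below.
    matched = wave1.merge(wave4, on="student_id", suffixes=("_w1", "_w4"))
    print(len(matched), "students present in both waves")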

Measurement

Satisfaction was measured by three questions:

Results

Sample

Of the 345 students enrolled in CSE1202, 121 agreed to participate in the study under the conditions prescribed by the Monash University Ethics Committee. Of that 121, there were 101 responses to the first survey and 76 to the final survey. Due to the vagaries of student response patterns, the final data file had only 32 students who had responded to all surveys and 53 who completed the first and last surveys.

From comparisons with previous data collection exercises in CSE1202, the respondents do not differ in any marked way from what is assumed to be the underlying characteristics of student intakes into the course.

Problems faced

t-tests showed no significant relationships between the problems faced, their summation, and satisfaction (these are the final-survey problems data).

There was no consistency between survey 1 and survey 2 responses for the quantity of problems.

Satisfaction

All satisfaction measures were significantly correlated, as shown in Table x. It is noteworthy that satisfaction with the subject is less strongly related to course and university satisfaction than the latter are to each other. An inspection of the distributions of the three measures shows that subject satisfaction is unevenly distributed (Table x), with a tendency to multi-modality at scale points 1, 3 and 5. The other two satisfaction variables are more evenly distributed, with modes at about scale point 5.

Table: Correlation between satisfaction measures

           Subject   Course   Monash
Subject       -
Course      .437        -
Monash      .348      .640       -

 

Table: Frequency for subject satisfaction

Rating    Frequency   Percent
1.00          12        7.3
2.00           5        3.0
3.00          13        7.9
4.00           9        5.5
5.00          22       13.3
6.00          16        9.7
7.00           7        4.2
Total         84       50.9
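
The correlation matrix and the frequency distribution above can be reproduced along the following lines. This is a sketch only (the report does not name its analysis software), and the data file and column names are hypothetical.

    # Sketch: reproducing the correlation matrix and frequency table above.
    # "final_survey.csv" and the column names are hypothetical.
    import pandas as pd

    df = pd.read_csv("final_survey.csv")
    sat = df[["sat_subject", "sat_course", "sat_monash"]]

    # Pairwise Pearson correlations (cf. .437, .348 and .640 above).
    print(sat.corr())

    # Frequency distribution of subject satisfaction on its 7-point scale.
    print(sat["sat_subject"].value_counts().sort_index())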

Satisfaction and expectations

Satisfaction with the subject was significantly correlated with initial expectations of passing the course for the 53 respondents who had completed both the first and last surveys (Table ). The correlations between satisfaction and initial expectations of passing the year and the degree are not significant.

Table: Correlation coefficients for satisfaction and expectations

                        Subject sat.   Subject    Year    Degree
Subject satisfaction       1.000
Subject                     .405        1.000
Year                        .067         .385    1.000
Degree                      .164         .461     .560    1.000

Students' expectations about the type of teaching they would receive (lectures, tutorials, discussion groups, problem-solving groups) were not significantly related to satisfaction.

The data in the table below indicate that there is only a low level of relationship between prior programming experience and/or training and satisfaction with the subject. There is no significant relationship with the other satisfaction variables.

Table: Correlations between programming experience and satisfaction

                            Programming background**
Subject satisfaction                 .269*
Course satisfaction                  .181
Satisfaction with Monash             .112

* p<0.05, n=53
** Programming background was defined as students having studied, or having experience with, one or more programming languages before entering the course.

Satisfaction and Performance Ratings

In the final survey students were asked to give performance ratings on:

  • having kept up with the work;
  • confidence in passing the exams.

They were also asked to rate CSE1202's performance requirements by comparing it with other subjects on:

  • the pace of the subject;
  • the level of content of the subject;
  • overall difficulty compared with other subjects.

These two areas proved only marginally distinct when the five items were factor analysed: the eigenvalue for extracting a second factor bordered on the standard cut-off value of 1.0.
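
The cut-off referred to here is the Kaiser criterion: factors are retained while the eigenvalues of the item correlation matrix exceed 1.0. A minimal sketch, with a hypothetical data file and item names:

    # Kaiser criterion sketch; data file and item names are hypothetical.
    import numpy as np
    import pandas as pd

    items = pd.read_csv("final_survey.csv")[
        ["kept_up", "exam_confidence",            # personal performance items
         "pace", "content_level", "difficulty"]]  # subject-oriented items

    # Eigenvalues of the 5x5 item correlation matrix, largest first.
    eigenvalues = np.linalg.eigvalsh(items.corr().to_numpy())[::-1]
    n_factors = int((eigenvalues > 1.0).sum())  # retain eigenvalues above 1.0
    print(eigenvalues, n_factors)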

Two regressions were carried out against satisfaction with the subject. The first used the personal performance variables; it produced an R² of 0.629 and a significant ANOVA for regression versus residuals (F=72.97, p<0.05, df 2,83). Both independent variables produced significant beta coefficients, but the primary contribution to change in satisfaction came from "confidence in passing the exams" (b=0.605, t=4.674, p<0.05).

The second regression, using the subject-oriented performance variables, gave an R² of 0.230 with a significant ANOVA (F=9.473, p<0.05, df 3,82); the only significant contribution to the dependent variable came from "comparison with other courses".
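
As an illustration only (the original analysis tool is not named, and the variable names are hypothetical), these two regressions could be specified with statsmodels as follows.

    # Sketch of the two OLS regressions reported above; names hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("final_survey.csv")

    # Personal performance variables (reported: R² = 0.629, F = 72.97, df 2,83).
    m1 = smf.ols("sat_subject ~ kept_up + exam_confidence", data=df).fit()

    # Subject-oriented variables (reported: R² = 0.230, F = 9.473, df 3,82).
    m2 = smf.ols("sat_subject ~ pace + content_level + difficulty", data=df).fit()

    print(m1.summary())
    print(m2.summary())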

Satisfaction and Recommendation of Course

The three satisfaction measures were regressed on the two recommendation measures. The model for satisfaction and recommending the course had an R² of 0.691 and an ANOVA F-ratio of 59.693 (p<0.05, df 3,72). Table x gives the model coefficients, from which it can be seen that all three sources of satisfaction make significant contributions, although satisfaction with the course has the largest beta coefficient.

Table: Coefficients for recommend course regression model

               B      S.E.    Beta       t     Sig.
(Constant)   -.196    .396            -.494    .623
SAT_SUB       .332    .068    .352    4.870    .000
SAT_CRS       .445    .102    .424    4.370    .000
SAT_MON       .264    .108    .240    2.453    .017

The regression of satisfaction on the recommendation of Monash had an R² of 0.748 and an F-ratio of 73.139 (p<0.05, df 3,74). It is clear from the model coefficients that only satisfaction with Monash makes a significant contribution.

Table: Coefficients for recommend Monash regression model

               B      S.E.    Beta        t     Sig.
(Constant)    .322    .347              .930    .356
SAT_SUB       0       .060    .079     1.223    .225
SAT_CRS       0       .086   -.030     -.357    .722
SAT_MON       .921    .090    .853    10.212    .000

Satisfaction and BlueJ

The relationship between satisfaction and the BlueJ programming environment was rather complex because there were three stages in the data collection at which students were asked to evaluate the software. The table below gives the coefficients for satisfaction and the final evaluation of BlueJ, based on a model with an R² of 0.555 and an F-ratio of 15.99 (p<0.05, df 6,77). It can be seen that the user interface makes the only significant contribution to the model. Neither of the other satisfaction measures generated a useable regression model.

Table: Coefficients for regression of BlueJ evaluation on subject satisfaction

                           B      S.E.    Beta       t     Sig.
(Constant)               0        .479             .042    .966
Overall rating            .280    .157    .247    1.790    .077
The interface             .365    .145    .298    2.525    .014
BlueJ and Java            .200    .134    .194    1.492    .140
Stability of the system  0        .151   -.067    -.636    .526
Down time                 .130    .125    .103    1.041    .301
BlueJ and learning Java  0        .115    .095     .842    .402

Regression analyses were carried out on the two intermediate evaluations of BlueJ against satisfaction but none of the models was significant.

Satisfaction and course performance

The students who participated in the project had significantly better performance on all aspects of the assessment in the subject (see Table x), and the students who completed the final survey performed better (t=2.814, p<0.05, df 113) than those who did not complete it (Table x).

Table: Assessment performance - overall mark

                           N     Mean    S.D.
Completed survey          65     72.4    17.7
Did not complete survey   48     63.6    20.7

Satisfaction ratings were regressed onto the final assessment. The model produced an R² of 0.387, with an F-ratio of 12.22, significant at p<0.05 (df 3,58). The model coefficients (table x) show that the primary contribution comes from satisfaction with the subject.

Table: Coefficients for regression of satisfaction measures on final assessment

                             B      S.E.    Beta       t     Sig.
(Constant)                54.499   7.379            7.385    .000
Subject satisfaction       6.762   1.167    .668    5.796    .000
Course satisfaction       -1.284   1.558   -.111    -.824    .413
Satisfaction with Monash  -1.300   1.439   -.111    -.903    .370

It cannot be assumed from these results that satisfaction has a direct causal relationship to performance. It is much more likely that continuing acceptable performance over the semester produces a greater sense of satisfaction with the course, and that performance and satisfaction change jointly.

Discussion

The results from the research provide continuing data on the complexity of student satisfaction. Of particular interest are the differences between satisfaction with the subject and satisfaction with the course and the university: the relationship between the two more global measures is stronger than their relationships with the specific subject measure. Within this particular subject environment, it is clear that the subject is viewed differently from the overall course and the university within the context of student satisfaction. This is in agreement with the basic structure of the satisfaction model (Figure 1), because different saliences will be applied to different components of the higher education experience.

The differential nature of the students' expressed satisfaction is also reflected in the outcome measures. In the first place, it is only satisfaction with the subject which contributes significantly to total assessment, suggesting a specific behavioural link in the model. In the second, recommendation of the course and of the university are each linked to their appropriate specific source of satisfaction.

The nature of the distribution for subject satisfaction (Table ) raises some questions about the interpretation of the analyses. Those respondents giving a rating of 1 might be seen as outliers, but this was not borne out by an inspection of the box plot for the variable. It could be argued that their responses reflect the reality of their experience of the programming support tool: a number of students had difficulty installing and running the software, and this might be assumed to impact on satisfaction.

References


From: http://www.csse.monash.edu.au/~smarkham/techreports/satisfaction_2203.htm

A Methodology for Subject Evaluation: Defining Student Satisfaction

Margot Postema and Selby Markham

February 2001

Abstract

The analysis of variables related to subject evaluation has traditionally focused on the mechanics of delivery and formal performance measures. Our concern is to take a different approach to subject evaluation by investigating student satisfaction with the subject. Evaluation through student satisfaction, if it focuses upon student behaviour, provides a basis for modeling the interactions between what students expect, what is delivered, performance and outcomes. We present a student satisfaction model, giving an application example and the results of that evaluation. The results from the study indicate the potential complexity of a behavioural approach to subject evaluation.

1  Introduction

The analysis of variables related to subject evaluation - teacher performance and student outcomes - has traditionally focused upon the mechanics of delivery and formal performance measures. There is a wealth of research concerned with the way the teacher delivers his/her material, as there is on the factors which affect student performance [7]. Interestingly enough, there is not a substantial body of work which relates the two.

Our concern is to take a different approach to subject evaluation by investigating student satisfaction with the subject. The rationale behind this approach is that even though a subject may exist as an entity in a course handbook and may represent the academic interests of its creator, it is the student reaction to the subject which determines the effectiveness of the course. Gone from most subject areas are the conditions in which the academic determines input and disregards student reaction to that input. Most educational institutions operate in a market economy where competition is rampant.

Evaluation through student satisfaction, if it focuses upon student behaviour, provides a basis for modeling the interactions between what students expect, what is delivered, performance, and outcomes. Through such modeling, it is possible to isolate which factors are most important to students and which relate most to direct outcomes, such as recommending the subject and/or course to others or an interest in taking another of the lecturer's subjects. Not only is the evaluator able to look at functional elements within his/her subject, but he/she is able to position the subject within the competitive marketplace.

2  A student satisfaction model

Malley [4] has carried out one of the few detailed surveys of research into student satisfaction, although his emphasis is upon the technical education sector. He concluded that a general model taking into account student expectations of subjects, the student experience of the subject, and the outcome behaviours of those students is the most appropriate methodology for obtaining a realistic picture of satisfaction behaviours.

It appears that there is, in fact, no operational model which has been applied to student satisfaction behaviour. There is a limited history of attempts to develop models of student behaviour and to relate them to student outcome behaviour [5], but none have been found which work specifically with satisfaction. Consequently, we have adapted a model which has been used in customer satisfaction research [8] and which has an underlying behavioural model based upon the general area of expectancy-value theory [1].

The key to this approach is that it assumes that student perceptions of their educational environment are critical in determining their level of satisfaction and their outcome behaviours. Few university lecturers would deny that even when they are teaching a very dry subject, the prerogative lies with them to involve the student. Sitting at the feet of the master (no matter how boring the master is) is no longer seen as a viable pedagogical model in higher education.

The research reported here gives mainly the qualitative data from the application of this type of approach to subject evaluation. The quantitative data will be reported later.

3  Research Design

The general design for researching satisfaction, within the framework of the previous section, has these steps:

  1. Collect information from the population under study on what are seen as the main parameters associated with being satisfied.
  2. Develop a questionnaire based upon this information and include any other questions associated with satisfaction. Include at least a general open-ended section to obtain user comments.
  3. Administer the questionnaire.
  4. Analyse the quantitative data using regression modeling.
  5. Check the quantitative results against the qualitative ones. See if anything was missed from the questionnaire and note it for further studies.

This process was implemented in Information Technology Project Management (CSE2203) by first asking a small sample of students to write down the important factors in being satisfied and dissatisfied with the subject. A questionnaire was then developed which incorporated their comments and included other material of interest to the lecturer. It also included two open-ended questions, which are the focus of this paper.

The questionnaire was administered on-line through a Web form. All subject tutors were asked to inform students of the URL for the survey and to allocate some time for its completion.

4  Subject Background

CSE2203 introduces the fundamental principles, tools and techniques of software project management. The conceptual material presented in lectures is reinforced by practical application within the context of a software development project. Students work in project teams with roles allocated to each group member. The project is defined against a set process model. Project definition, estimation, and tracking and reporting techniques presented in lectures are employed during the course of the project. Real-life problems are injected into the project in the form of changes to user requirements, budget and time-lines. Emphasis is placed on the ability to provide up-to-date management information on the actual state of the project against established milestones: reports are requested on an ad-hoc basis. A project review phase is used to analyze and report on project estimates against actual time, cost and resource expenditure.

This subject presumes a knowledge background gained in the two other core Software subjects in the Bachelor of Computing: CSE2200 Systems Design and Implementation and CSE2201 Software Engineering Practice. Hence students are expected to use their software engineering and systems analysis and design skills in the context of project management. Students are expected to have learned and used the Personal Software Process [3], and are introduced to the Team Software Process [2] in CSE2203.

Due to intake from other courses (e.g. higher diploma), the students enrolled in CSE2203 do not always have the necessary pre-requisites. Some undertake the pre-requisite subjects simultaneously. To accommodate this factor, templates and examples are provided for components such as project, quality and test plans.

Students' teams are assigned a small software development project that includes a graphical user interface. This semester, it comprised an ExpenseManager application. Teams are expected to build their application in an object-oriented language and development environment that they are familiar with, choosing to do their development in C++ Builder, Eiffel or JBuilder. Pre-requisite knowledge assumes teams are experienced in the chosen language, and CSE2203 does not focus on new programming language acquisition. There are no tutorials or scheduled help for programming problems. This reflects a real-life simulation of projects, where developers need to source their own solutions to problems. With this focus, student assessment weights project management skills more heavily than the product produced. To encourage well-designed, quality products, a competitive element is introduced. At the conclusion of the semester, the best products are peer-selected from each tutorial. These are presented at the final lecture for students to view other ideas and see what their peers have produced. Students then vote on the best product for the semester, and the winners are presented with certificates and a small prize. This competition is well received and encourages effort to produce a good product.

This semester had a large student intake (200), and it was difficult to find enough suitably qualified tutoring staff. As many lecturers will appreciate, this caused some problems in the efficient administration of the subject. This was evident during the semester and in the information collected from the population to design the questionnaire (section 3).

5  Results

5.1 Quantitative Data

Of the possible 200 students there were 110 useable responses. This was far below what was expected, and it was noteworthy that a number of tutorial groups had very poor response rates. This issue is discussed later.

Gender: male 68%; female 32%.

Full-fee-paying student: yes 79%; no 21%.

Employed in a computing-related area: no 90%; part time 8%; full time 2%.

Median lectures attended: 10.

Median tutorials attended: 13 (an extremely skewed distribution, with 60% claiming to have attended 13).

The scales used to assess overall satisfaction with the subject produced the following results:

"My satisfaction with this subject" received a mean rating of 3.18 (all ratings were on a 5 point scale) with a standard deviation of 1.2.

"I enjoyed this subject" had a mean of 2.87 and a standard deviation of 1.2.

Both of these results suggest that students are, on average, satisfied with the subject. What is important at the behavioural level is that the rating for satisfaction is higher than that for enjoyment. There is a correlation of 0.75 between them, indicating that students rate satisfaction and enjoyment in a similar way; but it may be the case that satisfaction is not premised upon enjoyment - as some cynics might say of the modern student.

5.2 Qualitative Data

Two open-ended questions were included in the questionnaire. Of the 110 survey responses, 73 included comments on these. The most common response to "What was the best thing about this subject" was the simulation of project management with deadlines in a business atmosphere, preparing students for industry (51%). This was a positive response to the subject aim. The second most common response was the knowledge gained of working as a team (19%). A few students viewed the best-product competition as the best thing (8%), whilst a couple commented that high programming skills were not required (3%). A small percentage commented that their tutor was the best thing about the subject (11%).

In response to "What was the worst thing about this subject", students most commonly responded with "too many management tasks, too much documentation, too much work, and takes too much time" (67%). The second most common response was dissatisfaction with the development environment and the lack of programming assistance (16%). The tool used for configuration management (WinCVS [9]) was also found difficult to use, a problem exacerbated by tutors having insufficient knowledge of it (11%).

Some comments (2%) indicated that students viewed tutors as the worst thing in the subject. Other comments are interesting to compare: some said the subject should have more templates and sample documents (3%), whilst one student commented that too much of the work was similar to that in other subjects. There seem to be some conflicts here, consistent with the fact that students have varying levels of background knowledge.

Specific comments (3%) on the lectures indicate that students would enjoy hearing more industrial examples. Whilst they experience an industry presentation in the second-last week of semester (which was very well received), more examples and case studies could be included in all the lectures.

Individual comments included "marks should only be allocated for project management, i.e. the product should not be assessed" and the subject "needs more management emphasis, make the students do the work". These two comments are difficult to address. As mentioned earlier, the assessment emphasis is on project management and the team development experience, with some assessment allocated to the product. Given that project management is about being self-motivated and a leader, it would be difficult for the subject leader to make students do the work. This is currently addressed by including penalty marks for documentation submitted late or done at an unsatisfactory level. Students are also required to submit peer evaluations of their team members with each product delivery. This ensures that individual student marks are adjusted for those who do not contribute satisfactorily.

5.3 Implications for Subject Improvement

With large student numbers and intakes from different courses, it is imperative to have suitably experienced, qualified tutoring staff. As was mentioned earlier, some tutorial groups had very poor response rates, indicative of the problems experienced with these tutors during the semester. These tutors did not seem to have the experience for the subject and did not appear to provide sufficient subject support. The instructions to allocate time for students to complete the subject questionnaire were also not carried out.

The diversity of student background (79% full-fee-paying, implying that many have come into the course without the background knowledge expected) is difficult to address.

Some students complained that the material covered in CSE2200 and CSE2201 was repeated in CSE2203. They seem to perceive the use of knowledge learned in other subjects (e.g. project, quality and test plans, PSP) as duplication within the course. Other students, without this background knowledge, complained that not enough instruction, templates and examples were provided. This may be difficult to resolve; however, more specific templates with less detail will be supplied.

As mentioned earlier, students should already have been familiar with the development environment they chose; however, it appeared many were not. A number of students mentioned that they only knew Visual Basic, as they came into the course from a different background. These students were encouraged to join teams that had the pre-requisite knowledge, but they usually chose to stay within their own social circle. As students appear to require more support for the development language and environment, it will be restricted to one (e.g. JBuilder), and students will be given the instructional exercises provided in CSE2200 as a refresher for the tool.

6 Conclusion

The analysis of CSE2203 is a part of the development of a wider program within Monash University's Faculty of Technology, through its Computing Education Research Group, to investigate structured approaches to subject evaluation. This has led to ideas and implications for improvements to the subject. Some work has been carried out on another subject [6] while there are other studies yet to be reported on comparisons of subjects and on student motivation and satisfaction.

The results from the CSE2203 study indicate the potential complexity of a behavioural approach to subject evaluation. We believe that this is appropriate because the teaching/learning milieu is behaviourally complex.

References

  1. Gordon, J.R. (1995). Organisational Behavior (5th ed.). NJ: Prentice Hall.
  2. Humphrey, W.S. (2000). Introduction to the Team Software Process. Addison-Wesley.
  3. Humphrey, W.S. (1995). A Discipline for Software Engineering. Addison-Wesley.
  4. Malley, J. (1998). The Measurement and Meaning of Student Satisfaction in the VET Sector: A Review. Melbourne: ACER.
  5. Markham, S.J. (1977). A cognitive theory of student behaviour. Proceedings of the Third International Conference on the Improvement of University Teaching, University of Maryland/Newcastle upon Tyne Polytechnic.
  6. Markham, S. & Hagan, D. (1999). Student satisfaction - CSE1202. CERG Technical Report 3/99. http://www.csse.monash.edu.au/~smarkham/techreports/sat_1202.htm
  7. Perry, R.P. & Smart, J.C. (Eds.) (1997). Effective Teaching in Higher Education: Research and Practice. NY: Agathon.
  8. Wittingslow, G.W. & Markham, S.J. (1999). Customer-driven model of satisfaction behaviour. Australasian Journal of Market Research, 7(2), 29-38.
  9. WinCVS. http://www.wincvs.org (accessed November 2000).

 

Chapter 1  INTRODUCTION

The Centre for Research into Quality (CRQ) was commissioned to report on a survey of the student experience among undergraduates and taught postgraduates at Sheffield Hallam University (SHU).

The questions asked in this year’s survey were largely the same as those asked last year (and to a lesser extent the year before) to allow for comparisons over time to be made. New issues, however, were raised by students during the focus group-based research that preceded the design of the questionnaire and, where possible, these have been included. 

This report presents findings that include comparisons over time at school level for each item in the questionnaire.

Overall satisfaction with Sheffield Hallam University

Overall, satisfaction levels have risen by almost three and a half percentage points since last year, from 69.1% to 72.5% (Figures 1.1, 1.2). The trend is upward in all of the individual schools, with the highest level of satisfaction found in the School of Sport & Leisure Management (76.5%) (Figure 1.2).

Figure 1.1: Overall satisfaction with Sheffield Hallam University

 Figure 1.2: Overall satisfaction with SHU by school.

 Overall satisfaction with SHU rose this year, across the university as a whole and in each individual school.

 Expectations

The extent to which students' expectations have been met has dipped slightly this year in two areas. After rising last year, the extent to which expectations of the course have been met dropped from 65.2% to 64.5%. Similarly, the extent to which expectations of the city of Sheffield were met, while still high, dropped for the second year in succession, this time from 76.5% to 75.2%. The extent to which expectations of Sheffield Hallam University itself were met, however, rose from 67.3% to 68.7% (Figure 1.3).

 Figure 1.3: Extent to which students’ expectations have been met

Recommendations

For the second year in a row, respondents' willingness to recommend both their courses and Sheffield Hallam University as a whole to others increased. The proportion of students who would recommend their course leapt from 68% to 72.5%, while those willing to recommend the University as a whole increased from 75.2% to 77.2%. Willingness to recommend the city of Sheffield continued to decline slightly, although at 80.5% it is still very high (Figure 1.4).

 Figure 1.4: Recommendations to others

  Figure 1.5: Would you choose your course again?

  Figure 1.6: Would you choose SHU again?

Student choice

The extent to which students would choose their course again dropped slightly this year, from 70.8% to 69.7% (Figure 1.5). The proportion that would choose SHU again, however, has risen from 74.5% to 76.2% (Figure 1.6).

The ‘Satisfaction’ methodology

The Centre for Research into Quality (CRQ) has, for more than a decade, developed expertise in obtaining, analysing and reporting student views of their experience of post-school education. These views are translated into effective management action through the approach to student satisfaction that has evolved at UCE.

 The Student Satisfaction approach is clearly the market leader and has been emulated and adapted by a number of further and higher education institutions in Britain, New Zealand, Sweden, Australia and Poland. The methodology has been published, in self-help form, through the Open University Press as the Student Satisfaction Manual.

 The Satisfaction approach continues to be unique in combining the following elements:

  • subject-determined questions;
  • satisfaction and importance ratings;
  • management information for action;
  • feedback on action to participants.

 The approach has proved to be an adaptable research tool in a variety of contexts, including students on taught programmes, postgraduate research students, staff working in higher education institutions, graduates in employment and football supporters. Central to the Satisfaction approach is the integration of the management information gained from the survey into a cycle of continuous quality improvement. It is important that feedback on action is provided to respondents, so that everyone is made aware of what happened as a result of the survey.

 The report

The distinctive feature of Satisfaction Reports is the A-E Tables, which clearly identify areas of satisfaction and those in need of improvement. Average satisfaction ratings are combined with average importance ratings and translated into a letter, A-E. The letters provide a gradation of satisfaction from very satisfactory (scored A), through satisfactory (B) and marginal satisfaction (C), to unsatisfactory (D) and very unsatisfactory (E). The case of the letter denotes the importance attributed to the particular issue by respondents: highly rated areas are designated by an upper-case letter, less important areas by a lower-case letter, and relatively unimportant areas by a lower-case letter in parentheses (Figure 1.7).
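
Read as an algorithm, this translation maps a pair of mean scores to a letter. The sketch below uses the cut points shown in Figure 1.7 below; how values falling exactly on a boundary are handled is an assumption, not something stated in the manual.

    # Sketch of the A-E translation, using the Figure 1.7 cut points.
    # Boundary handling is an assumption.
    def grade(satisfaction: float, importance: float) -> str:
        """Map mean satisfaction and importance ratings (1-7 scales) to a letter."""
        if satisfaction < 2.75:
            letter = "E"              # very unsatisfactory
        elif satisfaction < 3.75:
            letter = "D"              # unsatisfactory
        elif satisfaction < 4.25:
            letter = "C"              # adequate
        elif satisfaction < 5.25:
            letter = "B"              # satisfactory
        else:
            letter = "A"              # very satisfactory
        if importance >= 5.5:
            return letter             # very important: upper case
        if importance >= 5.0:
            return letter.lower()     # important: lower case
        return "(" + letter.lower() + ")"  # not so important: parentheses

    print(grade(5.4, 6.0))  # 'A'  - maintain excellent standards
    print(grade(3.0, 5.2))  # 'd'  - target this area for improvement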

 Figure 1.7: A-E grid for satisfaction and importance scores

                             Very unsat.   Unsat.   Adequate   Satisfactory   Very sat.
 Very important (5.5-7)           E           D         C           B             A
 Important (5.0-5.5)              e           d         c           b             a
 Not so important (1-5.0)        (e)         (d)       (c)         (b)           (a)

 Satisfaction score boundaries: 1 | 2.75 | 3.75 | 4.25 | 5.25 | 7

The ‘action’ messages implied by each lettered outcome in the grid are outlined in Figure 1.8.

 Figure 1.8: Grid values and action implications

 Very important:
   E - urgent need for immediate action
   D - action in these areas has high priority
   C - this area to be targeted for future improvement
   B - ensure no slippage, improve where possible
   A - maintain excellent standards

 Important:
   e - action to substantially improve this area
   d - target this area for improvement
   c - ensure no slippage
   b - maintain standards
   a - avoid overkill

 Not so important:
   (e) - improve where resources permit
   (d) - ensure no further slippage
   (c) - restrict attention
   (b) - maintain standards where possible
   (a) - no need for action here

 (Satisfaction and importance boundaries as in Figure 1.7.)

 

The Sheffield Hallam Student Experience Survey 2002

CRQ has more than a decade of experience in undertaking satisfaction surveys at a range of institutions. To be successful, the survey process must be open and transparent. Accordingly, the survey, as well as building upon the work carried out in 2000–2001, began in November 2001 with a qualitative phase consisting of focus groups of students to collect views about the important issues relating to the total experience of being an undergraduate or taught postgraduate student at SHU.

 The questionnaire

The questionnaire was divided into the following sections:

  • student profile;
  • course organisation;
  • information and administration;
  • learning centres;
  • computing facilities;
  • student services;
  • catering facilities;
  • students' union and sports facilities;
  • accommodation provision;
  • financial situation;
  • university environment and facilities;
  • overall satisfaction.

 Students were asked several questions relating to the details of their course as well as their personal circumstances. These questions allowed the research team to undertake analysis of responses by a range of variables.

 The questionnaire also aimed to accommodate respondents who wished to make further comments on the various aspects of their experience at SHU. This enabled the capture of issues that may not have arisen during the initial in-depth interviews.

 The questionnaires were processed using an ‘intelligent’ scanner system to produce a data file for statistical analysis using SPSS software. Qualitative comments from the questionnaires will be logged and analysed and used to improve the student experience at SHU as well as to refine the questionnaire for future use. 
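
By way of illustration only (the report specifies SPSS, not the sketch below), such a data file could equally be loaded and tabulated in Python; the file and column names are hypothetical.

    # Sketch: loading the scanner-produced SPSS data file and tabulating
    # one satisfaction item by school. Requires the pyreadstat package;
    # file and column names are hypothetical.
    import pandas as pd

    df = pd.read_spss("shu_experience_2002.sav")
    print(pd.crosstab(df["school"], df["overall_satisfaction"], normalize="index"))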

Distribution

Questionnaires were distributed to half the undergraduate and taught postgraduate students at SHU. The initial mailout was followed up with a reminder letter, and in the latter stages of the data capture process a second questionnaire was sent to addressees who had not thus far responded. To encourage responses, a summary of action on last year’s survey was enclosed with the questionnaire.  

The sample

The original fifty percent sample consisted of 9,368 students. Questionnaires returned because of incorrect mailing addresses, or returned after the final cut-off date for analysis, were excluded (n=233). This left an operational sample of 9,135. In all, 2,381 useable questionnaires were returned, giving a total response rate of 26.1%. Last year, all undergraduate and taught postgraduate students received one copy of the questionnaire, with no follow-up mailings; that approach generated 1,883 useable responses from an operational sample of 17,649 (10.7%).
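
The sample arithmetic can be checked directly:

    # Verifying the reported sample figures.
    original_sample = 9368
    excluded = 233                            # bad addresses or late returns
    operational = original_sample - excluded  # 9135
    returned = 2381
    print(operational, f"{returned / operational:.1%}")  # 9135, 26.1%

    # Last year's single-mailing approach, for comparison.
    print(f"{1883 / 17649:.1%}")                         # 10.7%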

The SHU student profile

Response rates have been broken down according to the profile categories included on the questionnaire. 

The largest proportions of the sample came from Cultural Studies (13.8%), Education (13.8%) and Business & Finance (13.5%) (Figure 1.9).

Figure 1.9: Response by school

School                               n       %
Business and Finance                 321    13.5
Computing and Management Sciences    164     6.9
Cultural Studies                     329    13.8
Education                            328    13.8
Engineering                          127     5.3
Environment & Development            189     7.9
Health & Social Care                 255    10.7
Science & Mathematics                132     5.5
Social Science & Law                 230     9.7
Sport & Leisure Management           302    12.7
Missing                                4     0.2
Total                               2381   100.0

 Exactly half (50.0%) of all responses are from students based at City Campus, while over a third (39.2%) are from students at Collegiate Crescent (Figure 1.10).

Figure 1.10: Response by campus

Campus                  n       %
City                  1191    50.0
Collegiate Crescent    934    39.2
Psalter Lane           182     7.6
Missing                 74     3.1
Total                 2381   100.0

 A quarter (24.7%) of respondents are in their fourth year of enrolment, while just over a fifth (21.6%) are in their first year. Less than half of one percent of respondents have been enrolled for 5 or more years; for the purposes of analysis in the report, enrolment was therefore recoded so that the highest level was 4+ (Figure 1.11).

 Figure 1.11: Response by years enrolled

Years          n       %
Up to 1 year   514    21.6
1-2 years      350    14.7
2-3 years      725    30.4
3-4 years      589    24.7
4-5 years       87     3.7
5+ years         8     0.3
Missing        108     4.5
Total         2381   100.0
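
The recoding described above (collapsing enrolment beyond four years into a single "4+" band) is a one-line operation in most packages. A pandas sketch, with hypothetical file and column names and assuming the bands are coded numerically:

    # Sketch of the enrolment recode; names and coding are hypothetical.
    import pandas as pd

    df = pd.read_spss("shu_experience_2002.sav")
    df["years_recoded"] = df["years_enrolled"].clip(upper=4)  # 4 now means "4+"
    print(df["years_recoded"].value_counts().sort_index())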

 Over a quarter of respondents who stated their level of study describe themselves as being at level 1 (27.6%), although the fact that a fifth (20.8%) of respondents are categorised as missing suggests there is some confusion about this issue (Figure 1.12). Almost three-quarters of respondents are full-time undergraduates (73.6%) (Figure 1.13).

 Figure 1.12: Response by level of study

Level         n       %
Foundation    56     2.4
1            657    27.6
2            429    18.0
3            475    19.9
4            222     9.3
5             25     1.0
6+            22     0.9
Missing      495    20.8
Total       2381   100.0

 Figure 1.13: Response by type and mode of study

Type and mode               n       %
Full-time undergraduate   1752    73.6
Part-time undergraduate    189     7.9
Full-time postgraduate     179     7.5
Part-time postgraduate     232     9.7
Postgraduate 'other'        29     1.2
Total                     2381   100.0

 Over a third of respondents (36.5%) are studying for a BA, while just under a third are enrolled on courses leading to a BSc (31.4%). There were not enough respondents studying for BEd or MBA qualifications to be able to report their views separately and they were therefore included in the ‘other’ category (Figure 1.14).

 Figure 1.14: Response by award

Award         n       %
Foundation     7     0.3
BEng          62     2.6
BSc          748    31.4
BEd            5     0.2
HND/HNC       92     3.9
BA           870    36.5
MSc          195     8.2
MA            80     3.4
PGD/PDC       23     1.0
PG Credit     10     0.4
MBA            9     0.4
LLB           71     3.0
Other        103     4.3
Missing      106     4.5
Total       2381   100.0

 Over half (52.9%) the respondents stated that their programme of study included a work placement, an increase on last year (48.3%).

 English is the first language for the vast majority of respondents (94%).

 A small percentage of respondents (5.2%) stated that they were enrolled on a Combined Studies programme.

 There are almost twice as many female respondents as male. Almost half of all respondents are aged between 18 and 21 (Figure 1.15). Due to the small number of respondents aged 55 or over, this age group was combined with the 45-54 category for the purposes of analysis in the report.

 Figure 1.15: Response by gender and by age

Gender      n       %
Male       874    36.7
Female    1507    63.3
Total     2381   100.0

Age
18-19      532    22.3
20-21      657    27.6
22-24      375    15.7
25-34      353    14.8
35-44      304    12.8
45-54      110     4.6
55+         19     0.8
Missing     31     1.3
Total     2381   100.0

 Figure 1.16: Response by ethnicity

Ethnicity                n       %
Black African            27     1.1
Pakistani                24     1.0
Asian British            40     1.7
White British          1642    69.0
White Irish              22     0.9
White European          393    16.5
White Other              19     0.8
Black British             6     0.3
Black Caribbean           8     0.3
Black Other               1     0.0
Indian                   25     1.0
Bangladeshi              12     0.5
Chinese                  34     1.4
Asian Other              11     0.5
Mixed                    23     1.0
Other                     9     0.4
Prefer not to answer     53     2.2
Missing                  32     1.3
Total                  2381   100.0

Figure 1.17: Response by region prior to enrolling at SHU

Region                   n       %
Yorkshire/Humberside    823    34.6
East Midlands           356    15.0
North                    78     3.3
North West              259    10.9
East Anglia             113     4.7
West Midlands           118     5.0
South East              137     5.8
South West               63     2.6
Wales                    31     1.3
Scotland                 12     0.5
Northern Ireland          9     0.4
Missing/NA              382    16.0
Total                  2381   100.0

 Over two thirds of all respondents are white British. The second most common ethnic group is white European (16.5%) (Figure 1.16).

Over a third of respondents live in the Yorkshire/Humberside region, while a tenth (10.9%) are from the North West  (Figure 1.17). Over a quarter (28.2%) lived in Sheffield itself prior to enrolling at SHU. 

Almost a fifth of all respondents (18.1%) describe themselves as having a disability of some kind (Figure 1.18). 

Figure 1.18: Response by disability

Disability                             n       %
Sensory disability                     52     2.2
Physical disability                    29     1.2
Dyslexia                               54     2.3
Mental ill-health                      29     1.2
Unseen disability/medical condition   180     7.6
Other                                  24     1.0
None                                 1908    80.1
Prefer not to answer                   63     2.6

 Over a quarter of all respondents (27.6%) have responsibility for adult dependents, and a tenth (10.7%) care for school-age children (Figure 1.19).

 Figure 1.19: Response by care responsibility

Responsibility             n       %
Pre-school age children    70     2.9
School-age children       255    10.7
Adult dependents          656    27.6
Elderly dependents         40     1.7
Other                      22     0.9
None                     1308    54.9
Prefer not to answer       42     1.8


 

 

 

