Software Quality Evaluator
This study, commissioned by the UK Joint Information Systems Committee (JISC), aims to evaluate the software produced by the 22 Distributed eLearning Tools projects and, where relevant, the effect of the programme's approach and management on the production of the software. The activities conducted during this study included an online questionnaire, face-to-face interviews with every project conducted at their lead institution, a selective check of the code created by each project, and, where appropriate, testing of the software products and evaluation of their accessibility and usability.
distributed learning, elearning, e-learning, learning software, vle, mle, learning environment, learning tools, software quality, evaluator, adult and community education, further education, higher education, JISC, assessment, web services
In terms of general structure and efficiency, the quality of the code we have examined has been high. The main problem areas are inconsistency of coding standards, where multiple programmers have contributed, and the general quality of code commenting. These have a significant impact on a third-party developer's ability to maintain and extend the code; we consider this a high-risk issue given the open-source nature of the programs. While many projects are excellent in all respects, we feel that some require minor rework, and a small number significant rework, to maximise the programs' chances of success and longevity as active, open-source projects.
In terms of general robustness, the overall quality of the software has been high. Many of the issues we have identified have been cosmetic rather than functional.
We understand that compliance with accessibility standards was not a key focus for this group of projects and that many have specific target audiences. However, if the projects are to be taken up more widely, there should be consideration of JISC's accessibility requirements. There is a need for academic institutions to ensure that services are accessible to the greatest possible number of people, regardless of any special access needs.
Four projects have produced good, usable applications and two more have produced demonstrations that would be very usable if implemented. Of the remaining interfaces tested, four were adequate and six were not adequate. For the remaining six projects, testing was of limited importance or not relevant (although brief usability reports have been produced for two of them).
Poor usability does not necessarily reflect a failure of the project, as many were proof-of-concept. Demonstrating that the application could work and creating a solid code-base were sometimes given higher priority than usability testing and refining the interface. Most projects had thought about usability, and many of the projects with poor or only adequate interfaces were aware that there were problems to be addressed.
Key recommendations include:
Points of interest
Many, if not all, of the projects are employing rapid application development or agile project management methods. The exact method used seems to have been less important than whether the team is both competent and happy with the approach. Some projects expressed concerns to us about how well the model of work packages and quality planning sits with rapid application development. Provided the programme manager and the evaluation team are sensitive to the potential problems and aware that aspects of a project's deliverables may change over time, we feel that this should not present a problem. Indeed, many of these projects seemed to combine rapid development very successfully with useful, concise, and accurate reporting.
Successful software development
Although it is difficult to say exactly what makes for a successful project, attributes of the most successful projects we examined included: regular testing; open flow of information within the team and with outsiders, particularly potential users; clear communication; a self-critical eye; quality control; regular management overview and updates; close contacts with other projects and standards bodies; enthusiasm; and, of course, competence and expertise. Undeniably luck comes into it too, but one can insure against bad luck to a certain extent by good planning (see next point).
Value of software quality planning and evaluation
Completing the quality plan early in the process ensured that projects considered software quality, and the technologies and standards that they were going to apply, from the very start. Many teams mentioned that the quality plan provided the focus that is so important for such short projects. Others found it useful to revisit their quality plans during the project to take stock and, once again, focus. Several teams said they would have planned for software quality without JISC's intervention, but that the quality planning process was easy for them "because we were doing all this stuff anyway". By contrast, a small minority of teams complained that the quality process was burdensome and time-consuming. In our view, the light touch of the quality planning and evaluation required by JISC ensured that all projects had considered the basic issues at an early stage in their development, while not over-burdening them with bureaucracy.