5.1 Summary of sources
5.2 Product Summaries
The genesis of much of the work on personalisation and adaptive user interfaces lies in research conducted into artificial intelligence, starting in the 1980s. This remains an active area of academic study within the Human Computer Interaction (HCI) community. The main rationale for the construction of user models is to allow systems to adapt to user needs and therefore enhance the user experience. This utopian goal has been hijacked to some extent by commercial interests who use the technologies to 'push' information at sometimes unsuspecting users. Benyon & Murray review the early work in this area and identify common approaches and architectures. Although over 10 years old, this is still a useful primer for the area of study.
A more up-to-date state of the art review is provided by de la Flor. Noting that the benefits of adaptive interfaces based on user models include increased control (of the application by the user) and an enhanced user experience, she reviews techniques for the acquisition of data from which to construct models. These include both implicit data (server logs, cookies, etc) and data explicitly supplied by users (age, gender, location, etc). It is worth noting that, in the FHE context, there is potentially a rich source of user data already held in various institutional systems. Utilising this data may help overcome users' reluctance to fill in on-screen profiles, but this must be balanced against concern for privacy issues (see below).
de la Flor also reviews the major pattern matching models used by systems to create user stereotypes and assumptions, which are in turn used to create associations between items and users. This process, as applied in eCommerce (see below), is essentially identical to what would be required for a personalised service in the context of the JISC Information Environment (see 5.12.2 below).
The application of personalisation technologies to commercial Websites was a feature of the 'dot com' boom of the late 1990s. Following on from pioneering work by Amazon, virtually every retailer and service provider with an Internet presence began to implement a personalised 'my.xxx.com' interface. In the world of eCommerce, personalisation is commonly achieved using collaborative or rules-based filtering, either singly or in combination. These approaches draw directly on the academic research into user modelling outlined above. A central plank in constructing user models is the tracking of user behaviour, often by the use of so-called 'tracking cookies'. Whilst this form of covert analysis of user behaviour is still widespread (the BBC, for example, makes use of 'Red Sheriff' tracking cookies), it has also been widely criticised, and the use of 'anti-spyware' utilities such as AdAware is now quite common. The industry has responded by setting up codes of ethical practice through organisations such as the Personalization Consortium, which acts as an informal industry body for the Web marketing sector.
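The collaborative filtering mentioned above can be sketched very simply: score items the target user has not yet rated by the ratings of similar users, weighted by similarity. The following is a minimal illustration only (users, items and ratings are invented), not a description of any vendor's implementation:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse rating dicts {item: rating}."""
    shared = set(a) & set(b)
    num = sum(a[i] * b[i] for i in shared)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def recommend(target, ratings, top_n=3):
    """Rank items unseen by the target user, weighting each neighbour's
    ratings by that neighbour's similarity to the target."""
    scores = {}
    for user, their in ratings.items():
        if user == target:
            continue
        sim = cosine(ratings[target], their)
        for item, r in their.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

ratings = {
    "ann": {"book_a": 5, "book_b": 3},
    "bob": {"book_a": 4, "book_b": 3, "book_c": 5},
    "cat": {"book_d": 4},
}
print(recommend("ann", ratings))  # book_c ranked first (bob is ann's closest neighbour)
```

Production recommenders add normalisation, decay and much larger neighbourhoods, but the core association step is essentially this.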
Privacy concerns have been one factor in slowing the rush towards personalisation in eCommerce. However, another factor has been the growing body of evidence that, in purely commercial terms, the paybacks from implementing personalisation do not warrant the high costs of implementation and maintenance. An early example of this line of argument can be seen in the article by Nielsen, who states that money invested in personalisation would be more effectively spent increasing the general usability of sites. Lighthouse also question the evidence of effectiveness. Pointing to the many examples where personalised services have failed or been quietly withdrawn, they conclude that "the oft-cited Amazon personalisation success is not a harbinger of Internet commerce's future, but an atypical example". Indeed, a number of authors have pointed to the intrinsic suitability of books, CDs, etc. to recommender services based on collaborative filtering.
A more wide-ranging and damning critique of the current state of play in eCommerce is provided in the recent Jupiter research study "Beyond the Personalization Myth: Cost Effective Alternatives to Influence Intent" (reviewed by Rush). This influential study provides evidence from a broad survey of eCommerce applications. Amongst its key findings are:
It is interesting to note that the findings on the level of take-up of personalised services in eCommerce are supported by evidence from 'public service' providers, such as the BBC.
The case that corporate portals are the appropriate place for implementing personalisation is also taken up by Nielsen who argues that "the weaknesses of Internet portals are the strengths of intranet portals". In this context it is worth noting that an institutional portal, if it is using controlled data and therefore could supply specific information to a restricted audience, would qualify as an "intranet portal".
This section deals with generic architecture and design issues relating to the role of personalisation within Internet portals. Reviews of specific features of individual products and toolkits are provided in section 5.2.
As Dolphin et al point out "there are as many definitions of a portal as there are purposes to which portals are put". Dempsey proposes a 2x2 matrix of portal types:
In each quadrant (e.g. Mediation/ Static, Presentation/ Custom ...) he identifies different examples of portals. From the perspective of personalisation, it is the 'Custom' column which is of interest. He characterises library automation systems such as My.Library as Presentation/ Customisable, indicating that customisation is restricted to the presentation of information to users, whereas Metasearch products such as ZPortal are labelled Mediation/ Customisable because customisation can also occur at the mediation level, usually via a mechanism for selecting the underlying resources to be included in a search.
Sawyer & Bailey describe their experience of implementing an institutional portal at Monash University. The theme of matching user profiles to resource profiles, discussed in the Digital Library section above, emerges strongly in their paper. Monash have implemented an enterprise portal which acts as a 'thin' layer, brokering access to resources. Within the HE context, they identify specific data requirements:
Resource Metadata. Three classes of access are defined:
At its simplest level, a resource catalogue may provide a link to an existing service. However, the authors explore the use of intelligent interfacing agents providing services such as:
Echoing Lynch's description of requirements for user profiles in a distributed environment, Monash decided to utilise an LDAP directory service to store details "drawn from existing information where possible".
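Neither Lynch nor the Monash paper specifies a directory schema, so the sketch below assumes common inetOrgPerson/eduPerson attribute names (`uid`, `eduPersonAffiliation`) purely for illustration. It builds an RFC 4515 search filter of the kind a portal might use to look up profile details in such an LDAP directory:

```python
def escape_ldap(value):
    """Escape the characters RFC 4515 reserves in LDAP filter values."""
    for ch, esc in (("\\", r"\5c"), ("*", r"\2a"), ("(", r"\28"), (")", r"\29"), ("\0", r"\00")):
        value = value.replace(ch, esc)
    return value

def profile_filter(uid, affiliation=None):
    """Build a search filter for a user-profile entry; the attribute
    names here are an assumption, not Monash's documented schema."""
    clauses = ["(uid=%s)" % escape_ldap(uid)]
    if affiliation:
        clauses.append("(eduPersonAffiliation=%s)" % escape_ldap(affiliation))
    return clauses[0] if len(clauses) == 1 else "(&" + "".join(clauses) + ")"

print(profile_filter("jsmith", "staff"))
# (&(uid=jsmith)(eduPersonAffiliation=staff))
```

Drawing the attribute values "from existing information where possible", as Monash did, means the portal only reads this directory; it never becomes the master copy of user data.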
Dolphin et al (op cit) describe similar work undertaken at Hull University in setting up an institutional portal and extending this to include a wide range of distributed resources as part of the JISC funded PORTAL project. The authors describe their experience of implementing a uPortal based system. Their view of the portal as a thin layer which brokers access to services is essentially the same as that described by Monash. However, to date, a pragmatic approach to personalisation has been taken with many of the options available in uPortal being deliberately disabled. Concern is expressed about the risk of personalisation options in the user interface "distracting users ... before they get to use a site for the first time". As for personalisation in terms of content, so far, other than access to personal profiles, etc, the main distinction drawn has been between "staff" and "student" users. Again, they express concern about close mapping of user profiles to resources producing "dangerously narrow views on the information landscape, in which a user is only presented with 'interesting' or 'relevant' resources that they have already classified as interesting or relevant, removing the possibility of serendipitous leaps off into related resources".
The Library and Information Retrieval communities provide another important strand in the debate about personalisation. Libraries are an important area of potential application of personalisation technologies. Also, due to a large extent to US privacy laws restricting libraries' use of patron information, the library community is particularly sensitive to privacy issues.
Lynch provides an excellent overview of the privacy issues alongside an analysis of some of the 'soft' issues relating to personalisation. Lynch focuses on 'recommender' systems. The exponential growth of information available on the World Wide Web, and the subsequent difficulties faced by many users confronted with thousands of hits from search engines such as Google, has led to increased interest in systems which filter or rank results for individual users. Lynch distinguishes between systems based on the opinions or actions of other 'similar' people, those based on the opinions or actions of opinion leaders or people rated as 'respected' by the user, and those based on popularity ratings. He also suggests that it is possible to build 'privacy friendly' recommender systems which are scalable and useful, by anonymising recommendation lists submitted by users. His discussion of the role of trusted third parties in holding sensitive user information, although written from a public library perspective, is highly relevant to the UK academic community. Lynch argues strongly that such information should be held close to the user, in a distributed system, rather than in a centralised repository.
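One way to read Lynch's suggestion of anonymised recommendation lists is that the service stores only item co-occurrence counts, never user identifiers. The sketch below (item names invented) is one possible interpretation, not Lynch's own design:

```python
from itertools import combinations
from collections import Counter

class AnonymousRecommender:
    """Aggregate submitted reading lists as item co-occurrence counts only;
    no user identifier is ever stored or linkable to a submission."""
    def __init__(self):
        self.cooccur = Counter()

    def submit(self, items):
        # Record only which items appeared together, never who submitted them.
        for a, b in combinations(sorted(set(items)), 2):
            self.cooccur[(a, b)] += 1

    def related(self, item, top_n=3):
        scores = Counter()
        for (a, b), n in self.cooccur.items():
            if a == item:
                scores[b] += n
            elif b == item:
                scores[a] += n
        return [i for i, _ in scores.most_common(top_n)]

rec = AnonymousRecommender()
rec.submit(["dublin_core", "rdf", "oai_pmh"])
rec.submit(["rdf", "oai_pmh"])
print(rec.related("rdf"))  # oai_pmh first: seen together with rdf twice
```

The trade-off is the usual one: aggregate counts scale well and leak little, but they cannot support the per-user weighting that identified profiles allow.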
In June 2001 the DELOS Network of Excellence held a seminar in Dublin on Personalisation and Recommender Systems in Digital Libraries. The papers presented are summarised on the ERCIM Website. Although the seminar focused mainly on technical aspects of collaborative filtering, etc., Finn et al. point out the issues surrounding the classification of Web-based content. In the discussion of user modelling above, reference was made to the matching of user profiles to items. However, these items themselves need to be described in order to be matched. Whilst librarians are adept at using metadata to describe a variety of resources, the sheer volume of Web content defies manual classification. Whilst attempts to encourage authors to embed metadata (e.g. through the use of Dublin Core meta-tags) have met with limited success, Finn et al. discuss the potential of combining automatic classification tools with recommender systems. Whilst such approaches are in their infancy and could be regarded as too 'leading edge' for consideration by JISC in concrete service provision, it is clear that some solution along these lines will be essential if the potential benefits of personalisation are to be fully realised.
In parallel with the interest in personalisation in the context of the provision of Internet-based services, which has provided the main focus for this study, there has been a recent growth in use of the term in the wider political context when referring to the delivery of government services more generally. This trend has already been explored in some detail in section 4.2.4. In a book commissioned by the think-tank Demos, Leadbeater (Personalisation Through Participation: A new script for public services (2004)) likens personalisation to privatisation in terms of being a 'big idea' driving government policy. This policy drive has been taken up in particular in education, with ministers such as David Miliband making speeches promoting 'personalised learning'. Demos organised a high-profile seminar (sponsored by WebCT) in May 2004 to promote this view of personalisation in FHE.
In relation to the previous technology-based discussion, the term personalised learning is quite loose and could be applied to any self-paced learning. Therefore, at its most basic level, most distance or technology based learning is personalised learning. However, the more radical agenda is to use the mediating technology to enable demand-led changes to the way services (in this case education is merely an example of a service) are delivered. In practical terms, the underlying tools will be the same as those discussed above, however the goal of personalisation is much broader (or, some would say, more nebulous).
Despite personalisation being so high on the current political policy agenda, the eGovernment Interoperability Framework (eGIF) has surprisingly little to say on the subject. In theory the eGIF is a major integrating factor in the diverse drive towards eGovernment across both central and local government, but the only reference in the entire document to personalisation concerns the use of 'transcoding' approaches to allow content to be delivered across multiple platforms (Web, kiosk, mobile phone, PDA, etc.). The term 'transcoding', though not currently in widespread use, appears in early work from IBM research on enabling content to be rendered across a range of devices with differing requirements (see Internet Transcoding Technologies for Universal Access). Developments in the use of Web technologies such as Cascading Style Sheets and XSLT transformations have led to a large body of emerging good practice on the use of different 'skins', both for coping with different display devices and for enhancing accessibility. However, in the context of this report, this is often not an example of true 'personalisation', as the display format is usually either auto-negotiated based on the characteristics of the viewing device or selected from a small set of fixed options (limited customisation). APOD would occur if the system determined the appropriate display from a user profile held elsewhere.
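The distinction drawn here, between auto-negotiated display and a profile-driven choice (APOD), can be illustrated with a toy skin selector. The skin names and user-agent heuristics below are invented for illustration and are far cruder than real device-detection libraries:

```python
def choose_skin(user_agent, profile=None):
    """Pick a presentation 'skin'. The device sniffing is limited
    customisation / auto-negotiation; a stored preference in the user
    profile overriding it is the APOD case the report describes."""
    if profile and profile.get("preferred_skin"):
        return profile["preferred_skin"]   # profile-driven: APOD
    ua = user_agent.lower()
    if "wap" in ua or "mobile" in ua:
        return "mobile"                    # auto-negotiated from the device
    if "kiosk" in ua:
        return "kiosk"
    return "desktop"

print(choose_skin("Mozilla/5.0 (Mobile; ...)"))                         # mobile
print(choose_skin("Mozilla/5.0", {"preferred_skin": "high-contrast"}))  # high-contrast
```

The same split applies whether the 'skin' is a CSS file or a whole XSLT pipeline: only the profile-driven branch counts as personalisation in this report's terms.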
An interesting perspective on the role of personalisation technologies in the delivery of Web services can be found in the personal Weblog of Alan Mather. Mather is one of the leading thinkers in the Office of the e-envoy and an advocate of personalisation as an important tool in helping citizens locate the information they need. However, he points out the problems of aggregating content classified using different taxonomies and with different standards for updating and maintaining data. He also points out that, for individuals to be willing to provide personal data - or to allow it to be used - there needs to be a perceived benefit - "We are probably far enough ahead to collate data anonymously and add some data based on what we can assume (e.g. from post code), but certainly not smart enough yet to link that to what government genuinely knows about you as a person. So one size fits all is still our model and we're some way off one size fits one."
A US view of personalisation within eGovernment comes from O'Looney. Although he reports that individuals surveyed "indicated a basic level of comfort with their government developing and using profiles based on information that is quite personal", he also notes the widespread privacy concerns, particularly concerning the sharing of personal data between agencies. He notes that these concerns, combined with cost, have limited attempts to build personalised services. He concludes that citizens will be more likely to accept the use of personal data if they are able to control how much information is exposed and to which agencies.
The main context for the application of personalisation technologies within FHE is, of course, the JISC Information Environment (see 2.2.2 above and, for a pictorial representation, Appendix 11.6). This distributed architecture, developed and enhanced by UKOLN staff over time, has received widespread support as a catalyst for interoperability and has been adopted by a range of public bodies in the UK (as the Common Information Environment). Within this context, personalisation can be seen as a function invoked by components at the presentation layer, in particular portals. However, personalisation of 'push' services can also apply at the fusion and provision layers. Personalisation may be embodied entirely within the portal or may rely on interaction with shared infrastructure.
One of the most directly relevant reviews of personalisation technologies in relation to application in FHE was conducted by Monica Bonnet of UKOLN in 2001. In addition to reviewing a number of key products and projects, she lists a number of challenges which arise when personalisation technologies, particularly implicit personalisation techniques such as click stream analysis and collaborative filtering, are used. These include:
More recently, JISC has funded Bevan & Kincla to produce a foundation study into HCI Design for the FHE sector. The study draws strongly on the work of Nielsen (op cit). The authors use the term 'automatic personalisation' to refer to what we have defined as 'adaptive personalisation'. Many of the recommendations are general. Amongst the key recommendations in relation to personalisation are:
Another recent JISC-funded study by Asensio investigated the user requirements for a moving pictures and sound portal. The study identified a requirement for users to be able to download and edit resources available from the portal. The Asensio study calls this editing (which happens not on the service but on the user's machine after download) "personalisation" - clearly very different from the definitions used in this report. The Asensio study does not identify a requirement for personalisation of the portal presentation component.
As can be seen from the discussion of the role of personalisation within the JISC Information Environment above, Authentication and Authorisation services potentially have a major role to play in providing access to information already held about users. This is discussed in section 6.4 of this report.
Although there are no recognised or de facto standards which relate solely to personalisation, it is clear from the discussion above that personalisation services will need to interact with other services in the distributed environment. The JISC IE lists a number of key standards which enable its components to interact. In the context of personalisation services, the Web Services architecture and its related standards (XML, UDDI, WSDL) are particularly important. A good introduction to Web services is provided in Gardner's Ariadne article.
Adoption of the Web Services architecture enables systems designers and implementers to integrate content from a variety of distributed sources relatively easily. The availability of accessible and relevant content is necessary if presentation services such as portals are to be regarded as useful by end users. In this context, the larger the pool of available content, the more likely it is that users will value a personalised view on the landscape of resources. A recent extension to the Web services framework, which promotes access to remote content, is the WSRP specification. WSRP sets out the standards for integrating remote content into a portlet using the Web services architecture. In portals based on Java (most modern portals), the most important recent standardisation effort has come from the Java Community Process where JSR 168 has defined a set of Java APIs which define the way portlets are integrated into portals.
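The value of a standard portlet contract can be seen in miniature: a portal assembles a page by invoking each portlet's render operation with that user's preferences. The Python sketch below is a language-neutral analogy only; the real JSR 168 contract is a set of Java interfaces (and WSRP the remote, Web-services equivalent), not this invented API:

```python
class MiniPortal:
    """Toy portal: portlets register a render(prefs) callable and the
    portal assembles a page, passing each portlet its per-user
    preferences. This mirrors, very loosely, the portal/portlet
    separation that JSR 168 standardises for Java portals."""
    def __init__(self):
        self.portlets = {}

    def register(self, name, render):
        self.portlets[name] = render

    def render_page(self, user_prefs):
        # Each portlet sees only its own slice of the user's preferences.
        return "\n".join(render(user_prefs.get(name, {}))
                         for name, render in self.portlets.items())

portal = MiniPortal()
portal.register("news", lambda p: "<div>news (%s)</div>" % p.get("topic", "all"))
portal.register("search", lambda p: "<div>search</div>")
print(portal.render_page({"news": {"topic": "hci"}}))
```

Because the contract is uniform, a portlet written against it can move between portals; that is precisely why both JSR 168 and WSRP matter to the mechanics of personalisation.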
The following diagram, produced by the Subject Portals Project describes the relationship between WSRP and JSR 168:
Although portal developers or purchasers should consider support for both WSRP and JSR 168 as being important in enhancing interoperability, it is JSR 168 which is directly relevant to the mechanics of implementing personalisation in a portal made up of multiple portlets.
Within the broader context of promoting interoperability, a range of standards developed by the IMS Global Learning Consortium are important. CETIS provide an excellent service of tracking the relevance of these to the FHE community. In the specific context of user modelling or profiling, the IMS Learner Information Profile (LIP), which forms the basis of the forthcoming British Standard BS8788, is likely to become widely adopted within FHE for exchanging information about students between systems. Pressure on institutions to adopt systems which allow exchange of learner information is increasing from a number of sources, including QAA requirements for an HE Progress file, the European Diploma Supplement and growing interest in both the UK and USA in Personal Development Planning and ePortfolios. This area is reviewed fully by the JISC-funded ePortfolio project. Again, whilst not relating directly to personalisation technologies, the ability to access data about individuals held in third party systems will help clear away some of the major inhibitors to implementations of personalisation.
Looking further afield than the UK academic community, growing concerns about privacy issues as outlined above have led to the formation of the Liberty Alliance Project. The Alliance is a membership organisation with many high profile commercial and governmental members. Their goal is to develop open specifications to promote the use of 'federated identity' to both simplify the sharing of appropriate user data between trusted organisations and remove the need for centralised storage of composite user details, thus addressing some privacy concerns.
When it comes to describing resources in a consistent manner (to enable them to be mapped against learner profiles), developments based on the RSLP Collection Description Schema appear currently to be most promising, especially in the digital library context. Common approaches to representing and exchanging information about resources will act as an enabler for personalised systems in the same way as common approaches to sharing information about individuals.
The following section provides a brief overview of some of the key products and toolkits available for use in the UK academic community at the moment and summarises what personalisation features are offered.
Although products such as SharePoint, based on the Microsoft .NET framework, have a widespread following outside FHE, there is not enough evidence of use within FHE for them to be reviewed in detail here. The remaining commercial products in widespread use are generally either Java based or offer J2EE interfaces for integration purposes. SAP Enterprise Portal (part of the mySAP suite) based solutions are in widespread commercial use, but do not appear to have gained a major foothold in FHE, presumably on cost grounds.
The most widely used commercial products, based on anecdotal evidence, appear to be Oracle Portal, Plumtree and Sun Portal Server. SCT Luminis is not discussed in detail here as it is based on the uPortal framework, which is covered in the uPortal section below.
Oracle's portal offering is designed to be easy to deploy, with 'wizards' guiding administrative users through most key tasks. Earlier versions of Oracle's portal offered a proprietary SOAP interface to bind in remote content; this is being replaced with WSRP compliance. Similarly, Oracle was actively involved in the development of JSR 168 and offers the opportunity to build compliant portlets via its Portal Development Kit (PDK). It also provides access to a large number of pre-built portlets via its older proprietary API. Although quite 'open' and standards-based in many respects, one interface that is not open is that to the underlying database, with an Oracle 9i license being required.
Personalisation features are quite advanced with users being assigned the following permissions on a page-by-page basis:
Within portlets, users can also customise parameters such as search order, etc.
Adaptive Personalisation is supported through the authentication system being integrated into group level privileges, automatically assigning content, home pages, etc. to group members (APOD). Finally, Oracle has, as an optional extra, a Personalisation module which can track user activity and act as a recommender system (APUA).
Plumtree's portal is widely used in commercial contexts. Although not currently used extensively in UK FHE, Plumtree do target the education sector and have a number of US universities amongst their customers. Plumtree's slogan is 'radical openness', by which it means the ability to integrate with both J2EE and .NET frameworks. Its portal is part of a suite of products aimed squarely at supporting an enterprise-level Web presence and it is tightly integrated with elements such as the collaboration server and knowledge directory. Like Oracle's, Plumtree's support for portlets and its integration of third-party content pre-date the WSRP and JSR 168 standards, but the product now provides fully tested compliance with both.
Plumtree offer users the ability to customise the content and presentation of information in a range of ways. The collaboration server allows for a range of shared workspaces and collaborative environments to be deployed. The portal also builds a user profile and can distribute user details via a Web services interface. There is also the ability to monitor user activity, although no built in support is apparent for adaptive personalisation.
Sun's big selling point for its portal offering is 'identity management'. This involves integration with the bundled Sun identity server to enable single sign-on across a range of applications. Otherwise, Sun's offering is similar in features to Plumtree and Oracle, including support for WSRP and JSR 168. Interestingly, Sun is the only vendor to mention the Liberty Alliance in its list of supported industry standards.
Adaptive Personalization (APOD and APUA) is achieved via the add-on "Personalized Knowledge Pack" which offers:
The majority of standard library automation systems occupying the 'presentation/ customisable' quadrant of Dempsey's grid (see above) offer limited customisation features, although, increasingly, features such as saved searches and alerting mechanisms based on simple interest profiles are also available. Metasearch tools, which allow transparent searching of multiple data sources, allow further customisation options (mediation/ customisable), such as the ability to select the data sources to be searched, sometimes offering the ability to save one or more 'profiles' consisting of groups of search targets (rather than user interest profiles per se). Amongst the most popular of these in the UK academic community are MetaLib from Ex Libris, ENCompass from Endeavor, Horizon Information Portal from Dynix and Prism from Talis. FDI's ZPortal, although developed partly from a JISC-funded project, has no users in the UK academic community, but it is used in the North Bristol NHS Trust (UK) and as the basis of the ARL Scholars' Portal project in the USA.
Searching the Websites of these vendors reveals that only Dynix have published detailed plans for adaptive personalisation features, via a 'personalisation wizard'. Although they stress that all features will be 'opt-in' (i.e. explicit), Dynix plan to integrate 'continuous preference learning' through tracking user activity. There is no indication, however, when this vision will be incorporated into products.
The market leaders in the FHE virtual learning environment market are Blackboard, WebCT, Granada LearnWise, and Teknical Virtual Campus. Both Blackboard and WebCT ensure that their latest generation products can interface with campus portals and authentication/authorisation services. Teknical's eLearning Portal Server integrates with Virtual Campus and provides pass-through log-in to it. Blackboard offers its own 'Community Portal' product which, similarly, has role-based personalisation options. All four offer built-in features such as collaboration environments and accessibility options.
uPortal is developed by the Java Architectures Special Interest Group (JA-SIG), an independent membership organisation based in the USA. uPortal development was funded in part by a grant from the Mellon foundation.
uPortal's strength, compared with other portal offerings, is that it was developed by the FHE sector for the FHE sector. It is available under a "Free / Free" open source license. It is described as a portal 'framework' which "takes care of the common functionality that every portal needs, so that you can implement the parts that are important and specific to your campus".
As can be deduced from the developers' title, uPortal is written in Java, using XML and XSLT in the common, state-of-the-art manner. Although it is shipped with Apache Tomcat, uPortal has also been deployed successfully using other servlet containers, including Resin and BEA WebLogic. It relies on a relational database (such as MySQL) to store user and resource details. WSRP compliance was built in to r2.2 in April 2004, with JSR 168 support in r2.3, available from June 2004. A major upgrade, r3.0, is in development with no current release date.
One of the refreshing features of uPortal is the honesty of the developers about its limitations. The FAQ warns of problems, for example, with the use of LDAP for authentication.
The personalisation features of uPortal include the ability to define layouts based on changing the layout of 'Framework elements' (tabs, columns, header icons) or their content (the 'channel' or 'portlet'). Administrators may define a default layout and content for individuals or groups (APOD). They may also allow individuals to customise some or all of these settings. As outlined above, all user preferences are stored in the underlying database. A number of commercial organisations offer installation, integration and support services around uPortal. Of these, the most widely known in the UK is SCT Luminis [op cit] which is used by the University of Nottingham and the University of Birmingham.
Jetspeed is an enterprise information portal server developed as a project of the Apache foundation. Like other Apache projects it is based on Java and XML technologies. It has been developed in conjunction with other Apache software, notably Tomcat, but should be able to be deployed using other servlet containers. Jetspeed is available under the Apache open source license which allows the product to be freely distributed for both commercial and non-commercial applications. Jetspeed provides a range of standard portlets for content such as RSS, HTML, Java Applets, etc. However, there currently appear to be no plans for JSR 168 or WSRP support. Like uPortal and Jahia, Jetspeed allows users to choose which portlets are displayed and customise the screen layout. Customisation is also driven initially by the authentication service.
Jahia is a "collaborative source" portal server with integrated content management features written in Java. It promises full JSR 168 support for integration of portlets and a 'layout manager' which allows users to customise their screens (similar to my.yahoo or, indeed, uPortal). Jahia is aimed at commercial customers rather than FHE and the software license is based on a 'contribute or pay' model, although the source code is freely available.
There appear to be a wide range of initiatives labelled "my.library". One of the most frequently cited is that developed at North Carolina State University and now maintained by Eric Lease Morgan at the University of Notre Dame. My.library@ncsu is simply a set of Perl scripts (actually two sets - one for the admin interface and one for the end user interface). The system is based on constructing a simple user model using keywords. Librarians manually catalogue a wide range of resources (Web pages, databases, Internet journals, etc.) using these keywords. The user is thus presented with a personalised list of resources which match his or her profile as a starting point (APOD). From there, the user can customise the system by adding or removing a subset of the resources (those highlighted with an asterisk) proposed by the librarians or administrative users. There is also the opportunity to customise some of the cosmetic aspects of presentation. My.library is available under the GNU Public License (GPL).
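The matching step described above (APOD via librarian-assigned keywords) reduces to a simple set intersection between a user's interest profile and the catalogued keywords. The resource names and keywords below are invented for illustration and do not reproduce my.library's actual Perl code:

```python
def match_resources(user_keywords, catalogue):
    """Return resources whose librarian-assigned keywords overlap the
    user's interest profile, best matches first (ties alphabetical)."""
    profile = set(user_keywords)
    hits = []
    for resource, keywords in catalogue.items():
        overlap = profile & set(keywords)
        if overlap:
            hits.append((resource, len(overlap)))
    return [r for r, _ in sorted(hits, key=lambda h: (-h[1], h[0]))]

# A toy catalogue of the kind librarians would maintain by hand.
catalogue = {
    "Web of Science": ["citations", "science"],
    "ERIC": ["education"],
    "INSPEC": ["engineering", "science"],
}
print(match_resources(["science", "education"], catalogue))
```

Because both sides of the match are hand-maintained keyword lists, the approach scales only as far as librarian effort does; this is exactly the classification bottleneck Finn et al. identify above.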
Bodington is an open source VLE developed by the University of Leeds. It has been used collaboratively with other organisations, for example as part of the PORTOLE project. Publicly available documentation is sketchy at the moment and there is no reference to the availability of personalisation features, although accessibility is a major design goal.
Sakai is a collaborative development project, supported by the Mellon Foundation, which is producing an open source Collaboration and Learning Environment in Java. Project partners and timescales are shown below:
[Figure: Sakai Project Timeline]
Although Sakai is not directly addressing personalisation as part of its primary activity for either the 1.0 or 2.0 release, it is closely related to uPortal and will therefore be able to utilise uPortal's personalisation features. The architecture is based on JSR 168; in theory this means that the Sakai tools could be delivered via any compliant portal. However, initial releases of Sakai will rely on an embedded uPortal configuration. The project's current advice is that "it is best to think of Sakai running in the uPortal framework for the next two years unless someone wants to mount a significant development effort".
The development of COSE was initially funded by the JISC and the team have received subsequent JISC funding to investigate and enhance interoperability. Work on interoperability has focused on implementing interfaces which support a variety of IMS specifications, including Content Packaging and Metadata (now IEEE LOM), as well as ADL SCORM. The next release, v2.1 (due 'mid 2004'), promises full conformance with v1.01 of the IMS Enterprise specification. This version also promises to be available as open source (the current version is available as a binary distribution with a free licence and an optional, priced, support package).
COSE sets out to be learner centred and incorporates a range of groupware features including chat rooms, sharing of content and sharing of annotations. Tutors are able to create and manage hierarchical 'groups' which can be set up for a department, course, year group, topic or individual learning opportunity. Tutors can assign tasks to groups and publish 'pagesets' of resources for use by groups. Users can also manage their own peer groups, maintain their own unpublished pagesets and share these with others. In personalisation terms, the user experience is a combination of Customisation and APOD. COSE is used and actively developed by Staffordshire University. The website provides no information about other institutions using the software.
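The hierarchical group and pageset model can be illustrated with a short sketch (a hypothetical data structure, not COSE's actual implementation; the group and pageset names are invented). The assumption made here is that a member of a group sees pagesets published to that group and to any ancestor group:

```python
# Hedged sketch of COSE-style hierarchical groups with published
# 'pagesets'; names and visibility rule are invented for illustration.

class Group:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.pagesets = []  # resource 'pagesets' published to this group

    def visible_pagesets(self):
        """Pagesets published to this group or to any ancestor group."""
        group, visible = self, []
        while group is not None:
            visible.extend(group.pagesets)
            group = group.parent
        return visible

dept = Group("Computing")               # department-level group
course = Group("HCI-101", parent=dept)  # course group within the department
dept.pagesets.append("Departmental resources")
course.pagesets.append("Week 1 readings")
print(course.visible_pagesets())  # ['Week 1 readings', 'Departmental resources']
```

User-maintained peer groups and unpublished pagesets would sit alongside this tutor-managed hierarchy rather than within it.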
Colloquia was developed using JISC funding as another 'home grown' VLE, concentrating on support for group working and group learning. It does not currently appear to be under active development as a stand-alone system, although it has been used as part of the RELOAD project.
There are a number of open source toolkits available which can assist in the implementation of adaptive personalisation. The majority of these focus on Collaborative Filtering. A number of toolkits have been developed in the context of specific use scenarios - e.g. ratings of books or movies. Whilst these may have generic components applicable elsewhere, they are not reviewed here.
A second group of tools appear to have been developed principally as an aid to academic research rather than as 'products' in their own right. These include Foxtrot from Southampton University and Altered Vista from Utah State University. The latter appears to have some potential for wider use and is of particular interest because it has been used in the context of aiding the search for learning objects. The researchers conclude that US academic users tend to provide ratings with a marked 'ceiling effect' and a large degree of consensus. Although this produces accurate predictions, it means that personalised recommendations do not differ significantly from recommendations based on averages taken across the entire community.
Other toolkits which appear to have potential for applicability include COFI and the similarly named CoFE (previously known as CFEngine). COFI, although still officially a beta release, has been used in at least one real world application. It provides "a foundation of already tested [collaborative filtering] algorithms and documented that can be used in a wide range of contexts from research to applications" and is available under the GPL. CoFE, on the other hand, is intended to be a complete collaborative filtering solution, rather than an extensible set of algorithms. V0.3 of CoFE was released in March 2004, indicating that this is not yet a mature product, although development at Oregon State University appears to be quite active. The open source license is similar to the BSD license.
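To illustrate the kind of algorithm these toolkits package, the following is a minimal user-based collaborative filtering sketch (written in Python for brevity; it is not taken from COFI or CoFE, and the user names and ratings are invented). A rating is predicted as a similarity-weighted average of other users' ratings for the same item:

```python
# Minimal user-based collaborative filtering sketch (illustrative only).
from math import sqrt

ratings = {
    "alice": {"item1": 5, "item2": 3, "item3": 4},
    "bob":   {"item1": 4, "item2": 2, "item3": 5},
    "carol": {"item1": 1, "item2": 5},
}

def similarity(a, b):
    """Cosine similarity over the items two users have both rated."""
    common = ratings[a].keys() & ratings[b].keys()
    if not common:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in common)
    na = sqrt(sum(ratings[a][i] ** 2 for i in common))
    nb = sqrt(sum(ratings[b][i] ** 2 for i in common))
    return dot / (na * nb)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for item."""
    pairs = [(similarity(user, other), r[item])
             for other, r in ratings.items() if other != user and item in r]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else None

print(predict("carol", "item3"))
```

Note that when everyone rates near the top of the scale with high consensus (the 'ceiling effect' described above), such a weighted prediction will differ little from a simple community average.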
The final category of personalisation tools comprises those which are designed as modules for use with particular Content Management Systems. An example of such a module, based on APUA techniques, can be found on the Drupal website. Although there appears to be no similar module currently in existence for use with ZOPE, this might be seen as a likely area for development.