eduroam FE workshops and Microsoft NPS

Update (17/04/2018)

We have now published an updated v0.5 of the eduroam (UK) Microsoft NPS Configuration Guide on the Jisc community site.

Update (12/03/2018)

An updated draft v0.3 of the eduroam(UK) Microsoft Network Policy Server guide is now available. This covers configuring your own Standalone Certificate Authority, which is the preferred approach. We will move to a final version, to be provided on the Jisc Community Site, within the next few weeks.

I’ve recently been working on some eduroam FE workshops with a cross-Jisc team of Esmat Mirzamany, Edward Wincott and Noel McDaid. These workshops have primarily been aimed at FE organisations who have signed up to eduroam(UK) but have never completed their deployment. We’ve successfully delivered workshops to IT staff from colleges in London and Manchester.

Amongst a number of topics within the eduroam FE workshops, I’ve also demonstrated the deployment of Microsoft NPS (Network Policy Server) for organisations who want to use this RADIUS deployment as their ORPS (Organisational RADIUS Proxy Server). This is the home RADIUS server that authenticates your users both at home and when visiting other organisations, and which also forwards visitors’ requests to the NRPS (National RADIUS Proxy Servers) run by Jisc for eduroam(UK).
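As a way of picturing the ORPS role described above, here is a minimal sketch (in Python, purely illustrative and nothing to do with NPS itself) of the routing decision a home RADIUS server makes: authenticate users in its own realm locally, and forward everything else towards the NRPS. The realm name `college.ac.uk` is a hypothetical example.

```python
# Illustrative sketch of an ORPS routing decision. The home realm below
# is a hypothetical example, not a real eduroam participant.
HOME_REALM = "college.ac.uk"

def route_request(user_name: str) -> str:
    """Decide where an Access-Request for user_name should be handled."""
    if "@" not in user_name:
        return "reject"  # eduroam requires an outer identity that includes a realm
    realm = user_name.rsplit("@", 1)[1].lower()
    if realm == HOME_REALM:
        return "authenticate-locally"  # our own user, at home or roaming elsewhere
    return "forward-to-nrps"           # a visitor from another participant

print(route_request("alice@college.ac.uk"))  # authenticate-locally
print(route_request("bob@other.ac.uk"))      # forward-to-nrps
```

In the real deployment NPS makes this decision through connection request policies, but the underlying logic is the same realm-based split.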

Whilst we work to review our guide(s), I’m putting up a “run through” of what I believe to be the preferred and best-practice method for eduroam(UK) participants wishing to deploy Microsoft NPS, based on my demo at the workshops.

During this I will refer extensively to both the “GÉANT NPS guide” (Using Windows® NPS as RADIUS in eduroam) and the “Jisc NPS guide” (eduroam(UK) Microsoft NPS Configuration Guide).

If you are setting up eduroam / your RADIUS infrastructure from scratch, start by reading the Limitations section (section 2, page 6) of the GÉANT NPS guide. The situation is that whilst Microsoft NPS is not the preferred RADIUS deployment within eduroam, many organisations are unable or unwilling to run the preferred RADIUS solution, which is FreeRADIUS.

Certificates and Certificate Authority

Assuming you don’t have a suitable existing CA (Certificate Authority) and you want to create a CA under Windows on your new RADIUS server, you will be following Appendix A, Sections A1 and A2 of the GÉANT NPS guide. It’s worth noting that this uses the Enterprise CA, which stores the CA in Active Directory and uses Web Enrolment. In many circumstances it will be better to use the Standalone CA, so that if in the future you change your RADIUS deployment or Active Directory, the CA can be exported.

Ensure that your CA’s lifetime is long, for example 20+ years. This will be the certificate that goes onto end-user devices, so you want to replace it as little as possible. You should also have a valid CRL Distribution Point added (see steps 24/25 of this guide on deploying a standalone root CA); this will be a URL referencing a domain name that you have control over and could feasibly host a file on if required.

You should also tweak the default validity of the certificates issued by your CA, as the default one year is too short; you could align this to the lifetime of the CA. See this guide on How to change CA Certificate Validity Period.
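The validity arithmetic above can be sketched in a few lines. This is purely illustrative date arithmetic (not how the Windows CA stores its settings), and the dates used are made up for the example; the key point it demonstrates is that an issued certificate should expire no later than the CA itself, otherwise clients will see validation failures once the CA expires even though the server certificate looks current.

```python
# Illustrative validity-window arithmetic; dates are examples only.
from datetime import date

def validity_window(start: date, years: int) -> tuple:
    """Return (not_before, not_after) for a certificate issued on `start`."""
    return start, start.replace(year=start.year + years)

ca_not_before, ca_not_after = validity_window(date(2018, 4, 17), 20)

# A server certificate with an extended (5-year) validity should still
# sit entirely inside the CA's own lifetime.
srv_not_before, srv_not_after = validity_window(date(2018, 5, 1), 5)
assert ca_not_before <= srv_not_before and srv_not_after <= ca_not_after

print(ca_not_after)  # 2038-04-17
```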

You can then follow the Jisc NPS guide until page 17, where it says ‘Send the CSR file to your Certificate Authority for signing e.g. Janet Certificate Service’. Here you should use your newly created CA to sign the certificate, rather than relying on an external CA such as the Jisc (formerly Janet) Certificate Service described in the Jisc NPS guide.

The process for using the Standalone CA is well documented. I suggest using the Certificate Authority snap-in to ‘Submit new Request’, referring to the CSR file you’ve generated, and then in ‘Pending Requests’ to ‘Issue’ a certificate. Move to ‘Issued Certificates’ and open the new certificate; in ‘Details’, choose ‘Copy to File’.

Now continue with page 17/18 of the Jisc NPS guide.

If you are adding an additional NPS server or rebuilding your RADIUS infrastructure, e.g. on Microsoft NPS, then you need to make sure you transfer the CA from the existing solution. That needs some consideration, but the main thing to achieve is that the new server has the public key of the CA, plus the private and public keys of the server certificate, installed. You don’t need to worry about the DNS name matching the CommonName of the certificate, as 802.1x clients cannot perform DNS lookups while they are not yet authenticated.

If the existing CA was deployed as part of a FreeRADIUS install then you could continue to manage it with OpenSSL or similar.

NPS configuration

Follow on from page 19 “Add NRPS RADIUS Client” until the end of the Jisc NPS guide to complete the majority of your configuration.

On page 53 of the Jisc NPS guide, you should also make sure you have clicked on and followed the process to ‘Register server in Active Directory’. Note that this process can fail if the server you are running NPS on has been cloned and not fully prepared, e.g. using sysprep.

If you get an error on page 47 under the Protected EAP properties, then there is a problem with your installed certificate.

Following this, we recommend some basic testing, and also following this guide (RADIUS Attribute Filtering in Microsoft IAS and NPS) to perform RADIUS attribute filtering, which the limitations section of the GÉANT NPS guide notes as something you should do with NPS. You can combine this with setting up dynamic VLAN assignment, which is one of the key benefits of using 802.1x/WPA2 Enterprise in your network, and will be needed to comply with the technical specification for sites offering Home and Visited access.
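To make the filtering idea concrete, here is a hedged sketch (plain Python, not NPS configuration) of what attribute filtering achieves: only the reply attributes needed for dynamic VLAN assignment pass through, and anything else is stripped before the reply leaves your ORPS. The tunnel attribute names are the standard RADIUS ones; the VLAN ID “101” and the stray attribute are illustrative.

```python
# Sketch of RADIUS reply-attribute filtering for dynamic VLAN assignment.
# Attribute names are the standard RFC 2868 tunnel attributes; values are examples.
ALLOWED_REPLY_ATTRS = {
    "Tunnel-Type",              # VLAN
    "Tunnel-Medium-Type",       # IEEE-802
    "Tunnel-Private-Group-ID",  # the VLAN identifier itself
}

def filter_reply(attrs: dict) -> dict:
    """Drop any reply attribute not on the allow-list."""
    return {k: v for k, v in attrs.items() if k in ALLOWED_REPLY_ATTRS}

reply = {
    "Tunnel-Type": "VLAN",
    "Tunnel-Medium-Type": "IEEE-802",
    "Tunnel-Private-Group-ID": "101",
    "Framed-IP-Address": "10.0.0.5",  # internal detail that shouldn't leak
}
print(filter_reply(reply))
```

In NPS the same effect is achieved through the policy settings described in the linked guide; the sketch simply shows which attributes should survive the filter.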

Any comments?

I’d welcome any comments here, or to me directly, or through our eduroam-FE Jiscmail list.  Our intention is to improve the Jisc NPS guide based on the above and your comments.

Being agile—not just for software developers

There’s a lot of buzz around agile methodologies these days. It’s certainly talked about a lot within Jisc, and a good number of projects here have adopted agile.

Probably the most common agile methodology is scrum, but there are others which might suit your project better. Having recently completed agile fundamentals training, after working with agile for some time, I realised that many projects claim to be agile and adopt scrum principles, but some fail to fully adhere to the rules.

So what are the principles and what are the differences between them? The three main ones are scrum, Kanban and extreme programming (XP). They all share the same overall principle of breaking down a bigger task (often referred to as an epic) into smaller, more manageable tasks. They also share the principle of working in dedicated teams with time dedicated to the project, away from other distractions. Let’s take a look at each one in more detail:

Scrum is the methodology with the most rules and the most clearly defined roles. Projects are run in development cycles known as sprints. These are dedicated time slots (such as a day) where all team members are expected to commit to the project (many people prefer to work away from their usual office space to avoid distractions). Teams are made up of a Product Owner, who is usually the biggest stakeholder, and a Scrum Master, whose role is to facilitate the team’s work, removing any impediments which might come up. The rest of the team is made up of team members. There is no team leader: teams are self-managing and responsibility for delivery is everyone’s. Scrum teams work best with no more than about 9 people.

Once the scrum team is formed, the first task is to come up with user stories. These typically start off large and are broken down into smaller, more manageable stories. User stories are usually put together in the form:

As a…
I want to…
So I can…

An important principle of scrum is being able to assess user stories for size, to estimate how much development time will be needed. These don’t have to be exact. Methods used to estimate include T-shirt sizes, which sizes stories as small, medium or large. Another method is Planning Poker, which uses a non-linear Fibonacci sequence (0, 1, 2, 3, 5, 8, 13 and 21): stories are read out and team members vote on numbers.
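As a small illustration of the Planning Poker scoring described above, here is a sketch in Python. The “re-vote when estimates span more than two steps on the scale” rule used here is one common convention, not part of scrum itself, so treat the threshold as an assumption.

```python
# Planning Poker sketch: votes use the truncated Fibonacci scale, and a
# re-vote is triggered when opinions diverge too far. The two-step
# spread threshold is an illustrative convention.
FIB_SCALE = [0, 1, 2, 3, 5, 8, 13, 21]

def needs_revote(votes: list, max_spread: int = 2) -> bool:
    """True if votes span more than max_spread steps on the scale."""
    positions = [FIB_SCALE.index(v) for v in votes]
    return max(positions) - min(positions) > max_spread

print(needs_revote([3, 5, 5, 8]))   # False: close enough to settle by discussion
print(needs_revote([1, 13, 5, 2]))  # True: the outliers explain, then vote again
```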

Stories then form product backlog items: a list of the tasks needed to develop a minimum viable product, with no additional unnecessary work included. These are prioritised, and any dependencies taken into account. Scrum teams then meet for sprints (often weekly for a defined period, often 3 months). At the start of the sprint, teams decide which backlog items to work on, and at the end of the sprint teams reconvene to review progress. Progress on product backlog items is made visible to all team members, who are collectively responsible for their completion. This could be a paper-based approach, with product backlog items as post-it notes which move from the backlog, through in development, to complete. There are also online tools such as Trello which can be used, and Microsoft Planner, part of Office 365, can also keep a product backlog.

A big benefit of scrum is that testing takes place at a product backlog item level. Problems and faults are identified early on (often referred to as failing fast), and can be rectified early on in the development cycle rather than at the end as part of a larger user testing phase. Lessons are also learned early on and can be acted on which improves the performance of teams as time goes on.

When all backlog items are complete, the sprint is complete and the product is ready for delivery to the customer. Shortly after the sprint, all team members attend a sprint retrospective to review how things went and identify any lessons learned.

Kanban is a much less prescriptive methodology, and can be better suited than scrum to non-software-development situations. The basic principle of breaking a larger task into smaller tasks remains, but roles and rules on how teams work are fewer. It has its origins in Japanese lean and just-in-time manufacturing, going back to the 1940s. Kanban is Japanese for “visual signal” or “card”, with the emphasis on managing the flow of work. As with scrum, backlog items move across a Kanban board, but the fundamental difference is that Kanban allows for the imposition of limits on how many items can be at a particular stage. At its simplest, a Kanban board could have just three stages (to do, doing, done), but it can be adapted to include stages of development (such as analysis, development, testing). Imposing limits ensures that tasks move efficiently through the development cycle and reduces the risk of blockages.
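The work-in-progress limits are the heart of Kanban, and they are simple enough to sketch in code. This is a minimal illustration with made-up stage names and limits: a card cannot enter a stage that is already full, which is exactly the mechanism that forces the team to finish work before starting more.

```python
# Minimal Kanban board with WIP limits; stages and limits are illustrative.
class KanbanBoard:
    def __init__(self, limits: dict):
        self.limits = limits
        self.stages = {stage: [] for stage in limits}

    def add(self, stage: str, card: str) -> bool:
        """Place a card in a stage, honouring that stage's WIP limit."""
        if len(self.stages[stage]) >= self.limits[stage]:
            return False  # blocked: finish something before starting more
        self.stages[stage].append(card)
        return True

    def move(self, card: str, src: str, dst: str) -> bool:
        """Move a card between stages; fails if the destination is full."""
        if card in self.stages[src] and self.add(dst, card):
            self.stages[src].remove(card)
            return True
        return False

board = KanbanBoard({"to do": 10, "doing": 2, "done": 100})
board.add("to do", "write guide")
board.add("doing", "fix wifi")
board.add("doing", "test radius")
print(board.move("write guide", "to do", "doing"))  # False: 'doing' is at its limit
```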

Finally, Extreme Programming (XP) is, as the name suggests, the methodology most closely aligned with software development projects. As with scrum, it is based on short development cycles with frequent releases; this again allows teams to fail fast and allows new and changing user requirements to be taken into account early in the development cycle. Principles of Extreme Programming include programming in pairs, where one person writes the code while the other reviews it, picking up any issues (switching roles frequently). Frequent code reviews are an important part of XP: by getting frequent feedback, the quality of software is improved.

Which method to use really depends on the nature of your project. The rules of scrum are the most complex but many opt to adopt a scrum light approach, taking the principles that suit a particular development cycle. It is important to make sure that you don’t dilute the principles down so much that you’re no longer being agile.

Transforming the business: A Jisc Information Strategy Blog by David Reeve

Over the last few months our small information strategy team has been starting to deliver on the transformational enterprise information strategy and its principles: being safe with our information, being smarter with our information, and using the right tools, policies and processes to deliver on the other two principles. The team has been applying these principles to the information management, compliance and data governance workstreams.

There are three main projects being undertaken in the information management workstream.  We have installed the Nintex workflow solution for SharePoint.  This is an easy-to-use product which allows us to build online forms and workflows to replace manual routine processes.  As an example, we have recently gone live with a new online Jisc-wide performance management review form which has saved a considerable amount of admin time. This is just one of a number of forms and workflows that we have been developing.

The second project focuses on making improvements to the user interface and functionality within SharePoint.  We are piloting the Beezy social collaboration tool which brings together Office 365 collaboration functionality in one easy–to-use interface. We are also exploring the new functionality in SharePoint groups which is better integrated into Office 365 functionality.  This work will lead to improvements in the way we use Office 365 and SharePoint across Jisc to give a much better user experience and benefits to the business.

The third project is carrying out an information gathering exercise across the company which will produce team information asset registers that will also be rolled up to a Jisc-wide information asset register.  This tool will allow us to identify and manage our information assets more effectively particularly in areas such as disaster recovery, business continuity, risk management, information lifecycle management (from creation to destruction); identifying sensitive data, findability (metadata tags and taxonomies) and appropriate accessibility.

The compliance workstream has focussed on getting data protection right at Jisc. We have recently published a new data protection policy with accompanying guidance for staff, to replace the existing policies. We will soon be running mandatory online data protection training for all staff. The next step will be to develop an action plan to deliver on the EU General Data Protection Regulation.

The data governance workstream is a new area we have developed this year at Jisc.  Its aim is to introduce data governance structure and principles and embed this across the company with a focus on our structured data capture in database systems.  We are focussing on improving the consistency and quality of data in our core systems that will, in the first instance, feed the data warehouse but will then cascade this best practice programme across other systems in due course.


David Reeve, Head of Information Strategy at Jisc

Is your organisation Wi-Fi a security incident waiting to happen?

Working for Jisc means that I’m increasingly on-the-road, which inevitably makes me a user of more and more Wi-Fi Hotspots; whether they be at the offices, campus or buildings of our Jisc members, at hotels, cafes, or on public transport.

I continue to be surprised at the compromises being made to deliver Wi-Fi: either compromising security, because my data is unencrypted (at Layer 1 of the OSI model at least), or compromising accountability/traceability, leaving the operator of the Wi-Fi little recourse in the event that I am a malicious user.

Examples of bad Wi-Fi

Here are a few examples I’ve gathered on my travels:

  • Hotel with an ‘open network’ and no further authentication or authorisation process
    • The kicker with this one is that they are a hotel chain and I’ve stayed in a number of their hotels, which all suffer the same issue
  • Hotel with a (now probably widely circulated) ‘secured network’ with WEP key
    • An independent hotel, but in an extremely busy and central location
    • Wired Equivalent Privacy (WEP) is widely known to be a weak security standard. It was deprecated in 2004, and from 2005 there were known vulnerabilities in the public domain (for example the Café Latte attack)
  • Conference venue with a ‘secured network’ with WPA2 key for their wireless network but easily guessable and potential to be widely shared
  • Educational organisation ‘Secured network’ with WPA2 key for their wireless network but easily guessable and potential to be widely shared
    • This organisation should be fully utilising Jisc’s eduroam service
  • Educational organisation ‘open network’ with further authentication or authorisation process utilising a widely shared username and password
    • Again, this organisation should be fully utilising Jisc’s eduroam service

Home Wi-Fi in the workplace

WIFI and password for #TransActing, Critical practice at Chelsea College of Arts. Photo by Neil Cummings (Flickr)

What we see is that, increasingly, organisations have deployed a solution suitable for the home, e.g. a shared key using the WPA2-PSK security method. At home your accountability risk should be low: you may have shared your key with a few friends and family, but there is a level of trust that will not be there in a public or commercial setting. Increasingly, home users don’t carry their key over when they change ISP or broadband router and simply reconfigure their devices; this serves to routinely clear out older users.

Guest Wi-Fi

For Jisc members who are connected to the Janet network, any Wi-Fi solution should comply with our Guest and Public network access policy, and many of the poor solutions mentioned above are unlikely to comply with that policy. Jisc offers a Public access to the internet via the Janet network service, which is currently provided by ‘The Cloud’.


It’s no surprise that I’m a big advocate for eduroam and its increasing footprint. Aside from being an excellent solution for roaming, it also deals with security and accountability well, so it should be used as the de facto solution for users in education (especially Jisc members, as we are the UK provider of eduroam).

eduroam is an implementation of 802.1x/WPA2 AES, so the trade-off is that setting up an initial connection with all the correct security controls can be more challenging than on a network using a pre-shared key (PSK) or captive portal. However, we have tools such as eduroam CAT to help organisations and users with this challenge, and we absolutely need to get organisations and their users to use these tools (we call this process of using tools to configure the Wi-Fi network ‘on-boarding’).

It should be no surprise that if you by-pass security controls then bad things happen: a poorly configured 802.1x device can be susceptible to a Man-in-the-Middle (MITM) attack (see this blog post on hostapd-wpe, which can be used to achieve this).

Man-in-the-middle attacks

Man-in-the-Middle attacks are highly possible on Wi-Fi networks. The first step is defeating security at Layer 1 of the OSI model; on an open network, or one using a WEP pre-shared key, this won’t require a lot of skill, so as organisations we should avoid using these as much as possible.

Pre-shared keys based on WPA2-PSK are brilliant in small deployments like your home, but their security declines as user numbers increase and the trust between users diminishes, because anyone who knows the key can decrypt any data on that network.

802.1x/WPA2 AES, the basis of eduroam, is better because encryption is established between each device and the access point, and is not shared.

Once Layer 1 is defeated, you are at the point of tricking users or their devices, which usually happens at Layer 2 and above in the OSI model: ARP spoofing, rogue DHCP servers, and intercepting proxies which trigger SSL/TLS certificate errors, in the hope that users tell their device to send data to the untrusted party (and many of them will, because how many IT people have told them to ‘by-pass the security controls’ in the past?).

How many times have you added an exception to a Secure Connection Failed dialogue in your web browser? Image by on Flickr

Devices like the WiFi Pineapple make it much easier to set up this type of rogue network. Of course, as with every ‘hacker’s tool’, there is a valid reason to have and use a WiFi Pineapple as the operator of a WiFi network.

What are the risks to the organisation?

In terms of information security, think about the ISO 27001 standard, which some organisations are trying to achieve, and its goal of ensuring the Confidentiality, Integrity and Availability of information and services. These could be compromised via the Wi-Fi network, so it is likely to figure as an issue against a number of the controls that form part of the ISMS (Information Security Management System) developed as part of ISO 27001.

If you don’t have robust authentication and authorisation, then you could easily become a source of Distributed Denial of Service attacks, copyright infringement, or other unlawful or potentially risky activity. If you are known to have a wide-open and unaccountable network, then this information may circulate from your organisation’s users to people outside it.

One potential area of risky activity to consider is what happens if you became an exit point from the ‘dark web’ to the internet (a Tor exit node) while having no accountability for any usage of the Wi-Fi. There are lots of examples of individuals who have run Tor exit nodes and whose IT equipment has been seized by authorities in the event of illegal activity. Imagine if the data and servers seized were your organisation’s key infrastructure.

There is a risk of someone gaining unauthorised access to the internal Local Area Network (LAN) via the Wi-Fi, which could result in an information security breach or hacking attempt on corporate systems such as those holding financial or personal data. I increasingly find that small organisations have no separation between public Wi-Fi and their corporate LAN; a simple arp-scan or port scan would often confirm that situation in moments. If the internal network is the same logical network (shared VLAN), then you are also potentially exposing the LAN to the MITM attack.
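The separation check above can be expressed very simply. Here is a hedged sketch using Python’s standard `ipaddress` module: given the corporate VLAN’s subnet and a guest client’s address, confirm the guest is not on the same logical network. The subnets are illustrative RFC 1918 examples, not a recommendation for any particular addressing plan.

```python
# Sketch of a guest/corporate network separation check.
# The corporate subnet below is an illustrative RFC 1918 example.
import ipaddress

CORPORATE_LAN = ipaddress.ip_network("10.10.0.0/16")

def guest_is_separated(client_ip: str) -> bool:
    """True if the guest client sits outside the corporate subnet."""
    return ipaddress.ip_address(client_ip) not in CORPORATE_LAN

print(guest_is_separated("192.168.50.20"))  # True: guest network is separate
print(guest_is_separated("10.10.3.7"))      # False: guest shares the corporate VLAN
```

Of course, real separation needs enforcement at the network layer (VLANs plus firewall rules), not just distinct addressing, but overlapping subnets like the second example are an immediate red flag.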

What can organisations do?

At your organisation’s locations, ensure you have a robust WiFi access solution, and treat it as a priority area. Bring your own device isn’t going away, and increasingly people turn up to a location and want WiFi access.

For Jisc members I would recommend a combination of eduroam (with correct on-boarding instructions or tools for users), and exploring our Public access service provided via The Cloud.

You shouldn’t need a lot else: eduroam can deal with most organisational requirements, with a Public access service to cover the general public and visitors who aren’t participating in eduroam.

For non-Jisc members, you could explore 802.1x/WPA2 Enterprise (the technology used behind eduroam), but you may need to prepare for the additional on-boarding costs that this may entail.

Some things to definitely do:

  1. Avoid running unsecured Wi-Fi networks and those using older protocols (WPA, TKIP, WEP etc.)
  2. Key-based networks should only be used for small groups of users; if used for larger groups, the keys should be changed very regularly
  3. Do not run hidden networks; these may make the device, rather than the network, more susceptible to being hi-jacked, because rather than the network (SSID) being advertised, your device is constantly announcing that it wants to connect to that network (and your attacker with the Wi-Fi Pineapple will happily reply ‘sure, it’s over here’)
  4. Implement access controls between networks, particularly between those used for guest and BYOD access and your other networks, e.g. using a firewall

Discuss with your Jisc account manager to gain access to subject specialists in organisational infrastructure, as well as the technical experts from Jisc’s eduroam(UK) support team.

What can users do?

There is a very interesting blog post on this topic, How Safe is Public Wi-Fi. I would suggest the following:

  1. If you have access to eduroam then configure it (if you’ve not configured eduroam via a tool such as eduroam CAT, or manually configured the certificate information, then your configuration may not be correct; contact your organisation’s IT service for support)
  2. If you have access to other 802.1x WPA2 Enterprise (AES) networks, use those (noting again the above about configuration)
  3. Regularly ‘forget’ any open SSIDs on your devices that you no longer need
  4. Invest in or provide yourself with a VPN service, which ensures you have some encryption. See this article from BBC News on using a RaspberryPI to create a VPN
  5. Ensure you use secure protocols in your applications, so HTTPS rather than HTTP, IMAPS rather than IMAP etc.
  6. Do not by-pass security controls, either in configuring your device or in later use; for instance, if you get an SSL/TLS certificate warning, treat it with caution

The future

I hope the future holds increased take-up of Wi-Fi Certified Passpoint (formerly Hotspot 2.0), which builds on and uses some of the technology in 802.1x/WPA2 Enterprise, but there is a lot of Wi-Fi equipment out there, at both the consumer and organisation end. So it’s going to take time to trickle through…

Walled garden for on-boarding user devices to eduroam – Technical deployment guide

One of the key barriers to a successful deployment of eduroam is ensuring that users are adequately supported. 802.1x/WPA2 Enterprise configuration on the majority of devices is a little more complex than the PSK-based wireless solutions which users are familiar with at home. As a result, there is a need for on-boarding tools, such as eduroam CAT, to be made available to users.

Organisations providing eduroam will need to provide access to their chosen on-boarding tool; typically it will be available via the organisation’s eduroam support web page. The challenge for many organisations is that devices need internet access to visit the organisation’s eduroam support web page. This usually isn’t a problem for mobile phones, which usually have access to the internet via the mobile network.

However, there are other situations which require an Internet connection or local network access to gain access to the on-boarding tools. This includes where organisations have poor mobile coverage, but also for tablets and laptop devices, which may not have access to the mobile network.

The solution deployed by many organisations is that of a ‘Walled Garden’: a network connection that enables sufficient access to download on-boarding tools, reach the support web page and any other relevant resources, but falls short of providing full internet access.

Our work with the Further Education sector has highlighted the need for organisations deploying eduroam to also deploy a ‘Walled Garden’ to on-board large numbers of users onto eduroam, and to enable users to ‘self-service’. The ‘Walled Garden’ is typically made available by broadcasting an open wireless SSID.

The following Walled garden for onboarding user devices to eduroam guide aims to help you configure a walled garden network. It uses the open-source product pfSense, a firewall solution based on FreeBSD. pfSense is a viable solution even for organisations which typically don’t use Unix-based operating systems, since it is almost entirely configured through a web page, is comparable to appliance-based solutions, and is relatively easy to maintain and update.

pfSense can be configured to run in a virtual machine; this guide covers using VMware, but it can also be deployed in a Xen environment or on physical hardware.
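The walled-garden rule set boils down to an allow-list. Here is a hedged sketch (plain Python, not pfSense syntax) of the decision it makes: destinations needed for on-boarding are reachable and everything else is blocked. The support-page hostname is hypothetical; cat.eduroam.org is the real address of the eduroam CAT tool.

```python
# Sketch of a walled-garden allow-list decision. The support-page host
# is hypothetical; cat.eduroam.org is the real eduroam CAT address.
ALLOWED_HOSTS = {
    "cat.eduroam.org",             # eduroam CAT on-boarding tool
    "wifi-support.example.ac.uk",  # hypothetical organisation support page
}

def permit(host: str) -> bool:
    """Firewall-style decision: allow on-boarding hosts, deny the rest."""
    return host.lower().rstrip(".") in ALLOWED_HOSTS

print(permit("cat.eduroam.org"))  # True: needed to on-board
print(permit("www.youtube.com"))  # False: falls short of full internet access
```

In pfSense the equivalent is a small set of firewall aliases and pass rules on the open SSID’s interface, with a default deny underneath.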


Looking back down the road

Re-posted from Brian Kelly’s Institutional Web Managers Workshop (IWMW) blog—prepared on the occasion of IWMW’s 20th anniversary

A brief look back down the road

Screen shot, desktop computer, 1993

About this guest post

I’ve known Brian Kelly for about 30 years. He is one of those guys with whom I feel instantly at home, with half a universe of common understanding and opinion born from living through the same eras and being excited or concerned by the same developments. We bump into each other at meetings and conferences, and deliver the odd workshop together—but never go looking for each other in between; it just happens. So it was at Jisc’s Digifest this year. But of course Brian is not the sentimental old dozer that I am. No sooner had I put my arm round his shoulder with a “good to see you mate” than he was out with “oh, would you like to write a blog post for me?” “But sure I only ever took a workshop activity at your IWMW meetings.” “That doesn’t matter, you could mention how things have changed since the early 90s.” “No problem Brian!”

Putting things in perspective

So, what on earth can I share that would be of interest to readers today, in an age where we dip our heads into the fast-flowing stream of a passing multi-faceted mish-mash of bumf, news and information gems, then lift them out gasping for relief from the intake, and pay no heed to what else passes in the interim and never gets a chance to impress on our consciousness because there just isn’t time and space! Hmm… maybe that’s the first thing to say: how email and lists used to be the real source of collaboration and sharing ideas, back when the only spam we knew about was in a tin or on Monty Python; how Mailbase (Jiscmail’s predecessor) was revered and productively employed all over the world; how we could have spent an hour or more writing a post, knowing it would be read considerately and responded to with due attention. This social media of the day was focused, quite academic, and the forerunner of the best interactive blogs you’ll find anywhere today. Of course we did have the whole gamut of trivia and serious topics dealt with in the leviathan News that rolled through subscribing sites—but that was always seen as somewhat “recreational”.

Of course everything got a little more exciting with internet information services. I remember playing with what we would now call “intranets”: essentially campus information systems (anyone remember CWIS?). But as Europe gave way to TCP/IP and the internet started to look like the global network it was to become, the tantalising prospect of information systems that worked across very remote sites began to form.

Early internet information services

Alan Emtage
Brewster Kahle
Early internet services timeline

The first and most basic was genius in its simplicity: Archie. All credit to Peter Deutsch and Alan Emtage from McGill University in Montreal. Lots of universities and other sites had anonymous ftp and a list of available files in different directories, so Peter and Alan just compiled a master list, built a trivial search facility around it and let people find what they were looking for in one place rather than having to visit every site. Hats off to you Peter and Alan, you went on to create one of the first true internet companies, Bunyip, above a Dunkin Donuts shop in downtown Montreal, and for years kept sending staff to IETF (Internet Engineering Task Force) meetings contributing to discussions and development that are still guiding our online world.

Then came Brewster Kahle and his Wide Area Information System (WAIS) connecting rich databases across the internet. And for anyone who regards Al Gore as the father of the internet (rather than Vint Cerf) then it is only because Brewster was at his side advising on the potential and encouraging an admirably receptive mind to go forward. Brewster later installed mega storage discs in the Presidio in San Francisco when he developed Alexa, the information engine that Amazon later adopted to match customers to what they might like to buy. Today his “Wayback Machine” archives over 150 billion web pages that have appeared since 1996.

Along with WAIS came what we all thought would be the de facto standard, and for a while it was: Gopher. Plain text and a few images. That was multimedia enough to begin with, and the ease with which “pages” could be created and include “links” to different gopher “sites” globally really did set the scene for more hypertext systems.

At the 1992 Network Services conference in Pisa we heard how Cornell University were trying to provide a standard package of interfaces to internet resources, which they called “Bear Access”. We also had a presentation about Hypertext Markup Language (HTML), but everyone thought this was far too complicated to ever catch on! Not so Tim Berners-Lee and his supportive colleagues, principally Robert Cailliau, at CERN. They were already using a command line interface to what they called web sites made up of HTML pages, and had started to look at graphical versions on NeXT workstations.

Emergence of the world wide web

Tim Berners-Lee
Web page for the first WWW conference in 1994
Marc Andreessen and the first ubiquitous graphical browser
Robert Cailliau

At this time I had been persuaded to head up a TERENA task force to review what a “unifying” desktop computer interface to internet resources could be, and I had established the UNITE (User Network Interface To Everything) discussion list, prompting the interest of a global audience in having a say and dreaming about our ideal “screen to a universe of connected stuff”. We issued a position paper in November 1992 which read rather like a blueprint for a web browser. In this group was a young, impatient and very talented programmer at Illinois by the name of Marc Andreessen, who wanted to know why we didn’t just get on with it… he had already been shaping up something he called Mosaic on expensive Unix graphics workstations.

By the time the 1993 Network Services conference in Warsaw arrived, the matter was just about settled. Yes, I gave a talk about UNITE and presented all our views on what the ideal interface was. Yes, we were told about other hypertext systems, such as the Austrian Hyper-G and our own Wendy Hall’s Microcosm being developed at Southampton. But then George Brett, director of the Center for Networked Information Discovery and Retrieval (CNIDR) in the US, sat Jill Foster and me down in the lobby of the Europejski Hotel with a “wait till you see this”, pulled out his Mac notebook and fired up Marc’s Mosaic for Macintosh, showing off an early web site that Jim Croft of the Australian National University had compiled (it even included a sound file of a kookaburra). That was when I knew the world had just changed… irrevocably!

Mosaic went from strength to strength, and Jim Clark eventually took Marc to the west coast, changing the name of the company from Mosaic Communications Corporation to Netscape later in 1994 (a name we had used to label the next phase of the UNITE ideas, in creating the infrastructure required to satisfy the user needs identified via the desktop interface). And of course Netscape went on to stage one of the biggest initial public offerings (IPOs) the world had yet seen.

But before that we had what I dubbed the “Woodstock of the internet” in the spring of 1994 at CERN: the very first World Wide Web conference. And what a party that was. Never had so many leading-edge technologists, developers, engineers, game and virtual reality geeks, and all the other types of luminary usually involved in changing the world come together with such expectation. Robert Cailliau’s biggest fear at this first web conference was that the web would become a broadcast medium rather than an interactive one, and to some extent his fears were certainly valid. But who could deny that Tim Berners-Lee’s infectious enthusiasm stole the show, as it has done ever since.

Spreading the word

Logo for UNITE Solutions Limited
The DESIRE project

I went on to set up a small company in Northern Ireland, in the spirit of Peter and Alan’s Bunyip, called UNITE Solutions. This was only after campaigning fruitlessly within my university for a resource centre for the web that would partner with business to develop its wealth of opportunities. I was unable to convince certain authorities that the web would become significant. And when I sought support from local economic development agencies outside the university, I was told my ideas didn’t fit into any existing categories and therefore they could do nothing!

In Europe there was a little more open-mindedness, and we participated in a large multinational Framework IV project, DESIRE, to develop information services in a number of community, educational and business scenarios. At the same time in Northern Ireland many were requesting UNITE’s services as they realised that something big was happening. My “demo” web projects at the university had attracted attention both locally, from organisations such as Anthony McQuillan’s Conservation Volunteers, and as far away as the Washington Post (whence a delegation appeared at the Vice Chancellor’s office one day to learn more about Michael McGrath’s Gaelic Football news site, which had attracted a large US audience). In Geosciences we launched the first fully electronic peer-reviewed international journal, Brian Whalley’s GGG (Glacial Geology and Geomorphology). We compiled instructional web pages from old 35mm slides of orthopaedic surgery scenarios in the BONE (Base for Orthopaedic Expertise) project. And of course we forged some new inroads, such as getting local politicians to answer questions from the public online, or setting up a web camera in the Crown Bar in Belfast (a first) for expatriates all over the world to view and feel at home (the keyboard got so sticky that we eventually had to change the access password to use only a single key).


Since those early innovative years I do feel that business has hijacked the internet. In terms of hardware (CPU, RAM, storage and bandwidth) we have progressed at a rate that would have seemed incredible back then. But when it comes to managing content, we have only scratched the surface of what should be possible. Tim has lamented that we haven’t yet embedded sufficient metadata in web pages to provide the semantics and convey universal context to everything. His remark that he should have called it the Giant Global Graph, GGG, instead of WWW is so pertinent. I also view with nostalgia old web pages in that simple, innocent “Times” font format, with blue underlined links and the odd image awkwardly perched to one side. They bring back the long-ago feeling that we really had found a common format for sharing information in all possible shapes and forms. It was about information engineering, not gloss and marketing… alas!

The pioneering spirit

Webmaster definition

However, more than the mild sadness of conceding that a certain amount of lag was inevitable in progressing the web towards our early visions as the rest of the world came on board, what concerns me is the relative indifference that many “web managers” exhibit compared to the zeal and fervour of the early “webmasters”. The concept of a webmaster has changed over the years and evolved into a plethora of organisational roles and functions. Many of the early “web pioneers” will mostly remain nameless, not having clambered onto a social media stage that to them seems too self-promoting compared to the real work of pushing back the frontiers of global networking and related services, the sort of work that has made our current online world possible.

The workshop I led for Brian at IWMW in 2011 focused on the responsibilities of “institutional web managers” and how contemporary roles compared to those of the first webmasters. I was disappointed to find attitudes insisting that a web manager should be cautious and not “waste” time on new technologies (the example at the time was HTML5) until they had been properly tried and tested by others! Today I am regularly flabbergasted at the genuine ignorance nursed by some web managers when it comes to how the web actually works and what it can and should do for them. Some even seem to regard it as their duty to dismiss discussion about technology in case it conflicts with their views of how a web site or application should work!

And now—service is being resumed, brace yourself

World Wide Web Consortium

Am I disheartened? A resounding “no”. Behind today’s facade of online torrents of the minutiae of individual experience there is still a wealth of first-class blogs in the spirit of the old email lists, many of which survive on Jiscmail and other fora too, despite email being plagued by spam. But more than that, there are still real webmasters around: often jacks of all web trades and masters of many, perhaps a bit risk averse, feared by authorities, thinking they can change the world a little and just getting on with it. Tim Berners-Lee’s GGG is not dead; it is growing every day. Embedded metadata is becoming more of a standard, and the World Wide Web Consortium that Tim created is bigger than ever, co-ordinating web-related development globally with every bit as much clout as the Internet Engineering Task Force (IETF) has brought to co-ordinating the underlying network protocols over the years. We have only dipped a toe in the possibilities opened up by the web and all its derivatives. Progress is inevitable. The value of a real webmaster is higher than ever. The onus on web managers to think things through properly has never been greater, and the risk they must face is not that of embracing change but of ignoring or missing it!

There is a new version of the international standard for records management (ISO 15489-1:2016)

Time to rejoice! A new version of the international standard for records management, ISO 15489-1:2016, has been released by the International Organization for Standardization and was adopted as a British Standard (BS ISO 15489-1:2016) a couple of months ago. A revision had been long awaited, the first version of this popular standard having come out in 2001.

What’s new? What’s different?

Much of the standard remains unchanged, and the British national foreword to the standard (which is publicly available in the preview) provides a good overview of what has changed and what has stayed the same.

With a stronger focus on records systems and controls, the revised standard feels a tad more theoretical than the 2001 version. The previous standard was much more of a practical read in terms of what these controls should be (registration, classification, tracking etc.) in a business context. This may be in recognition that records management has come a long way since the inception of the standard, and that many professionals are now familiar with the more practical aspects of the concepts that surround the control and management of records.

Greater prominence is given to metadata as an important control mechanism for records. This is certainly in recognition of the digital nature of the majority of records created in modern businesses. No particular metadata schema is recommended, but whichever is used should be authorised and should relate to different entities (such as records, agents and business).

Some of the terminology has changed, in particular the definition of ‘appraisal’. Appraisal has been extended from a focus on assessing the value of records for continued business use or archival retention to include an analysis of records’ business context, activity and risk. Appraisal now essentially comprises the requirements analysis for records in a business context, and therefore represents the initial steps in establishing a records management programme.

The biggest change is the omission from the new standard of any prescribed methodology for establishing such a records management programme – most notably the omission of DIRKS (Designing and Implementing Recordkeeping Systems), which for many years served as a useful implementation tool for many a records manager. Instead, the standard describes what records systems and controls should achieve without specifying a particular methodology or a particular system (the good old EDRMS) to be adopted. This is a shift away from specialised methodologies and systems towards a more holistic approach to recordkeeping that incorporates information security, information compliance and risk management. It also reminds me of the modular approach to managing digital records proposed in MoReq2010 (Modular Requirements for Records Systems), which, rather than focusing on one system, describes a set of core services for managing records that can be shared by many different systems in a business.

In summary, then, the new version of the international standard for records management has succeeded in making the move into the digital environment, and shows an appreciation of the more diverse business contexts and systems in which digital records now need to be managed.

What difference will it make? What difference can we make?

When it came out in 2001, the first international standard for records management was eagerly anticipated and well received by the information management community. It provided a theoretical framework for recordkeeping and also – perhaps more importantly – a means of highlighting the importance of managing records in their organisations. Arguably, the nature of organisations, and how they use and manage records, has changed over the years, and this change is acknowledged in the new version. Today there is greater diversification both in the business processes in which records are used and managed and in the roles of the people tasked with managing them. ISO 15489-1:2016 can prove invaluable in bringing together all the stakeholders who now need to be involved in managing structured and unstructured information (and data!) across multiple business systems and applications, by providing a theoretical framework for the management of records across business activities, contexts and processes.

The release of the new version of the standard then raises once more the question: What difference can we make? (with thanks to Heather Jack)

It is up to the information management community to promote the importance of recordkeeping in organisations for compliance, accountability, risk management and greater efficiency. This new version of the standard can help raise the profession’s profile and raise awareness of essential recordkeeping controls and processes to a wider range of information management professionals.

P.S. I have recently been appointed to the British Standards Institution (BSI) committee for records management as the representative for the UK education sector. If you would like to know more about the committee’s work, or would like to feed back to it, please do not hesitate to contact me.

The past, present and future of Freedom of Information in Higher Education

The results of the 11th annual information legislation and management (ILM) survey were published last week. They show that, for the first time since the survey began in 2005, the number of Freedom of Information (FOI) requests that Higher Education institutions (HEIs) receive has decreased slightly, from a yearly average of 218 per institution in 2014 to 212 in 2015.

Other trends have continued: student issues, IT provision and financial information are still the main subjects of requests, whilst journalists, members of the public and commercial organisations continue to represent the main requesters.

The data collected since the survey’s inception in 2005 allows us to analyse long-term trends in the sector and gives institutions the opportunity to benchmark the number of requests they receive and their key performance indicators. Data from the survey has been used in reports by sector bodies and, most recently, in submissions by Universities UK (UUK) to the call for evidence from the Independent Commission on Freedom of Information.

The Commission was established by the government in July 2015 to review the Freedom of Information Act 2000 and was dissolved in March 2016. It considered, amongst other things, whether a change was needed to moderate the burden of the Act on public authorities whilst maintaining public access to information. The Commission has recently published a report outlining recommendations that have a direct impact on HEIs, and some results of the 2015 survey are interesting in the light of that report:

The commercial nature of HEIs 

The main outcome of the Commission’s review for HEIs is that their status under the FOI Act is not likely to change for now. Many institutions and sector bodies (UUK and the Russell Group) had argued that universities should cease to fall under the Act because of their increasingly commercial nature. The Commission found these arguments unpersuasive.

Supporting the view that universities are becoming more commercial in nature, the survey results show that requests for financial information (such as contracts) have increased slightly, to 16% of requests, making this the third most requested category of information. At the same time, use of the Section 43 exemption (commercial interests) has increased slightly, to 16% in 2015, although it is only the fourth most frequently used exemption. Many requests for commercial information might also have fallen under the IT provision and use category (13%), or under Section 12 (excessive cost of compliance) (25%). Judging from these figures, it would appear that interest in the commercial activities of universities is increasing slightly year by year.

Tackling delays due to internal reviews and public interest test

The Commission recommended putting in place a time limit of 20 working days for internal reviews and public interest tests. Judging by the survey results, this recommendation should not be difficult for HEIs to meet, because the number of appeals dealt with was relatively low overall. In 2015 the total number of internal appeals was 114, with only 24 external appeals (out of a total of 10,796 FOI requests). Overall, 95% of all FOI requests were responded to within 20 working days.

Mandating the publication of compliance statistics and the publication of responses to requests

The Commission recommended that every public authority with over 100 employees should publish compliance statistics, co-ordinated by a central body. In addition, it proposed that these public authorities should also be required to publish their responses to requests as soon as practicable after the information is given out.

This is certainly an area where more work can be done, and Jisc would be interested to understand further how HEIs are planning to implement disclosure logs. According to our survey, only 10% of responding institutions publish a disclosure log; the main reason given for not implementing one is that its limited value would not justify the resources required (52%). 16% of institutions state that a disclosure log is planned but not yet implemented, and 27% do not publish one because it is not currently mandatory.

Jisc provides an Information Request Register tool that allows HEIs to record and track requests under the Freedom of Information Act (FOIA) (or FOISA for Scottish organisations), the Data Protection Act (DPA) and the Environmental Information Regulations (EIR). This tool could help institutions gather the relevant statistics, should the government request them.

The burden of FOI on HEIs

In their submission to the Commission, Universities UK argued that the burden of FOI has increased over the years. This is supported by ILM survey results, with the monthly average number of FOI requests per institution rising from 2.8 in 2005 to 17.7 in 2015. However, the 3% decrease in FOI requests in our most recent survey indicates at least a temporary standstill in the number of requests received by HEIs, although it is too early to say whether this represents the beginning of a sustained trend.
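For anyone wanting to cross-check the figures quoted here, the yearly averages of 218 (2014) and 212 (2015) from the survey tie up with both the monthly average of 17.7 and the approximately 3% decrease. A minimal sketch of the arithmetic (the figures are from the survey; the script is purely illustrative):

```python
# Survey figures: average FOI requests per responding institution.
yearly_2014 = 218
yearly_2015 = 212

# Yearly average divided by 12 gives the monthly average quoted for 2015.
monthly_2015 = yearly_2015 / 12
print(round(monthly_2015, 1))  # 17.7

# Year-on-year decrease, as a percentage of the 2014 figure.
decrease_pct = (yearly_2014 - yearly_2015) / yearly_2014 * 100
print(round(decrease_pct, 1))  # 2.8, i.e. roughly the 3% quoted
```
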

Another argument was that requests have become increasingly complex and costly, with increases in areas where the cost cannot be claimed, such as redaction. UUK have therefore recommended extending the range of costs taken into account. However, even though the time taken to redact information prior to release is overall still perceived as long or very long, the latest ILM survey results indicate a decline: in 2015, 67% of institutions felt that redaction took a medium to very long time, compared with 73% in 2014. Similarly, the time taken to identify information has declined, with 78% of respondents perceiving it as very short or short in 2015, compared with 67% in 2014.

Interestingly, there has been an increase in the number of respondents who felt that locating and accessing information took a long to very long time, from 31% in 2014 to 38% in 2015. This could be an indication that requests have become more complex and that information needs to be retrieved from across the digital infrastructure, which adds to the burden on staff responsible for information compliance and management.

Jisc have long supported the information compliance and management community at HEIs with practical tools, advice and guidance around information governance. Please contact Nicole Convery, our subject specialist for systems, tools and information management, if you would like to find out more about the support that Jisc can offer in this regard.

We would like to thank everyone who has submitted data to our annual Information Legislation and Management survey. Your contribution has enabled us to use the data to inform government about FOI in Higher Education, and it will inform future activity in this area at Jisc.