Capturing Requirements for Jisc Frameworks

I’ve procured and administered a handful of procurement frameworks on Jisc’s behalf over the years, and a lot of the language and accepted practices of public procurement are familiar to me (drummed in by colleagues in our excellent procurement unit). However, I’ve been struck recently that for a number of our members, when we invite them to complete an invitation to tender (ITT) for one of our frameworks, such as the dynamic purchasing system for procuring public wifi services, it may be the first time they’ve had to write in this particular idiom. Hence this short guide.

The purpose of an ITT is to describe exactly what you want to buy and to elicit responses from the supplier that give you enough information to judge whether they will supply a good solution and be a good delivery partner.


In your ‘background’ section, you must set the context, and you have to keep reminding yourself that the bidders don’t know things that you take for granted. If later in the ITT you are going to ask them to describe how their solution will integrate with your network, you have to provide them with some details and diagrams of that network here. This will probably need to be suitably edited to remove unnecessary detail that might prompt a security concern, such as the IP addresses of routers. Similarly, if you are going to ask them to confirm that their solution has sufficient capacity, you need to offer some usage projections for the life of the solution based on your experience of your campus. Basically, for every question you ask in subsequent sections, reflect on whether there is any background information you hold that would help the bidders answer fully, and add it here. The more relevant detail you provide, the fewer clarification questions you may get from the bidders, and ultimately the better the design of solution you may be offered.

The bulk of your ITT consists of statements for the bidder to respond to. These come in two flavours: mandatory requirements and informational requirements.

Mandatory Requirements

These are the nuclear option in your ITT. If you ask for something in a mandatory requirement (MR), the bidder must be able to provide exactly what you asked for. If they can’t, you are forced to throw the rest of their bid away and they cannot win the contract. Mandatory requirements are therefore graded purely on a pass or fail basis; you can’t assess how well they do whatever it is you asked for, or compare between bidders and decide which does that thing better or with more features. The language is typically direct and forceful to express this: The bidder must confirm that the proposed solution is capable of <X>.

If you write an overly narrow MR, or make assumptions about what kind of solution bidders might offer and phrase it with that in mind, you may find yourself forced to reject an otherwise excellent solution. For example, if you are sourcing a network system that will be used by minors, you might be concerned that it has to be firewalled from undesirable content on the internet. You might use an MR to specify that the bidder’s solution must integrate with your existing firewall (which you would have described in your background section, knowing you’d ask this question), or you might perhaps have a requirement such as ‘the solution must be capable of implementing the site blacklist as published on the PREVENT website at a known URL’. But you should avoid, for example, assuming that the firewall would be implemented as a router access control list just because that’s an approach you are familiar with and describing it as such in an MR, because that might exclude an arguably better solution based on alternative technologies.

Generally, you are better off pairing an MR that describes at a high level the functionality that must be present with an informational requirement (IR) that gives you the opportunity to ask for further details.

Informational Requirements

IRs will typically form the bulk of your ITT. They give you the chance to ask for details of approach and implementation, and require you to offer a marking scheme so you can indicate how well the bidder’s response meets the requirement you set. Those marks will help guide your eventual purchasing decision, and can also be used to give an indication of the relative importance you assign to different aspects of the solution. You should make your IRs as specific as you can, to avoid bidders going off on a tangent and providing information you don’t need, but keep each one focused on the area it addresses; it’s seldom helpful to mix questions about multiple facets of the bidder’s proposal in a single IR.

Taking the firewall example above, you might end up with:

  • MR1 The bidder must confirm that their solution is capable of implementing firewalling rules.
  • IR1 The bidder shall describe, using diagrams where relevant:
    • how the proposed firewall solution would implement the blacklist as published by PREVENT at <URL> (5 marks);
    • what logs will be held (and for how long) of firewall operations (3 marks); and
    • the mechanism(s) by which the customer can change firewall parameters in real time (5 marks).

It’s a quirk of procurement rules that you must give maximum marks within your marking scheme to an answer that fully meets the requirement that you set out; you can’t leave ‘headroom’ marks for an even better answer that does everything you asked for plus even more that you didn’t mention.
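To make the mechanics concrete, here’s a minimal sketch of how MR and IR results combine during evaluation. The requirement identifiers and mark values are entirely hypothetical, for illustration only, not an actual Jisc scheme:

```python
# Illustrative sketch: MRs act as a pass/fail gate, and IR marks are
# summed to rank the bids that survive. All IDs and marks are made up.

def evaluate_bid(mr_results, ir_scores, ir_max_marks):
    """mr_results: dict of MR id -> bool (pass/fail).
    ir_scores: dict of IR id -> marks awarded.
    ir_max_marks: dict of IR id -> maximum available marks."""
    # A single failed MR eliminates the bid outright.
    if not all(mr_results.values()):
        return None  # bid rejected, remaining responses are not scored
    # Surviving bids are ranked on total IR marks; the maximum for each
    # IR signals its relative importance to the bidders.
    total = sum(ir_scores.values())
    available = sum(ir_max_marks.values())
    return total, available

# e.g. a bid passing MR1 and scoring 4/5, 3/3 and 5/5 on the three IR parts:
bid = evaluate_bid(
    {"MR1": True},
    {"IR1a": 4, "IR1b": 3, "IR1c": 5},
    {"IR1a": 5, "IR1b": 3, "IR1c": 5},
)
```

Note how the ‘no headroom’ quirk shows up here: a response that fully meets an IR already takes that IR’s maximum marks, so extras beyond the stated requirement cannot earn more.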

It’s not just structure

This blog addresses the formalism of structuring the ITT language; it doesn’t really tell you what you should enquire about. How will the solution evolve with time to accommodate changing needs? How does the bidder see GDPR duties being divided with the customer? Could you introduce charging next year if you wanted to? All I can suggest is trying to anticipate the headaches that a future version of you might wish you’d avoided at this stage.

For expert advice on our frameworks, you can always speak to our team.


A well-structured ITT that gives you the best possible chance of attracting good bids should include:

  1. a background statement that provides all the relevant detail a bidder might need about what you are seeking to purchase;
  2. a limited and highly selective handful of MRs that address only the most vital essentials of a solution that you can’t live without and are phrased in a general way open to a range of approaches;
  3. a comprehensive set of IRs that address every different facet of the proposed solution that you want information on (alongside their marking scheme to allow bidders to judge their relative importance to you).

Thoughts on Crisis Management

Earlier this month I attended CLAW 2019, the third Crisis Management Workshop for the GÉANT Community. The event was held at the Poznan Supercomputing and Networking Centre in Poland – not the easiest place to get to from the UK, but lovely once you’re there:


My role at Jisc is Head of Delivery, but I also act as a Major Incident Manager, part of our process for dealing with major network incidents. This blog post highlights some of what I learned at CLAW 2019 – how Crisis Management is done at other NRENs, how that differs from what Jisc does, and what improvements I can take back to the Jisc MI team.

It’s also worth noting that we’re constantly reviewing and updating our processes anyway, in light of incidents that occur and feedback we receive, and some of the things discussed in this post will feed into that ongoing cycle of continual improvement.

Before I start however, I couldn’t forgive myself if I didn’t mention that the trip started with my first ever flight on lesser known Hungarian airline WizzAir, and also that I stayed in an official Euro 2012 hotel in Poznan 🙂

So, the event itself…

It was split over 2 days – day 1 was a mix of presentations and a short (3 hours!) practical exercise, and day 2 was a much more in-depth (6 hours!!) practical exercise. I only attended the first day due to other commitments back at base on the second day, so what follows is based on the first day of the event only.

The first thing to note is that what Jisc calls Major Incident Management, everyone else seems to call Crisis Management, so for the purposes of this blog I’ll standardise on Crisis Management.

The contents of day 1 of the event can be split into 2 areas:

  1. Presentations from other NRENs about Crisis Management;
  2. A practical exercise in dealing with a crisis.

First the presentations….

The standout talk for me was by Anna Wilson from HEAnet. Anna presented on ‘Real Life Crisis: Network Outage During 9/11’, a fascinating look back at how the events of 18 years ago impacted the internet and global NREN connectivity. The talk also looked at the shape of the internet in general in 2001, which in itself was eye-opening, and ended with some reflections on the lasting impact of 9/11 and how it helped shape networking as we see it today. I was so impressed with Anna’s talk that I immediately tapped her up for a repeat performance at Jisc’s Networkshop conference next year, an offer she kindly accepted.

The remainder of the talks on day 1 of the event were from members of other NRENs across Europe, talking about their own approach to Crisis Management. All talks were interesting and informative in their own right, and it’s hard to summarise or pick out any highlights as I found all the content useful. Perhaps the one thing that did become clear, though, was that everyone who presented had a similar approach to Crisis Management, and they all differed from the way Jisc does it.

Jisc MI structures and processes were borne out of a DDoS attack 3 years ago, where the scale of the attack prompted a surge of incoming calls that swamped the Jisc Service Desk, and as a result the MI processes we’ve developed have been almost entirely focused on Jisc’s approach to comms during an MI – managing calls into Jisc, and coordinating outbound comms to a variety of stakeholders. Aside from comms, one of the other key principles of Jisc’s approach is to leave the teams responsible for fixing the problem to fix the problem: let them focus on what they’re supposed to be doing, rather than bringing them into additional structures and meetings tasked with ‘managing’ the situation. This principle also extends to the people managing the people fixing the problem (engineering team leaders, for example).

All other NRENs I spoke to and watched present of course have a strong focus on comms as well, but also on how to deal with stressful situations, how to manage priorities in a crisis, what to focus on, making clear decisions, and so on. Jisc has chosen MI managers based on who is deemed best equipped to deal with such situations, rather than proactively developing people to be better prepared to act when required. Most notable, however, was the involvement of a wide variety of groups in the process – as above, Jisc treats functions like engineering and security as inputs into the MI process, whereas other NRENs consider them part of the process – in the meetings, sharing information, and supporting decision making. Food for thought, and points I’ll definitely take back to the Jisc team.

Onto the group exercise….

It was hosted by Wouter Beijersbergen van Henegouwen, an external consultant specialising in Crisis Management. The scenario was based on a fictional NREN in a fictional country that had experienced a fictional data leak. The scenario consisted of 5 roles, each of which had its own set of information on the incident, and the exercise was to conduct a crisis meeting to ascertain what had happened and agree a course of action. With each role drip feeding various bits of information during the meeting, it had a genuine ‘real life’ feel to it which is so often hard to recreate in a simulation environment. My role was ‘manager’, meaning I knew less about the incident than most and had to chair the meeting whilst trying to piece together the series of events that had led to the crisis. Good fun, and a really useful exercise to take part in. I’ll be taking the format and supporting scenario information back to Jisc to feed into our next major incident workshop.

So overall a really good event, despite only experiencing half of it. I’ll definitely be attending again in future.

The journey home was very nearly derailed as I was forced to run through Warsaw airport to catch my connecting flight back to Heathrow, but thankfully time was just about on my side and my own personal crisis was averted with seconds to spare.

To 100Gbit/s, and beyond!

The capacity of the Janet network has always seen massive growth – in my time at Jisc it’s gone from the 10Gbit/s SuperJanet4 network in 2006, to the 100Gbit/s SuperJanet5 network in 2011, to the 600Gbit/s Janet6 network that operates today. We also made a bit of noise last year when we upgraded the core of the network to 400Gbit/s, which was a complex and time-consuming piece of work, but ultimately one that put us at the forefront of R&E networking globally.

So, in summary, we’re almost constantly upgrading Janet. But why do we do it? In simple terms, this is why:

Traffic on the network just keeps rising, thanks to all of our lovely members and customers doing more and more exciting things that require more and more bandwidth.

It used to be the case (no more than 5 years ago) that even our biggest users were connected to Janet at 10Gbit/s and that was plenty. Over the past few years, however, we’ve seen a step change in network requirements, and the number of 100Gbit/s-connected customers is on the rise! This post looks at those big users, when they made the leap to 100Gbit/s, and what they’re doing with it.
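The decision of when to make that leap is, at heart, compound-growth arithmetic. As a rough illustration (the traffic figures, growth rate and threshold below are invented for the example, not Janet data), you can estimate how long a connection has left before it hits a utilisation threshold:

```python
import math

# Back-of-envelope capacity planning sketch: given current peak traffic
# and an assumed annual growth rate, estimate the years until a link
# reaches a utilisation threshold. All figures here are illustrative.

def years_until_threshold(current_gbps, capacity_gbps, annual_growth, threshold=0.8):
    """Years until current * (1 + growth)^t reaches threshold * capacity."""
    target = capacity_gbps * threshold
    if current_gbps >= target:
        return 0.0  # already over the line: upgrade now
    return math.log(target / current_gbps) / math.log(1 + annual_growth)

# e.g. 25 Gbit/s peak on 4 x 10 Gbit/s of capacity, growing 40% per year
# against an 80% utilisation ceiling:
t = years_until_threshold(25, 40, 0.40)
```

With those made-up numbers the answer comes out well under a year, which is the kind of result that turns an upgrade conversation into an upgrade order.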

First up was the Science and Technology Facilities Council (STFC) Rutherford Appleton Laboratory (RAL). STFC is a world-leading multi-disciplinary science organisation. Its research seeks to understand the Universe from the largest astronomical scales to the tiniest constituents of matter, and creates impact on a very tangible, human scale. RAL’s 40Gbit/s of Janet connectivity was bursting at the seams by Q3 2018, and adding yet more 10Gbit/s channels was no longer the most efficient way to increase capacity, so 100Gbit/s upgrades were implemented. RAL is one of the five largest computing centres that make up the WLCG collaboration; a group of ~200 universities and research institutes around the world providing computing to The Large Hadron Collider (LHC) experiments. RAL archives around 12% of the LHC data produced at CERN. Whilst RAL has dedicated connectivity to CERN for the purposes of receiving this data (which is expected to be upgraded to 100Gbit/s by 2021 to meet the LHC Run 3 requirements), it then uses its Janet IP connectivity to share that data with the rest of the UK, hence needing high-bandwidth Janet connections.

Next was Imperial College London, who made the leap and upgraded their Janet connectivity to 100Gbit/s in Q1 2019, and also implemented 100Gbit/s connectivity to the Jisc Shared Data Centre in Slough at the same time. Imperial’s previous 20Gbit/s of Janet connectivity was filling up, and behind the scenes there was a lot of throttling back of the particle physics researchers going on to avoid flooding the connections completely, so on upgrading Janet connectivity to 100Gbit/s all limits were removed, and the traffic graph below shows what happened.

(image taken from a presentation given by Imperial College London at Networkshop47)

Next came the University of Edinburgh – specifically its Advanced Computing Facility (ACF) on the outskirts of the city, the high-performance computing data centre of EPCC, housing a range of supercomputers. As of Q1 2019, connectivity via the University of Edinburgh’s 10Gbit/s Janet IP connections was no longer suitable given the increase in traffic to/from the ACF. Dedicated 100Gbit/s connections were successfully deployed from the ACF directly onto the Janet backbone in July 2019, relieving the pressure on both the University and the Janet regional network in Edinburgh.

Finally, the European Bioinformatics Institute (EBI) on the Hinxton Campus south of Cambridge, part of the European Molecular Biology Laboratory (EMBL), Europe’s flagship laboratory for the life sciences. No prizes for guessing that this customer generates, processes and transmits an enormous amount of data. Work is underway to upgrade raw bandwidth to the site from N x 10Gbit/s to 100Gbit/s (whilst also retaining the N x 10Gbit/s connections). On top of that, 100Gbit/s connectivity will be provided from the Hinxton Campus to EMBL-EBI’s new data centre to further support its activities.


So, what next?

Well, we’re in discussion with a number of other institutions about upgrading from N x 10Gbit/s to 100Gbit/s, all of which we expect to come to fruition over the next 12 months – the traffic growth speaks for itself, and there’s never really any option other than to keep upgrading.

We’ll also continue to upgrade the Janet backbone to cope with the steady flow of upgrades at the edge, in units of 100Gbit/s and 400Gbit/s where appropriate.

Finally, we continue to crunch the numbers, run the reports and predict the future growth, so that we’re always ahead of the game in terms of knowing how and where the traffic levels are increasing. We also work closely with our optical and routing hardware vendors to understand the next generation of products they’re working on, as well as monitoring wider industry activities.


IPv4 Address Brokers and Legacy Address Space

If you are the contact listed in one of the global registries for the IPv4 addresses held by your University or College, I’m sure you will have received numerous emails from address brokers offering to buy some or all of your IPv4 addresses. This post is intended to provide some advice.

Many of Jisc’s members have substantial amounts of IPv4 address space, some of which may have been allocated under various different regimes.

First there are ‘legacy’ or ‘early registration’ address blocks. These were allocated by the InterNIC before the introduction of the Regional Internet Registry (RIR) system in the 1990s, and were typically allocated directly to the university. These are usually referred to as a ‘Class B’ or a ‘Class C’ as they date from the days when Internet routing was classful.

Second there are ‘provider aggregatable'[1] addresses. These were assigned by, depending on the time of assignment, UKERNA, Janet, or Jisc, operating as a Local Internet Registry (LIR) within the RIPE NCC, the RIR for Europe and the Middle East.

Third, there are ‘provider independent’ addresses, which are used much less frequently on Janet, but for these purposes are similar to ‘legacy’ addresses.

Whilst the second type of address is assigned only as long as the original criteria for assignment hold true, and must be returned to Jisc’s pool of available addresses when that changes, legacy addresses were effectively allocated in perpetuity and are often seen as belonging to the institution in question.

There have been various dates in recent years that have been trumpeted as the “end of IPv4,” but regardless of that the number of free IPv4 addresses within the RIR system is low, and as it is no longer possible for Internet Service Providers (ISPs), Cloud Service Providers (CSPs) or Content Delivery Networks (CDNs) to get new IP addresses from the RIRs, they are increasingly buying them in the growing IP address marketplace.

This means that legacy addresses have a value, and address brokers have for some time been approaching Jisc’s members offering to sell their IPv4 addresses for them. A recent email offered “up to” US$1.3M for a /16 (by the time I publish this, that’s probably ~£1.3M).
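For a sense of scale, it’s worth doing the per-address arithmetic on an offer like that. The sketch below uses the US$1.3M figure from the email quoted above; the block sizes are just standard CIDR arithmetic:

```python
# Sanity-checking a broker offer: what a /16 works out to per address.
# The $1.3M figure is the one quoted in the email above; everything else
# follows directly from how CIDR prefixes work.

def addresses_in_prefix(prefix_len):
    """Number of IPv4 addresses in a /prefix_len block."""
    return 2 ** (32 - prefix_len)

offer_usd = 1_300_000
slash16 = addresses_in_prefix(16)   # a legacy 'Class B' is a /16: 65,536 addresses
per_address = offer_usd / slash16   # roughly $19.84 per address
```

A legacy ‘Class C’ (a /24, 256 addresses) would of course be worth proportionally less, and in practice smaller blocks tend to fetch a different per-address price again.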

At least two of our members have sold some IPv4 addresses, which is entirely legitimate, but we would like to provide our members with some advice.

Jisc cautions our members about accepting offers that come from unsolicited emails, especially from brokers based outside the EU, and if an address sale is of interest, we suggest approaching brokers that have agreed to abide by the relevant RIPE NCC policies:
Note that listing on that page is not a recommendation.

The RIPE NCC’s Frequently Asked Questions page for transfers is here:

To understand the terminology on that page, much of the early registration address space is registered with the RIPE NCC with Jisc acting as a sponsoring LIR.

Please remember that ‘provider aggregatable’ addresses assigned by the Jisc/Janet LIR function may not be transferred to a different provider.

Of course the value of IPv4 addresses may fall as well as rise, and it will certainly do the former as the deployment of IPv6 increases and the need to use IPv4 decreases. Many large CDNs, such as the one used by Facebook, use only IPv6 internally and only use IPv4 for the externally-facing load-balancers and proxies. Jisc members should, however, consider carefully what their usage of IPv4 addresses is likely to be over the coming years, even if using Network Address Translation (NAT), as a greater number of devices on private addresses requires a larger pool of public addresses to ensure session stability.
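To illustrate that last point, here is a rough NAT pool sizing sketch. Each public IPv4 address offers at most ~65,000 port mappings, so the number of concurrent sessions from devices behind NAT sets a floor on the public pool; all the figures below are assumptions for illustration:

```python
# Rough NAT pool sizing sketch: more devices on private addresses means
# more concurrent sessions, and each public IPv4 address can only hold
# a finite number of port mappings. All numbers here are illustrative.

def nat_public_ips_needed(devices, sessions_per_device, ports_per_ip=60_000):
    """Minimum public addresses so every concurrent session gets its own
    port mapping. ports_per_ip is kept below the 65,535 hard limit to
    leave headroom for session churn and stability."""
    total_sessions = devices * sessions_per_device
    # Round up: a partial address still requires a whole address.
    return -(-total_sessions // ports_per_ip)

# e.g. 20,000 devices averaging 100 concurrent sessions each:
ips = nat_public_ips_needed(20_000, 100)
```

Even with aggressive NAT, a campus of that (hypothetical) size would need a public pool of a few dozen addresses, which is why historical usage and planned utilisation both matter when requesting space.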

Jisc maintains a reasonable supply of IPv4 addresses that can be assigned to our members on request, subject to the rules of the RIPE NCC. Usage of historical assignments and the planned utilisation of addresses are part of those rules.

[1] Whilst it should be ‘aggregable’, the resource documents almost all use ‘aggregatable.’

Designing our way out of a rat’s nest!

A few (5) words on the process of designing new access networks, aka regional aggregation networks (health warning: written by not-a-network-designer).

  1. It
  2. Takes
  3. A
  4. Long
  5. Time

I could leave it there, but for the sake of clarity I’ll delve into some of the more interesting and relevant details.

So, we have a contractual requirement to replace a Janet regional network, and we have a list of member and customer services we need to deliver…3…2…1…GO!

First thing we do is open up our copy of the Openreach Exchange spreadsheet. I’m told that this information is a national secret, but it appears to be one of those secrets that everyone knows (well, service providers know), so we use it. We start by mapping each customer site in a region to its logically closest Exchange; this uncovers the spread of Exchanges we ideally need to build into and, more importantly, the key ones where multiple customers will connect. These will form the basis of the regional aggregation network. We also factor in the locations of the current RNEPs and other key PoPs within the region, and build that into the design. The common outcome of this (so far) is that we’ll end up with a core set of PoPs and Exchanges where the majority of customers will connect, then some outlying Exchanges where few customers will connect, and over longer distances, but that represents the best value solution. We then connect the dots with Openreach optical services to give us a complete core topology of rings and arcs that deliver diverse connectivity into each location. This is what we call our reference design.
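The first step of that process – assigning each customer site to its nearest Exchange to see where customers cluster – can be sketched as follows. Straight-line distance and the coordinates here are stand-ins for the real Openreach data and fibre routes, and the site names are invented:

```python
from collections import defaultdict
from math import hypot

# Minimal sketch of the site-to-Exchange mapping step: assign each
# customer site to its nearest Exchange, then see which Exchanges
# attract multiple customers (candidates for the aggregation core).
# Coordinates and names below are invented for illustration.

def nearest_exchange(site, exchanges):
    """exchanges: dict of name -> (x, y); site: (x, y)."""
    return min(exchanges, key=lambda name: hypot(site[0] - exchanges[name][0],
                                                 site[1] - exchanges[name][1]))

def group_sites(sites, exchanges):
    """Map each named site to its nearest Exchange and group the results."""
    groups = defaultdict(list)
    for name, coords in sites.items():
        groups[nearest_exchange(coords, exchanges)].append(name)
    return dict(groups)

exchanges = {"ExchA": (0, 0), "ExchB": (10, 0)}
sites = {"Uni1": (1, 1), "College1": (2, 0), "Uni2": (9, 1)}
clusters = group_sites(sites, exchanges)
```

In this toy case ExchA picks up two customers and ExchB one, so ExchA would be a natural core candidate; the real exercise layers in fibre routing, RNEP locations and resilience on top.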

The next phase is to engage our fibre suppliers. We have a Dynamic Purchasing System with a number of fibre providers registered, so we issue an invitation to all of them to bid fibre spans to replace Openreach optical services on our reference design (on the basis that fibre we can light ourselves is better, etc.). We’ll then evaluate the bids that come back and re-design the network on a ‘pick & mix’ basis, where we’ll use individual spans from separate suppliers if that represents best value. The main rule of thumb here is that any individual ring or arc needs to be either 100% dark fibre or 100% Openreach optical services, not a combination of both. As you can imagine, this takes a lot of time, especially when 4 or 5 fibre bids containing multiple fibre spans and resilience options are on the table, but it’s a key step to get right to ensure that we’re getting best value from the market whilst also building a resilient, scalable and future-proofed network. Once complete, we have a final topology, and experience to date tells us that the most likely outcome is a fibre core in the centre of a region (where the big towns and cities are, and where the majority of our customers are), then Openreach optical services out to the more remote parts of the region, where we have fewer customers and the aggregated capacity requirements are lower. We’re happy with this.

We then move to more detailed mapping of individual customer connections to their serving Openreach Exchange, checking fibre routing, adding resilience where required, and investigating fibre vs Openreach services. All of the same steps as above, but this time with an eye on services to Members and Customers rather than the regional aggregation network. Again a time-consuming exercise, but another one that it’s important to get right.

Once all of the above has been completed, we mash it all together into a list of the optical and Ethernet equipment needed, at Exchanges and at customer sites, which goes out to our equipment suppliers to quote against. It’s only at this point that we have a genuine view of the total cost of the network (we estimate up front, but as we all know, all estimates are wrong), so we cross fingers (and toes) and hope that it’s within budget – thankfully it usually is.

And when we’ve done all of that and have a final design, we proceed to ordering the required rack space at Openreach Exchanges, one outcome of which could be that the space we want isn’t actually available, and we have to start the whole process again 🙂

Back from holiday, back to work

There’s nothing like a good holiday to escape, relax and rejuvenate. I was away for a couple of weeks recently (read this post on LinkedIn about how I kept in touch with all things Jisc and Janet), as were a lot of the rest of the access programme team. A lot of us have got children on school holidays, and with the Janet summer engineering moratorium also in full flow, August is the perfect month to escape. Work didn’t grind to a halt, but it did slow down. Now we’re all back, it’s a case of pedal to the floor and off we go.

This blog post briefly outlines what we’ll be working on between now and the end of the year.

Project work continues:

  • The South region is entering its final stages of delivery of the underlying infrastructure, and we’re already planning ahead in terms of commissioning, and ultimately transition of services;
  • Ditto the South West region;
  • The Midlands region design (East and West combined) has been completed and orders have been placed for the fibre components of the network;
  • Ditto the London region;
  • The East Anglia, North West and Scotland regions have entered the formal design phase, where we’re working with Members and fibre suppliers to collate requirements and optimise topologies;
  • Everything else is of course on the radar as well, and it won’t be very far into 2020 when we start working on the rest.

In other areas:

  • The final Tech 2 Tech event will be taking place in Belfast, and we’ll be developing plans for that series of events to continue longer term into 2020 (and beyond);
  • We’re constantly working with additional fibre suppliers (and Openreach) to ensure we have access to the best value infrastructure available;
  • We’re looking at how we can better meet the intersite requirements of Members, with a view to more formally identifying them, and building them into network designs and projects where possible;
  • Plans and Gantt charts evolve on an almost daily basis, so documentation and admin form a large part of the overall workload;
  • We’re learning lessons at every turn that then feed into future projects – by the time we finish this programme of work we’ll be really good at it!

And then it’ll be Christmas and we can all have 2 weeks off work again 🙂

Project progress update by Neil Shewry

It’s been far too long since my last blog post. The good news is that it’s because we’ve all been really busy planning, designing and delivering new access infrastructure. The bad news (for me at least) is that excuse only lasts so long and it’s about time I took a moment to provide you all with an update.

In short:

  • The south region is well into delivery and due to complete by the end of the year
  • Ditto the south-west region
  • We’re in the process of signing contracts and placing orders for the Midlands region (east and west combined) and also London
  • We’re making some smaller interim changes in the North West
  • We’re planning almost everything else

But that doesn’t tell half the story! The lessons learned log is bursting at the seams with all sorts of process constraints and quirks that you only really uncover when you start doing these things. Almost every step and every interaction has contributed to a steep learning curve. Needless to say, once we’ve finished the programme we might just about know how to do it all.

One of the most pleasing things is the level of interest and engagement we’ve had from the community.

Change is never easy, especially when it’s imposed (as we’re doing to you), but the feedback we’ve had on our strategy and design principles has been really positive, to the point that we’ve undertaken a lot of additional engagement with institutions and regional groups to explain precisely what we’re doing, and also to collaborate at a technical level where there’s been interest, to help support our network design processes.

This is really great to see in terms of making sure we have stakeholder buy-in (the importance of which I can’t stress enough), but also in enabling us to tap into local knowledge of network infrastructure ‘in the ground’ to help steer design and topology decisions. The more of that the better!

Finally we have the series of Tech 2 Tech events, the first phase of which is now coming to an end. We’ve already had great and well-attended events in Bristol, Birmingham, Edinburgh, London and Durham.

The Jisc-led sessions have grown into a really comprehensive run-through of the Janet access programme, and the feedback we’ve had has been really positive, but the standout sessions at all events have been the member-led talks. It’s really useful for us at Jisc to hear what you’ve got to say, and having been in the room at all of the events, you seemed to learn a lot from each other too.

Next week we head to Manchester, and then to Belfast in October. After that, we’re looking to continue the series of events with a slight change of focus – more news on that as we have it. We’ve also had a webinar-based Tech 2 Tech – an online version of the physical events – which was also well attended and is something we plan to do more of as the programme progresses.

Neil Shewry, head of infrastructure delivery… thoughts

A year or so ago Jisc announced that it would be embarking on a programme of work to refresh connectivity to all 1000 members and customers – the Janet Access Programme. The new access infrastructure delivered by the programme would be capable of supporting an enhanced set of connectivity services, scaled to last 3-5 years (and more), whilst preserving Janet’s already stellar levels of reliability, resilience and security. No mean feat, given that every traffic graph is pointing up and to the right, while every funding graph is heading the other way.

Fast forward a year, and after multiple procurements, delayed product launches, and a whole load of meetings, we’re still planning! For good reason though. During my time at Jisc (previously Janet, previously UKERNA) there has been a steady flow of regional network refreshes, and in almost every case the new network would share most of the design principles and traits of the network it was replacing. Such projects were so regular and so predictable that they were very much considered BAU (a contradiction of the word ‘project’, I know).

The changes we’re implementing this time round certainly aren’t BAU, they’re wholesale, so the overall process has needed considerably more thought. Moving from a layer 3 regional network to a layer 2 regional network is one such change that couldn’t be adopted lightly – resilience and rerouting in failure scenarios became a particular hot topic for debate!

The equipment we’ll be using is new (to Jisc) so needs to be tested, which is why we’ve invested in a lab setup at our Network Ops Centre in London, to get our hands on the equipment and start to build some test networks. The set of connectivity products we’ll be using has evolved since we last went to market, so there are new suppliers, processes and technologies to get to grips with. We’ll be deploying more dark fibre where available and cost-effective. All change in almost all aspects of the way we build regional network infrastructure.

As head of infrastructure delivery at Jisc, it won’t surprise you to hear that I’m keen to get on and start actually delivering something soon, but I can’t stress enough the importance of getting the design right (and then checking it, and then checking it again, and then probably again).

Whilst the key objective of saving public money isn’t affected by subtle changes to design details, preserving a vital set of services to members is, so every component of every design can never be checked too many times. That is why each project is overseen by a team of expert network architects, engineers and project managers, and further overseen by a programme board, all of whom keep a close eye on all of the details.

So where are we now? Well, we’re ‘getting there’ in terms of confirming the scope, design and implementation plan for the first few projects. The design for the South region of Janet has been finalised, and contracts are being drafted with a view to placing orders in the coming weeks; we’re currently putting the final touches to the design of the South West region; and initial planning has started for the London, West Midlands and North West regions.

With each project spanning approximately 12 months from contract to completion, over the next 12-18 months we’ll start to see genuine progress in delivering replacement access infrastructures, connectivity to Members being enhanced, and ultimately benefits being realised, so watch this space!

If you’d like to know more about the Janet Access Programme, you can head to the Jisc website:

We’re also running a series of Tech 2 Tech events that all Members are invited to attend: