What’s next for HPC and the future of research innovation?

From big data analytics to genomics, complex scientific and educational research typically relies on sophisticated technology to speed up processing times, enabling larger and more ambitious projects while at the same time reducing development costs. By Spencer Lamb, Director of Research, Verne Global.



High performance computing (HPC) has long been instrumental in putting many UK academic bodies, government-funded and private sector research labs on the map, enhancing both the accuracy of research and the timely delivery of results. Take, for example, the genomics and genetics research of the internationally renowned Earlham Institute or the Wellcome Sanger Institute. A quick look at Innovate UK’s grant pages also illustrates the scale at which universities across Britain are using the technology to experiment and to drive innovation and discovery. Many of these universities are also responsible for nurturing the creation and growth of commercially competitive, specialist startups.


Yet many publicly funded scientists in the UK remain reliant on the country’s primary academic research supercomputer, ARCHER (Advanced Research Computing High End Resource), a Cray XC30 based in Edinburgh. Though free to use, ARCHER’s queuing system has constrained access for its users. To address this gulf in provision and capability between ARCHER and local university systems, in 2017 the Engineering and Physical Sciences Research Council announced it would spend £20m building six regional HPC centres. Unfortunately, even with this increased capacity, many smaller research bodies still find themselves struggling to make the case against higher-profile projects.


To ensure that all organisations in our research and scientific community (big, small and specialist), whether publicly or privately funded, stay on the front foot, we need more investment in, and stronger advocacy for, data-driven technologies like HPC. Indeed, the potential of supercomputing for competitive research areas like the life sciences and bioinformatics is huge; no other tools currently match the acceleration the technology offers. As a first step towards achieving this – for those already reliant on or eager to embrace the technology, and for those who shape research innovation policy – we need to look at the key issues at play.

A principal challenge for those reliant on HPC clusters for computer modelling and simulation today is that the servers consume a great deal of energy, and must also be kept cool so that they don’t overheat and stop working. The power required can put significant pressure on budgets, especially where machines are hosted in-house or on campus. With the UK’s power mix still dominated by fossil fuels (almost 50 percent of it gas), users are exposed to some of the highest energy prices in Europe and are – somewhat inadvertently – increasing their own carbon footprint. For universities and organisations working in climate change and weather modelling or forecasting, like the Natural Environment Research Council (NERC), this is not good news.
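To put that budget pressure in perspective, the minimal sketch below estimates the annual electricity bill for a hypothetical on-campus cluster. The IT load, cooling overhead (PUE) and unit price are illustrative assumptions for the sake of the calculation, not figures taken from this article.

```python
# Back-of-envelope estimate of the annual energy bill for an on-campus HPC cluster.
# All input figures are hypothetical and for illustration only.

it_load_kw = 200          # assumed IT load of the cluster, in kW (hypothetical)
pue = 1.5                 # assumed power usage effectiveness, i.e. cooling/overhead multiplier
price_per_kwh = 0.15      # assumed electricity price in GBP per kWh (hypothetical)
hours_per_year = 24 * 365

annual_kwh = it_load_kw * pue * hours_per_year          # total energy drawn, including cooling
annual_cost_gbp = annual_kwh * price_per_kwh            # resulting annual electricity cost

print(f"Estimated annual energy use: {annual_kwh:,.0f} kWh")
print(f"Estimated annual energy cost: £{annual_cost_gbp:,.0f}")
```

On these assumptions the cluster draws around 2.6 GWh a year, costing roughly £400,000 in electricity alone – which is why siting and power pricing matter so much to research budgets.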


A second key issue to consider is that specialist technology often needs specialist technical support, whether simply to carry out regular maintenance or to upgrade systems and stave off technology depreciation. The latter can result in differing systems and hardware being patched together, which is far from optimal for HPC performance. A related challenge is the UK’s widening digital skills gap: a lack of homegrown computer scientists could hamper the UK's leadership position in harnessing the potential of supercomputing. Last year the Open University revealed that the nation’s shortage of engineering and technology skills costs the private sector an estimated £6.3bn every year.


At a time when advanced technologies like AI offer many new and exciting opportunities, the research community needs easy access to innovative data centres that can be tailored to its requirements – providing solutions that flex between varied resiliency requirements and adapt to a wide range of power density needs. In Europe, HPC-related actions continue to be a key focus of the Horizon 2020 budget. Under the Research and Innovation (R&I) pillar of the EuroHPC Joint Undertaking, for example, the programme is committed to establishing an innovation ecosystem for supercomputing technologies.


With Brexit, uncertainty over the UK’s future access to networks and initiatives like these needs to be mitigated. This matters for our big public sector research bodies, as well as for the private sector ventures that start out in university labs or innovation hubs and go on to spin out into profitable and pioneering businesses in their own right. Cambridge-based environmental data intelligence company Satavia is just one such example, illustrating how targeted, third-party funding can be instrumental in driving industry innovation forward. Of course, with data processing and storage so vital for many sectors, the questions raised by Brexit are significant across the board.

Ultimately, as the demand for computing capability increases, so too will the pressure on the capacity and operational costs of data centre services. Forward-thinking heads of scientific HPC must look to cloud computing to fulfil their need for supercomputers that are flexible and easy to access.


Hyperscale cloud platforms like Microsoft Azure, Google Cloud Platform and Amazon Web Services are popular and easily accessible solutions for certain types of compute, but because their clouds are built on virtualised servers, there are legitimate concerns over their suitability to provide true HPC environments for intensive academic and research applications. The good news is that, today, there are specialist solutions on the market that can be deployed as stand-alone compute or used as an on-demand HPC extension, overcoming such hyperscale issues and saving on cost. In these instances, specialist technology providers also act as valuable partners to users, helping to plug any gaps in skills and expertise while creating space for researchers and companies to get on with their core work.


It is also encouraging that, with the continued expansion of the G-Cloud marketplace, publicly funded organisations can access more competitive cloud and HPC solutions that are highly attuned to their needs, in the same way that private entities can. This creates another avenue alongside established Jisc-led partnerships, which also enable research institutes to privately access remote data centres abroad in locations where energy is affordable, abundant and often renewable – something the Earlham Institute, for one, has already availed itself of.


It’s essential that UK policymakers continue to prioritise measures that support the computing departments of our prized institutions and universities in expanding their compute capabilities in a way that works for them, while also fostering a commercially competitive market of private sector innovators. Government can also go a long way towards addressing wider challenges like IT skills development and training. This is the only way to ensure we stay at the cutting edge of research and development.

