
Available resources

From HPC documentation portal
Revision as of 13:48, 20 June 2017 by Alexander Oltu (Talk | contribs) (IBM Tape Library)



UiB operates supercomputer facilities that serve the high-end computational needs of scientists at Norwegian universities and other national research and industrial organizations. Additionally, the system is used for research and development by international organizations and provides services to European research groups (and projects) as well as to cooperating scientists from international institutions.

The installations are funded by the Norwegian Research Council through the NOTUR project, the University of Bergen (UiB), the Nansen Environmental and Remote Sensing Center (NERSC), the Institute for Marine Research (IMR), and Uni Research AS. These partners make critical use of the system for scientific research and development, in particular targeting marine activities ranging from marine molecular biology to the large scale simulation of ocean processes including the management of ocean resources and monitoring of the environment. Heavy use by academic research groups has traditionally come from computational chemistry, computational physics, computational biology, the geosciences, and applied mathematics.

The supercomputer facilities are installed at the High Technology Center in Bergen (HiB) and are managed and operated by UiB. The installation consists of the following parts:

Cray XE6m-200 (Hexagon)
  • 204.9 TFlops peak performance
  • 22272 cores
  • AMD Opteron 6276 (2.3GHz "Interlagos")
  • 1392 CPUs (sockets)
  • 696 nodes
  • 32 cores per node
  • 32GB RAM per node (1GB/core)
  • Cray Gemini interconnect
  • 2.5D Torus topology
  • OS: Cray Linux Environment, CLE 5.2 (based on SUSE Linux Enterprise Server 11 SP3)

It was upgraded from Cray XT4 in March 2012.
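The quoted peak figure is consistent with cores × clock × FLOPs per cycle. As a hedged sketch (assuming 4 double-precision FLOPs per core per cycle for the Interlagos microarchitecture, whose two-core modules share a floating-point unit):

```python
# Peak-performance sanity check for Hexagon (Cray XE6m-200).
# Assumption: AMD Opteron 6276 "Interlagos" retires up to 4 double-precision
# FLOPs per core per cycle.
cores = 22272
clock_hz = 2.3e9         # 2.3 GHz
flops_per_cycle = 4      # per core, double precision (assumed)

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"{peak_tflops:.1f} TFlops")  # prints "204.9 TFlops"
```

The result matches the 204.9 TFlops peak performance listed above.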

Linux cluster (fimm)

  • 7.7 TFlops peak performance
  • 97 nodes, 1328 cores
    • 672 Intel Xeon E5-2670 (2.60 GHz) cores (2 eight-core CPUs per node, 32 threads)
    • 256 Intel Xeon E5420 (2.5 GHz) cores (2 quad-core per node, 8 threads)
    • 256 Intel Xeon E5430 (2.66 GHz) cores (2 quad-core per node, 8 threads)
    • 144 AMD Opteron 2431 (2.4 GHz) cores (2 six-core per node, 12 threads)
  • 110 TB Lustre parallel filesystem for /work and /fimm/home, plus internal disks on all nodes
  • Linux operating system (Rocks/Centos 6.5)
  • Gigabit Ethernet on all nodes
  • InfiniBand interconnect on 16 nodes

The cluster includes legacy hardware: 64 HP blades from 2008, 12 Dell blades from 2010, and 21 Dell blades from 2013. In addition to grid services for NorGrid and the CERN Tier-1, it serves a variety of applications, including bioinformatics, physics, geophysics, and chemistry.
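The per-generation core counts listed above sum to the cluster's quoted total, which can be checked directly:

```python
# Cores per CPU generation in the fimm cluster, taken from the list above.
core_groups = {
    "Intel Xeon E5-2670": 672,
    "Intel Xeon E5420": 256,
    "Intel Xeon E5430": 256,
    "AMD Opteron 2431": 144,
}
total_cores = sum(core_groups.values())
print(total_cores)  # prints 1328, matching the quoted total
```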


A page that lists all HPC equipment that was once operated by Parallab can be found here.