Project Description

The Concurrency and Locality Challenge

The European project COLOC aims to provide simulation software developers with methodologies and tools to optimise their applications, and to help High Performance Computing (HPC) users get the most value from expensive and heterogeneous computing resources.

Challenge

The current trend in the supercomputer industry is to provide more and more computational cores as well as increasingly heterogeneous systems; consequently, a growing share of time is spent on communication rather than computation. To allow applications to fully exploit the power of modern multi/many-core processors, COLOC seeks to design, implement and validate new approaches to optimise process placement and data locality management (data distribution, data transfer and data storage).

Solution

The project will work on disruptive and innovative approaches to managing thread concurrency and input/output (I/O) locality on large-scale platforms, covering memory management, CPU usage, accelerator technology, the network and storage, especially I/O. By taking all of these aspects into account concurrently and in relation to each other, it aims to optimise the use of applications and high-performance computing resources at all levels. The state of the art will be updated continuously to keep track of major changes in science, technology and industry, resulting in a public deliverable that extends and develops it. The project has assembled some of the most renowned European supercomputing centres and HPC research labs, dynamic HPC software tool editors, a range of HPC users to validate the proposed technology in real applications, and Europe’s only HPC provider.

Results

Specific to Scilab

  1. sciCUDA and sciOPENCL Scilab toolboxes freely available to the community worldwide

‣ easy access to GPU computing from the Scilab language

[Figure: Shan-Chen multiphase flow simulation]

The Shan-Chen model displayed above is a classical multiphase flow relaxation problem, solved with the lattice Boltzmann method. This simulation is well suited to GPU computing, as it mainly involves matrix computation.
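To illustrate why such a lattice Boltzmann simulation maps well to GPUs, the sketch below shows a single BGK-style relaxation step written with whole-matrix Scilab operations. This is a simplified, generic sketch, not the COLOC implementation; the lattice size and relaxation time are arbitrary example values. The sciCUDA/sciOPENCL toolboxes aim to offload exactly this kind of dense matrix arithmetic to the GPU.

    // Simplified sketch of one lattice Boltzmann (BGK) relaxation step on a
    // single distribution component f, using whole-matrix Scilab operations.
    // Generic illustrative code, not the COLOC/sciCUDA implementation.
    nx = 256; ny = 256;        // lattice size (arbitrary for the example)
    tau = 0.8;                 // relaxation time (example value)
    f   = rand(nx, ny);        // current distribution values on the lattice
    feq = rand(nx, ny);        // equilibrium distribution (normally computed
                               // from the density and velocity fields)
    f = f - (f - feq) / tau;   // element-wise relaxation towards equilibrium:
                               // the kind of operation a GPU executes efficiently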

2. Scilab MPI + sciMUMPS Scilab toolbox

‣ easy access to MUMPS solvers from the Scilab language for non-HPC experts (a minimal usage sketch follows)
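As a rough idea of what calling MUMPS from Scilab can look like, the sketch below assumes that sciMUMPS follows the conventions of the standard MUMPS MATLAB/Scilab wrappers (initmumps/dmumps, JOB codes, the SOL field). The exact toolbox API is not documented here, so every name below should be read as an assumption rather than the confirmed interface.

    // Hedged sketch: solve A*x = b with MUMPS from Scilab.
    // Assumes a sciMUMPS interface modelled on the standard MUMPS
    // MATLAB/Scilab wrappers; function and field names are assumptions.
    id = initmumps();        // create a MUMPS control structure
    id = dmumps(id);         // JOB = -1: initialise the MUMPS instance
    id.JOB = 6;              // 6 = analysis + factorisation + solve
    id.RHS = b;              // dense right-hand side vector
    id = dmumps(id, A);      // run MUMPS on the sparse matrix A
    x = id.SOL;              // retrieve the solution
    id.JOB = -2;             // -2: free MUMPS internal memory
    id = dmumps(id, A);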

Here are two use cases of MUMPS solving Ax = b for finite-element electromagnetic field problems from the University of Florida sparse matrix collection.

First case: 2cubes_sphere

Finite-element time domain solvers for electromagnetic diffusion equations: 88,213 tetrahedral elements in the 3-D computational domain

A: 101,492 x 101,492 sparse symmetric positive definite matrix with 1,647,264 non-zero elements
b: dense vector of length 101,492

[Figures: 2cubes_sphere geometry and sparsity pattern of the matrix]

Second case: offshore

Finite-element system matrix from a transient electric field diffusion equation with tetrahedral elements in the 3-D computational domain

A: 259,789 x 259,789 sparse symmetric positive definite matrix with 4,242,673 non-zero elements
b: dense vector of length 259,789

[Figures: offshore geometry and sparsity pattern of the matrix]

The following benchmark shows the speed-up obtained by scaling the models to more processes. Beyond 12 processes, however, adding processes no longer reduces the execution time.

[Figure: benchmark on NOVA, speed-up versus number of processes]

3. Explore opportunities with Scilab Cloud

‣ Hide the complexity of simulation applications and HPC to reach new types of users (engineers, domain experts, ...)

Give non-specialists access to HPC by linking Scilab Cloud with NOVA.

[Figure: Scilab Cloud and HPC architecture]

A Scilab Cloud application is a web-based application developed using a set of Scilab functions, including Scilab GUI-building functions (uicontrols). The application developed for COLOC demonstrates what can be done between Nova and Scilab Cloud through the HTTP API server.
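As a rough illustration of what "a set of Scilab functions including uicontrols" means in practice, the fragment below builds a tiny form with standard Scilab uicontrol() calls. It is illustrative only, not the actual COLOC application; the widget layout, labels and the submit_job() callback are placeholders.

    // Illustrative only: a Scilab Cloud application is ordinary Scilab code
    // whose interface is declared with uicontrol(). This is not the COLOC
    // application; submit_job() is a placeholder callback.
    f = figure("figure_name", "COLOC demo");
    uicontrol(f, "style", "text", ...
                 "string", "Number of MPI tasks:", ...
                 "position", [20 120 180 25]);
    ntasks = uicontrol(f, "style", "edit", ...
                 "string", "12", ...
                 "position", [210 120 80 25]);
    uicontrol(f, "style", "pushbutton", ...
                 "string", "Launch job", ...
                 "position", [20 70 120 30], ...
                 "callback", "submit_job()");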

[Figure: NOVA web application window]

This window is divided into several parts:

  • Scilab MPI Script is the main script of the computation (a minimal example is sketched after this list).
  • In the Input Data and Script part we can upload files to Scilab Cloud storage, list them and select those needed by the main Scilab script.
  • In the Ressources Configuration part we set the number of tasks needed and the Nova partition to use for the computation.
  • When MPI Binding is used, we can upload the communication matrix, which will be used by Scilab on Nova to generate the rankfile given to MPI.
  • Once these elements are filled in, we can launch the job from Job Management and download the Slurm logs at the end.
  • If the main Scilab script saves intermediate results, the application can display them by periodically downloading the corresponding result file.
  • Finally, the result files can be downloaded as a zip file.
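A minimal sketch of such a Scilab MPI main script is given below. It assumes a Scilab build that exposes MPI bindings (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Send, MPI_Recv, MPI_Finalize); the exact binding names and the toy workload are assumptions, not the actual COLOC scripts submitted through the web application.

    // Illustrative "main script" for a Scilab MPI job, as could be submitted
    // from the web application. Assumes Scilab's MPI bindings are available;
    // names and the toy workload are assumptions, not the COLOC scripts.
    MPI_Init();
    rnk = MPI_Comm_rank();            // rank of this process
    sz  = MPI_Comm_size();            // total number of MPI processes

    // Each process computes a partial result on its own chunk of work.
    partial = sum(rand(1000, 1)) / sz;

    if rnk <> 0 then
        MPI_Send(partial, 0);         // workers send their partial result to rank 0
    else
        total = partial;
        for src = 1:sz - 1
            total = total + MPI_Recv(src);   // rank 0 gathers and combines
        end
        // Saving intermediate results lets the web application display them
        // while the job runs (see above).
        disp(total);
    end
    MPI_Finalize();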

General outcome of the project

The major expected outcomes are new algorithms, libraries and tools that will be developed to enhance existing resource managers and runtime systems, with a special focus on efficient mapping of data to processes or vice versa. The project will advance a set of technologies ranging from programming models to performance and resource optimisation using data locality and extended analysis tools.

Expected impacts include the ability to address larger simulation problems, to reduce simulation time and to provide ways to use HPC infrastructure resources more efficiently.

As a result, all partners are expected to strengthen their positions: Bull as an HPC platform provider, ESI Group as a simulation software editor and Scilab Enterprises as a numerical software provider, while Dassault Aviation and the Swedish Defence Research Agency (FOI), as users of state-of-the-art HPC solutions, will reinforce their positions in the aeronautics and defence sectors respectively. The HPC research labs (INRIA and UVSQ) will also strengthen their expertise and position in the worldwide HPC ecosystem.

Keywords: High performance computing

The COLOC project is funded by the following agencies/organizations:

  • France – Direction générale des entreprises

  • Sweden – Sweden’s Innovation Agency

  • EUREKA Cluster programme