The Worldwide LHC Computing Grid

Servers at the CERN Data Centre form Tier 0 of the Worldwide LHC Computing Grid (Image: CERN)

The Worldwide LHC Computing Grid (WLCG) is a global collaboration of computer centres. It was launched in 2002 to provide a resource to store, distribute and analyse the 15 petabytes (15 million gigabytes) of data generated every year by the Large Hadron Collider (LHC).

In 1999, when work began on the design of a computing system for LHC data analysis, it rapidly became clear that the computing power required was far beyond what CERN alone could fund. Most of the laboratories and universities collaborating on the LHC, however, had access to national or regional computing facilities.

These were integrated into a single LHC computing service – the Grid – in 2002. It now links thousands of computers and storage systems in over 140 centres across 35 countries. These computer centres are arranged in "Tiers", and together serve a community of over 8000 physicists with near real-time access to LHC data. The Grid gives users the power to process, analyse and in some cases to store LHC data.

The WLCG is the world's largest computing grid. It is based on two main grids – the European Grid Infrastructure in Europe and the Open Science Grid in the US – but has many associated regional and national grids (such as TWGrid in Taiwan and EU-IndiaGrid, which supports grid infrastructures across Europe and Asia).

This grid-based infrastructure is the most effective solution to the data-analysis challenge of the LHC, offering many advantages over a centralized system. Multiple copies of data can be kept at different sites, ensuring access for all scientists independent of geographical location; there is no single point of failure; computer centres in multiple time zones ease round-the-clock monitoring and the availability of expert support; and resources can be distributed across the world, for funding and sociological reasons.
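To illustrate the "no single point of failure" idea, here is a minimal sketch in Python of how an analysis can fall back to another copy of a dataset when one site is unavailable. The site names, the toy replica catalogue and the reachability check are purely illustrative assumptions, not actual WLCG software.

```python
# Hypothetical replica catalogue: dataset name -> sites holding a copy.
REPLICAS = {
    "lhc-run-2018-B": ["CERN-Tier0", "FNAL-Tier1", "KIT-Tier1"],
}

def site_is_reachable(site: str) -> bool:
    """Placeholder for a real availability check (service query, heartbeat, ...)."""
    return site != "CERN-Tier0"   # pretend the first site is temporarily down

def find_replica(dataset: str) -> str:
    """Return the first reachable site that holds a copy of the dataset."""
    for site in REPLICAS.get(dataset, []):
        if site_is_reachable(site):
            return site
    raise RuntimeError(f"No reachable replica of {dataset}")

print(find_replica("lhc-run-2018-B"))   # prints "FNAL-Tier1": access survives one site outage
```

Because several sites hold copies of the same data, losing any single centre only means the request is served from elsewhere.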

For more on grid computing, check out GridCafé.

Using the Grid

With more than 8000 LHC physicists across the four main experiments – ALICE, ATLAS, CMS and LHCb – actively accessing and analysing data in near real-time, the computing system designed to handle the data has to be very flexible.

WLCG provides seamless access to computing resources, which include data storage capacity, processing power, sensors, visualization tools and more. Users make job requests from one of the many entry points into the system. A job request can be almost anything – storage, processing capacity, or availability of analysis software, for example. The computing Grid establishes the identity of the user, checks their credentials, and searches for available sites that can provide the resources requested. Users do not have to worry about where the computing resources are coming from – they can tap into the Grid's computing power and access storage on demand.
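To make that sequence concrete, here is a minimal sketch in Python of how a job request could be authenticated and matched to a site with enough free resources. The class names, fields and the "x509:" credential rule are illustrative assumptions only, not part of any actual WLCG middleware.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_storage_tb: float   # available disk space, in terabytes
    free_cpu_slots: int      # idle processing slots
    software: set            # analysis software installed at the site

@dataclass
class JobRequest:
    user: str
    credentials: str
    storage_tb: float
    cpu_slots: int
    software: set

def authenticate(request: JobRequest) -> bool:
    """Stand-in for the Grid's certificate-based identity and credential check."""
    return request.credentials.startswith("x509:")   # toy rule, not real grid security

def match_site(request: JobRequest, sites: list) -> "Site | None":
    """Return the first site that can supply every resource the job asks for."""
    for site in sites:
        if (site.free_storage_tb >= request.storage_tb
                and site.free_cpu_slots >= request.cpu_slots
                and request.software <= site.software):
            return site
    return None   # no site can currently serve the request

sites = [
    Site("Tier1-A", free_storage_tb=500.0, free_cpu_slots=2000, software={"root", "geant4"}),
    Site("Tier2-B", free_storage_tb=50.0, free_cpu_slots=300, software={"root"}),
]
job = JobRequest(user="a.physicist", credentials="x509:demo",
                 storage_tb=10.0, cpu_slots=100, software={"root"})

if authenticate(job):
    chosen = match_site(job, sites)
    print(f"Job dispatched to {chosen.name}" if chosen else "No suitable site available right now")
else:
    print("Authentication failed")
```

In the real Grid this matchmaking is handled by dedicated workload-management services, which is why, as noted above, users never need to know which site ends up running their job.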

Tier 0 of the Grid runs around one million jobs per day. Peak data-transfer rates of 10 gigabytes per second – the equivalent of two full DVDs of data per second – are not unusual.
