26 Jul 2024
13:30 - 15:30 (Europe/Zurich)

Foundation models


31/3-004 at CERN


Foundation models, also known as large-scale self-supervised models, have revolutionized the field of artificial intelligence. These models, such as ChatGPT and AlphaFold, are pre-trained on massive amounts of data and can be fine-tuned for a wide range of downstream tasks. In this lecture, we’ll explore the key concepts behind foundation models and their impact on machine learning systems. In particular, we will give a brief overview of the following points:
  1. What are foundation models? Challenges and opportunities.
  2. Strategies for training foundation models: self-supervision and pre-training.
  3. Adaptation to downstream tasks: fine-tuning.
  4. Example applications.
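As a minimal sketch of points 2 and 3 above, the toy example below pre-trains a small encoder on a self-supervised reconstruction task (no labels used) and then fine-tunes a classification head on a handful of labeled examples while keeping the encoder frozen. The data, architecture, and objective are illustrative assumptions, not the actual models or methods covered in the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Pre-training (self-supervision) ---
# Learn a shared representation from unlabeled data only, using a tiny
# autoencoder-style pretext task: reconstruct the input from its encoding.
X_unlabeled = rng.normal(size=(1000, 4))        # "unlabeled" feature vectors
W = rng.normal(scale=0.1, size=(4, 4))          # encoder weights (shared)
V = rng.normal(scale=0.1, size=(4, 4))          # decoder weights (discarded later)

def encode(x):
    return np.tanh(x @ W)

lr = 0.05
for _ in range(200):
    H = encode(X_unlabeled)
    err = H @ V - X_unlabeled                   # reconstruction error
    V -= lr * H.T @ err / len(X_unlabeled)      # decoder gradient step
    dH = err @ V.T * (1 - H**2)                 # backprop through tanh
    W -= lr * X_unlabeled.T @ dH / len(X_unlabeled)

# --- Fine-tuning ---
# Freeze the pre-trained encoder; train only a small logistic head
# on a few labeled examples from the downstream task.
X_labeled = rng.normal(size=(50, 4))
y = (X_labeled.sum(axis=1) > 0).astype(float)   # toy downstream labels
H_labeled = encode(X_labeled)                   # frozen features
w_head = np.zeros(4)
for _ in range(500):
    p = 1 / (1 + np.exp(-(H_labeled @ w_head)))
    w_head -= 0.1 * H_labeled.T @ (p - y) / len(y)

acc = (((1 / (1 + np.exp(-(H_labeled @ w_head)))) > 0.5) == y).mean()
```

The design choice mirrors the general pattern: the expensive, data-hungry stage needs no labels, and only a lightweight head is updated during adaptation.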


Sofia is a CERN physicist with extensive experience in software development in the high-energy physics domain, particularly in deep learning and quantum computing applications within CERN openlab. She has a PhD in physics obtained at the University of Geneva. Prior to joining CERN openlab, Sofia was responsible for the development of deep-learning-based technologies for the simulation of particle transport through detectors at CERN. She also worked to optimise the GeantV detector simulation prototype on modern hardware architectures.