Hackathon - Explainable A.I.

When:
Sunday 5 June from 9:00 to 18:00 CEST
Monday 6 June from 9:00 to 17:00 CEST


Where: Room TBC & Online


Programme

Sunday 5 June

Time | Activity
9:00 - 9:30 CEST | Welcome and coffee
9:30 - 10:30 CEST | Project and team discovery
10:30 - 18:00 CEST | Hacking all day

Monday 6 June

Time | Activity
9:00 - 9:30 CEST | Welcome and coffee
9:30 - 15:00 CEST | Hacking
15:00 - 16:30 CEST | Presentations
16:30 - 17:00 CEST | Announcement of the winners and prizes*

*A presentation by the authors of the winning projects will be included in the programme of the Dedicated Session organized by the EAGE A.I. Committee.

Registration

A separate registration is required to participate in this activity. Spaces are limited, so register now!

Registration | Fee
Student (in-person or online) | EUR 25
Regular (in-person or online) | EUR 50

Theme: eXplainable Artificial Intelligence (XAI)

XAI is the theme of this year’s EAGE Annual Hackathon, organized by the EAGE A.I. Committee. Teams will explore ways to build more interpretable machine learning tools, with the goal of more understandable and trustworthy subsurface predictions.

The Wolf-or-Husky classifier

This deep neural network can tell the difference between wolves and huskies, with 90% accuracy. More than 30% of surveyed ML researchers said they trusted it. 

Picture: Various correctly classified images, with one misclassification

Source: Ribeiro et al. https://arxiv.org/abs/1602.04938

XAI

LIME shows that the model pays attention only to the background of a sample image. 

It’s a snow detector.

Picture: 
(Left) Husky-that-is-a-wolf
(Right) LIME’s explanation

Source: Ribeiro et al. https://arxiv.org/abs/1602.04938
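
To give a feel for what LIME does here, below is a minimal sketch using the open-source `lime` Python package, with a pretrained Inception network standing in for the (unreleased) wolf/husky classifier. The `image` variable is a placeholder for the photo as a numpy array, and the parameters are illustrative, not those of the original experiment.

```python
import numpy as np
from lime import lime_image
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input

model = InceptionV3()  # pretrained ImageNet classifier, used here only as a stand-in

def classifier_fn(images):
    # LIME calls this with batches of perturbed copies of the image and
    # expects class probabilities back.
    return model.predict(preprocess_input(np.array(images, dtype=np.float32)))

# `image` is assumed to be the photo as a (299, 299, 3) uint8 numpy array.
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    top_labels=2,
    hide_color=0,      # perturbed superpixels are blanked to this value
    num_samples=1000,  # number of perturbed images LIME samples
)

# Keep only the superpixels that most support the top predicted class.
# For the snow detector, these land on the background, not the animal.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True
)
```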

This story is often incorrectly given as an example of ‘AI gone wrong’. In reality, the classifier was deliberately trained to be misleading, as a test of humans’ ability to spot bad models. It was trained using Google’s Inception neural network and achieves ~90% accuracy.

The researchers asked:

  1. Would you trust this classifier to work in the real world?
  2. Why?
  3. How do you think it is making decisions?

In fact, it was trained on only 20 images. ALL the wolves had snow in the picture; none of the huskies did.

More than a third of the ML researchers surveyed trusted this model… until Ribeiro et al. showed them a LIME analysis, which ‘explains’ the model.
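
The mechanism is easy to reproduce. Below is a toy sketch (an illustration, not the paper’s actual data or model): when the label correlates perfectly with a ‘snow’ background feature, a classifier can score well while ignoring the subject entirely.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20  # a tiny training set, like the 20 images in the experiment

# Fake flattened "images": the first 32 pixels are background, the last 32
# the animal. Every "wolf" (label 1) gets a bright (snowy) background;
# every "husky" (label 0) a dark one. The animal pixels are pure noise.
y = rng.integers(0, 2, size=n)
X = rng.normal(0.0, 1.0, size=(n, 64))
X[:, :32] += 3.0 * y[:, None]  # snow only in the wolf backgrounds

clf = LogisticRegression(max_iter=1000).fit(X, y)

# The learned weights concentrate on the background: a snow detector.
w = np.abs(clf.coef_[0])
print("mean |weight|, background pixels:", w[:32].mean())
print("mean |weight|, animal pixels:    ", w[32:].mean())
```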

About the EAGE A.I. Committee

The EAGE A.I. Committee is a team of EAGE members and volunteers who endeavour to share knowledge and create new connections around the digital transformation topics relevant to geoscientists. In addition to regular contributions to EAGE conferences and workshops, they curate a periodic newsletter on all things A.I., machine learning and digitalization, as well as interviews with experts and other initiatives for the community. You are welcome to join the EAGE Artificial Intelligence group on LinkedIn to receive updates on all future opportunities to get involved.
