XAI With Python


Starts July 24th 2022
The summer course Explainable AI with Python is a collaboration between LAMBDA LAB, Tel Aviv University, and W-AI. Registration is free for both women and men.
Course Syllabus
Registration Form
Guest Talk Abstracts
Dr Claudia Goldman, Staff Researcher, General Motors, 24th July
Title: Bridging the communication gap between humans and machines
Trust in automated machines controlled by AI algorithms (e.g., decision making under uncertainty) is essential for users’ acceptance and effective use of these systems. However, assessing these solutions solely by their prediction accuracy and (mathematical) optimality might not provide the transparency users need to understand why these systems behave as they learned to do. We suggest adding eXplainability capabilities to these AI algorithms (XAI) to bridge this gap between human expectations and advanced decision-making solutions. First, we present an algorithm that explains semantically why a prediction was made, supported by results from learning the semantics of two classifiers trained to predict AV driving comfort and driving trajectories. Then, we discuss the current state of the art in explainable AI for planning systems, comparing it to the user needs discovered in data collected from interactions with simulated AV behaviors.
Shelly Shenin, AI Research Engineer, Meta AI, 27th July
KNN-Diffusion: Image Generation via Large-Scale Retrieval
Paper: https://arxiv.org/pdf/2204.02849.pdf
While the availability of massive Text-Image datasets has been shown to be extremely useful in training large-scale generative models (e.g. DDPMs, Transformers), their output typically depends on the quality of both the input text and the training dataset. In this work, we show how large-scale retrieval methods, in particular efficient K-Nearest-Neighbors (KNN) search, can be used to train a model to adapt to new samples. Learning to adapt enables several new capabilities. Sifting through billions of records at inference time is extremely efficient and can alleviate the need to train or memorize an adequately large generative model. Additionally, fine-tuning trained models to new samples can be achieved by simply adding them to the table. Rare concepts, even without any presence in the training set, can then be leveraged at test time without any modification to the generative model. Our diffusion-based model trains on images only, by leveraging a joint Text-Image multi-modal metric. Compared to baseline methods, our generations achieve state-of-the-art results both in human evaluations and in perceptual scores, when tested on a public multimodal dataset of natural images as well as on a collected dataset of 400 million Stickers.
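The retrieval step at the heart of the method can be sketched as a toy nearest-neighbor search over a shared embedding space. This is only an illustration of the idea: random vectors stand in for the paper's Text-Image embeddings, and the actual system uses efficient billion-scale indices rather than a brute-force scan.

```python
import numpy as np

def knn_retrieve(query, index, k=3):
    """Return indices of the k nearest neighbors of `query` in `index`
    under cosine similarity (the kind of lookup used to condition
    generation on retrieved examples)."""
    q = query / np.linalg.norm(query)
    normed = index / np.linalg.norm(index, axis=1, keepdims=True)
    sims = normed @ q                      # cosine similarity to every item
    return np.argsort(-sims)[:k]           # top-k most similar indices

rng = np.random.default_rng(1)
index = rng.normal(size=(1000, 64))        # stand-in for 1000 image embeddings
query = index[42] + 0.01 * rng.normal(size=64)  # a query very close to item 42
neighbors = knn_retrieve(query, index, k=3)
assert neighbors[0] == 42                  # the nearest neighbor is item 42
```

In the paper's setting, swapping the brute-force scan for an approximate index (e.g. FAISS-style) is what makes sifting through billions of records at inference time practical.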
Adi Watzman, Research Scientist at PayPal, 31st July
SHAP Values for ML Explainability
How do ML models use their features to make predictions?
SHAP opens up the ML black box by providing feature attributions for every prediction of every model. Since being published in 2017 (https://arxiv.org/abs/1705.07874), SHAP has gained popularity extremely quickly thanks to its user-friendly API and theoretical guarantees. In this talk I will guide your intuition through the exciting theory SHAP is based on, and demonstrate how SHAP values can be aggregated to understand model behavior. Throughout the talk I will present real-life examples for using SHAP in the fraud detection domain at PayPal.
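As a minimal illustration of the theory behind SHAP (not the optimized `shap` library itself), exact Shapley values for a toy model can be computed by enumerating feature coalitions. The efficiency property the talk's guarantees rest on is checkable directly: the attributions sum to the difference between the prediction and a baseline prediction.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.
    f: model taking a feature vector; x: instance to explain;
    baseline: reference values for 'absent' features."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy linear model: for linear models the Shapley value of feature i
# works out to w_i * (x_i - baseline_i).
w = [2.0, -1.0, 0.5]
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))
x, base = [1.0, 3.0, 2.0], [0.0, 0.0, 0.0]
phi = shapley_values(f, x, base)           # -> [2.0, -3.0, 1.0]

# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (f(x) - f(base))) < 1e-9
```

The enumeration is exponential in the number of features, which is exactly why the SHAP paper's efficient approximations (KernelSHAP, TreeSHAP) matter in practice.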
Leigh Eitan, Data Scientist at Intel AI Group, 31st July
Mind The Path: Path-based Knowledge Graph with Neural Attention
Knowledge-graph (KG) recommender systems utilize Collaborative Filtering (CF) data together with an informative knowledge graph on the items in order to improve prediction accuracy. Path-based KG methods explore the interlinks within a knowledge graph in order to enhance the connectivity between users and items with rich complementary information. A key advantage of path-based KG methods over embedding methods stems from their ability to naturally enable intuitive explanations. In this work, we present a novel path-based algorithm which employs neural attention in order to better extract the relevant information on the KG paths. Evaluations on public KG recommendation datasets indicate a clear advantage to the proposed method when compared to state-of-the-art path-based alternatives. Furthermore, we demonstrate the ability of our approach to generate clear and intuitive explanations which provide interpretability and improve user trust.
Michal Moshkovitz, Postdoc, TAU, 3rd Aug
Explainable k-Means Clustering
Many clustering algorithms lead to cluster assignments that are hard to explain, in part because the assignments depend on the data features in complicated ways. To improve interpretability, we consider using a small decision tree to define the clustering. We show that popular top-down decision tree algorithms may lead to clusterings with arbitrarily high cost. On the positive side, we design a new algorithm whose clustering cost is an O(k^2)-approximation of the optimal k-means cost. Prior to our work, no algorithms with provable guarantees independent of dimension and input size were known. Empirically, we validate that the new algorithm produces a low-cost clustering, outperforming both standard decision tree methods and other algorithms for explainable clustering. Implementation available at https://github.com/navefr/ExKMC.
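The idea of defining a clustering with a small decision tree can be sketched with scikit-learn: run k-means, then fit a depth-limited tree to mimic its labels, yielding axis-aligned threshold rules a human can read. Note this off-the-shelf sketch is not the paper's ExKMC algorithm and carries no approximation guarantee.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Three well-separated 2-D blobs, 50 points each.
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2))
               for loc in ([0, 0], [4, 0], [2, 4])])

k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# A tree with k leaves (k-1 threshold cuts) approximating the k-means labels:
# each leaf is a cluster defined by simple feature-threshold rules.
tree = DecisionTreeClassifier(max_leaf_nodes=k, random_state=0)
tree.fit(X, km.labels_)
print(export_text(tree, feature_names=["x0", "x1"]))  # human-readable rules

# On well-separated data the explainable tree recovers k-means almost exactly.
agreement = (tree.predict(X) == km.labels_).mean()
```

The talk's contribution is precisely about when such a tree can be found with a provably bounded k-means cost, rather than relying on a greedy tree happening to fit well.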
Vivian Levin, MSc, TAU, 10th Aug
Gender Bias in AI Recruitment Systems based on Machine Learning Analysis of LinkedIn Profiles.
In process of publication.
BETTER TOGETHER -- INTRO TO DEEP LEARNING WITH PYTORCH COURSE FOR WOMEN



Starts July 11th 2021
Introduction to Deep Learning with PyTorch is a free online course and a great way to learn hands-on deep learning for anyone who knows Python. We will complete it together in order to get the most out of it and to provide maximal support:
1. Evening meetings at Tel Aviv University
2. Star TAs to solve any issue and provide advice live and on forums
3. Guest lecturers who are top women AI researchers
Registration Form
GUEST LECTURES
Abstracts & Slides

ATTENTION IN DEEP LEARNING - INTRO AND MOTIVATION, 11 JULY
Hila Chefer, M.Sc Candidate, Computer Science, TAU
Attention-based models, and specifically Transformers, have revolutionized the field of text processing and are becoming increasingly popular in computer vision, speech, and multi-modal tasks. In this talk, we will discuss the advantages of using such models, and the techniques that enable training these models. Additionally, we will explore various groundbreaking applications for attention-based models, such as GPT-3 and DALL-E.
MEMORIES AUTO-GENERATION IN A PERSONAL CLOUD PHOTOS APP, 21 JULY
Tamar Glaser, Computer Vision Algorithm Developer and Researcher, Alibaba-MIIL
As the amount of media each smartphone user captures rises steadily, automated management of this vast number of images and videos is crucial. We are developing an advanced photo app that applies many AI features to the user’s photo gallery. In this talk, Tamar describes some of these features, focusing in particular on auto-generation of memories, exploitation of the large amount of data, smart weight sharing between features, and a Transformer-attention implementation, which together enable detecting and recognizing the user’s most precious moments and separating the wheat from the chaff with high accuracy. The talk comprises both basic and advanced deep learning explanations of the algorithmic architectures, and covers the recent research paper published by Tamar and DAMO Academy, Alibaba Group, titled "PETA: Photo Album Event Recognition Using Transformers Attention".


BUILDING AI-BASED PRODUCTS, 28 JULY
Inbal Horev, Lead Data Scientist, Gong
Bad or scarce data, long research cycles and inaccurate problem definitions are common pitfalls in ML projects. They can cause long delays or even worse, make a project fail completely when it meets the users. How do we make sure that we build relevant models that deliver real value to our users? In this talk I'll share my experience and present some AI-based products that we built at Gong. I'll describe some best practices and considerations on how to successfully take models out of the lab and into the real world.

TAKING AI TO THE NEXT LEVEL WITH DATA-CENTRIC APPROACH IN COMPUTATIONAL PATHOLOGY, 8 AUGUST
Dr Chen Sagiv, Co-founder SagivTech, DeePathology, SurgeonAI
To develop AI solutions, we rely on two main components: the data and the ML model. To improve a solution, we can improve the model, improve the data, or, of course, do both.
In the last few years, the model-centric approach has thrived under the assumption that the data is well behaved: you struggled to collect huge amounts of data and then strove to optimize in the model domain. But this is now changing. In a recent perspective presented by Andrew Ng, achieving a good AI solution requires a balance between the model-centric and data-centric approaches, and having high-quality data is essential. Data quality has to be monitored and improved at every step of AI development. Meaningful data is scarce, noisy, and very expensive to obtain, especially in the healthcare domain.
In this lecture, I will discuss the model-centric vs. data-centric approaches and demonstrate practical tools to obtain high-quality data for computational pathology.

THE RISE OF ATTENTION, 15 AUG
Dr. Yoli Shavit, Principal Research Scientist, Huawei, Postdoctoral Researcher, BIU
The attention mechanism, allowing for a context- and position-aware aggregation, has become widely used in various domains, either as part of Transformers, Graph Neural Networks (GNNs), or as an ad-hoc operation. In this talk, we will embark on a technical and historical journey to understand the rise of attention. We will start by defining the attention operation and discuss the initial motivation for using it, rooted in natural language processing (NLP) problems. From attention to multi-head attention (MHA) to Transformers, we will cover the evolution of attention operations, and 'look under the hood' of the Transformer architecture. Finally, we will see how attention and Transformers are gaining success in Computer Vision by reviewing recent state-of-the-art methods for image recognition, object detection and camera localization.
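The attention operation the talk starts from, scaled dot-product attention softmax(QK^T / sqrt(d_k))V, fits in a few lines of NumPy; the toy shapes below are only for illustration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_q, n_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V, weights                      # context-aware aggregation of V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))    # 4 queries of dimension 8
K = rng.normal(size=(6, 8))    # 6 keys
V = rng.normal(size=(6, 8))    # 6 values
out, w = scaled_dot_product_attention(Q, K, V)
assert out.shape == (4, 8)                     # one aggregated vector per query
assert np.allclose(w.sum(axis=-1), 1.0)        # each query's weights sum to 1
```

Multi-head attention, covered later in the talk, simply runs several such operations in parallel on learned projections of Q, K, and V and concatenates the results.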

WHY SEARCH WHEN YOU CAN FIND?, 14 Nov
Efrat Dagan, Owner WorkAround, HR Consulting Services
(Prev. Lead Staffing at Google and Waze for over a decade)
In this talk we will cover the following topics:
- CV tips
- How to stand out (even without experience and grades)
- What to consider when selecting your first post
- The important questions to ask during the hiring process
- How to negotiate

DEEP LEARNING SEGMENTATION IN MEDICAL IMAGING, 14 Nov
Bella Specktor Fadida, PhD Candidate, HUJI, ML Consultant
3D segmentation in medical imaging is useful for a variety of applications, such as quantitative tumor evaluation, neurological brain assessment and cardiac analysis. However, manual contour delineation is a time-consuming and tedious task, and therefore not practical for clinical use. Error-prone 2D linear measurements are still the norm today for the majority of applications. Since the introduction of the U-Net architecture, the quality of automatic segmentation has increased significantly, making it feasible for clinical use in many applications. However, the robustness of deep learning methods is still an open issue. In the first part of the talk, I will go over major deep learning segmentation algorithms in medical imaging and discuss the open questions in the field. In the second part, I will review my recent publication, "A bootstrap self-training method for sequence transfer: State-of-the-art placenta segmentation in fetal MRI", which was accepted as an oral presentation at the PIPPI MICCAI workshop 2021.
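Segmentation quality in this field is commonly reported with the Dice coefficient, the overlap metric behind "state-of-the-art" claims like the one above; a minimal NumPy version (with illustrative toy masks) looks like this:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient 2|A∩B| / (|A| + |B|): 1.0 for perfect overlap,
    0.0 for disjoint masks. eps guards against empty masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Two 4x4 squares on an 8x8 grid, offset by one pixel: 16 pixels each,
# overlapping in a 3x3 region of 9 pixels -> Dice = 2*9 / (16+16) = 0.5625.
pred = np.zeros((8, 8), dtype=int);   pred[2:6, 2:6] = 1
target = np.zeros((8, 8), dtype=int); target[3:7, 3:7] = 1
score = dice_score(pred, target)      # -> 0.5625
```

The same quantity, in its differentiable "soft" form, is also widely used as a training loss for U-Net-style segmentation networks.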
MEET OUR TEACHING ASSISTANTS