BETTER TOGETHER -- INTRO TO DEEP LEARNING WITH PYTORCH COURSE FOR WOMEN


Starts July 11th 2021

Introduction to Deep Learning with PyTorch is a free online course and a great way to learn hands-on deep learning for everyone who knows Python. We will complete it together in order to gain as much as possible from it and to provide maximal support:

1. Evening meetings at Tel Aviv University
2. Star TAs to solve any issue and provide advice live and on forums
3. Guest lecturers who are top women AI researchers 

Registration Form

THE COURSE SPONSORS & DONORS

NVIDIA
Blavatnik
Yandex ML Initiative
Powered by AWS

COURSE COORDINATOR

Ortal Dayan 

W.AI Founder, Lecturer and Data Scientist


GUEST LECTURES

Abstracts & Slides


ATTENTION IN DEEP LEARNING: INTRO AND MOTIVATION, 11 JULY

Hila Chefer, M.Sc Candidate, Computer Science, TAU

Attention-based models, and specifically Transformers, have revolutionized the field of text processing and are becoming increasingly popular in computer vision, speech, and multi-modal tasks. In this talk, we will discuss the advantages of using such models, and the techniques that enable training these models. Additionally, we will explore various groundbreaking applications for attention-based models, such as GPT-3 and DALL-E.


Video Recording

Lecture Slides

MEMORIES AUTO-GENERATION IN A PERSONAL CLOUD PHOTOS APP, 21 JULY

Tamar Glaser, Computer Vision Algorithm Developer and Researcher, Alibaba-MIIL

As the amount of media each smartphone user captures keeps rising, automated management of this vast number of images and videos is crucial. We are developing an advanced photo app that applies many AI features to the user's photo gallery. In this talk, Tamar describes some of these features, focusing in particular on automatic memories generation, exploiting the sheer amount of data, smart weight sharing between features, and an implementation of Transformer attention, which together make it possible to detect and recognize the user's most precious moments and separate the wheat from the chaff with high accuracy. The talk includes both basic and advanced deep learning explanations of the algorithmic architectures and covers Tamar's recent research paper with DAMO Academy, Alibaba Group, titled "PETA: Photo Album Event Recognition Using Transformers Attention."

Video Recording 

Lecture Slides


BUILDING AI-BASED PRODUCTS, 28 JULY

Inbal Horev,  Lead Data Scientist, Gong

Bad or scarce data, long research cycles, and inaccurate problem definitions are common pitfalls in ML projects. They can cause long delays or, even worse, make a project fail completely when it meets the users. How do we make sure that we build relevant models that deliver real value to our users? In this talk I'll share my experience and present some AI-based products that we built at Gong. I'll describe best practices and considerations for successfully taking models out of the lab and into the real world.


TAKING AI TO THE NEXT LEVEL WITH A DATA-CENTRIC APPROACH IN COMPUTATIONAL PATHOLOGY, 8 AUGUST

Dr. Chen Sagiv, Co-founder, SagivTech, DeePathology, SurgeonAI

To develop AI solutions we rely on two main components: the data and the ML model. To improve a solution we can improve the model, improve the data, or, of course, do both.

In the last few years the model-centric approach has thrived, on the assumption that the data is well behaved: you struggle to collect huge amounts of data and then strive to optimize in the model domain. But this is now changing. In a recent perspective presented by Andrew Ng, achieving a good AI solution requires a balance between the model-centric and data-centric approaches, and high-quality data is essential. Data quality has to be monitored and improved at every step of AI development. Meaningful data is scarce and noisy, and also very expensive to obtain, especially in the healthcare domain.

In this lecture I will discuss the model-centric vs. data-centric approaches and demonstrate practical tools for obtaining high-quality data for computational pathology.


THE RISE OF ATTENTION, 15 AUG

Dr. Yoli Shavit, Principal Research Scientist, Huawei, Postdoctoral Researcher, BIU 


The attention mechanism, allowing for a context- and position-aware aggregation, has become widely used in various domains, either as part of Transformers, Graph Neural Networks (GNNs), or as an ad-hoc operation. In this talk, we will embark on a technical and historical journey to understand the rise of attention. We will start by defining the attention operation and discuss the initial motivation for using it, rooted in natural language processing (NLP) problems. From attention to multi-head attention (MHA) to Transformers, we will cover the evolution of attention operations, and 'look under the hood' of the Transformer architecture. Finally, we will see how attention and Transformers are gaining success in computer vision by reviewing recent state-of-the-art methods for image recognition, object detection and camera localization.
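As a warm-up for the attention operation the talk defines, here is a minimal NumPy sketch of scaled dot-product attention. The shapes, variable names, and example values are illustrative assumptions, not material from the lecture:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """q, k, v: arrays of shape (seq_len, d_k)."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # pairwise similarities, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ v                              # context-aware aggregation

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(q, q, q)         # self-attention: q = k = v
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of the value rows, which is the "aggregation" the abstract refers to; multi-head attention simply runs several such operations in parallel on learned projections.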

Lecture Slides 


WHY SEARCH WHEN YOU CAN FIND?, 14 NOV

Efrat Dagan, Owner, WorkAround HR Consulting Services

(Previously Lead Staffing at Google and Waze for over a decade)

In this talk we will cover the following topics:

  • CV tips

  • How to stand out (even without experience or grades)

  • What to consider when selecting your first post

  • The important questions to ask during the hiring process

  • How to negotiate

DEEP LEARNING SEGMENTATION IN MEDICAL IMAGING, 14 NOV

Bella Specktor Fadida, PhD Candidate, HUJI, ML Consultant

3D segmentation in medical imaging is useful for a variety of applications, such as quantitative tumor evaluation, neurological brain assessment, and cardiac analysis. However, manual contour delineation is a time-consuming and tedious task, and therefore not practical for clinical use; error-prone 2D linear measurements are still the norm today for the majority of applications. Since the introduction of the U-Net architecture, the quality of automatic segmentation has increased significantly, making it feasible for clinical use in many applications. However, the robustness of deep learning methods is still an open issue. In the first part of the talk, I will go over the major deep learning segmentation algorithms in medical imaging and discuss the open questions in the field. In the second part, I will review my recent publication, "A bootstrap self-training method for sequence transfer: State-of-the-art placenta segmentation in fetal MRI," which was accepted as an oral presentation at the PIPPI MICCAI workshop 2021.
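To give a feel for the U-Net idea the abstract mentions, below is a minimal PyTorch sketch of an encoder-decoder with one skip connection producing per-pixel class scores. The channel sizes, class count, and module names are illustrative assumptions, not the architecture from the talk or the paper:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)                      # halve spatial resolution
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # upsample back
        # Decoder sees upsampled features concatenated with encoder features
        # (the skip connection that is the hallmark of U-Net).
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, num_classes, 1)        # per-pixel class scores

    def forward(self, x):
        e = self.enc(x)                                  # full-resolution features
        m = self.mid(self.down(e))                       # coarse features
        u = self.up(m)
        d = self.dec(torch.cat([u, e], dim=1))           # fuse via skip connection
        return self.head(d)

x = torch.randn(1, 1, 64, 64)                            # one single-channel image
print(TinyUNet()(x).shape)  # torch.Size([1, 2, 64, 64])
```

The skip connection lets fine spatial detail from the encoder bypass the bottleneck, which is why U-Net-style models produce sharp segmentation boundaries.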

MEET OUR TEACHING ASSISTANTS


YUVAL YEVNIN


ORTAL ASHKENAZI