
JEPA Architectures - How neural networks learn abstract concepts about images (IJEPA)

Coding Oracle
Views: 16
Upload date: 23.02.2025 21:12
Duration: 00:09:36
Category: Technology & Internet

Description

This video explains the paper "Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture" (I-JEPA), which proposes a new approach toward more "human-like" machine learning training. The video dives into the ideas behind JEPA methods, the network architecture, results, and comparisons with existing generative methods (like Masked Autoencoders) and contrastive learning methods (like SimCLR).
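For a rough feel of the core I-JEPA idea discussed in the video — predicting the embeddings of masked target patches from a context encoding, with an EMA-updated target encoder and a loss in embedding space rather than pixel space — here is a hedged toy sketch in plain NumPy. The linear "encoders" and all shapes are illustrative stand-ins, not the paper's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative only): embedding dim, context/target patch counts.
d = 8
n_context, n_target = 12, 4

# Stand-ins for I-JEPA's three networks: a context encoder, a predictor,
# and a target encoder kept as an exponential moving average (EMA)
# of the context encoder.
W_context = rng.normal(size=(d, d))
W_predictor = rng.normal(size=(d, d)) * 0.1
W_target = W_context.copy()  # EMA copy starts equal

def encode(W, patches):
    """A 'linear encoder' stand-in: one matrix multiply per patch."""
    return patches @ W

# Fake image patches split into visible context and masked targets.
context_patches = rng.normal(size=(n_context, d))
target_patches = rng.normal(size=(n_target, d))

# Predict target-patch embeddings from the pooled context representation.
context_repr = encode(W_context, context_patches).mean(axis=0)
predictions = np.tile(context_repr @ W_predictor, (n_target, 1))

# Targets come from the EMA encoder; no gradient flows into it.
targets = encode(W_target, target_patches)

# The JEPA loss is a distance in embedding space — no pixel reconstruction,
# unlike Masked Autoencoders.
loss = np.mean((predictions - targets) ** 2)

# EMA update of the target encoder (momentum m close to 1 in practice).
m = 0.99
W_target = m * W_target + (1 - m) * W_context
```

Predicting in embedding space is what lets the model ignore pixel-level noise and focus on abstract content — the contrast with pixel-reconstruction methods like MAE that the video draws.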


Follow on Twitter: @neural_avb


To support me, consider JOINING the channel. Members get access to Code, project files, scripts, slides, animations, and illustrations for most of the videos on my channel! Learn more about perks below.
Join and support the channel - https://www.youtube.com/@avb_fj/join


Learn more about contrastive learning in my breakdown video about Multimodal ML:
https://youtu.be/-llkMpNH160

Papers referenced:
I-JEPA: https://arxiv.org/pdf/2301.08243.pdf
Yann LeCun's original human-like AI paper: https://openreview.net/pdf?id=BZ5a1r-kVsf
SimCLR: https://arxiv.org/pdf/2002.05709.pdf
Masked AE: https://arxiv.org/pdf/2111.06377.pdf
RCDM: https://arxiv.org/pdf/2112.09164.pdf

Timestamps:
0:00 - Intro
1:05 - Why I-JEPA?
5:22 - Network architecture
7:43 - Results
8:50 - Summary

#deeplearning #computervision #ai #machinelearning
