Several recent papers have explored self-supervised learning methods for vision transformers (ViT). Key approaches include:
1. Masked prediction tasks, in which the model predicts the masked patches of an input image (a minimal sketch follows this list).
2. Contrastive learning using techniques like MoCo to learn representations by contrasting augmented views of the same image.
3. Self-distillation methods like DINO that distill a teacher ViT into a student ViT using different views of the same image.
4. Hybrid approaches that combine masked prediction with self-distillation, such as iBOT.
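To make approach (1) concrete, here is a minimal, illustrative sketch of masked patch prediction for a ViT-style encoder: a random subset of patch tokens is replaced with a learnable mask token, and the model is trained to reconstruct the original patch content at those positions. All names and sizes (`TinyMaskedPatchModel`, `mask_ratio`, the toy dimensions) are placeholders, not any specific paper's implementation.

```python
import torch
import torch.nn as nn

# Illustrative masked patch prediction: replace a random subset of patch tokens
# with a learnable [MASK] token, encode, and regress the original patch
# embeddings at the masked positions only.
class TinyMaskedPatchModel(nn.Module):
    def __init__(self, dim=64, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, dim)  # predicts the original patch embedding

    def forward(self, patches):          # patches: (batch, num_patches, dim)
        b, n, d = patches.shape
        mask = torch.rand(b, n, device=patches.device) < self.mask_ratio
        corrupted = torch.where(mask.unsqueeze(-1), self.mask_token.expand(b, n, d), patches)
        pred = self.head(self.encoder(corrupted))
        # reconstruction loss computed only on the masked positions
        return ((pred - patches) ** 2)[mask].mean()

loss = TinyMaskedPatchModel()(torch.randn(2, 16, 64))  # 2 toy images, 16 patches each
loss.backward()
```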
This document summarizes recent research on applying self-attention mechanisms from Transformers to domains other than language, such as computer vision. It discusses models that use self-attention for images, including ViT, DeiT, and T2T, which apply Transformers to images divided into patches. It also covers more general attention modules, such as the Perceiver, which aims to be domain-agnostic. Finally, it discusses work on transferring pretrained language Transformers to other modalities with frozen weights, showing they can function as universal computation engines.
This document summarizes recent developments in action recognition using deep learning techniques. It discusses early approaches using improved dense trajectories and two-stream convolutional neural networks. It then focuses on advances using 3D convolutional networks, enabled by large video datasets like Kinetics. State-of-the-art results are achieved using inflated 3D convolutional networks and temporal aggregation methods like temporal linear encoding. The document provides an overview of popular datasets and challenges and concludes with tips on training models at scale.
This document summarizes a research paper on scaling laws for neural language models. Some key findings of the paper include:
- Language model performance depends strongly on model scale and weakly on model shape. With enough compute and data, performance scales as a power law of parameters, compute, and data (see the illustrative formulas after this list).
- Overfitting is universal, with penalties depending on the ratio of parameters to data.
- Large models are more sample-efficient, reaching the same performance levels with fewer optimization steps and fewer data points.
- The paper motivated subsequent work by OpenAI on applying scaling laws to other domains like computer vision and developing increasingly large language models like GPT-3.
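For concreteness, the power-law dependence mentioned above is commonly written in forms like the following, where the constants N_c, D_c, C_c and exponents α_N, α_D, α_C are fit empirically; the joint parameter/data law is one way the overfitting penalty via the parameters-to-data ratio is expressed.

```latex
% Illustrative single-factor power laws: loss vs. parameters N, data D, compute C
\[
  L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
  L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
  L(C) = \left(\frac{C_c}{C}\right)^{\alpha_C}
\]
% Joint parameter/data law capturing the overfitting penalty via the N-to-D ratio
\[
  L(N, D) = \left[\left(\frac{N_c}{N}\right)^{\alpha_N / \alpha_D} + \frac{D_c}{D}\right]^{\alpha_D}
\]
```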
[DL Reading Group] An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
1.
DEEP LEARNING JP
[DL Papers]
http://deeplearning.jp/
"An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
University of Tsukuba, M1, Yuki Sato
2. Bibliographic Information
An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
- Authors: Rinon Gal¹,², Yuval Alaluf¹, Yuval Atzmon², Or Patashnik¹, Amit H. Bermano¹, Gal Chechik², Daniel Cohen-Or¹ (¹Tel-Aviv University, ²NVIDIA)
- Venue: arXiv (submitted 2022/08/02)
- Project page: https://textual-inversion.github.io/
- Reasons for selection:
  - In the currently active text-to-image field, the method generates images that reflect the user's intent rather than just diverse outputs, so it is likely to be in high demand.
  - The method is simple and appears to have a wide range of applications.
* Unless otherwise noted, figures and tables are taken from the paper / project page.
17. Experimental Results: Evaluation Metrics for the Learned Embedding
- Reconstruction quality is computed as the average pairwise cosine similarity between the CLIP features of 64 images generated from the text "A photo of S*" with the learned embedding and the CLIP features of the images in the dataset used to train that embedding. (Image Similarity)
- Using prompts of varying difficulty, such as background or style changes (e.g. "A photo of S* on the moon"), 64 images are generated per prompt with 50 DDIM steps; the average CLIP feature of the generated images is computed, and its cosine similarity with the CLIP feature of the prompt with "S*" removed (e.g. "A photo of on the moon") is calculated. (Text Similarity)
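As a rough sketch of how these two metrics can be computed, the snippet below assumes precomputed CLIP features (the encoder itself is not shown); the function names and tensor shapes are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def image_similarity(gen_feats: torch.Tensor, train_feats: torch.Tensor) -> float:
    """Mean pairwise cosine similarity between generated and training image features.
    gen_feats: (64, dim) CLIP features of images generated from "A photo of S*".
    train_feats: (num_train, dim) CLIP features of the concept's training images."""
    g = F.normalize(gen_feats, dim=-1)
    t = F.normalize(train_feats, dim=-1)
    return (g @ t.T).mean().item()  # average over all (generated, training) pairs

def text_similarity(gen_feats: torch.Tensor, text_feat: torch.Tensor) -> float:
    """Cosine similarity between the mean generated-image feature and the CLIP
    feature of the prompt with "S*" removed (e.g. "A photo of on the moon")."""
    mean_img = F.normalize(gen_feats.mean(dim=0), dim=-1)
    txt = F.normalize(text_feat, dim=-1)
    return torch.dot(mean_img, txt).item()

# Toy usage with random "features", just to show the expected call shapes.
gen, train, txt = torch.randn(64, 512), torch.randn(5, 512), torch.randn(512)
print(image_similarity(gen, train), text_similarity(gen, txt))
```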