vae gan nlp

Post on 05-Apr-2017


TRANSCRIPT

VAE+GAN Controllable Text Generation (2017/03/24)

mabonki0725

Contents and Self-introduction

• Contents
1. Variational Bayes explained
2. The VAE (Variational Auto-Encoder) paper explained
3. DCGAN (Deep Convolutional Generative Adversarial Network) explained
4. Conditional GAN explained
5. The Controllable Text Generation paper using VAE+GAN explained

• Self-introduction
– Member of the machine learning seminar at the Institute of Statistical Mathematics
– Graduate student, Creative Technology course, Advanced Institute of Industrial Technology (Tokyo)
– Builds AI models at a financial institution

1. Variational Bayes (from PRML)

(Figure from PRML: log p(x|θ) splits into the lower bound L(q) and a KL term; maximizing L(q) drives the KL term toward zero. z is the latent variable.)

In general, θ is solved by MCMC.

Variational Bayes: Proof

log p(x|θ) = log p(x, z|θ) - log p(z|x, θ)

Taking the expectation of both sides under an arbitrary q(z) (the left side does not depend on z):

log p(x|θ) = ∫ q(z) log [ p(x, z|θ) / q(z) ] dz - ∫ q(z) log [ p(z|x, θ) / q(z) ] dz
           = L(q) + KL( q(z) || p(z|x, θ) )

Since KL ≥ 0, L(q) is a lower bound on log p(x|θ); maximizing L(q) over q tightens the bound, with equality when q(z) = p(z|x, θ).
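The decomposition above can be checked numerically on a toy discrete model (the numbers below are illustrative, not from the slides):

```python
import numpy as np

# Toy model: latent z takes three values; p_joint holds p(x, z) for one fixed
# observed x. We verify log p(x) = L(q) + KL(q || p(z|x)) for an arbitrary q.
p_joint = np.array([0.10, 0.25, 0.15])   # p(x, z) (hypothetical numbers)
p_x = p_joint.sum()                      # p(x), marginalizing z
p_post = p_joint / p_x                   # exact posterior p(z | x)

q = np.array([0.5, 0.3, 0.2])            # any approximate posterior q(z)

lower_bound = np.sum(q * np.log(p_joint / q))   # L(q) = E_q[log p(x,z)/q(z)]
kl = np.sum(q * np.log(q / p_post))             # KL(q || p(z|x))

assert np.isclose(np.log(p_x), lower_bound + kl)   # the identity holds exactly
```

Because KL ≥ 0, the bound is tight only when q equals the true posterior.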

2.VAE(Variational AutoEncoder)

The VAE was proposed by Kingma.
• A method that approximates the latent variable z generating the data x with a probability density qΦ(z|x).
• qΦ(z|x) is obtained by variational approximation.
• The variational approximation is implemented with a neural network.

https://arxiv.org/abs/1312.6114

Auto-Encoding Variational Bayes
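A central device of the Kingma paper is the reparameterization trick, which lets the encoder sample z while keeping gradients well-defined; a minimal NumPy sketch (the values of mu and log_var are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).
    The randomness moves into eps, so gradients can flow through mu, log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

mu = np.zeros(4)        # encoder outputs (hypothetical)
log_var = np.zeros(4)   # log sigma^2 = 0, i.e. sigma = 1
z = sample_latent(mu, log_var)
assert z.shape == (4,)
```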

Proof of (2) → (3) in the paper (cf. PRML (10.3))

MLP for the lower bound (PRML (10.1))

Loss Function of the VAE

L(θ, φ; x) = -KL( qφ(z|x) || pθ(z) ) + E_{qφ(z|x)}[ log pθ(x|z) ]

The Encoder–Decoder network updates θ and φ to minimize the error: the lower bound is maximized and the KL term is driven toward zero.
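For a Gaussian encoder and a standard-normal prior, the KL term of the loss has a closed form; a small generic sketch (not the slide's Chainer code):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), the analytic term of the VAE loss:
    -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

# When q equals the prior (mu = 0, sigma = 1) the KL term is exactly zero.
assert gaussian_kl(np.zeros(3), np.zeros(3)) == 0.0
```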

VAE by Neuro Model

VAE Program by Chainer (1)

VAE Program by Chainer (2)

Result of VAE on MNIST

3. DCGAN Program by Chainer (1)

Loss function and training: Generator and Discriminator.

Objective of GAN: softplus = log(1 + exp(D))

DCGAN Program by Chainer (2)

Generator / Discriminator

Result of DCGAN
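The softplus objective above can be sketched as follows (a generic non-saturating GAN loss in NumPy, not the slide's Chainer program):

```python
import numpy as np

def softplus(x):
    """softplus(x) = log(1 + exp(x)), as written on the slide."""
    return np.log1p(np.exp(x))

def d_loss(d_real, d_fake):
    # Discriminator wants D(real) large and D(fake) small.
    return np.mean(softplus(-d_real)) + np.mean(softplus(d_fake))

def g_loss(d_fake):
    # Non-saturating generator loss: wants D(fake) large.
    return np.mean(softplus(-d_fake))

# A confident, correct discriminator incurs a lower loss than an uncertain one.
assert d_loss(np.array([5.0]), np.array([-5.0])) < d_loss(np.array([0.0]), np.array([0.0]))
```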

4. Conditional (Controllable) GAN

Generates fictitious images with a desired pattern.

Condition c, Image x
Encode c, Encode x
Generator
Discriminator (inputs: Condition, Image)

Conditional GAN
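One common way to feed the condition c to both networks is to concatenate a one-hot encoding of c with the input; a minimal sketch (the 28×28 image size and 10 classes are assumptions, matching MNIST):

```python
import numpy as np

def condition_input(x, c, num_classes):
    """Concatenate a flattened image with a one-hot condition vector,
    the simplest way to hand the label c to generator and discriminator."""
    one_hot = np.zeros(num_classes)
    one_hot[c] = 1.0
    return np.concatenate([x.ravel(), one_hot])

x = np.zeros((28, 28))                 # dummy MNIST-sized image
v = condition_input(x, c=3, num_classes=10)
assert v.shape == (28 * 28 + 10,)
```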

5.Controllable Text Generation

style Labelsentiment(pos/nega)

tense

VAEunsupervised

model

GANsemi-supervised

model

Annealing LSTM for Learning

Softmax annealing in the LSTM mitigates the peaked distribution of the discrete model (sentences of up to 16 words, 16-step LSTM).
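Temperature annealing of the softmax is what keeps the discrete word distribution differentiable yet increasingly peaked; a small sketch (illustrative logits and temperatures):

```python
import numpy as np

def softmax(logits, tau):
    """Temperature-annealed softmax: tau -> 0 approaches one-hot (argmax),
    large tau gives a flatter distribution over the vocabulary."""
    z = logits / tau
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
sharp = softmax(logits, tau=0.1)
flat = softmax(logits, tau=10.0)
assert sharp[0] > flat[0]    # lower temperature concentrates mass on the argmax
```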

Generator parameters from the loss function

Discriminator parameters from the loss function

Real-example loss function

Generated-example loss function

Integrated loss function

Minimum-entropy regularizer against noise

Discriminator

Training Data and accuracy

use data set content sizeCorpus IMDB movie reviews of max

16 words1.4M

Sentiment

SST-full labeled sentence with annotations

2737

SST-small labeled sentence 250

Lexicon sentiment labeled word 2700

IMDB For train/dev/test 16K

Tense TimeBank tense labeled sentences 5250

Training Data Classification accuracy

Algorithm for the Parameters of VAE, Generator, and Discriminator

Wake procedure / sleep procedure: VAE, then generator–discriminator.

Input unlabeled sentence: z ~ VAE, c ~ p(c)
Input labeled sentence: c ~ Discriminator(X), X_t ~ LSTM(z, c, X_{t-1})
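The alternating schedule above can be sketched as a single training epoch; the *_step callables are hypothetical stand-ins for the real VAE, discriminator, and generator updates:

```python
# Hypothetical sketch of the alternating schedule on the algorithm slide:
# the VAE trains on unlabeled sentences (wake), the discriminator on labeled
# ones, and the generator learns from its own samples z ~ q, c ~ p(c) (sleep).
def train_epoch(unlabeled, labeled, vae_step, disc_step, gen_step):
    losses = []
    for x in unlabeled:                 # wake: reconstruct real sentences
        losses.append(vae_step(x))
    for x, c in labeled:                # supervised discriminator update
        losses.append(disc_step(x, c))
    for _ in range(len(unlabeled)):     # sleep: learn from generated samples
        losses.append(gen_step())
    return sum(losses) / len(losses)    # mean loss over all updates
```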

Experiments

(Table of generated samples: fixed-style vs. free-style sentences, shown as neg/pos pairs.)

6. Summary

• The distribution p(z|x) of the cause z behind an observation x used to be obtained with Bayesian statistics; deep learning now makes it obtainable with VAE and GAN.
• Deep learning models have a paired structure, so implementation is simple:
– VAE: Encoder - Decoder
– GAN: Generator - Discriminator
• Generating the desired (controllable) images and text has become possible:
– Images: Conditional GAN
– Text: Controllable Text Generation
