This article was detected as deleted on March 17, 2017.

GitHub | A Wasserstein GAN implemented in TensorFlow (with source code)

2017-03-11 · Global AI (全球人工智能)

Source: GitHub   Author: Oliver Hennigh


Go Home Discriminator, You're Drunk / Fine Tuning with Discriminator Networks


In this repository we look at fine-tuning images generated by GANs using the discriminator network. The idea is to adjust a generated image so that the discriminator is more likely to predict it is real. We do this in two ways. The first is to generate an image with the generator network and then fine-tune its pixels against the discriminator loss. The second is to fine-tune the generator's input vector, Z. As the title suggests, the discriminator can be fooled very easily (in some cases): when fine-tuning pixels, extremely small changes suffice. Surprisingly, fine-tuning the Z vector can actually produce more realistic images. This result is difficult to quantify, so we include pictures to support the claim. When training on the CelebA dataset, images with patchy hair or glasses tend to either have the patches removed or the missing regions filled in.
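Both procedures amount to gradient ascent on the discriminator's "real" score, taken with respect to either the pixels or the latent code. The sketch below uses a toy linear generator and a toy logistic discriminator as stand-ins (they are illustrative assumptions, not the networks from this repository):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Toy stand-ins: G(z) = A @ z, D(x) = sigmoid(w @ x).
A = rng.normal(size=(16, 4))   # "generator" weights
w = rng.normal(size=16)        # "discriminator" weights

def D(x):
    return sigmoid(w @ x)

def pixel_step(x, lr=0.01):
    # Method 1: gradient ascent on log D(x) with respect to the pixels x.
    grad_x = (1.0 - D(x)) * w          # d/dx log sigmoid(w @ x)
    return x + lr * grad_x

def z_step(z, lr=0.01):
    # Method 2: same objective, but the gradient flows back through G.
    x = A @ z
    grad_x = (1.0 - D(x)) * w
    return z + lr * (A.T @ grad_x)     # chain rule through G(z) = A @ z

z0 = rng.normal(size=4)
x0 = A @ z0
print(D(x0) < D(pixel_step(x0)))       # True: pixel tuning raises the score
print(D(A @ z0) < D(A @ z_step(z0)))   # True: so does tuning z
```

In the real repository the gradients would come from TensorFlow's autodiff rather than the hand-derived expressions above; only the choice of which variable to update changes between the two methods.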


Similar Work


The idea of tuning images stems from work on style transfer and on fooling neural networks. The predominant papers in these areas are Image Style Transfer Using Convolutional Neural Networks and Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. Our GAN implementation is taken from here; see that page for a list of dependencies. We test our method on Wasserstein GANs because of their recent success.


Fine Tuning Pixels


As the photos and error plot show, it is extremely easy to fool the discriminator with very slight changes in pixels. This is consistent with the results in the fooling-neural-networks literature. I had hoped this would not be the case: training GANs is dramatically different from training classification networks, and it seemed reasonable to hope that the discriminator would be more resistant to adversarial examples. Alas, this does not appear to be the case.


Fine Tuning Z vector




Fine-tuning the Z vector produces some interesting results. It degrades the image in a few cases; however, for images with very little structure there is a dramatic improvement: images with almost no clear face begin to take on eyes and textured skin. Getting these results required playing with the learning rate and the number of iterations. Too few steps and there is almost no change in the image; too many steps and the result drifts far from the original.
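The step-count tradeoff can be seen directly by tracking both the discriminator's score and the image's distance from the original as z is updated. The linear generator and logistic discriminator below are toy stand-ins (assumptions for illustration, not the repository's models):

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

A = rng.normal(size=(32, 8))    # toy generator G(z) = A @ z
w = rng.normal(size=32) * 0.1   # toy discriminator D(x) = sigmoid(w @ x)

def tune_z(z0, steps, lr=0.05):
    z = z0.copy()
    for _ in range(steps):
        x = A @ z
        z += lr * (1.0 - sigmoid(w @ x)) * (A.T @ w)  # ascend log D(G(z))
    return z

z0 = rng.normal(size=8)
x0 = A @ z0
for steps in (5, 50, 500):
    z = tune_z(z0, steps)
    score = sigmoid(w @ (A @ z))         # discriminator's "realness" score
    drift = np.linalg.norm(A @ z - x0)   # how far the image has moved
    print(steps, round(float(score), 3), round(float(drift), 2))
```

The score climbs toward 1 while the drift keeps growing with every step, which is exactly the tradeoff described above: stop too early and nothing changes, stop too late and the image is no longer the one you started with.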


More images!


Before / After / Difference image comparisons (images not reproduced here).

Conclusion


  • Poor images sampled from a GAN can be fine-tuned to become more realistic.

  • Discriminators are just as susceptible to adversarial examples as classification networks.

  • My discriminator has had too much to drink and therefore MUST GO HOME.


Paper: https://arxiv.org/abs/1701.07875

GitHub repo: https://github.com/loliverhennigh/WassersteinGAN.tensorflow

