
[Paper Recommendations] Eight Recent Papers on Generative Adversarial Networks: BRE, Image Synthesis, Multimodal Image Generation, Unpaired Multi-Domain Generation, Attention, Adversarial Feature Augmentation, Deep Adversarial Training

Author: WZEARW
Published 2018-06-05 09:13:35
From the 专知 (Zhuanzhi) column

[Overview] The Zhuanzhi content team has compiled eight recent papers on Generative Adversarial Networks (GANs) and introduces them below. Enjoy!

1. Improving GAN Training via Binarized Representation Entropy (BRE) Regularization



Authors: Yanshuai Cao, Gavin Weiguang Ding, Kry Yik-Chau Lui, Ruitong Huang

Published as a conference paper at the 6th International Conference on Learning Representations (ICLR 2018)

Abstract: We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminator D spreads out its model capacity in the right way, the learning signals given to the generator G are more informative and diverse. These in turn help G to explore better and discover the real data manifold while avoiding large unstable jumps due to the erroneous extrapolation made by D. Our regularizer guides the rectifier discriminator D to better allocate its model capacity, by encouraging the binary activation patterns on selected internal layers of D to have a high joint entropy. Experimental results on both synthetic data and real datasets demonstrate improvements in stability and convergence speed of the GAN training, as well as higher sample quality. The approach also leads to higher classification accuracies in semi-supervised learning.
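The core idea, encouraging diverse binary activation patterns on a rectifier layer of D, can be sketched as a simple penalty. The paper's exact BRE formulation differs; the version below is only an illustrative approximation (a marginal-balance term so each unit is on about half the time, plus a pairwise decorrelation term over sign patterns), and the function name is ours.

```python
import numpy as np

def bre_penalty(h):
    """Illustrative BRE-style regularizer (not the paper's exact form).

    h: (batch, units) pre-activations of one rectifier layer of D.
    Penalizes (i) units that are almost always on or off and
    (ii) samples whose sign patterns are highly correlated.
    """
    s = np.sign(h)                              # binary pattern in {-1, 0, +1}
    marginal = np.mean(np.abs(s.mean(axis=0)))  # 0 when each unit fires ~50% of the time
    sim = (s @ s.T) / h.shape[1]                # pairwise pattern similarity, in [-1, 1]
    iu = np.triu_indices(h.shape[0], k=1)       # distinct sample pairs only
    pairwise = np.mean(np.abs(sim[iu]))         # 0 when patterns are decorrelated
    return marginal + pairwise                  # add to the discriminator loss, scaled
```

With all activations identical the penalty saturates at 2.0; diverse random activations drive it toward 0, which is the behavior the regularizer rewards.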

Source: arXiv, May 9, 2018

Link:

http://www.zhuanzhi.ai/document/1daa64655b1a4199334be9631bb7dc98

2. MC-GAN: Multi-conditional Generative Adversarial Network for Image Synthesis



Authors: Hyojin Park, YoungJoon Yoo, Nojun Kwak

Affiliation: Seoul National University

Abstract: In this paper, we introduce a new method for generating an object image from text attributes on a desired location, when the base image is given. One step further than the existing studies on text-to-image generation, which mainly focus on the object's appearance, the proposed method aims to generate an object image preserving the given background information, which is the first attempt in this field. To tackle the problem, we propose a multi-conditional GAN (MC-GAN) which controls both the object and background information jointly. As a core component of MC-GAN, we propose a synthesis block which disentangles the object and background information in the training stage. This block enables MC-GAN to generate a realistic object image with the desired background by controlling the amount of the background information from the given base image using the foreground information from the text attributes. From the experiments with Caltech-200 bird and Oxford-102 flower datasets, we show that our model is able to generate photo-realistic images with a resolution of 128 x 128. The source code of MC-GAN will be available soon.
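The synthesis block's role, gating how much background content from the base image survives against foreground content driven by the text attributes, can be illustrated with a mask-based blend. In the actual model the gate is learned; here the mask is passed in directly and the function name is invented for illustration.

```python
import numpy as np

def synthesis_blend(fg, bg, mask):
    """Toy stand-in for an MC-GAN-style synthesis block.

    fg:   foreground content (e.g. driven by text attributes)
    bg:   background content from the given base image
    mask: gate in [0, 1]; 1 keeps foreground, 0 keeps background.
    In MC-GAN the gate is produced by the network during training.
    """
    return mask * fg + (1.0 - mask) * bg
```

A pixel-wise mask of the same shape as the images gives per-region control, which is what lets the object land on the desired location while the background is preserved.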

Source: arXiv, May 8, 2018

Link:

http://www.zhuanzhi.ai/document/5c97977b9f8ad4b237d8b6f9ba75497f

3. MEGAN: Mixture of Experts of Generative Adversarial Networks for Multimodal Image Generation



Authors: David Keetae Park, Seungjoo Yoo, Hyojin Bahng, Jaegul Choo, Noseong Park

27th International Joint Conference on Artificial Intelligence (IJCAI 2018)

Affiliations: Korea University, University of North Carolina at Charlotte

Abstract: Recently, generative adversarial networks (GANs) have shown promising performance in generating realistic images. However, they often struggle in learning complex underlying modalities in a given dataset, resulting in poor-quality generated images. To mitigate this problem, we present a novel approach called mixture of experts GAN (MEGAN), an ensemble approach of multiple generator networks. Each generator network in MEGAN specializes in generating images with a particular subset of modalities, e.g., an image class. Instead of incorporating a separate step of handcrafted clustering of multiple modalities, our proposed model is trained through an end-to-end learning of multiple generators via gating networks, which are responsible for choosing the appropriate generator network for a given condition. We adopt the categorical reparameterization trick for a categorical decision to be made in selecting a generator while maintaining the flow of the gradients. We demonstrate that individual generators learn different and salient subparts of the data and achieve a multiscale structural similarity (MS-SSIM) score of 0.2470 for CelebA and a competitive unsupervised inception score of 8.33 in CIFAR-10.
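The categorical reparameterization trick the authors adopt is commonly implemented as Gumbel-softmax sampling: a relaxed, differentiable one-hot vector over the generators, whose argmax selects the expert while gradients flow through the soft weights. A minimal sketch, where the temperature default and all names are our assumptions rather than the paper's:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Sample a relaxed one-hot vector over categories (e.g. generators).

    Adds i.i.d. Gumbel(0, 1) noise to the gating logits, then applies a
    temperature-scaled softmax. Low tau -> nearly one-hot; argmax of the
    result picks the expert, while the soft vector keeps gradients flowing.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-20, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))                        # Gumbel(0, 1) noise
    y = (np.asarray(logits, dtype=float) + g) / tau
    y = y - y.max()                                # numerical stability
    p = np.exp(y)
    return p / p.sum()
```

In a MEGAN-like setup, `int(np.argmax(gumbel_softmax(gating_logits, tau=0.1)))` would index the chosen generator for a given condition.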

Source: arXiv, May 8, 2018

Link:

http://www.zhuanzhi.ai/document/4e0326f4e3c2d60536da6874c9c8fc63

4. Unpaired Multi-Domain Image Generation via Regularized Conditional GANs



Authors: Xudong Mao, Qing Li

Affiliation: City University of Hong Kong

Abstract: In this paper, we study the problem of multi-domain image generation, the goal of which is to generate pairs of corresponding images from different domains. With the recent development in generative models, image generation has achieved great progress and has been applied to various computer vision tasks. However, multi-domain image generation may not achieve the desired performance due to the difficulty of learning the correspondence of different domain images, especially when the information of paired samples is not given. To tackle this problem, we propose Regularized Conditional GAN (RegCGAN) which is capable of learning to generate corresponding images in the absence of paired training data. RegCGAN is based on the conditional GAN, and we introduce two regularizers to guide the model to learn the corresponding semantics of different domains. We evaluate the proposed model on several tasks for which paired training data is not given, including the generation of edges and photos, the generation of faces with different attributes, etc. The experimental results show that our model can successfully generate corresponding images for all these tasks, while outperforming the baseline methods. We also introduce an approach of applying RegCGAN to unsupervised domain adaptation.

Source: arXiv, May 7, 2018

Link:

http://www.zhuanzhi.ai/document/2782f0094d1961fad34e81a96a114c56

5. Attentive Generative Adversarial Network for Raindrop Removal from a Single Image



Authors: Rui Qian, Robby T. Tan, Wenhan Yang, Jiajun Su, Jiaying Liu

CVPR 2018 Spotlight

Affiliations: Peking University, National University of Singapore

Abstract: Raindrops adhered to a glass window or camera lens can severely hamper the visibility of a background scene and degrade an image considerably. In this paper, we address the problem by visually removing raindrops, and thus transforming a raindrop-degraded image into a clean one. The problem is intractable, since first the regions occluded by raindrops are not given. Second, the information about the background scene of the occluded regions is completely lost for the most part. To resolve the problem, we apply an attentive generative network using adversarial training. Our main idea is to inject visual attention into both the generative and discriminative networks. During the training, our visual attention learns about raindrop regions and their surroundings. Hence, by injecting this information, the generative network will pay more attention to the raindrop regions and the surrounding structures, and the discriminative network will be able to assess the local consistency of the restored regions. This injection of visual attention into both generative and discriminative networks is the main contribution of this paper. Our experiments show the effectiveness of our approach, which outperforms the state-of-the-art methods quantitatively and qualitatively.
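One common way to make a generator "pay more attention" to particular regions is to weight a reconstruction loss by the learned attention map, so errors inside raindrop regions cost more. A toy version, assuming the attention map has already been computed elsewhere (the function name and the `1 + attn` weighting are our illustration, not the paper's exact loss):

```python
import numpy as np

def attentive_l1(restored, clean, attn):
    """Attention-weighted L1 reconstruction loss (illustrative).

    restored, clean: images as arrays of the same shape.
    attn: attention map in [0, 1], high on raindrop regions.
    Weight 1 + attn keeps a baseline penalty everywhere while
    doubling the cost inside fully attended regions.
    """
    return float(np.mean((1.0 + attn) * np.abs(restored - clean)))
```

With a zero attention map this reduces to plain L1, so the attention term only ever adds emphasis, never removes supervision from clean regions.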

Source: arXiv, May 6, 2018

Link:

http://www.zhuanzhi.ai/document/911b81fc00817b37b029c73318675209

6. Adversarial Feature Augmentation for Unsupervised Domain Adaptation



Authors: Riccardo Volpi, Pietro Morerio, Silvio Savarese, Vittorio Murino

Accepted to CVPR 2018

Affiliations: Università di Verona, Stanford University

Abstract: Recent works showed that Generative Adversarial Networks (GANs) can be successfully applied in unsupervised domain adaptation, where, given a labeled source dataset and an unlabeled target dataset, the goal is to train powerful classifiers for the target samples. In particular, it was shown that a GAN objective function can be used to learn target features indistinguishable from the source ones. In this work, we extend this framework by (i) forcing the learned feature extractor to be domain-invariant, and (ii) training it through data augmentation in the feature space, namely performing feature augmentation. While data augmentation in the image space is a well-established technique in deep learning, feature augmentation has not yet received the same level of attention. We accomplish it by means of a feature generator trained by playing the GAN minimax game against source features. Results show that both enforcing domain-invariance and performing feature augmentation lead to superior or comparable performance to state-of-the-art results in several unsupervised domain adaptation benchmarks.
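Feature augmentation means creating extra training points in feature space rather than image space. In the paper this is done with a GAN feature generator trained against source features; the sketch below substitutes a Gaussian fit to the real features as a stand-in generator, purely to show the shape of the pipeline (all names are ours):

```python
import numpy as np

def feature_augment(features, n_aug, rng=None):
    """Toy feature-space augmentation.

    features: (n, d) array of real feature vectors.
    Samples n_aug synthetic vectors from a per-dimension Gaussian
    fit to the real features (a stand-in for a trained GAN feature
    generator) and appends them to the real set.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    mu = features.mean(axis=0)
    sd = features.std(axis=0) + 1e-8          # avoid zero-width dimensions
    synth = rng.normal(mu, sd, size=(n_aug, features.shape[1]))
    return np.vstack([features, synth])
```

The augmented set then feeds the downstream classifier exactly as image-space augmentation would, which is the point of the technique: more training signal without touching pixels.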

Source: arXiv, May 4, 2018

Link:

http://www.zhuanzhi.ai/document/74ec78227fe38592dd361ff04f049d4e

7. Boosting Noise Robustness of Acoustic Model via Deep Adversarial Training



Authors: Bin Liu, Shuai Nie, Yaping Zhang, Dengfeng Ke, Shan Liang, Wenju Liu

Affiliations: University of Chinese Academy of Sciences, Beijing Forestry University

Abstract: In realistic environments, speech is usually corrupted by various noise and reverberation, which dramatically degrades the performance of automatic speech recognition (ASR) systems. To alleviate this issue, the most common approach is to use a well-designed speech enhancement method as the front-end of ASR. However, more complex pipelines, more computation and even higher hardware costs (microphone arrays) are additionally incurred by this kind of method. In addition, speech enhancement can introduce speech distortions and mismatches to training. In this paper, we propose an adversarial training method to directly boost the noise robustness of the acoustic model. Specifically, a jointly compositional scheme of a generative adversarial net (GAN) and a neural network-based acoustic model (AM) is used in the training phase. The GAN is used to generate clean feature representations from noisy features under the guidance of a discriminator that tries to distinguish between the true clean signals and generated signals. The joint optimization of generator, discriminator and AM combines the strengths of both GAN and AM for speech recognition. Systematic experiments on CHiME-4 show that the proposed method significantly improves the noise robustness of the AM and achieves average relative error rate reductions of 23.38% and 11.54% on the development and test sets, respectively.

Source: arXiv, May 2, 2018

Link:

http://www.zhuanzhi.ai/document/9dd23e2b343ed994cf5e6143700df612

8. Controllable Generative Adversarial Network



Authors: Minhyeok Lee, Junhee Seok

Affiliation: Korea University

Abstract: The recently introduced generative adversarial network (GAN) has shown numerous promising results in generating realistic samples. The essential task of GAN is to control the features of samples generated from a random distribution. While the current GAN structures, such as conditional GAN, successfully generate samples with desired major features, they often fail to produce detailed features that bring specific differences among samples. To overcome this limitation, here we propose a controllable GAN (ControlGAN) structure. By separating a feature classifier from the discriminator, the generator of ControlGAN is designed to learn to generate synthetic samples with specific detailed features. Evaluated on multiple image datasets, ControlGAN shows the ability to generate improved samples with well-controlled features. Furthermore, we demonstrate that ControlGAN can generate intermediate features and opposite features for interpolated and extrapolated input labels that are not used in the training process. This implies that ControlGAN can significantly contribute to the variety of generated samples.
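Separating the feature classifier from the discriminator gives the generator an objective with two terms: fool the discriminator, and make the independent classifier assign the requested labels to the generated samples. A minimal numpy sketch under our own naming and weighting assumptions (the paper's exact losses and balancing differ):

```python
import numpy as np

def controlgan_generator_loss(d_out, cls_logits, labels, gamma=1.0):
    """Illustrative ControlGAN-style generator objective.

    d_out:      discriminator outputs in (0, 1] for generated samples.
    cls_logits: (n, k) logits from the separate feature classifier.
    labels:     (n,) requested class indices for the generated samples.
    gamma:      hypothetical weight balancing the two terms.
    """
    adv = -np.mean(np.log(d_out + 1e-8))       # fool the discriminator
    z = cls_logits - cls_logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -np.mean(logp[np.arange(len(labels)), labels])  # match requested labels
    return adv + gamma * ce
```

Because the classifier is trained separately from the discriminator, the label term pushes on detailed, class-specific features without collapsing into the real/fake signal, which is the design point of ControlGAN.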

Source: arXiv, May 2, 2018

Link:

http://www.zhuanzhi.ai/document/a8cf307e26ee5cbbc1af9a940b7bcc7f

-END-

Originally published 2018-05-14 via the 专知 (Zhuanzhi) WeChat public account and shared through the Tencent Cloud self-media sync program. For copyright concerns, contact cloudcommunity@tencent.com.
