Source Themes

Image Generation Via Minimizing Frechet Distance in Discriminator Feature Space

We build on the intuition that it is better to train the GAN generator by minimizing the distributional distance between real and generated images in a low-dimensional feature space representing the image manifold than in the original pixel space, and we use the feature space of the GAN discriminator for this representation. For the distributional distance, we employ one of two choices: the Frechet distance or direct optimal transport (OT); these lead to the two new GAN methods proposed in the paper.
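
The Frechet distance between two Gaussians fitted to feature sets has a well-known closed form (the same one used by FID). A minimal sketch, assuming the discriminator features are already extracted into two arrays; the function name `frechet_distance` is ours, not the paper's:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between Gaussians fitted to two feature sets:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^{1/2})."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):       # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```

In the paper's setting, the two feature sets would come from passing real and generated images through the discriminator's feature layers.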

Regression via Implicit Models and Optimal Transport Cost Minimization

This paper addresses the classic problem of regression, which involves the inductive learning of a map, $y=f(x,z)$, $z$ denoting noise, $f:\mathbb{R}^n\times \mathbb{R}^k \rightarrow \mathbb{R}^m$. We take another step towards regression models that implicitly model the noise, and propose a solution which directly optimizes the optimal transport cost between the true probability distribution $p(y|x)$ and the estimated distribution $\hat{p}(y|x)$.
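
For scalar outputs, the optimal transport cost between two empirical distributions under a squared ground cost reduces to matching sorted samples. A toy illustration of the kind of objective being minimized, for the one-dimensional case only; the function name is ours:

```python
import numpy as np

def ot_cost_1d(y_true, y_pred):
    """Squared-cost optimal transport between two 1-D empirical
    distributions of equal size: the optimal coupling in 1-D
    matches the i-th smallest sample to the i-th smallest sample."""
    a = np.sort(np.asarray(y_true, dtype=float))
    b = np.sort(np.asarray(y_pred, dtype=float))
    return float(np.mean((a - b) ** 2))
```

In the paper, samples from $\hat{p}(y|x)$ are produced by the implicit model $f(x,z)$ with $z$ drawn from a noise distribution, and the transport cost to samples of $p(y|x)$ is driven down during training.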

CAWA: An Attention-Network for Credit Attribution

We present Credit Attribution With Attention (CAWA), a neural-network-based approach that, instead of using sentence-level labeled data, uses the set of class labels associated with an entire document as a source of distant supervision.

Image Hashing by Minimizing Independent Relaxed Wasserstein Distance

In this paper, we propose the Independent Relaxed Wasserstein Autoencoder, a novel, efficient hashing method that implicitly learns the optimal hash function by directly training an adversarial autoencoder without any discriminator/critic.
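
Once the autoencoder has learned continuous (relaxed) latent codes, hashing amounts to binarizing them and ranking items by Hamming distance. A minimal sketch of that retrieval step only, with hypothetical function names (the paper's contribution is how the latent codes themselves are learned):

```python
import numpy as np

def hash_codes(latents):
    """Sign-threshold continuous latent codes into binary hash bits."""
    return (np.asarray(latents) > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query's code."""
    dists = (db_codes != query_code).sum(axis=1)
    return np.argsort(dists)
```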

Targeted display advertising: the case of preferential attachment

In this paper, we develop domain-adaptation approaches to address the challenge of predicting interested users for the partners with insufficient data, i.e., the tail partners. Specifically, we develop simple yet effective approaches that leverage the similarity among the partners to transfer information from the partners with sufficient data to cold-start partners.

Intent Term Selection and Refinement in E-Commerce Queries

In this paper, we leverage the historical query reformulation logs of a major e-commerce retailer (walmart.com) to develop distant-supervised approaches to solve two problems (i) identifying the query terms that describe the query’s product intent, and (ii) addressing the vocabulary gap between the query terms and the product’s description.

Text Segmentation on Multilabel Documents: A Distant-Supervised Approach

In this paper, we develop an approach that, instead of using segment-level ground-truth information, uses the set of labels that are associated with a document and are easier to obtain; the training data then essentially corresponds to a multilabel dataset.

Distributed Representation of Multi-Sense Words: A Loss-Driven Approach

This work presents LDMI, a new model for estimating distributional representations of words. LDMI relies on the idea that, if a word carries multiple senses, then having a different representation for each of its senses should lead to a lower loss associated with predicting its co-occurring words, as opposed to the case when a single vector representation is used for all the senses.
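
The loss-driven criterion can be illustrated with a toy stand-in: use squared error to a centroid in place of the co-occurrence prediction loss (this is not the LDMI objective itself, just the comparison it relies on). When a word's contexts form two well-separated clusters, fitting one vector per cluster yields a lower total loss than a single vector:

```python
import numpy as np

def single_sense_loss(contexts):
    """Loss when one vector (here, the centroid) represents all contexts."""
    c = contexts.mean(0)
    return float(((contexts - c) ** 2).sum())

def two_sense_loss(contexts_a, contexts_b):
    """Loss when each sense gets its own vector (per-cluster centroid)."""
    return single_sense_loss(contexts_a) + single_sense_loss(contexts_b)
```

If the two context clusters have different centroids, the two-sense loss is strictly lower, which is the signal LDMI uses to decide when a word deserves multiple representations.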