Saurav Manchanda

Ph.D. Candidate, Computer Science

University of Minnesota

manch043 [at] umn [dot] edu

About me

Hello! I am a Ph.D. candidate at the University of Minnesota, Twin Cities, advised by Prof. George Karypis. My research interests broadly include Graph Neural Networks, Information Retrieval, and Computational Linguistics. My Ph.D. research focuses on developing distant-supervised algorithms, with applications to text segmentation, product search, and citation analysis. Recently, I have also developed an interest in (and published on) Generative Adversarial Networks.

Interests

  • Graph Neural Networks
  • Information Retrieval
  • Computational Linguistics
  • Generative Adversarial Networks

Education

  • Ph.D. in Computer Science, 2015–present

    University of Minnesota

  • B.Tech. in Computer Science and Engineering, 2011-2015

    Indian Institute of Technology (IIT), Kharagpur

Experience

Applied Scientist Intern

Amazon Web Services

Jan 2020 – Present · Palo Alto, California
Part of the Deep Learning team, developing new methods and tools for deep learning on graphs (https://www.dgl.ai/).

Research Scientist Intern

Criteo AI Lab

Jun 2019 – Dec 2019 · Palo Alto, California
Developed domain-adaptation approaches to improve targeted-advertising performance for tail partners, i.e., partners with insufficient data. Work was published in IEEE BigData.

Research Scientist Intern

Walmart Labs

Jun 2018 – Aug 2018 · Sunnyvale, California
Worked with the Search and Relevance Algorithms team: developed neural approaches for understanding rare queries in product search. Work was published in ACM CIKM.

Software Development Engineering Intern

Amazon

May 2014 – Jun 2014 · Chennai, India
Worked with the EPeriodicals Content Ingestion team: implemented an MVC (Model-View-Controller) architectural pattern on 64-bit hosts for a console used for content fix-up of EPeriodicals.

Recent Publications

Image Generation Via Minimizing Frechet Distance in Discriminator Feature Space

We use the intuition that it is much better to train the GAN generator by minimizing the distributional distance between real and generated images in a low-dimensional feature space representing the underlying image manifold, rather than in the original pixel space. We use the feature space of the GAN discriminator for such a representation. For the distributional distance, we employ one of two choices: the Fréchet distance or direct optimal transport (OT); these lead to the two new GAN methods proposed in the paper.
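
As a concrete reference for the distance involved, below is a minimal sketch of the Fréchet distance between two batches of discriminator features under a Gaussian assumption (the standard closed form also used in FID); the function and variable names are illustrative, not the paper's code.

import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    # Fit a Gaussian to each batch of discriminator features.
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    # Closed-form Frechet distance between two Gaussians:
    # ||mu_r - mu_f||^2 + Tr(cov_r + cov_f - 2 (cov_r cov_f)^{1/2})
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_f
    return diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean)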

Regression via Implicit Models and Optimal Transport Cost Minimization

This paper addresses the classic problem of regression, which involves the inductive learning of a map, $y=f(x,z)$, with $z$ denoting noise and $f:\mathbb{R}^n\times \mathbb{R}^k \rightarrow \mathbb{R}^m$. We take another step towards regression models that implicitly model the noise, and propose a solution that directly optimizes the optimal transport cost between the true probability distribution $p(y|x)$ and the estimated distribution $\hat{p}(y|x)$.
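
For reference, the optimal transport cost in question has the standard Kantorovich form; the ground cost $c$ and the coupling set $\Pi$ below are the usual textbook objects, not notation taken from the paper.

\[
W_c\bigl(p(y|x),\, \hat{p}(y|x)\bigr)
  \;=\; \inf_{\gamma \,\in\, \Pi(p,\hat{p})}\;
  \mathbb{E}_{(y,\hat{y}) \sim \gamma}\bigl[\, c(y,\hat{y}) \,\bigr]
\]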

CAWA: An Attention-Network for Credit Attribution

We present Credit Attribution With Attention (CAWA), a neural-network-based approach that, instead of using sentence-level labeled data, uses the set of class labels associated with an entire document as a source of distant supervision.
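
To make the distant-supervision idea concrete, here is a minimal sketch assuming dot-product attention between sentence and label embeddings; the names, shapes, and the argmax assignment are illustrative, not CAWA's actual architecture.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attribute_credit(sent_emb, label_emb, doc_labels):
    # sent_emb: (num_sentences, d); label_emb: (num_classes, d).
    # doc_labels: indices of the classes attached to the whole document,
    # the only supervision available.
    scores = sent_emb @ label_emb[doc_labels].T   # (num_sentences, |doc_labels|)
    attn = softmax(scores, axis=1)                # per-sentence attention weights
    # Credit each sentence to its highest-attention document label.
    return [doc_labels[j] for j in attn.argmax(axis=1)]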

Image Hashing by Minimizing Independent Relaxed Wasserstein Distance

In this paper, we propose the Independent Relaxed Wasserstein Autoencoder, a novel, efficient hashing method that implicitly learns the optimal hash function by directly training an adversarial autoencoder without any discriminator/critic.
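
One way to picture the "independent" part is matching each relaxed hash dimension to a prior separately, where the 1-D Wasserstein distance has a closed form over sorted samples; the sketch below shows that generic trick under this assumption, and is not the paper's actual objective.

import numpy as np

def per_dim_wasserstein2(codes, prior_samples):
    # codes, prior_samples: (N, num_bits) relaxed hash codes and samples
    # from the target prior. In one dimension, optimal transport reduces
    # to pairing sorted samples, giving a closed-form empirical W2 cost.
    a = np.sort(codes, axis=0)
    b = np.sort(prior_samples, axis=0)
    return ((a - b) ** 2).mean(axis=0).sum()  # summed over independent dims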

Targeted display advertising: the case of preferential attachment

In this paper, we develop domain-adaptation approaches to address the challenge of predicting interested users for the partners with insufficient data, i.e., the tail partners. Specifically, we propose simple yet effective approaches that leverage the similarity among the partners to transfer information from the partners with sufficient data to cold-start partners.

Intent Term Selection and Refinement in E-Commerce Queries

In this paper, we leverage the historical query reformulation logs of a major e-commerce retailer (walmart.com) to develop distant-supervised approaches to solve two problems: (i) identifying the query terms that describe the query's product intent, and (ii) addressing the vocabulary gap between the query terms and the product's description.

Text Segmentation on Multilabel Documents: A Distant-Supervised Approach

In this paper, we develop an approach that, instead of using segment-level ground-truth information, uses the set of labels associated with a document, which are easier to obtain; the training data thus essentially corresponds to a multilabel dataset.

Distributed Representation of Multi-Sense Words: A Loss-Driven Approach

This work presents LDMI, a new model for estimating distributional representations of words. LDMI relies on the idea that, if a word carries multiple senses, then having a different representation for each of its senses should lead to a lower loss associated with predicting its co-occurring words, as opposed to the case when a single vector representation is used for all the senses.
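
The intuition admits a compact rendering: score a multi-sense word by its best-fitting sense vector in each context, so extra senses can only lower the loss. The skip-gram-style objective below is illustrative notation, not LDMI's exact formulation: $v_w^{(s)}$ is the vector for sense $s$ of word $w$, $u_c$ a context-word vector, and $C$ the context window.

\[
\mathcal{L}(w, C) \;=\; \min_{s \in \text{senses}(w)} \; \sum_{c \in C} -\log \sigma\!\bigl(v_w^{(s)} \cdot u_c\bigr)
\]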

Contact

  • Digital Technology Center, 499 Walter Library, Minneapolis, MN 55455