HI, I'M
SHUBHANG

ABOUT

I'm a senior studying Computer Science at Stanford University, focused on Machine Learning and involved in research at the Stanford Vision Lab. I'm also interested in CS education and have led several education initiatives, including TA-ing multiple upper-level AI classes during my undergrad and founding and teaching an AI class through CS + Social Good.

Download Resume

PROJECTS

Arbitrary Neural Style
The Commuter Chicago
RNN Encoder-Decoder for Machine Translation
dv8
Iron Drive
Nibbly
shubhangdesai.github.io
Style Transfer App
Concepts iOS

EXPERIENCE

Software Engineering Intern (Deep Learning)
Microsoft, Summer 2019

Spearheaded the development of Microsoft's new Deep Learning-based handwriting recognition effort. Experimented with input features, model architectures, and training schedules to reach the state-of-the-art recognition results set by Google Research.

Research Intern
Salesforce Research, Spring 2019

Worked with the Salesforce Research team on a project to predict diagnoses from pathology slides using AI. Ran experiments on full-slide pathology data and iterated based on experimental results; manuscript in progress.

Research Assistant
Stanford Vision Lab, Winter 2019 - Present

Worked on two projects: the first involved artistic creativity with GANs; the second is a dataset collection and cleaning effort, with the goal of releasing the dataset to the public.

Technical Writer
deeplearning.ai, Summer 2018

Worked with an interdisciplinary group to help democratize AI education. Created educational content on deep learning, iterated through drafts, and surveyed the target audience to improve the content.

Deep Learning Intern
PayPal, Summer 2018

Developed state-of-the-art NLP models in production-ready environments using GPU-optimized TensorFlow. Created a deep learning framework in Python that will be used to develop, test, and deploy models across the org.

Research Assistant
Stanford Artificial Intelligence Lab, Fall 2017 - Fall 2018

Worked in the Ng group on applying AI to medicine. Led the team focused on applying Computer Vision techniques to classify abnormalities in ultrasounds of the leg.

Machine Learning Intern
Nasdaq, Summer 2017

Interned on the ML team. Designed, developed, and back-tested Deep Learning models on historical financial data. Delivered the research to Nasdaq through an internal whitepaper.

Research Fellow
IDEO CoLab, January 2017

Prototyped three business models in nine days to address the problem of facilitating future connected markets using Blockchain. Delivered the business models and their respective products to IDEO.

WRITINGS

Neural Artistic Style Transfer: A Comprehensive Look

Artists and Machine Intelligence

An article describing the current state of Neural Style Transfer, as well as my contributions to the space

A Soft Introduction to Neural Networks

Towards Data Science

An article that introduces the reader to neural networks by guiding them through building a simple NN

End-to-End Learning of One Objective Function to Represent Multiple Styles for Neural Style Transfer

Stanford CS231n

Final paper for my "Convolutional Neural Networks" class project, a research effort on neural style transfer

BLOG

A Comprehensive Look into Neural Artistic Style Transfer

August 18, 2017

This past year, I took Stanford's CS 231n course on Convolutional Neural Networks. My final project for the course dealt with a super cool concept called neural style transfer, in which the style of a piece of artwork is transferred onto a photograph. The classic example is a picture of Hoover Tower at Stanford rendered in the style of The Starry Night; a rough sketch of the core idea follows below the preview.

Read More
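For anyone curious how that works under the hood, here is a minimal sketch of the optimization-based approach (illustrative only, not the code from my project): a pretrained CNN extracts features, Gram matrices of those features capture "style," and the output image is optimized to match the photo's content and the artwork's style statistics. The layer indices and loss weights below are assumptions, and input normalization is omitted for brevity.

```python
# Minimal neural style transfer sketch (illustrative; layer choices and weights are arbitrary).
# Expects content_img and style_img as float tensors of shape (1, 3, H, W).
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {1, 6, 11, 20, 29}   # layers used for style statistics (assumed)
CONTENT_LAYER = 22                  # layer used for content (assumed)

def features(img):
    """Run the image through VGG and collect the activations we care about."""
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = x
    return feats

def gram(feat):
    """Gram matrix of a feature map: channel-to-channel correlations = 'style'."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer(content_img, style_img, steps=300, style_weight=1e6):
    target = content_img.clone().requires_grad_(True)   # start from the photo
    opt = torch.optim.Adam([target], lr=0.02)
    content_feats = features(content_img)
    style_grams = {i: gram(f) for i, f in features(style_img).items() if i in STYLE_LAYERS}
    for _ in range(steps):
        opt.zero_grad()
        feats = features(target)
        content_loss = F.mse_loss(feats[CONTENT_LAYER], content_feats[CONTENT_LAYER])
        style_loss = sum(F.mse_loss(gram(feats[i]), style_grams[i]) for i in STYLE_LAYERS)
        (content_loss + style_weight * style_loss).backward()
        opt.step()
    return target.detach()
```

In practice you would also normalize inputs with ImageNet statistics and clamp the result to valid pixel values; this sketch just shows the content-plus-style loss being optimized directly over the image.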

Understanding Recurrent Neural Networks

August 13, 2017

In my last post, we used our micro-framework to learn about and create a Convolutional Neural Network. It was super cool, so check it out if you haven't already. Now, in my final post for this tutorial series, we'll similarly learn about and build Recurrent Neural Networks (RNNs). RNNs are neural networks that excel at time-dependent tasks, especially ones that take a time series as input. An RNN processes the series serially, one time step at a time, building up a semantic representation of the whole sequence as it goes; a minimal sketch of that loop follows below the preview.

Read More
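To make the "one step at a time" idea concrete, here is a tiny vanilla RNN loop in plain NumPy (an illustrative sketch, not code from the post; the sizes and initialization are arbitrary): the same weights are applied at every step, and the hidden state accumulates a summary of everything seen so far.

```python
# Vanilla RNN sketch in NumPy (illustrative; dimensions and init are arbitrary choices).
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 8, 16, 10

# One set of weights, reused at every time step.
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """Combine the current input with the previous hidden state."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Process a toy time series one step at a time.
xs = rng.normal(size=(seq_len, input_size))
h = np.zeros(hidden_size)
for x_t in xs:
    h = rnn_step(x_t, h)

# After the loop, h is a fixed-size summary of the whole sequence.
print(h.shape)  # (16,)
```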

Introducing Krikos - A Python ML Framework for Learning and Experimentation

July 23, 2017

I am pleased to announce that I have published my first Python library: Krikos!

Read More

See All