Tejal Wakchoure
Software Developer, UI & UX Designer, Cinema Addict

A profile picture of Tejal

Hi! I'm Tejal.


I currently write software at Goldman Sachs and live in Mumbai, India. I studied computer science at BITS Pilani and pursued my undergraduate thesis with Andrew Lippman at the MIT Media Lab.

I like to work on projects that are design-oriented and solve real-world human-computer interaction (HCI) problems. I enjoy working with people who value empathy-driven development. My interests are interdisciplinary, blending concepts from HCI and product design with literature, cinema, and economics.

Back in 2019, a summer internship project drove me to explore UI design, and I have been fascinated with it ever since. I built this website and implemented one of my favourite tabletop games. For my thesis, I worked on politics and computer vision for a grassroots-level analysis of news media. (Here is an interview on my experience!)

When I'm not coding up random stuff, I:

  • write. Back in university, I was an editor for our literary and technical magazine.
  • illustrate on Krita. It's not much, but it's honest work :)
  • attend pub quizzes and play CMS games. I am obsessed with Rollercoaster Tycoon.
  • listen to pop rock. I firmly believe that the '00s was the greatest decade for pop music.
  • read fiction. My favourite in 2021 so far has been Anxious People.
  • watch and dissect movies/TV shows. I am currently watching Lupin.

If you have an exciting opportunity that I would be a great fit for, please contact me. Found an interesting article on cinema studies? I'd love to chat!

  • 2020 - 2021

    Goldman Sachs

  • 2019

    Massachusetts Institute of Technology

  • 2016 - 2020

    BITS Pilani

Featured Work

A news story being filmed
A computer analysing data
A mobile app detecting contact
A chat system in use
A stethoscope
A system visualizing and manipulating data


A media digestion tool that cross-analyses verbal and nonverbal cues for presentation, analysis, and summarization of broadcast news.

The main objective of this project was to understand the relationship between the content of a given scene and its presentation, including its emotive aspects. We wanted to study the extent to which presentation affects audiences and see whether we could separate the content from its packaging. We also investigated how the presentation of the same content differs from channel to channel, aiming to measure whether such portrayals contribute to the formation of potentially dangerous echo chambers.

SuperGlue fuses multiple modalities to create a comprehensive model for the cross-analysis of body language, scene context, and other signals, exploring the nature of news on different media outlets. I spearheaded the body language analysis for this project, in which we cross-examined the newscaster's hand gestures, facial expressions, and body posture as three dimensions of influence on the overall delivery. The model is developed in OpenCV and PyTorch, using decision-level fusion for emotion analysis. Its principal components are a Recurrent Neural Network for posture recognition, a 3D Convolutional Neural Network for hand gesture recognition, and Azure Media Analytics' model for facial emotion recognition.
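The decision-level fusion idea can be sketched in a few lines: each recognizer votes with a probability distribution over emotions, and the fused decision averages those distributions and takes the argmax. This is a minimal illustrative sketch in plain Python; the emotion labels, scores, and weights are placeholders, not the project's actual model outputs.

```python
# Decision-level fusion sketch: each recognizer (posture, gesture, face)
# independently outputs a probability per emotion; the fused decision
# averages them and picks the top label.

EMOTIONS = ["neutral", "anger", "joy", "surprise"]

def fuse_decisions(predictions, weights=None):
    """Average per-model emotion probabilities and return the top label.

    predictions: one dict per model mapping emotion -> probability
                 (e.g. from the posture RNN, gesture 3D-CNN, and the
                 facial emotion recognizer).
    """
    weights = weights or [1.0] * len(predictions)
    total = sum(weights)
    fused = {}
    for emotion in EMOTIONS:
        fused[emotion] = sum(w * p.get(emotion, 0.0)
                             for w, p in zip(weights, predictions)) / total
    return max(fused, key=fused.get), fused

# Toy outputs from the three recognizers:
posture = {"neutral": 0.6, "anger": 0.3, "joy": 0.1}
gesture = {"neutral": 0.2, "anger": 0.7, "joy": 0.1}
face    = {"neutral": 0.1, "anger": 0.8, "surprise": 0.1}
label, scores = fuse_decisions([posture, gesture, face])
```

Because fusion happens at the decision level, each recognizer can be trained, swapped, or weighted independently of the others.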


An end-to-end framework that queries data from three different sources simultaneously and presents a flat, consolidated view for data collection.

If users wanted to query data from Data Lake, Sybase IQ, and Elasticsearch, they previously had to go to each of the three platforms, get results from each, and merge them manually. The idea behind this project was to create a framework that queries these sources simultaneously, acting as a black box for database querying. The user only has to run a single query from a comprehensive UI, and the framework executes it on the different sources internally to produce a consolidated result. This design facilitates smooth lateral data extraction and considerably increases the efficiency of the database management system.

We also designed and implemented a user interface for this project. SQL queries compiled from the user's selections on the UI are sent as calls to the middleware. The responsive web interface is developed in ReactJS and integrated with the backend services using RESTful APIs to create a consistent user experience.
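The fan-out-and-merge pattern behind the framework can be sketched as below. The source connectors here are stand-in stubs, not the real Data Lake, Sybase IQ, or Elasticsearch clients; only the concurrency/merge structure is meant to be illustrative.

```python
# Federated query sketch: run the same query against every source
# concurrently, then flatten the results into one consolidated view.
from concurrent.futures import ThreadPoolExecutor

def query_data_lake(q):       # stub standing in for the real connector
    return [{"source": "data_lake", "query": q}]

def query_sybase_iq(q):       # stub standing in for the real connector
    return [{"source": "sybase_iq", "query": q}]

def query_elasticsearch(q):   # stub standing in for the real connector
    return [{"source": "elasticsearch", "query": q}]

SOURCES = [query_data_lake, query_sybase_iq, query_elasticsearch]

def federated_query(q):
    """Execute q on every source in parallel; return one flat result set."""
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        futures = [pool.submit(fn, q) for fn in SOURCES]
        merged = []
        for future in futures:
            merged.extend(future.result())
    return merged

rows = federated_query("SELECT * FROM trades")
```

From the caller's point of view the three backends are invisible: one query in, one merged result set out, which is exactly the black-box behaviour described above.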

  • Date: May 2019


A facial recognition-based Android attendance system built to improve transparency, trained on a high-density database with 1000+ images.

Student attendance is still recorded manually in many academic institutions - a time-consuming practice that is prone to considerable fraud. We created a facial recognition-based attendance system to tackle these concerns, improving transparency in student-teacher interaction, reducing instances of bias, and raising overall administrative efficiency.

The system was deployed as an Android app that automatically records student attendance based on input by the students and teachers. We built a transfer learning model using OpenCV for face detection and TensorFlow for face recognition. The Java/XML user interface has extensive landing pages for students and teachers, including provisions for courses, grading, dashboards, and registration services for effortless daily interaction.

  • Date: Jan 2019
  • Details: GitHub


A desktop-based application that uses NLP and social network analysis techniques to study user interactions on the Ubuntu IRC networking service.

With the increased use of open source software, Internet Relay Chat (IRC) has become a popular form of synchronous communication. The primary objective of this study was to track the development of the Ubuntu IRC community over time and examine the dynamically changing participation patterns. The aims of this analysis were twofold: to delineate substructures and to calculate the frequency of discussion of concepts in the network.

We constructed a community model of information flow to assist and assess knowledge transfer, and filtered messages to split participants into groups for greater efficiency. We provided a new perspective on generalizing the pattern of these relationships, studying linguistic behaviour such as reply structure and word context with Natural Language Processing approaches, in conjunction with Gephi's clustering analysis and inferential modelling algorithms.

Users are often subjected to long wait times while developers resolve their queries, increasing the possibility that a question gets buried under others. Our second aim was to benefit the learning community by capturing the topic-wise rate of discussion to reduce this loss of knowledge transfer. We created a Python-based application that helps users track this frequency, making the forum substantially easier to use. Detecting the distinct topics also helped match users to chat rooms, optimize chat queries, and trace subject changes within a channel.
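The topic-frequency idea can be sketched with simple keyword matching: tally how often each topic's vocabulary appears across messages. The topics and keyword sets below are illustrative stand-ins, not the study's actual topic model.

```python
# Topic-frequency sketch: count how many chat messages touch each
# topic, based on keyword overlap with the message's words.
from collections import Counter

TOPICS = {
    "networking": {"wifi", "ethernet", "dns"},
    "packaging": {"apt", "dpkg", "snap"},
}

def topic_frequencies(messages):
    """Return a Counter of topic -> number of messages mentioning it."""
    counts = Counter()
    for message in messages:
        words = set(message.lower().split())
        for topic, keywords in TOPICS.items():
            if words & keywords:
                counts[topic] += 1
    return counts

log = [
    "my wifi keeps dropping after suspend",
    "apt says the package is broken",
    "try reinstalling with dpkg",
]
freq = topic_frequencies(log)
```

The same per-topic counts can then drive the routing features mentioned above, such as matching users to the busiest relevant channel.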

  • Date: Sept 2018
  • Details: GitHub

Healthcare Analytics

Health-based classification of patient data into 8 different classes with high real-time diagnostic accuracy.

The original aim of this project was to obtain predictions on medical diagnoses noted by professionals, easing their workload. However, while gathering data from the hospital, we encountered an issue - all the patient data for the past 30 years was handwritten. This led us to a second research direction: digitising all that data using machine learning algorithms.

The classification model flags out-of-range test values and makes a prediction based on these analyses to help the doctor diagnose appropriately. The model is built with PyTorch and uses a Convolutional Neural Network for health-based classification, supported by exploratory data analysis techniques like Principal Component Analysis and t-distributed Stochastic Neighbour Embedding. The training dataset, comprising the expression levels of 77 proteins measured in the cerebral cortex of mice, was grouped into 8 classes. We used Optical Character Recognition to digitise the physical files.
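The out-of-range flagging that feeds the classifier can be sketched as a simple reference-interval check. The tests and reference ranges below are illustrative placeholders, not the project's actual clinical thresholds.

```python
# Out-of-range flagging sketch: compare each test value against a
# reference interval and collect the abnormal results.
REFERENCE_RANGES = {
    "hemoglobin": (13.0, 17.0),   # g/dL, illustrative range
    "glucose": (70.0, 100.0),     # mg/dL fasting, illustrative range
}

def flag_out_of_range(results):
    """Return {test: ('low'|'high', value)} for abnormal results."""
    flags = {}
    for test, value in results.items():
        low, high = REFERENCE_RANGES[test]
        if value < low:
            flags[test] = ("low", value)
        elif value > high:
            flags[test] = ("high", value)
    return flags

flags = flag_out_of_range({"hemoglobin": 11.2, "glucose": 85.0})
```

In the full pipeline, flags like these become features alongside the raw values, letting the model weight abnormal results more heavily.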

  • Date: May 2018


A mailing list parser that constructs time-varying conversation thread hypergraphs revealing communication patterns to form predictive models.

We developed an IMAP server-based mailing list parser to extract information such as senders and time stamps, which helped us construct organisational structures and derive local and global communication patterns. The focal points of this research were the invariant characteristics of a discussion thread, mailing list filters to remove spam messages for subscribers, and temporal behaviour modelling. We used the Infomap algorithm for multilevel community detection analysis, incorporating relevant labels from text mined with the WordNet lemmatizer.

This architecture identifies the community structure, builds predictive models, and assigns weights based on each user's activeness, among other tasks. For example, when conversations happen slowly, authors arrive slowly and the discussion spans many generations of nodes; when they end quickly, authors arrive quickly and the discussion ends within a few generations. We can attribute this behaviour to the popularity of the topic - authors come in from many sources for popular topics, but help trickles in slowly for specialised ones. The questions we ask, then, are: what are these popular threads, and can text mining help us detect them?
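The "generations of nodes" measure can be sketched from just the reply links the parser extracts: follow each message's in-reply-to chain back to the thread root and take the longest chain. Message IDs here are toy values standing in for real Message-ID headers.

```python
# Thread-depth sketch: given (msg_id, in_reply_to) pairs, measure how
# many generations the longest reply chain in the thread spans.
def thread_depth(messages):
    """messages: list of (msg_id, in_reply_to or None for the root).
    Returns the number of generations in the deepest reply chain."""
    parent = {msg_id: reply_to for msg_id, reply_to in messages}

    def depth(msg_id):
        generations = 1
        while parent.get(msg_id) is not None:
            msg_id = parent[msg_id]
            generations += 1
        return generations

    return max(depth(msg_id) for msg_id, _ in messages)

# Toy thread: m1 is the root, m2 and m4 reply to it, m3 replies to m2.
thread = [("m1", None), ("m2", "m1"), ("m3", "m2"), ("m4", "m1")]
generations = thread_depth(thread)
```

Tracking this depth over time is one way to separate the fast, popular threads from the slow, specialised ones described above.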

  • Date: Jan 2018
  • Details: GitHub

Get in touch

tejal [dot] wakchoure [at] gmail [dot] com