Andrew Choi


About Me

I currently work as an engineer at Algorand, where I focus on programmability, smart-contract languages, and developer infrastructure for blockchain platforms and Web3 apps. I am broadly interested in software engineering, distributed systems, and blockchain. I am also interested in computer science education and frequently teach as an adjunct instructor in the Boston area.

Before that, I was a graduate student at the University of Toronto's Department of Computer Science, where I worked with Prof. Fan Long. I also spent time at Google and Amazon, where I worked on Software-Defined Networking (SDN) in the Global Networking group, AWS DynamoDB, and IoT authentication systems.

Outside of work, I like playing my trombone, kicking around soccer balls, and skimming self-help books.

You can reach me by email or on LinkedIn if you'd like to talk!



Algorand

Jun 2021 - Present

Software Engineer (@algochoi)

Wentworth Institute of Technology

Spring 2023

Adjunct Instructor

Boston University

Winter 2022, Summer 2023

Adjunct Facilitator


Google

Summer 2020

Software Engineer Intern

Amazon Web Services (AWS)

Summer 2018, 2019

Software Development Engineer Intern


Summer 2017

Research Student at the Statistical and Relational Artificial Intelligence Lab


Summer 2016

Software Engineering Intern

University of Toronto Engineering Outreach

Summer 2015

Python Instructor


University of Toronto

Master of Science (M.Sc.) in Computer Science

University of Toronto

Bachelor of Applied Science (B.A.Sc.) in Computer Engineering with Honours


Friendle


In a team of 4, I built the backend for a matching and hangout-suggestion app designed to help people with social anxiety. We deployed the backend on GCP Cloud Functions (serverless) and integrated it with Firestore. The project won Telus' Best Mental Health Hack and placed 2nd overall at UofTHacks VIII.
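The matching logic itself isn't shown here, but the kind of pairing a serverless matcher like this might run can be sketched in a few lines of plain Python (the function name and the shared-interest criterion are hypothetical, not Friendle's actual algorithm):

```python
from itertools import combinations

def suggest_hangouts(users):
    """Suggest hangout pairs among users who share at least one interest.

    `users` maps a user id to a set of interests. Returns a list of
    (user_a, user_b, shared_interests) tuples, largest overlap first.
    Hypothetical sketch -- the real matching criteria are not shown here.
    """
    suggestions = []
    for a, b in combinations(sorted(users), 2):
        shared = users[a] & users[b]
        if shared:
            suggestions.append((a, b, shared))
    # Rank suggestions by how many interests the pair has in common.
    suggestions.sort(key=lambda s: len(s[2]), reverse=True)
    return suggestions
```

A function in this shape fits the Cloud Functions model well: it is stateless, so each invocation can read candidate profiles from Firestore, compute suggestions, and return them without any server to manage.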

Check us out on Devpost!

Face-tracking Video Recorder

In a team of 4, I developed a face-tracking video-recording app for my capstone project. We used FaceNet and PoseNet, hosted on an EC2 instance, to recognize speakers' faces and detect gesture-based commands. To communicate with the neural nets, we used AWS IoT to relay real-time commands from the EC2 instance to an Arduino-driven motor with a smartphone mount.
You can see a demo of the project here: Google Drive
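The relay step in that pipeline, turning a detected gesture into a motor command, can be sketched as a small mapping with a confidence gate (the gesture labels, command strings, and threshold below are illustrative, not the project's actual protocol):

```python
# Hypothetical gesture-to-command table; the real labels produced by
# PoseNet and the commands the Arduino understood are not shown here.
GESTURE_COMMANDS = {
    "arm_left": "PAN_LEFT",
    "arm_right": "PAN_RIGHT",
    "arms_up": "STOP",
}

def relay_command(gesture, confidence, threshold=0.8):
    """Translate a detected gesture into a motor command string.

    Low-confidence detections are dropped so the motor doesn't jitter
    on noisy pose estimates; unknown gestures are ignored.
    """
    if confidence < threshold:
        return None
    return GESTURE_COMMANDS.get(gesture)
```

In the actual system, a message like this would be published over AWS IoT and forwarded by the phone to the Arduino, which only needs to interpret a small, fixed command vocabulary.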