Jack Bodine

Artifacts are tangible pieces of evidence demonstrating the skills and knowledge I’ve acquired from various learning experiences. Links labeled “currently unavailable” indicate either that I haven’t yet published the project or that it contains exam/coursework material that must remain private for future iterations of the course. However, if you email me, I’ll try to expedite access.

November 2024

Comparative Analysis of Graph Neural Networks

Advanced Topics in Deep Learning

Report and Code Available Upon Request

As a final assignment in Advanced Topics in Deep Learning, I worked with a team to write a research report testing how different Graph Neural Network (GNN) architectures scale under computational constraints. We compared conventional GCNs, JKNet, and DropEdge, examining their performance while measuring FLOPs and MACs during inference. By varying hidden dimensions and layer counts, we assessed both accuracy and scalability across multiple graph datasets (Cora, Citeseer, PubMed).
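For a sense of what we measured, here is a minimal NumPy sketch of a single GCN layer with symmetric normalization, plus a dense MAC count for that layer (a simplification of the counting in the report; the toy graph and helper names are illustrative, not the project's actual code):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)   # ReLU

def layer_macs(n_nodes, in_dim, out_dim):
    """Dense MACs per layer: propagation (n^2 * d_in) + transformation (n * d_in * d_out)."""
    return n_nodes * n_nodes * in_dim + n_nodes * in_dim * out_dim

# toy graph: 3 nodes on a path 0-1-2, one-hot features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)
W = np.full((3, 2), 0.5)   # toy weights, hidden dim 2
out = gcn_layer(A, H, W)   # shape (3, 2)
```

Varying the hidden dimension and stacking more such layers is exactly what drives the FLOP/MAC scaling curves we compared across architectures.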

Machine Learning, Research

September 2024

NLP Toolkit

Natural Language Processing

Currently Unavailable

This NLP toolkit implements core concepts from my NLP course, including n-gram language models, logistic regression classifiers, and sequence labeling models.
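As a flavor of the n-gram component, here is a minimal sketch of a bigram model with add-one (Laplace) smoothing; the function names and toy corpus are illustrative, not the toolkit's actual API:

```python
from collections import Counter

def train_bigram(corpus):
    """Collect unigram and bigram counts with sentence boundary markers."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(w_prev, w, unigrams, bigrams, vocab_size):
    """Add-one smoothed conditional probability P(w | w_prev)."""
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + vocab_size)

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
uni, bi = train_bigram(corpus)
V = len(uni)                                  # vocabulary size incl. markers
p = bigram_prob("the", "cat", uni, bi, V)     # (1 + 1) / (2 + 6) = 0.25
```

Smoothing ensures unseen bigrams get nonzero probability, which matters as soon as the model is evaluated on held-out text.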

Natural Language Processing, Python, Machine Learning

March 2024

Reinforcement Learning Algorithms in Python

Online and Reinforcement Learning

Code Available on GitHub

This course delved into online learning and reinforcement learning algorithms. I implemented several key algorithms, including multi-armed bandit algorithms like EXP3 and Hedge, as well as value iteration, policy evaluation, and temporal difference methods.
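To illustrate the dynamic-programming side, here is a minimal NumPy sketch of value iteration on a toy two-state MDP (the MDP and function signature are illustrative, not taken from my coursework):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[a] is the |S|x|S| transition matrix for action a; R is |S|x|A| rewards."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
        Q = R + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# toy MDP: state 1 is absorbing and rewarding; action 1 moves state 0 there
P = np.array([[[1.0, 0.0], [0.0, 1.0]],    # action 0: stay put
              [[0.0, 1.0], [0.0, 1.0]]])   # action 1: go to state 1
R = np.array([[0.0, 0.0],
              [1.0, 1.0]])
V, policy = value_iteration(P, R)           # V ~ (9, 10), policy[0] == 1
```

The fixed point matches the closed form: V(1) = 1/(1-gamma) = 10 and V(0) = gamma * V(1) = 9.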

Machine Learning, Reinforcement Learning

May 2022

Drunk Philosophers

Artificial Neural Networks and Deep Learning

Project Report

Drunk Philosophers bridges ancient philosophical discourse and modern computational techniques, using generative neural networks to recreate conversations between historical philosophers. The objective was to bring the insights of great thinkers like Aristotle, Hume, Kant, and Nietzsche into contemporary debates, leveraging the power of AI to generate new dialogues and potentially uncover new insights.

We used a comprehensive dataset from Kaggle containing over 300,000 sentences from 51 philosophical texts across 10 major schools of philosophy. After pre-processing the data, we trained Long Short-Term Memory (LSTM) networks for each philosopher to generate text segments, which were then refined using GPT-3 for coherent conversations. Despite initial challenges, our philosopher-bots eventually produced text that not only reflected the style and themes of each philosopher but also engaged in somewhat coherent debates. This outcome demonstrates that the nuances of each philosopher’s writing style can be effectively captured by neural networks, opening new avenues for exploring and understanding philosophical ideas.
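The recurrence at the heart of each per-philosopher generator is the LSTM cell. Here is a minimal NumPy sketch of a single step (the stacked-gate weight layout and toy dimensions are illustrative assumptions, not the project's actual implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W maps the concatenated [x; h_prev] to the four
    gate pre-activations stacked as [input; forget; output; candidate]."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[:H])            # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    o = sigmoid(z[2 * H:3 * H])   # output gate
    g = np.tanh(z[3 * H:])        # candidate cell state
    c = f * c_prev + i * g        # new cell state
    h = o * np.tanh(c)            # new hidden state
    return h, c

# toy dimensions: 3-dim input (e.g. a character embedding), 4-dim hidden state
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
W = rng.standard_normal((4 * 4, 3 + 4)) * 0.1
b = np.zeros(4 * 4)
h, c = lstm_step(x, np.zeros(4), np.zeros(4), W, b)
```

Unrolling this step over a character sequence and sampling from a softmax over the hidden state is what lets each philosopher-bot produce text in its namesake's style.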

Philosophy, Machine Learning, Natural Language Processing
