As a final assignment in Advanced Topics in Deep Learning, I worked with a team to write a research report testing how different Graph Neural Network (GNN) architectures scale under computational constraints. We compared conventional GCNs, JKNet, and DropEdge, measuring inference-time FLOPs and MACs alongside predictive accuracy. By varying hidden dimensions and layer counts, we assessed the accuracy-versus-cost trade-off of each architecture across three citation datasets (Cora, Citeseer, PubMed).
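
As a rough illustration of the cost model behind those measurements, here is a minimal sketch (hypothetical helper names, not the report's actual profiling code). It assumes the standard GCN propagation H' = σ(ÂHW) and counts one MAC per multiply-accumulate: N·F_in·F_out for the dense feature transform plus E·F_out for the sparse neighborhood aggregation, with FLOPs ≈ 2·MACs.

```python
# Analytic MAC/FLOP estimate for an L-layer GCN (H' = sigma(A_hat @ H @ W)).
# Hypothetical sketch, not the report's measured profiling pipeline.

def gcn_layer_macs(num_nodes: int, num_edges: int, f_in: int, f_out: int) -> int:
    """MACs for one GCN layer: dense transform (N * F_in * F_out)
    plus sparse aggregation over edges (E * F_out)."""
    return num_nodes * f_in * f_out + num_edges * f_out

def gcn_model_macs(num_nodes: int, num_edges: int,
                   f_in: int, hidden: int, f_out: int, layers: int) -> int:
    """Total MACs for a GCN stack: input -> hidden^(layers-1) -> output."""
    dims = [f_in] + [hidden] * (layers - 1) + [f_out]
    return sum(gcn_layer_macs(num_nodes, num_edges, a, b)
               for a, b in zip(dims, dims[1:]))

# Sweep depths and hidden dimensions on Cora-sized inputs
# (~2708 nodes, ~10556 directed edges, 1433 features, 7 classes).
N, E, F_IN, F_OUT = 2708, 10556, 1433, 7
for layers in (2, 4, 8):
    for hidden in (16, 64, 256):
        macs = gcn_model_macs(N, E, F_IN, hidden, F_OUT, layers)
        print(f"layers={layers:2d} hidden={hidden:3d} "
              f"MACs={macs:,} FLOPs~{2 * macs:,}")
```

The same accounting extends to the other architectures in the comparison: JKNet adds the cost of its final layer-aggregation step, while DropEdge reduces the expected E term during training but leaves inference cost unchanged.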