Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2 
Published:
Normalization layers enable a surprising degree of spatial communication, and we suggest caution in employing them for applications requiring a limited receptive field.
Recommended citation: Samuel Pfrommer, George Ma, Yixiao Huang, Somayeh Sojoudi (2025). Spooky Action at a Distance: Normalization Layers Enable Side-Channel Spatial Communication. arXiv preprint arXiv:2507.04709. https://www.arxiv.org/abs/2507.04709
Published:
SpecAgent proactively explores software repositories at indexing time to build speculative context that eliminates inference-time retrieval latency, improves LLM code-generation accuracy by up to 11% absolute, and introduces a new leakage-free benchmark construction method for realistic evaluation.
Recommended citation: George Ma, Anurag Koul, Qi Chen, Yawen Wu, Sachit Kuhar, Yu Yu, Aritra Sengupta, Varun Kumar, Murali Krishna Ramanathan (2025). SpecAgent: A Speculative Retrieval and Forecasting Agent for Code Completion. arXiv preprint arXiv:2510.17925. https://arxiv.org/abs/2510.17925
Published in NeurIPS, 2023
We designed the Laplacian Canonization algorithm to address the sign and basis ambiguities of Laplacian eigenvectors.
Recommended citation: Jiangyan Ma, Yifei Wang, Yisen Wang (2023). Laplacian Canonization: A Minimalist Approach to Sign and Basis Invariant Spectral Embedding. In Thirty-seventh Conference on Neural Information Processing Systems. https://openreview.net/forum?id=1mAYtdoYw6
Published in NeurIPS-AI4Science, 2023
We proposed to incorporate state and action symmetries into GFlowNets.
Recommended citation: Jiangyan Ma, Emmanuel Bengio, Yoshua Bengio, Dinghuai Zhang (2023). Baking Symmetry into GFlowNets. In NeurIPS 2023 AI for Science: from Theory to Practice. https://openreview.net/forum?id=CZGHAeeBk3
Published in NeurIPS, 2024
We analysed the efficiency and expressiveness of invariant and equivariant networks from a canonicalization perspective.
Recommended citation: George Ma, Yifei Wang, Derek Lim, Stefanie Jegelka, Yisen Wang (2024). A Canonicalization Perspective on Invariant and Equivariant Learning. In Thirty-eighth Conference on Neural Information Processing Systems. https://openreview.net/forum?id=jjcY92FX4R
Published in NeurIPS, 2025
We developed new methods to refine and falsify sparse autoencoder feature explanations, yielding higher-quality interpretability of large language models.
Recommended citation: George Ma, Samuel Pfrommer, Somayeh Sojoudi (2025). Revising and Falsifying Sparse Autoencoder Feature Explanations. In Thirty-ninth Conference on Neural Information Processing Systems. https://openreview.net/forum?id=OJAW2mHVND
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.