My name is Stephen Chung. I graduated from the University of Massachusetts Amherst with a master's degree in 2021. My primary research interests include reinforcement learning (RL), biologically inspired machine learning, and deep learning.
During my master's years, I was fortunate to be guided by Professor Andrew G. Barto, a pioneer in RL. Under his guidance, I studied methods to efficiently train deep neural networks without backpropagation. Despite being used in almost all deep learning methods, backpropagation is generally regarded as biologically implausible. A more biologically plausible way of training a deep network is to treat each unit in the network as an RL agent that receives the same global reward signal, but this approach learns very slowly. I therefore investigated methods that speed up learning while retaining the biological plausibility of this approach. To this end, I proposed two novel algorithms: MAP Propagation (Chung, 2021) and Weight Maximization (Chung, 2022). I argue that both algorithms, built on RL methods, are more biologically plausible than backpropagation while maintaining a comparable learning speed. As such, these algorithms may shed light on biological learning and could substitute for backpropagation in training deep learning models. My paper on MAP Propagation was accepted at NeurIPS 2021, and my sole-author paper on Weight Maximization was accepted at AAAI 2022. You can read more about this exciting research here!
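To give a flavor of the baseline idea, here is a minimal sketch (my own illustration, not code from either paper) of a network where every unit is a Bernoulli-logistic RL agent updated with Williams' REINFORCE rule from one global reward broadcast to all units. The task, network sizes, and learning rates are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task (hypothetical choice): XOR, input -> target bit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0, 1, 1, 0], dtype=float)

n_hidden = 8
W1 = rng.normal(0, 0.1, (n_hidden, 3))      # hidden weights (incl. bias)
W2 = rng.normal(0, 0.1, (1, n_hidden + 1))  # output weights (incl. bias)
lr, baseline = 0.1, 0.0

for step in range(20000):
    i = rng.integers(4)
    x = np.append(X[i], 1.0)                # input with bias term

    # Every unit samples a binary action from its own stochastic policy.
    p1 = sigmoid(W1 @ x)
    h = (rng.random(n_hidden) < p1).astype(float)
    h_b = np.append(h, 1.0)
    p2 = sigmoid(W2 @ h_b)
    y = float(rng.random() < p2[0])

    # One scalar reward is broadcast to all units alike.
    r = 1.0 if y == Y[i] else 0.0
    adv = r - baseline
    baseline += 0.01 * (r - baseline)       # running-average baseline

    # REINFORCE update per unit: (reward - baseline) * (action - prob) * input.
    W1 += lr * adv * np.outer(h - p1, x)
    W2 += lr * adv * np.outer(np.array([y]) - p2, h_b)
```

Because each unit only sees one noisy global reward, the gradient estimate has high variance and learning is slow; this is exactly the limitation that MAP Propagation and Weight Maximization are designed to address.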
I also worked with Professor Hava Siegelmann on the theoretical capabilities of recurrent neural networks (RNNs). This research led to our paper on the Turing-completeness of RNNs, which was accepted at NeurIPS 2021 (Chung & Siegelmann, 2021). In this paper, we proved sufficient conditions for an RNN to be Turing-complete and demonstrated how to simulate a Turing machine with an RNN. This work thus allows the construction of an RNN that runs any algorithm without prior training and extends the fundamental theory of the computational power of RNNs. I also studied training methods for spiking neural networks under the guidance of Professor Robert Kozma (Chung & Kozma, 2020).
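A key ingredient in this line of work, going back to Siegelmann and Sontag, is that a single rational-valued neuron can store an unbounded binary stack, manipulated with affine maps plus the saturated-linear activation σ(x) = min(max(x, 0), 1). The toy sketch below is my own illustration of that classic encoding, not the construction from our paper:

```python
from fractions import Fraction

def sigma(x):
    # Saturated-linear activation used in such constructions.
    return min(max(x, Fraction(0)), Fraction(1))

def push(q, bit):
    # Prepend a bit: base-4 code (2*bit + 1)/4 keeps stack codes well separated.
    return q / 4 + Fraction(2 * bit + 1, 4)

def top(q):
    # Read the top bit with one affine map plus saturation.
    return int(sigma(4 * q - 2))

def pop(q):
    # Remove the top bit.
    return 4 * q - (2 * top(q) + 1)

q = Fraction(0)          # empty stack encoded as 0
for b in [1, 0, 1, 1]:
    q = push(q, b)
bits = []
while q != 0:
    bits.append(top(q))
    q = pop(q)
print(bits)              # [1, 1, 0, 1]: the pushed bits in LIFO order
```

Since a Turing machine tape can be represented by two such stacks, this scalar encoding is what lets a fixed-size RNN carry unbounded memory in its state.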
As for my interests, I love reading Western and Chinese philosophy, such as Zhuangzi and Nietzsche. I enjoy thinking about the world and philosophical questions. I also like playing tennis and hiking!