I am a graduate student at the University of Miami, where I have worked in the lab of Dr. Odelia Schwartz since Fall 2018. My main research interests are computational neuroscience, vision, and machine learning. Central themes of my projects have included efficient coding, sparse coding, vector quantization methods, and information theory. My work is supported by a National Science Foundation (NSF) Graduate Research Fellowship.
I received a B.S. in computer science with a minor in mathematics from the University of Central Florida. During my time at UCF, I joined the Evolutionary Complexity Research Group headed by Dr. Kenneth Stanley (co-founder of Geometric Intelligence). We worked on his newly proposed Real-Time Autoencoder-Augmented Hebbian Network (RAAHN) algorithm for autonomous agent control. In 2016 I worked with Dr. R. Paul Wiegand of the Institute for Simulation and Training at UCF on brain-inspired image compression. During the summer of 2017 I worked at the Princeton Neuroscience Institute in the lab of Dr. Jonathan Pillow on hierarchical and non-negative sparse coding.
Joshua A. Bowren, Justin K. Pugh, and Kenneth O. Stanley. In: Proceedings of the Fifteenth International Conference on the Synthesis and Simulation of Living Systems (ALIFE XV). Cambridge, MA: MIT Press, 2016. 8 pages.
Sparse coding is a theory and method claiming that the early visual system codes sensory input with only a few neurons in a population. The sparse coding method developed by Olshausen and Field (1996) seeks latent variables such that only a few need to combine linearly to reconstruct some portion of the sensory input. Sparse coding is typically used to learn an overcomplete representation (more latent variables than sensory inputs) because overcomplete codes have higher representational capacity. When sparse coding is applied to whitened natural image data (statistical dependencies reduced and variance normalized in all directions), basis functions resembling the receptive fields of cortical simple cells emerge (on the right). These basis functions are well modeled by two-dimensional Gabor filters, so sparse coding is said to find Gabor filters (sometimes referred to as edges).
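To make the idea concrete, here is a minimal sketch of sparse inference with a fixed dictionary. It uses ISTA (iterative soft thresholding) to minimize the reconstruction error plus an L1 sparsity penalty; the dictionary here is random rather than learned, and ISTA is one common choice of inference algorithm, not necessarily the one used by Olshausen and Field:

```python
import numpy as np

rng = np.random.default_rng(0)

def ista_infer(x, Phi, lam=0.5, n_steps=100):
    """Infer sparse coefficients a minimizing 0.5*||x - Phi a||^2 + lam*||a||_1."""
    # Step size from the Lipschitz constant of the gradient (spectral norm squared).
    L = np.linalg.norm(Phi, 2) ** 2
    a = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        grad = Phi.T @ (Phi @ a - x)          # gradient of the reconstruction term
        a = a - grad / L                       # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

# Toy data: 64-dim "patches" with a 2x overcomplete random dictionary.
n_pix, n_basis = 64, 128
Phi = rng.standard_normal((n_pix, n_basis))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm basis functions

x = rng.standard_normal(n_pix)
a = ista_infer(x, Phi)
print("active coefficients:", np.count_nonzero(a), "of", n_basis)
```

The soft-threshold step is what produces sparsity: small coefficients are driven exactly to zero, so only a few basis functions participate in the reconstruction.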
The eye has certain cells that are able to transduce color in light waves. These cells, called cones, are sensitive to the presence or absence of certain colors. The receptive field is the portion of visual space a cell responds to, and is divided into two regions: the center and the surround. The center and surround are excited or inhibited by different colors, thus these colors act as opponents. These cells detect either red and green light, or blue and yellow light (Atick, Li, and Redlich, 1992). Other cells called rods are sensitive to light and dark information (we may think of a gray-scale representation). The function of the cones and rods seems to imply that the brain learns three channels to represent visual information: a luminance channel (black versus white) and two color-difference channels (red versus green and blue versus yellow; Olshausen, 2014). The luminance channel has high resolution while the color-difference channels are significantly downsampled (Olshausen, 2014). The figure on the right depicts a similar representation, but with different color opponents (blue versus yellow and red versus yellow). This scheme was used early in color television mostly because it was more efficient than sending RGB information (Olshausen, 2014). RGB would require three times the information, while the low-resolution color-difference channels save significant space.
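The efficiency argument can be sketched numerically. Below, an RGB image is split into a luminance channel and two color-difference channels (using ITU-R BT.601 luminance weights, analogous to the analog-TV scheme described above, not a claim about the exact encoding referenced in the figure), the chroma channels are downsampled 2x in each dimension, and the total number of values is compared with raw RGB:

```python
import numpy as np

def rgb_to_luma_chroma(img):
    """Split an RGB image into luminance (Y) and two color-difference channels.
    Uses ITU-R BT.601 luminance weights as an illustrative choice."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance channel
    cb = b - y                               # blue-difference channel
    cr = r - y                               # red-difference channel
    return y, cb, cr

def downsample2x(ch):
    """Halve resolution by averaging 2x2 blocks (chroma subsampling)."""
    h, w = ch.shape
    return ch[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                # toy 64x64 RGB image
y, cb, cr = rgb_to_luma_chroma(img)
cb_small, cr_small = downsample2x(cb), downsample2x(cr)

full = img.size                              # 64*64*3 = 12288 values for RGB
saved = y.size + cb_small.size + cr_small.size
print(f"RGB values: {full}, Y + subsampled chroma: {saved}")  # 12288 vs 6144
```

With full-resolution luminance and 2x-subsampled chroma, the representation needs half the values of raw RGB while keeping the spatial detail where the visual system is most sensitive.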
Introduced by Pugh, Soltoggio, and Stanley (2014), the Real-Time Autoencoder-Augmented Hebbian Network (RAAHN) algorithm combines an autoencoder with a Hebbian neural network into a single neural network, where the autoencoder encodes the raw input for use by the Hebbian component. Pugh, Soltoggio, and Stanley (2014) showed that a RAAHN-controlled simulated agent can learn a control policy and avoid detours in a two-dimensional domain when using an initial autopilot phase (keeping the agent on the correct path for a certain amount of time). RAAHN uses what is called a "history buffer" to save experiences in a memory to train its autoencoder in a real-time context. Pugh, Soltoggio, and Stanley used a history buffer that saved its experiences in a queue. When the queue reaches capacity, it deletes the oldest experience to make room for the current experience.
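The queue-based history buffer described above can be sketched in a few lines. This is an illustrative implementation of the first-in-first-out eviction rule, not code from the RAAHN system itself:

```python
from collections import deque

class HistoryBuffer:
    """FIFO history buffer: when the buffer is full, the oldest experience
    is evicted to make room for the newest one."""

    def __init__(self, capacity):
        # deque with maxlen drops the oldest element automatically on append.
        self.buffer = deque(maxlen=capacity)

    def add(self, experience):
        self.buffer.append(experience)

    def contents(self):
        return list(self.buffer)

buf = HistoryBuffer(capacity=3)
for t in range(5):
    buf.add(f"obs_{t}")
print(buf.contents())  # ['obs_2', 'obs_3', 'obs_4']
```

After five additions to a capacity-3 buffer, only the three most recent experiences remain, which is exactly the behavior that lets the autoencoder train on a sliding window of recent input.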
My research has focused on advancing RAAHN. I developed an agent simulator (shown on the right) similar to the one used by Pugh, Soltoggio, and Stanley, and used it to test a new type of history buffer called a "novelty buffer." A novelty buffer saves the experiences that are the most different (novel) from each other. Along with Pugh and Stanley, I found that RAAHN can navigate a two-dimensional domain without the autopilot that was needed in previous research. We also found that RAAHN performs better than pure Hebbian learning when the sensory data is highly active.
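One simple way to realize the novelty-buffer idea is sketched below: score each stored experience by its distance to its nearest neighbor in the buffer, and let a newcomer replace the least novel member only if the newcomer is more novel. This is an illustrative interpretation of "keep the most mutually different experiences," not the exact rule used in our published work:

```python
import numpy as np

class NoveltyBuffer:
    """Sketch of a novelty buffer: retain the experiences that are most
    different from one another (illustrative nearest-neighbor criterion)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def _novelty(self, x, others):
        # Novelty = Euclidean distance to the nearest other experience.
        return min(np.linalg.norm(x - o) for o in others)

    def add(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.items) < self.capacity:
            self.items.append(x)
            return
        # Find the least novel current member (closest to its nearest neighbor).
        novelties = [
            self._novelty(v, [o for j, o in enumerate(self.items) if j != i])
            for i, v in enumerate(self.items)
        ]
        i_min = int(np.argmin(novelties))
        rest = [o for j, o in enumerate(self.items) if j != i_min]
        # Replace it only if the newcomer adds more diversity to the buffer.
        if self._novelty(x, rest) > novelties[i_min]:
            self.items[i_min] = x

buf = NoveltyBuffer(capacity=3)
for v in [[0, 0], [0, 0.1], [5, 5]]:
    buf.add(v)
buf.add([10, 0])   # far from everything, so it displaces a near-duplicate
print([list(v) for v in buf.items])
```

In the example, the buffer holds two near-duplicate experiences and one distant one; the novel newcomer displaces one of the near-duplicates, so the retained set spans more of the experience space than a FIFO queue would.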