
Joshua Bowren
B.S. Candidate
Computer Science
University of Central Florida

jbowren@cs.ucf.edu

About

Curriculum Vitae (CV)

I am an incoming graduate student at the University of Miami, where I will be working in the lab of Dr. Odelia Schwartz starting in the Fall of 2018. My main research interests are computational neuroscience, vision, and machine learning. Some central themes of my projects have included efficient coding, sparse coding, vector quantization methods, and information theory. My work is supported by a National Science Foundation (NSF) Graduate Research Fellowship.

I did my undergraduate studies in computer science at the University of Central Florida. During my time at UCF, I joined the Evolutionary Complexity Research Group headed by Dr. Kenneth Stanley (co-founder of Geometric Intelligence). We worked on his newly proposed Real-Time Autoencoder-Augmented Hebbian Network (RAAHN) algorithm for autonomous agent control. In 2016 I worked with Dr. R. Paul Wiegand of the Institute for Simulation and Training at UCF on brain-inspired image compression. During the summer of 2017 I worked at the Princeton Neuroscience Institute in the lab of Dr. Jonathan Pillow on hierarchical and non-negative sparse coding.


Positions

Institute for Simulation and Training at UCF

Research Assistant

Advisor: Dr. R. Paul Wiegand

August 2016 – Present

Image and Video Compression

Princeton Neuroscience Institute

Research Intern

Advisor: Dr. Jonathan Pillow

June 2017 – August 2017

Hierarchical and Non-Negative Sparse Coding

University of Central Florida

Research Assistant

Advisor: Dr. Kenneth Stanley

June 2014 – May 2017

Real-Time Autoencoder-Augmented Hebbian Network

Publications

(2016) Fully Autonomous Real-Time Autoencoder-Augmented Hebbian Learning through the Collection of Novel Experiences

Joshua A. Bowren, Justin K. Pugh, and Kenneth O. Stanley. In: Proceedings of the Fifteenth International Conference on the Synthesis and Simulation of Living Systems (ALIFE XV). Cambridge, MA: MIT Press, 2016. 8 pages.

Research

Sparse Coding

Sparse coding is a theory and method claiming that the early visual system codes sensory input with only a few active neurons in a population. The sparse coding method developed by Olshausen and Field (1996) seeks latent variables such that only a few need to be combined linearly to reconstruct some portion of the sensory input. Sparse coding is typically used to learn an overcomplete representation (more latent variables than sensory input dimensions), which has higher representational capacity. When sparse coding is applied to whitened natural image data (statistical dependencies reduced and variance normalized in all directions), basis functions emerge that resemble the receptive fields of cortical simple cells (shown on the right). These basis functions are well modeled by two-dimensional Gabor filters, so sparse coding is said to find Gabor filters (sometimes referred to as edges).
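
To make the scheme concrete, here is a minimal sketch in Python of the usual alternation between coefficient inference and basis learning, assuming whitened image patches stacked as columns of a matrix X. The function names, step sizes, and ISTA-style inference are illustrative choices, not the exact procedure of Olshausen and Field (1996).

    import numpy as np

    def infer_coefficients(X, Phi, lam=0.1, n_steps=100, lr=0.01):
        # Infer sparse coefficients A with Phi @ A ~ X using ISTA:
        # a gradient step on the reconstruction error followed by
        # soft thresholding (the L1 penalty from a Laplacian prior).
        A = np.zeros((Phi.shape[1], X.shape[1]))
        for _ in range(n_steps):
            grad = Phi.T @ (Phi @ A - X)
            A = A - lr * grad
            A = np.sign(A) * np.maximum(np.abs(A) - lr * lam, 0.0)
        return A

    def learn_basis(X, n_basis=192, n_iters=200, dict_lr=0.1):
        # Alternate coefficient inference with a gradient step on the
        # basis functions; choosing n_basis larger than the patch
        # dimension yields an overcomplete representation.
        rng = np.random.default_rng(0)
        Phi = rng.standard_normal((X.shape[0], n_basis))
        Phi /= np.linalg.norm(Phi, axis=0)
        for _ in range(n_iters):
            A = infer_coefficients(X, Phi)
            Phi += dict_lr * (X - Phi @ A) @ A.T / X.shape[1]
            Phi /= np.linalg.norm(Phi, axis=0)  # keep basis norms bounded
        return Phi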

Non-Negative and Hierarchical Sparse Coding

I performed sparse coding with an exponential prior (no negative values) on the coefficients for the latent variables and compared the results to sparse coding with a Laplacian prior (both positive and negative values). I then fit Gabor filters to the discovered basis functions. I found that the basis functions discovered by non-negative sparse coding have interesting structural differences compared to those from regular sparse coding. I have also spent some time trying to replicate the work of Karklin and Lewicki (2003) on hierarchical sparse coding, with the hope of extending the model to the time domain.
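
As a sketch of the non-negative variant: the exponential prior contributes a constant penalty per unit of coefficient mass, and the coefficients are projected back onto the non-negative orthant after each gradient step. The details below are illustrative assumptions, not the exact inference procedure I used.

    import numpy as np

    def infer_nonneg(X, Phi, lam=0.1, n_steps=100, lr=0.01):
        # Projected-gradient inference: the exponential prior adds a
        # constant gradient lam on each coefficient, and the projection
        # max(., 0) enforces the non-negativity constraint.
        A = np.zeros((Phi.shape[1], X.shape[1]))
        for _ in range(n_steps):
            grad = Phi.T @ (Phi @ A - X) + lam
            A = np.maximum(A - lr * grad, 0.0)
        return A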

Color Opponent Coding

The eye has certain cells that transduce the color content of light. These cells, called cones, are sensitive to the presence or absence of certain colors. A cell's receptive field is the portion of the visual field it responds to, and it is divided into two regions: the center and the surround. The center and surround are excited or inhibited by different colors, so these colors act as opponents. These cells detect either red and green light or blue and yellow light (Atick, Li, and Redlich, 1992). Other cells, called rods, are sensitive to light and dark information (we may think of a gray-scale representation). The function of the cones and rods seems to imply that the brain learns three channels to represent visual information: a luminance channel (black versus white) and two color-difference channels (red versus green and blue versus yellow; Olshausen, 2014). The luminance channel has high resolution while the color-difference channels are significantly downsampled (Olshausen, 2014). The figure on the right depicts a similar representation, but with different color opponents (blue versus yellow and red versus yellow). This encoding was used in early color televisions, mostly because it was more efficient than sending RGB information (Olshausen, 2014): RGB would require three full-resolution channels, while the low-resolution color-difference channels save significant space.
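
A rough illustration of this television-style encoding is sketched below in Python: it splits an RGB image into one full-resolution luminance channel and two downsampled color-difference channels. The BT.601 luminance weights and the subsampling factor are illustrative assumptions, not values from the figure or the cited work.

    import numpy as np

    def rgb_to_luma_chroma(img, chroma_factor=4):
        # img is an H x W x 3 float array in [0, 1].
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance channel
        u = b - y  # blue-versus-yellow color difference
        v = r - y  # red-versus-yellow color difference
        # Keep luminance at full resolution; subsample the two
        # color-difference channels, which saves most of the bits.
        u_low = u[::chroma_factor, ::chroma_factor]
        v_low = v[::chroma_factor, ::chroma_factor]
        return y, u_low, v_low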

My research consists of exploring this aspect of the visual system through psychophysical testing. I am interested in exactly how removing certain information from these channels affects our ability to notice a difference.

Real-Time Autoencoder-Augmented Hebbian Network

Introduced by Pugh, Soltoggio, and Stanley (2014), the Real-Time Autoencoder-Augmented Hebbian Network (RAAHN) algorithm combines an autoencoder with a Hebbian neural network into a single neural network, where the autoencoder encodes the raw input for use by the Hebbian component. Pugh, Soltoggio, and Stanley (2014) showed that a RAAHN-controlled simulated agent can learn a control policy and avoid detours in a two-dimensional domain when using an initial autopilot phase (keeping the agent on the correct path for a certain amount of time). RAAHN uses what is called a "history buffer" to save experiences in a memory and train its autoencoder in a real-time context. Pugh, Soltoggio, and Stanley used a history buffer that saved its experiences in a queue. When the queue reaches capacity, it deletes the oldest experience to make room for the current one.
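
Such a queue-based buffer is straightforward to sketch in Python; the capacity below is an illustrative placeholder, not a value from the original work.

    from collections import deque

    # Queue-style history buffer: appending at capacity automatically
    # evicts the oldest experience to make room for the current one.
    history_buffer = deque(maxlen=1000)  # capacity is illustrative

    def record(experience):
        history_buffer.append(experience)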

My research consists of advancing RAAHN. I developed an agent simulator (shown on the right) similar to that used by Pugh, Soltoggio, and Stanley, and used the simulator to test a new type of history buffer called a "novelty buffer." A novelty buffer saves the experiences that are the most different (novel) from each other. Along with Pugh and Stanley, I found that RAAHN can navigate a two-dimensional domain without the autopilot needed in previous work. We also found that RAAHN performs better than pure Hebbian learning when the sensory data is highly active.
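
A minimal sketch of the idea behind a novelty buffer is below: each stored experience is scored by its distance to its nearest neighbor in the buffer, and a new experience replaces the least novel entry only if it would itself be more novel. This is an illustrative reconstruction of the concept, not the exact implementation from our work.

    import numpy as np

    class NoveltyBuffer:
        # Keep the experiences that are most different (novel) from
        # one another. Novelty of an experience is its distance to the
        # nearest other stored experience.
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = []  # each item is a 1-D feature vector

        def _novelty(self, x, others):
            if not others:
                return float('inf')
            return min(np.linalg.norm(x - o) for o in others)

        def add(self, x):
            x = np.asarray(x, dtype=float)
            if len(self.items) < self.capacity:
                self.items.append(x)
                return
            # Score every stored experience against the rest of the buffer.
            scores = [self._novelty(it, self.items[:i] + self.items[i + 1:])
                      for i, it in enumerate(self.items)]
            weakest = int(np.argmin(scores))
            rest = self.items[:weakest] + self.items[weakest + 1:]
            if self._novelty(x, rest) > scores[weakest]:
                self.items[weakest] = x  # swap in the more novel experience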