Joshua Aaron Bowren Google Scholar Profile

Joshua Bowren
B.S. Candidate, Computer Science
University of Central Florida

Curriculum Vitae (CV)


Research Interests

Computational Neuroscience, Machine Learning

Sparse Coding


Sparse coding is a theory and method claiming that the early visual system codes sensory input with only a few neurons in a population at a time. The sparse coding method developed by Olshausen and Field (1996) seeks latent variables such that only a few are needed to combine linearly to reconstruct some portion of the sensory input. Sparse coding is typically used to learn an overcomplete representation (more latent variables than input dimensions) because of its higher representational capacity. When sparse coding is applied to whitened natural image data (statistical dependencies reduced and variance normalized in all directions), the basis functions found resemble the receptive fields of cortical simple cells (on the right). These basis functions can be modeled with two-dimensional Gabor filters, so sparse coding is said to find Gabor filters (sometimes referred to as edge detectors).
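The inference half of this procedure, finding the few coefficients that reconstruct the input given a fixed dictionary of basis functions, can be sketched with iterative soft thresholding (ISTA), the same inference method used in the sparsecoding repository listed under Research Related. This is a minimal NumPy version (function and variable names are my own, not from the original code):

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=100):
    """Infer sparse coefficients a minimizing ||x - D @ a||^2 / 2 + lam * ||a||_1.

    D: dictionary of basis functions (columns), x: input signal (e.g. an
    image patch), lam: sparsity penalty weight.
    """
    L = np.linalg.norm(D, ord=2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)       # gradient of the reconstruction error
        z = a - grad / L               # gradient descent step
        # soft threshold: shrink toward zero, zeroing out small coefficients
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a
```

Learning alternates this inference step with a gradient update of the dictionary `D` itself; the sketch above covers only inference.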

Non-Negative and Hierarchical Sparse Coding

I performed sparse coding with an exponential prior (no negative values) on the coefficients of the latent variables and compared the results to sparse coding with a Laplacian prior (both positive and negative values). I then fit Gabor filters to the discovered basis functions and found that the basis functions learned by non-negative sparse coding have interesting structural differences from those learned by regular sparse coding. I have also spent some time trying to replicate the work of Karklin and Lewicki (2003) on hierarchical sparse coding, with the hope of extending the model to the time domain.
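The practical difference between the two priors shows up in the thresholding step of inference: the Laplacian prior shrinks coefficients toward zero from both sides, while the exponential prior additionally rectifies them so no coefficient can go negative. A minimal sketch (function names mine; the actual experiments involve full sparse coding, not just this step):

```python
import numpy as np

def soft_threshold(z, t):
    """Laplacian prior: shrink toward zero; both signs allowed."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def nonneg_threshold(z, t):
    """Exponential prior: shrink and rectify; coefficients stay >= 0."""
    return np.maximum(z - t, 0.0)
```

Swapping one thresholding rule for the other inside an ISTA-style loop is all it takes to move between the two models.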

Color Opponency

The eye contains cells that are able to transduce the color of light. These cells, called cones, are sensitive to the presence or absence of certain colors. A cell's receptive field, the portion of the visual field it responds to, is divided into two regions: the center and the surround. The center and surround are excited or inhibited by different colors, so these colors act as opponents. These cells detect either red and green light or blue and yellow light (Atick, Li, and Redlich, 1992). Other cells called rods are sensitive to light and dark information (we may think of a gray-scale representation). The function of the cones and rods seems to imply that the brain learns three channels to represent visual information: a luminance channel (black versus white) and two color-difference channels (red versus green and blue versus yellow; Olshausen, 2014). The luminance channel has high resolution, while the color-difference channels are significantly downsampled (Olshausen, 2014). The figure on the right depicts a similar representation, but with different color opponents (blue versus yellow and red versus yellow). This scheme was used in early color television largely because it was more efficient than sending RGB information (Olshausen, 2014): RGB would require three times the information, while the low-resolution color-difference channels save significant space.
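The television-style encoding can be sketched as a luminance channel plus two downsampled color-difference channels. The weights below follow the ITU-R BT.601 luma convention used in broadcast video (an assumption for illustration; the biological channels, and the opponents in the figure, are only loosely analogous):

```python
import numpy as np

def to_opponent(rgb):
    """Split an H x W x 3 RGB image into luminance + two color differences."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance (BT.601 weights)
    cb = b - y                             # blue-minus-luminance difference
    cr = r - y                             # red-minus-luminance difference
    return y, cb, cr

def downsample2(c):
    """Halve a chroma channel's resolution by 2x2 averaging (4:2:0-style)."""
    return (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2]) / 4.0
```

With the chroma channels downsampled by two in each direction, the total is 1 + 0.25 + 0.25 = 1.5 channel-images instead of 3 for full RGB, which is where the bandwidth saving comes from.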

My research consists of exploring this aspect of the visual system through psychophysical testing. I am interested in exactly how removing certain information from these channels affects our ability to notice a difference.

Real-Time Autoencoder Augmented Hebbian Network (RAAHN)

Introduced by Pugh, Soltoggio, and Stanley (2014), the Real-Time Autoencoder-Augmented Hebbian Network (RAAHN) algorithm combines an autoencoder with a Hebbian neural network in a single network, where the autoencoder encodes the raw input for use by the Hebbian component. Pugh, Soltoggio, and Stanley (2014) showed that a RAAHN-controlled simulated agent can learn a control policy and avoid detours in a two-dimensional domain when given an initial autopilot phase (keeping the agent on the correct path for a certain amount of time). RAAHN uses what is called a "history buffer" to save experiences in memory for training its autoencoder in a real-time context. Pugh, Soltoggio, and Stanley used a history buffer that saved its experiences in a queue: when the queue reaches capacity, it deletes the oldest experience to make room for the current one.
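The queue-based history buffer is a plain bounded FIFO, which a few lines can capture (class and method names are mine, not from the RAAHN codebase):

```python
from collections import deque

class HistoryBuffer:
    """FIFO experience buffer: at capacity, the oldest experience is evicted."""

    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)  # deque drops the oldest on overflow

    def add(self, experience):
        self.buf.append(experience)

    def sample_all(self):
        """Return the stored experiences, oldest first, for autoencoder training."""
        return list(self.buf)
```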

My research consists of advancing RAAHN. I developed an agent simulator (shown on the right) similar to the one used by Pugh, Soltoggio, and Stanley, and used it to test a new type of history buffer called a "novelty buffer." A novelty buffer saves the experiences that are the most different (novel) from each other. Along with Pugh and Stanley, I found that RAAHN can navigate a two-dimensional domain without the autopilot that was needed in previous research. We also found that RAAHN performs better than pure Hebbian learning when the sensory data is highly active.
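One plausible reading of "saves the experiences that are the most different from each other" is the following sketch: score each stored experience by its distance to its nearest neighbor in the buffer, and admit a newcomer only if it would be more novel than the least-novel resident. This is an illustration under my own assumptions (Euclidean distance, nearest-neighbor novelty); the precise criterion is specified in the ALIFE XV paper:

```python
import numpy as np

class NoveltyBuffer:
    """Keeps the experiences most different from one another (a sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def _novelty(self, x, others):
        # Novelty = distance to the nearest other stored experience.
        return min(np.linalg.norm(x - o) for o in others)

    def add(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.items) < self.capacity:
            self.items.append(x)
            return
        # Find the least-novel resident (closest to one of its neighbors).
        scores = [self._novelty(v, [o for o in self.items if o is not v])
                  for v in self.items]
        worst = int(np.argmin(scores))
        rest = [o for i, o in enumerate(self.items) if i != worst]
        # Replace it only if the newcomer is more novel than it was.
        if self._novelty(x, rest) > scores[worst]:
            self.items[worst] = x
```

Compared to the FIFO queue, this keeps a spread of distinct experiences rather than only the most recent ones, which is what lets the autoencoder keep training on rare situations long after they occurred.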


Fully Autonomous Real-Time Autoencoder-Augmented Hebbian Learning through the Collection of Novel Experiences
Joshua A. Bowren, Justin K. Pugh, and Kenneth O. Stanley
In: Proceedings of the Fifteenth International Conference on the Synthesis and Simulation of Living Systems (ALIFE XV). Cambridge, MA: MIT Press, 2016. 8 pages.


Research Related

sparsecoding - A sparse coding implementation in C++ with OpenCV using iterative soft thresholding (ISTA) to infer sparse coefficients.

raahnsimulation - A simulator for testing and developing the Real-Time Autoencoder-Augmented Hebbian Network (RAAHN) algorithm.

libraahn - A RAAHN implementation.

SimpleMLP - An MLP (multilayer perceptron) implementation.


AsteroidShooting - An Asteroids clone for Android.


University of Central Florida Department of Computer Science:
UCF Computer Science

Evolutionary Complexity Research Group:
Evolutionary Complexity Research Group

Princeton Neuroscience Institute:
Princeton Neuroscience Institute

Pillow Lab:
Pillow Lab