Joshua Aaron Bowren Google Scholar Profile

Joshua Bowren
B.S. Candidate, Computer Science
University of Central Florida

Curriculum Vitae (CV)


Research Interests

Computational Neuroscience, Machine Learning

Theory of the Visual Cortex

Efficient Coding

It has been theorized that brains have evolved to efficiently process natural images by reducing redundancies (a process called whitening) while still maintaining meaningful representations of these images (Atick, Li, and Redlich, 1992). Noise reduction must also be applied to avoid losing the useful signal in images after their redundancies are reduced (Atick, Li, and Redlich, 1992).
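A minimal sketch of redundancy reduction can be written as PCA-style whitening of image patches: decorrelate the data and rescale each direction to unit variance. The random "patches" below stand in for natural image patches, and the small `epsilon` regularizer plays a role loosely analogous to the noise suppression mentioned above (all names here are illustrative).

```python
import numpy as np

# Toy "image patches": random data standing in for natural image patches.
rng = np.random.default_rng(0)
patches = rng.normal(size=(1000, 16))          # 1000 patches of 16 pixels each
patches -= patches.mean(axis=0)                # center the data

# Whitening: rotate into the eigenbasis of the covariance matrix and rescale
# each direction to unit variance, removing pairwise correlations.
cov = patches.T @ patches / len(patches)
eigvals, eigvecs = np.linalg.eigh(cov)
epsilon = 1e-5                                 # regularizer; damps noisy, low-variance directions
whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + epsilon)) @ eigvecs.T
white = patches @ whitener

# The covariance of the whitened data is (approximately) the identity.
print(np.allclose(white.T @ white / len(white), np.eye(16), atol=1e-2))
```

Raising `epsilon` suppresses the lowest-variance (and typically noisiest) directions more strongly, trading some decorrelation for noise robustness.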

Sparse Coding

Advanced by Olshausen and Field (1996) and based on findings in neuroscience, sparse coding is a theory and method describing part of what the brain does when encoding sensory inputs. The sparse coding method learns a feature set that is sparse (contains mostly zeros) and can be used to reconstruct sensory input with a dictionary matrix of weights. Sparse coding is typically used to learn an overcomplete feature set (more features than inputs) because of its higher representational capacity. Gabor-like filters (shown on the right) can be seen by visualizing the basis functions learned from natural images (when sparse coding is used in combination with noise reduction and whitening).
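One standard way to infer a sparse code for a fixed dictionary is iterative soft thresholding (ISTA); the sketch below uses a random overcomplete dictionary rather than one learned from natural images, so the sizes and the penalty weight `lam` are illustrative choices, not values from the papers cited above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_features = 64, 128                 # overcomplete: more features than inputs
D = rng.normal(size=(n_inputs, n_features))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary columns
x = rng.normal(size=n_inputs)                  # stand-in for an image patch

# ISTA: a gradient step on the reconstruction error, then soft thresholding,
# which drives most coefficients to exactly zero (the sparsity).
a = np.zeros(n_features)
step = 1.0 / np.linalg.norm(D, 2) ** 2
lam = 0.1                                      # sparsity penalty weight
for _ in range(200):
    a = a - step * D.T @ (D @ a - x)
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)

print("nonzero coefficients:", np.count_nonzero(a), "of", n_features)
print("reconstruction error:", np.linalg.norm(D @ a - x))
```

Learning the dictionary itself alternates this inference step with updates to `D`; the code above shows only the inference half.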

At the moment I am trying to employ sparse coding in conjunction with Hebbian learning for real-time control tasks, since Hebbian learning is better able to learn from sparse representations (Olshausen and Field, 2004).
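For concreteness, here is a classic Hebbian update in its stabilized form (Oja's rule), fed one-hot inputs as a stand-in for a sparse code; this is a generic textbook rule, not the specific learning setup used in my work. With a sparse input, only a few weights change per step, which is part of why sparse codes suit Hebbian learning.

```python
import numpy as np

rng = np.random.default_rng(3)

# Oja's rule: the Hebbian product y*x grows a weight when pre- and
# post-synaptic activity co-occur; the -y^2*w decay term keeps the
# weight vector bounded instead of growing without limit.
def oja_step(w, x, lr=0.05):
    y = w @ x                                  # post-synaptic activity
    return w + lr * y * (x - y * w)            # Hebbian term with decay

w = rng.normal(size=8)
for _ in range(500):
    x = np.zeros(8)
    x[rng.integers(8)] = 1.0                   # sparse, one-hot input
    w = oja_step(w, x)
print(np.linalg.norm(w))                       # the decay term drives ||w|| toward 1
```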

Color Opponency

The brain has photoreceptive retinal cells called cones that are sensitive to the presence or absence of certain colors in portions of their receptive fields called the center and the surround (Atick, Li, and Redlich, 1992). These cells detect either red and green light, or blue and yellow light (Atick, Li, and Redlich, 1992). The brain learns three different channels to represent images: a luminance channel (a grey-scale representation, black versus white) and two color-difference channels (red versus green and blue versus yellow; Olshausen, 2014). The luminance channel has very high bandwidth (resolution) while the color-difference channels are significantly downsampled (Olshausen, 2014). The figure on the right depicts a similar representation but with different color opponents (blue versus yellow and red versus yellow). This scheme was used early on in color television largely because it was much more efficient than sending RGB information (Olshausen, 2014): transmitting RGB directly would require three times the data, while downsampling the color-difference channels saves considerable space.
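A toy version of this luminance/color-difference split can be sketched in a few lines. The opponent transform below is a simplified illustration (not a broadcast standard such as YCbCr), but it shows where the savings come from: keep luminance at full resolution and downsample the two opponent channels.

```python
import numpy as np

rng = np.random.default_rng(2)
rgb = rng.random(size=(8, 8, 3))               # toy 8x8 RGB image

# Luminance plus two opponent channels (red vs. green, blue vs. yellow).
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
luma = (r + g + b) / 3.0                       # kept at full resolution
red_green = r - g
blue_yellow = b - (r + g) / 2.0                # yellow approximated as (r+g)/2

# Downsample the opponent channels 2x in each dimension by block averaging;
# the eye tolerates this far better than downsampled luminance.
def downsample(channel):
    return channel.reshape(4, 2, 4, 2).mean(axis=(1, 3))

rg_small = downsample(red_green)
by_small = downsample(blue_yellow)

full = rgb.size                                # 192 values for raw RGB
compressed = luma.size + rg_small.size + by_small.size
print(full, "->", compressed)                  # half the data for this 2x subsampling
```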

My research aims to utilize this aspect of visual perception to obtain more compact image compression.

Real-Time Autoencoder Augmented Hebbian Network (RAAHN)

Introduced by Pugh, Soltoggio, and Stanley (2014), the Real-Time Autoencoder-Augmented Hebbian Network (RAAHN) algorithm combines an autoencoder with a Hebbian neural network into a single neural network, where the autoencoder encodes the raw input for use by the Hebbian component. Pugh, Soltoggio, and Stanley (2014) showed that a RAAHN-controlled simulated agent can learn a control policy and avoid detours in a two-dimensional domain when given an initial autopilot phase (keeping the agent on the correct path for a certain amount of time). RAAHN uses what is called a "history buffer" to save experiences in a memory for training its autoencoder in a real-time context. Pugh, Soltoggio, and Stanley used a history buffer that saved its experiences in a queue: when the queue reaches capacity, it deletes the oldest experience to make room for the current one.
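The queue-based history buffer described above amounts to a fixed-capacity FIFO; a minimal sketch (class and method names are my own, not from the RAAHN code):

```python
from collections import deque

class HistoryBuffer:
    """FIFO experience memory in the style of the original RAAHN history
    buffer: when full, the oldest experience is evicted for the newest."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # deque drops the oldest item automatically

    def add(self, experience):
        self.buffer.append(experience)

    def contents(self):
        return list(self.buffer)

buf = HistoryBuffer(capacity=3)
for step in range(5):
    buf.add(f"experience-{step}")
print(buf.contents())                          # the two oldest experiences are gone
```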

My research consists of advancing RAAHN. I developed an agent simulator (shown on the right) similar to the one used by Pugh, Soltoggio, and Stanley, and used it to test a new type of history buffer called a "novelty buffer." A novelty buffer saves the experiences that are most different (most novel) from each other. Along with Pugh and Stanley, I found that RAAHN can navigate a two-dimensional domain without the autopilot needed in previous research. We also found that RAAHN performs better than pure Hebbian learning when the sensory data is highly active.
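The general idea of a novelty buffer can be sketched as follows; this is my illustrative interpretation using nearest-neighbor distance as the novelty measure, and the published buffer's details may differ.

```python
import numpy as np

class NoveltyBuffer:
    """Sketch of a novelty buffer: keep the experiences most different from
    each other, replacing the least novel entry when a more novel arrives."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def _novelty(self, x, others):
        # Novelty = distance to the nearest other stored experience.
        return min(np.linalg.norm(x - o) for o in others)

    def add(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.items) < self.capacity:
            self.items.append(x)
            return
        # Score every stored item and the candidate against the rest;
        # replace the least novel stored item if the candidate beats it.
        candidates = self.items + [x]
        def score(i):
            others = [c for j, c in enumerate(candidates) if j != i]
            return self._novelty(candidates[i], others)
        worst = min(range(len(self.items)), key=score)
        if score(len(self.items)) > score(worst):
            self.items[worst] = x
```

With a capacity of 3 and three near-duplicate experiences stored, adding a distant experience evicts one of the duplicates rather than being discarded, which is the behavior that distinguishes this buffer from a FIFO queue.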


Fully Autonomous Real-Time Autoencoder-Augmented Hebbian Learning through the Collection of Novel Experiences
Joshua A. Bowren, Justin K. Pugh, and Kenneth O. Stanley
In: Proceedings of the Fifteenth International Conference on the Synthesis and Simulation of Living Systems (ALIFE XV). Cambridge, MA: MIT Press, 2016. 8 pages.


Research Related

raahnsimulation - A simulator for testing and developing the Real-Time Autoencoder-Augmented Hebbian Network (RAAHN) algorithm

libraahn - A RAAHN implementation

SimpleMLP - An MLP (multilayer perceptron) implementation


AsteroidShooting - An Asteroids clone for Android


University of Central Florida Department of Computer Science:
UCF Computer Science

Evolutionary Complexity Research Group:
UCF Computer Science