Basically, I got tired of modifying the command line every time I wanted to test new values, so I spent a bit of time coding up a GUI to make further experiments easier.
The first algorithm that I’ve chosen to implement is a simple cosine difference between the n-gram vectors. This was the first method used in several of the papers that I’ve read, and it seems like a good benchmark.
Essentially, this method gives the similarity of two n-gram documents (either Documents or Authors) as an angle ranging from 0 (identical documents) to π/2 (completely different documents). Documents written by the same author should have the lowest values.
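The measure above can be sketched in a few lines of Python. The `ngrams` helper and the use of character n-grams here are illustrative assumptions, not the actual framework code:

```python
# A minimal sketch of the cosine-difference measure: the angle between
# two n-gram frequency vectors, 0 for identical and pi/2 for disjoint.
import math
from collections import Counter

def ngrams(text, n):
    """Frequency vector of character n-grams (a hypothetical helper)."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_angle(a, b):
    """Angle between two n-gram vectors a and b."""
    dot = sum(count * b[gram] for gram, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    if norm_a == 0 or norm_b == 0:
        return math.pi / 2
    # Clamp before arccos to guard against floating-point drift.
    return math.acos(min(1.0, dot / (norm_a * norm_b)))

same = cosine_angle(ngrams("the cat sat", 3), ngrams("the cat sat", 3))
diff = cosine_angle(ngrams("the cat sat", 3), ngrams("xyzqw", 3))
```

Identical texts share every n-gram, so `same` comes out as 0; texts with no n-grams in common have a dot product of 0, so `diff` is π/2.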
The first portion of the framework that needed to be coded was the ability to load documents. To reduce the load on the processor when a document is first loaded, only a minimal amount of computation is done up front; further computation is deferred until necessary.
To avoid duplicating work, the n-grams are stored using memoization. The basic idea is that when a function (in this case, a particular length of n-gram) is first requested, the calculation is done and the result is stored in memory. During any future calls, the cached result is directly returned, greatly increasing speed at the cost of memory. Luckily, modern computers have more than sufficient memory for the task at hand.
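The lazy loading and memoization described above might look something like the following sketch. The `Document` class interface here is an illustrative assumption, not the framework's actual code:

```python
# Sketch of deferred, memoized n-gram computation: nothing beyond the
# raw text is processed at load time, and each n-gram length is counted
# at most once, then served from an in-memory cache.
from collections import Counter

class Document:
    def __init__(self, text):
        self.text = text          # minimal work at load time
        self._ngram_cache = {}    # n -> Counter of n-gram frequencies

    def ngrams(self, n):
        """Return the n-gram frequency vector, computing it only once."""
        if n not in self._ngram_cache:
            self._ngram_cache[n] = Counter(
                self.text[i:i + n]
                for i in range(len(self.text) - n + 1))
        return self._ngram_cache[n]

doc = Document("abab")
first = doc.ngrams(2)   # computed on this first request
second = doc.ngrams(2)  # returned directly from the cache
```

The second call returns the very same cached object, trading memory for speed exactly as described.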
For my senior thesis at Rose-Hulman Institute of Technology, I am attempting to combine the fields of Computational Linguistics and Artificial Intelligence in a new and useful manner. Specifically, I am planning on making use of Artificial Neural Networks to enhance the performance of n-gram based document classification. Over the next few months, I will be updating this category with background information and further progress.
First, I’ll start with some basic background information.
I’ve added a few new features that I’ve been hoping to implement for a while now.