Comparing Performances of Different NLP Models on Quote Generation

“Comparing Performances of Different NLP Models on Quote Generation” is a project that I worked on during the Neural Systems lecture. The goal of the project was to explore different methods for generating quotes and to measure their performance in terms of accuracy and fluency.
To achieve this goal, we used four different approaches: Markov chains, recurrent neural networks (RNNs), a Transformer architecture, and the GPT-2 model in both raw and fine-tuned versions. We selected a quotes dataset and preprocessed it separately for each model to make sure the input was in a suitable format.
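To give a feel for the simplest of these baselines, below is a minimal sketch of a word-level Markov-chain quote generator. The helper names (`build_markov_chain`, `generate_quote`), the order-2 prefix length, and the two sample quotes are illustrative assumptions, not the project's actual code or data.

```python
import random
from collections import defaultdict

def build_markov_chain(quotes, order=2):
    """Map each `order`-word prefix to the words observed after it."""
    chain = defaultdict(list)
    for quote in quotes:
        words = quote.split()
        for i in range(len(words) - order):
            prefix = tuple(words[i:i + order])
            chain[prefix].append(words[i + order])
    return chain

def generate_quote(chain, max_words=20):
    """Start from a random prefix and sample successors until a dead end."""
    prefix = random.choice(list(chain.keys()))
    output = list(prefix)
    for _ in range(max_words - len(output)):
        successors = chain.get(tuple(output[-len(prefix):]))
        if not successors:
            break
        output.append(random.choice(successors))
    return " ".join(output)

# Placeholder quotes; the project used a full quotes dataset.
quotes = [
    "The only limit to our realization of tomorrow is our doubts of today",
    "Life is what happens when you are busy making other plans",
]
chain = build_markov_chain(quotes)
print(generate_quote(chain))
```

With a large corpus, such a chain produces locally plausible word sequences but no long-range coherence, which is exactly the weakness the neural models are meant to address.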
To measure the accuracy and fluency of the generated quotes, we used the ROUGE-1 and BLEU scores. These metrics allowed us to compare the performance of the different models and determine which model produced the most accurate and fluent quotes.
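The project's exact evaluation pipeline is not shown here, but as a hedged illustration, both metrics can be computed with the `nltk` and `rouge_score` packages (my choice of libraries, not necessarily the ones used in the project):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

# Hypothetical reference/generated pair for demonstration.
reference = "Life is what happens when you are busy making other plans"
generated = "Life is what happens while you make other plans"

# BLEU scores n-gram precision; smoothing avoids zero scores on short texts.
bleu = sentence_bleu(
    [reference.split()], generated.split(),
    smoothing_function=SmoothingFunction().method1,
)

# ROUGE-1 measures unigram overlap between reference and generated quote.
scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
rouge1 = scorer.score(reference, generated)["rouge1"].fmeasure

print(f"BLEU: {bleu:.3f}  ROUGE-1 F1: {rouge1:.3f}")
```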
After conducting our experiments, we found that the GPT-2 model performed best, especially when fine-tuned on the quotes dataset. However, we also found that the Transformer architecture performed almost as well as GPT-2.
Overall, this project gave us a better understanding of different NLP models and how they can be applied to quote generation. It also showed us the importance of selecting appropriate metrics to evaluate the performance of these models.