Update README_CS310.md

Avi Vajpeyi 2017-05-06 17:48:31 -04:00 committed by GitHub
parent 4671652c77
commit 453bf9a3bd


@@ -14,22 +14,24 @@ The files modified were:
**To create a music sentiment detection model:**
***-Store happy/sad songs in separate folders.***
***-Run XXX path/to/folder/with/midi/files***
(This generates a text file with the notes normalized to the C major and C minor scales, representing each note as a number.)
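The conversion script is referred to only as XXX above, so its exact behaviour isn't shown in this README. The sketch below illustrates the idea for a single MIDI file; the mido package, the most-common-pitch-class key heuristic, and the output file name notes.txt are all assumptions, not the repository's actual implementation.

```python
# Sketch only: the real conversion script (XXX above) is not reproduced in this README.
# Assumes the third-party `mido` package; the key-finding heuristic is illustrative.
import sys
import mido

def midi_to_normalized_numbers(midi_path):
    mid = mido.MidiFile(midi_path)
    notes = [msg.note for track in mid.tracks for msg in track
             if msg.type == "note_on" and msg.velocity > 0]
    if not notes:
        return []
    # Guess the tonic as the most frequent pitch class, then transpose so it lands on C.
    pitch_classes = [n % 12 for n in notes]
    tonic = max(set(pitch_classes), key=pitch_classes.count)
    return [(n - tonic) % 12 for n in notes]

if __name__ == "__main__":
    numbers = midi_to_normalized_numbers(sys.argv[1])
    # One song per line, notes written as space-separated numbers.
    with open("notes.txt", "a") as out:
        out.write(" ".join(str(n) for n in numbers) + "\n")
```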
***-Run createMusicalFeatureSets.py***
(This takes the text files and generates musical feature sets: the input data of note frequencies and a label for each set of note frequencies. These are then saved as a pickle, with the training and testing data separated.)
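createMusicalFeatureSets.py itself isn't reproduced here; the following is a minimal sketch of the step it performs, assuming one song per line in the note text files, a 12-bin pitch-class frequency vector as the features, and illustrative file names (happy_notes.txt, sad_notes.txt, musical_feature_sets.pickle).

```python
# Illustrative sketch of the feature-set step; file names and the 12-bin
# representation are assumptions, not the repository's exact format.
import pickle
import random

def song_to_features(line):
    notes = [int(tok) for tok in line.split()]
    counts = [0.0] * 12
    for n in notes:
        counts[n % 12] += 1
    total = sum(counts) or 1.0
    return [c / total for c in counts]          # note-frequency vector

def build_feature_sets(happy_file, sad_file, test_fraction=0.1):
    samples = []
    for path, label in ((happy_file, [1, 0]), (sad_file, [0, 1])):
        with open(path) as f:
            samples += [(song_to_features(line), label) for line in f if line.strip()]
    random.shuffle(samples)
    split = int(len(samples) * (1 - test_fraction))
    return samples[:split], samples[split:]

if __name__ == "__main__":
    train, test = build_feature_sets("happy_notes.txt", "sad_notes.txt")
    with open("musical_feature_sets.pickle", "wb") as f:
        pickle.dump((train, test), f)
```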
***-Run trainMusicNN.py***
(This takes the pickle created and passes the data through a NN. It saves the NN's weights as a model.)
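trainMusicNN.py isn't reproduced here either; as a stand-in, this sketch trains a small feed-forward network on the pickled feature sets with scikit-learn (the repository's script may well use a different framework) and saves the fitted model under an assumed file name.

```python
# Stand-in sketch: trainMusicNN.py may use a different NN framework;
# scikit-learn is used here only to keep the example self-contained.
import pickle
from sklearn.neural_network import MLPClassifier

with open("musical_feature_sets.pickle", "rb") as f:
    train, test = pickle.load(f)

X_train = [features for features, _ in train]
y_train = [label.index(1) for _, label in train]   # [1,0] -> 0 (happy), [0,1] -> 1 (sad)
X_test = [features for features, _ in test]
y_test = [label.index(1) for _, label in test]

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Persist the trained model so it can be reused for classification.
with open("music_sentiment_model.pickle", "wb") as f:
    pickle.dump(model, f)
```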
**To use the sentiment detection model to classify a song according to its sentiment:**
***-Run usingMusicNN.py***
(This takes the sentiment detection model and a MIDI file. It converts the MIDI file to a feature set of note frequencies, which it passes through the trained NN, and prints the classification of the set.)
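Again as a sketch only: the classification step could look like the following, reusing the same assumptions as the sketches above (mido for MIDI parsing, 12-bin note-frequency features, and a pickled scikit-learn model).

```python
# Sketch of the classification step; the model file name, the 12-bin features, and
# the use of mido/scikit-learn mirror the assumptions in the earlier sketches.
import sys
import pickle
import mido

def midi_to_features(midi_path):
    mid = mido.MidiFile(midi_path)
    notes = [msg.note for track in mid.tracks for msg in track
             if msg.type == "note_on" and msg.velocity > 0]
    pitch_classes = [n % 12 for n in notes]
    tonic = max(set(pitch_classes), key=pitch_classes.count)
    counts = [0.0] * 12
    for n in notes:
        counts[(n - tonic) % 12] += 1
    total = sum(counts) or 1.0
    return [c / total for c in counts]

if __name__ == "__main__":
    with open("music_sentiment_model.pickle", "rb") as f:
        model = pickle.load(f)
    features = midi_to_features(sys.argv[1])
    prediction = model.predict([features])[0]
    print("happy" if prediction == 0 else "sad")
```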