« Paranoia  
  Creating a slide show shouldn’t be this hard »

Shakespeare vs. Markov

By Jim, on February 19th, 2009

I’m developing some code to work with Markov models, and as an early test I created a model of Shakespeare’s sonnets.  Then I had the program generate some verse from the model.  For example:

If they in thou, thy love, that old,
Thought, that which yet do the heat did excess
My love concord never saw my woeful taste
At ransoms you for that so unprovident:
For thereby beauty show’st I am fled,
Althoughts to the dead, that care
With should thus shall by their fair,
Where too much my jade,
Her loves, my heart.

The Markov model generates text one character at a time.  The next character is selected based on the distribution of characters in the original source text, and also on the “order” of the model.  In an order 0 model, character selection is based entirely on the frequency of characters in the source text.  If the letter ‘e’ appears 12% of the time in the source text, then it will appear approximately 12% of the time in the output.  Output from an order 0 model is just a weighted random sampling of the source text.  For example:

vl
cbr
  oaa,stpr iyt eimhr  rttntnettohe  Tt
chShag  flsoTa
btecl)btatef
ebshenuAievien lffetNthfean
nnnfIhwste  aek oppe
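
An order 0 sampler takes only a few lines. This Python sketch (my own, not necessarily how my actual program is structured) just does a weighted random draw over the source's character frequencies:

```python
import random
from collections import Counter

def order0_sample(source: str, length: int) -> str:
    """Generate text by weighted random sampling of the source's characters."""
    counts = Counter(source)
    chars = list(counts.keys())
    weights = list(counts.values())
    # random.choices draws with replacement, weighted by frequency
    return "".join(random.choices(chars, weights=weights, k=length))
```

Each output character is independent of the ones before it, which is why the result is pure alphabet soup.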

An order 1 model bases character selection on the frequency of characters that follow the current character.  That is, given the current character, the next character selected is guaranteed to be one that followed that character in the source text.  The model cannot generate a bigram (sequence of two characters) that did not exist in the source.  So, for example, the sequence ‘qz’ can’t appear in the generated text unless it was also in the source.   Verse generated from an order 1 model is slightly better than what you’ll get from an order 0 model:

Myouivey inn f th peroeethe,
Anete pe!
Touss s o by thy!
Ar temye corste d w.
Be wh s t, n rverose d th ms t,
I s s Ifiththin ay past lll s farenoulofan r, t thonard areall ngain,
Butod,
Theas,
Foutea cthakedinore,
Yot te,
Lorowor d thoodoref pathe,
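
The same idea generalizes to any order: record which characters follow each length-k context in the source, then walk the table. Here's a rough Python sketch (function names are mine, illustrative only):

```python
import random
from collections import defaultdict

def build_model(source: str, order: int) -> dict:
    """Map each length-`order` context to the list of characters that follow it."""
    model = defaultdict(list)
    for i in range(len(source) - order):
        context = source[i:i + order]
        model[context].append(source[i + order])  # duplicates preserve frequency
    return model

def generate(model: dict, order: int, length: int) -> str:
    """Walk the model, always choosing a successor that appeared in the source."""
    out = random.choice(list(model.keys()))
    while len(out) < length:
        successors = model.get(out[-order:])
        if not successors:  # dead end: context only occurred at the end of the source
            break
        out += random.choice(successors)
    return out
```

Because every successor comes straight from the table, the generated text can never contain a (k+1)-character sequence that wasn't in the source — for order 1, that's the no-new-bigrams guarantee.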

Things begin to improve quickly as you move up in order of the model.  Order 2 bases character selection on the two preceding characters, and you start to see recognizable English words, although overall it looks more like some freaky foreign language:

Thich Must I winter,
The I do loodye ming to lin
So me lie stat so hou faut forse trom fore.
Whe be.
That, all shart and’s mor thee my swed thy head.
Whoun
And thawarit, a strand sabudge,
To ale whal wart,

Order 3 verse contains primarily recognizable English words.  At order 5 (the first sample above), almost all words are valid and you begin to see word pairs that existed in the original source text.  The resulting verse is reminiscent of a drunk and confused dramatics major trying to impress a group of girls with his knowledge of Shakespeare.

Order 10 verse is like Shakespeare in a blender:

Thou blind fool Love, what dost thou abuse,
The bounteous largess given thee to give?
Profitless usurer why dost thou that I do fawn upon,
Nay if thou live remembered
My deepest sense, how hard true sorrow hits,
And sealed false bonds of love doth well denote,
Love’s fire heats water, water cools not love
Which alters when it alteration finds,
Or bends with thee, wherever I abide,
The guilty goddess of my passion,
A woman’s face with nature’s truth,
And delves the parallels in beauty dyed?
Both truth and beauty should look so.

Here you sometimes see complete sonnet lines.  Other times you’ll see the first half of one line morph into the second half of another, or just pure randomness. 

Things get less interesting beyond order 10, and by order 15 the thing starts spitting out complete sonnets exactly as they were fed in.
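
One way to see why high orders collapse into memorization is to count how often a context is followed by more than one distinct character. A quick sketch (again my own, just to illustrate the point):

```python
from collections import defaultdict

def branching(source: str, order: int) -> float:
    """Fraction of length-`order` contexts with more than one possible successor."""
    successors = defaultdict(set)
    for i in range(len(source) - order):
        successors[source[i:i + order]].add(source[i + order])
    ambiguous = sum(1 for s in successors.values() if len(s) > 1)
    return ambiguous / len(successors)
```

As the order grows, most long contexts occur only once in a corpus the size of the sonnets, so the generator has no choice at each step and can only replay its input.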

I probably spent entirely too much time “testing” my program:  generating verse and laughing at the results.  But it was a welcome bit of humor in an otherwise somewhat dull day.

Odds 'n Ends, Programming

2 comments to Shakespeare vs. Markov

  • Ricky
    19 February 2009 at 21:18

    Wow, that is really cool.

    Any ideas as to how that could be applied to music? I’ll have to think about it for a bit… I may be taking a “Music Composition with Computers” class in the next couple semesters (of college), which is not in fact using music-making software, but writing software that generates music.

    Anyway, good job! I find it really cool that you can get a computer to be coherent :)

  • Darrin Chandler
    20 February 2009 at 11:36

    I suspected someone had applied Markov stuff to music. Sure enough. Top 3 Google results for “markov chain music”:

    peabody.sapp.org/class/dmp2/lab/markov1/

    en.wikipedia.org/wiki/Markov_chain

    ccrma-www.stanford.edu/~jacobliu/254report/
