Hangman Assistant

by Lawliet (Curate)
on Jul 12, 2009 at 02:50 UTC ( [id://779296] )

Edit: Updated code is in a reply below. Use it instead of the one I posted in this node. It probably works better.

Before explaining my cool use for Perl, I would like to tell you that Perl has been absent from my life for a few months now and I, being unhappy with this fact, have tried to become more involved and active with the language. Consider this paragraph the "me telling you that Perl has been absent from my life".

Moving on. If you are like me in the fact that you enjoy playing a nice game of hangman over the Internet, then you might like the cool use for Perl I have (re)found. You might not like it, though, if you dislike cheating (er, being assisted).

I have here a hangman helper, ready to find the word that you are looking for. Simply follow the simple instructions provided in the comments, and you will be on your simple way.

Take note of the Todo. The helper works fine without that feature implemented, but it is still something that should be done to give maximum help. (A sketch of one possible implementation follows the code.)

#!/usr/bin/perl
# Todo:
# Narrow possibilities by eliminating words with repeat letters when I have
# guessed one of the repeated letters but the letter I guessed is either not a
# repeat in the target word or is in a different position.
# Example:
# "rustlers" is a possible word, I guess 'r' and am presented with
# "r _ _ _ r _ _ _", meaning the word has two r's, just in the wrong spot,
# and therefore rustlers should be eliminated.

use warnings;
use strict;
use 5.010;

# Simple instructions:
# perl $0 "w _ r d" "previousfailedguesses"

say $ARGV[0];
my @word = split(/ /, $ARGV[0]);
my $guessed = $ARGV[1] ? join('|', split(//, $ARGV[1])) : "0";
say $guessed;

my %wordlist; # Hash of word-length arrays
open(WORD, '<', '/usr/share/dict/american-english') or die $!;
while (<WORD>) {
    chomp;
    next if /[^a-z]/; # Lazy way out~
    my @chars = split(//, $_);
    push @{$wordlist{$#chars}}, $_;
}
close WORD;

my @narrowed = @{$wordlist{$#word}}; # Narrowed possible answers by size

OUTER: for (my $i = 0; $i <= $#narrowed; $i++) {
    my @chars = split(//, $narrowed[$i]);

    # Narrowed by previous guesses
    if ($narrowed[$i] =~ /$guessed/) {
        splice(@narrowed, $i, 1);
        $i--; # Decrement counter now that word has been removed
        next OUTER;
    }

    # Narrowed by matching characters
    for (my $pos = 0; $pos <= $#word; $pos++) {
        next if $word[$pos] eq '_';
        if ($word[$pos] ne $chars[$pos]) {
            splice(@narrowed, $i, 1);
            $i--;
            next OUTER;
        }
    }
}

# %alphabet holds the number of words in which each letter occurs
# %seen holds the number of times a letter occurs in one word
my %alphabet;
$alphabet{$_} = 0 foreach ('a'..'z');
foreach my $word (@narrowed) {
    my %seen;
    $seen{$_} = 0 foreach ('a'..'z');
    my @chars = split(//, $word);
    foreach my $char (@chars) {
        $alphabet{$char}++ if $seen{$char} == 0; # Count each letter at most once per word
        $seen{$char}++;
    }
    undef %seen;
}

say $_ foreach @narrowed; # Word list
say sort { $alphabet{$b} <=> $alphabet{$a} } keys %alphabet; # Most common letter, including ones already guessed
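For illustration, here is a minimal sketch of that Todo as an extra pruning pass (my sketch, not part of the script above; it reuses @word and @narrowed as defined there). Once a letter has been guessed correctly, the board reveals every position where it occurs, so any candidate containing that letter at a position still shown as '_' can be dropped:

# Sketch only: drop candidates that contain an already-revealed letter
# at a position the board still shows as blank.
my %revealed = map { $_ => 1 } grep { $_ ne '_' } @word;
REPEAT: for (my $i = 0; $i <= $#narrowed; $i++) {
    my @chars = split(//, $narrowed[$i]);
    for my $pos (0 .. $#word) {
        next unless $word[$pos] eq '_';
        if ($revealed{ $chars[$pos] }) {
            splice(@narrowed, $i, 1);
            $i--; # Stay in place after the removal
            next REPEAT;
        }
    }
}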

I have provided an example to go with it (for free!) too.

I have left out the output of my first run (which suggested the letter 'e') because the list of possible words was extremely large.

Word: _ _ _ _ _ _ _ _

I guess 'e' because the hangman helper told me it is the most common letter in all the words.

Word: e _ _ e _ _ _ _

I once again enter it, along with any incorrect guesses (none so far) into the hangman helper.

$ perl hangman.pl "e _ _ e _ _ _ _" ""
e _ _ e _ _ _ _
0
eagerest eateries echelons eclectic edgeways edgewise effected egresses
embedded embezzle emceeing emperors endeared endeavor endemics enfeeble
engender enmeshed enmeshes ensemble ententes entering envelope envelops
especial essences esteemed ethereal eugenics eutectic exceeded excelled
excepted excerpts excesses expected expedite expelled expended expenses
expertly extended exterior external
ensdtcxrlpimagohbvwyufzjkq

The most common letter I have not guessed is 'n', so that is what I will guess.

Word: e _ _ e _ _ _ _
Incorrect: 'n'

Aha! A letter not in the word. Let me tell Mr. Hangman.pl this and see what he thinks.

$ perl hangman.pl "e _ _ e _ _ _ _" "n"
e _ _ e _ _ _ _
n
eagerest eateries eclectic edgeways edgewise effected egresses embedded
embezzle emperors especial esteemed ethereal eutectic exceeded excelled
excepted excerpts excesses expected expedite expelled expertly exterior
etdxscrpilagmwybouhfzjknvq

He thinks the next letter should be 't', which I guessed.

Word: e _ _ e _ t _ _
Incorrect: 'n'

$ perl hangman.pl "e _ _ e _ t _ _" "n"
e _ _ e _ t _ _
n
eclectic effected eutectic excepted expected expertly
tecxdpilryufwajkhgnvmsqbzo

'x' is the next letter I should guess.

Word: e _ _ e _ t _ _
Incorrect: 'n', 'x'

$ perl hangman.pl "e _ _ e _ t _ _" "nx"
e _ _ e _ t _ _
n|x
eclectic effected eutectic
teciduflwraxjykhgnvmspqbzo

I guess 'i' as it suggests.

Word: e _ _ e _ t i _
Incorrect: 'n', 'x'

$ perl hangman.pl "e _ _ e _ t i _" "nx"
e _ _ e _ t i _
n|x
eclectic eutectic
tieculwraxdjykhgfnvmspqbzo

It wants me to guess 'u', so I guessed 'u'.

Word: e _ _ e _ t i _
Incorrect: 'n', 'x', 'u'

$ perl hangman.pl "e _ _ e _ t i _" "nxu"
e _ _ e _ t i _
n|x|u
eclectic
tielcwraxdjyukhgfnvmspqbzo

The word was eclectic! With only three wrong guesses, Mr. hangman.pl and I won! Yay!

I don't mind occasionally having to reinvent a wheel; I don't even mind using someone's reinvented wheel occasionally. But it helps a lot if it is symmetric, contains no fewer than ten sides, and has the axle centered. I do tire of trapezoidal wheels with offset axles. --Joseph Newcomer

Re: Hangman Assistant
by blokhead (Monsignor) on Jul 12, 2009 at 03:47 UTC
    No comments on your code, but wouldn't the optimal strategy be to pick not the most common letter, but the letter that is closest to appearing in exactly half of the possible remaining words? That way you eliminate ~1/2 of the candidates in each turn.

    At least in your example, a letter never appears in more than half of the candidates, so the most frequent letter and the closest-to-half-half letter coincide. But imagine if you found out that the actual word contained Q. Then for sure the next most common letter will be a U; but guessing U and getting it right will not give you much new information. Aiming for half-half lets your correct and incorrect guesses both contribute information.
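    For concreteness, a minimal sketch of that selection rule (my illustration, not blokhead's code; it assumes %alphabet counts, per letter, the number of remaining candidate words containing it, and @narrowed holds the candidates, as in the OP's script):

    # Pick the letter whose word count is closest to half the candidates;
    # right or wrong, that guess eliminates roughly half of them.
    my $half = @narrowed / 2;
    my ($pick) =
        sort { abs($alphabet{$a} - $half) <=> abs($alphabet{$b} - $half) }
        grep { $alphabet{$_} > 0 } keys %alphabet; # Ignore letters in no remaining word
    say "guess: $pick";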

    blokhead

      You make a good point, and I made the following change to my program:

      # Find how close each letter is to half of the total word possibilities
      # to ensure maximum gain from every guess after sorting
      foreach my $occur (keys %alphabet) {
          $alphabet{$occur} = abs($#narrowed/2 - abs($alphabet{$occur} - $#narrowed + 1));
      }
      say $_ foreach @narrowed; # Word list
      say $#narrowed + 1;
      say sort { $alphabet{$a} <=> $alphabet{$b} } keys %alphabet; # Ascending: the letter closest to 0 eliminates the most words

      However, as I play, I notice that although it eliminates a lot of words very quickly, once few words remain it becomes useless: it tells me to guess letters that are not in any of the words, and saves letters that are in all of the words for last.

      Surely, when it comes to this point, the user can easily guess on his own, but that is not really the point. I want the program to be able to find the individual word in a small number of guesses. Perhaps I should use your method when there are more than, say, 10 possibilities, and mine from there on out.

      Example illustrating my point:

      I am kind of speaking to myself here, so this node is just publishing my own mental thoughts. Feel free to comment on or object to them.

      I don't mind occasionally having to reinvent a wheel; I don't even mind using someone's reinvented wheel occasionally. But it helps a lot if it is symmetric, contains no fewer than ten sides, and has the axle centered. I do tire of trapezoidal wheels with offset axles. --Joseph Newcomer

        it becomes useless, telling me to guess letters that are not in any of the words

        That would indicate a bug in the implementation, not a problem in the approach as you claim.

      blokhead,
      I disagree with your strategy based on my limited knowledge of the game hangman. It is my understanding that the game continues until you have either revealed the word or made too many incorrect guesses. I think the best strategy, then, would be to guess the letter that appears in the most candidate words. I too haven't looked at the OP's code, but my strategy doesn't necessarily pick the most popular letter, only the one that appears in the most words. If you are correct, you get new information (the position of that letter) and are no closer to losing the game. If you are wrong, you eliminate the most possible wrong answers. I will code this strategy up to see how it does in a follow-on post.

      Cheers - L~R

        Those were my thoughts too. However, if the user picked an extremely common word, with common letters (which makes it a common word), then shaving off 50% each time is more helpful than eliminating the 10% that don't have the recommended, most common letter.

        Of course this is all theoretical and should be tested before changes are made (which I failed to do initially).

        I don't mind occasionally having to reinvent a wheel; I don't even mind using someone's reinvented wheel occasionally. But it helps a lot if it is symmetric, contains no fewer than ten sides, and has the axle centered. I do tire of trapezoidal wheels with offset axles. --Joseph Newcomer

      blokhead,
      Some more thoughts:

      Your strategy is effectively a binary search. Assuming a perfect distribution (it is always possible to find a letter that is in exactly half of the remaining words), you should be able to guess (on average) up to twice as many times as you are allowed wrong guesses before losing the game. Of course, the last guess still has a 50% chance of being wrong, so I believe your strategy is guaranteed to work when the total number of initial candidates is 2^(2G - 1) or less, where G is the number of allowed wrong guesses. For instance, when G = 7 you are guaranteed (100%) to win when the initial number of candidates is 8,192 or less, and have a 50% chance when it is up to 16,384. For 'eclectic', there are 9,638 initial words in my word list, so only a 50% chance of success.

      My strategy is optimized not to guess wrong. After only 5 guesses (2 right and 3 wrong) it had narrowed the search space down to exactly 1 word. This is because I prune by position not just by presence of letter so even successful guesses on a popular letter can still effectively decrease the search space. Your approach would be improved with this strategy as well. I don't think the opportunities stop there.

      In a private /msg with Lawliet, the idea of finding the solution with the least number of guesses was proposed. I indicated that my strategy would change - but to what? A binary search is already optimal. There needs to be some balance, then, between improving your odds of guessing correctly beyond 50/50 while still effectively reducing your search space each time. This way you can survive long enough to win but not guess ad nauseam.

      I am kicking around some ideas where you still look for a very popular letter (say, one in 70% of the remaining words) whose positions would still split that 70% in half if you guessed right. The result would be that you guess wrong 30% of the time and remove 70% of your search space, or guess right 70% of the time and still reduce your search space by 65%. Does this sound viable to you?
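      One way to make that concrete (my sketch of such a weighting, not code from this thread; @narrowed is the non-empty candidate list from the OP's script): score each letter by the expected number of candidates that survive the guess, combining the wrong branch (words without the letter survive) and the right branch (at worst, the biggest position-pattern bucket survives), then pick the letter with the lowest score.

      sub expected_remaining {
          my ($letter, @candidates) = @_;
          my $n = @candidates;
          my %bucket;
          my $with = 0;
          for my $cand (@candidates) {
              my @pos = grep { substr($cand, $_, 1) eq $letter } 0 .. length($cand) - 1;
              if (@pos) {
                  $with++;
                  $bucket{ join '-', @pos }++; # Words sharing the same position pattern
              }
          }
          my $without = $n - $with;
          my ($worst) = sort { $b <=> $a } values %bucket; # Biggest bucket
          $worst //= 0;
          # P(wrong) * survivors-if-wrong + P(right) * worst-case survivors-if-right
          return ($without / $n) * $without + ($with / $n) * $worst;
      }

      my %score = map { $_ => expected_remaining($_, @narrowed) } 'a' .. 'z';
      my ($pick) = sort { $score{$a} <=> $score{$b} } keys %score;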

      Cheers - L~R

        That makes sense. I hadn't considered also taking into account the positions of the letters when you guess a correct letter. So in my example, when the word has a Q, and you guess U, you can still potentially get some useful information if many of the candidate words have U appearing in different places (i.e., you could distinguish QUEUE from QUEST). In this case, it would be "best" (if minimizing total # of guesses) to try to choose a letter whose absence / presence at all positions will partition the candidates into a large number of sets, each with size as small as possible.

        Update: expanding with an example: Suppose the word to be guessed matches S T _ _ _. Then suppose we are considering E for our next guess. All of the candidate words will then fall into one of these 8 classifications:

        S T _ _ _   (no E in the word)
        S T _ _ E   (E in last position ONLY)
        S T _ E _   (etc..)
        S T _ E E
        S T E _ _
        S T E _ E
        S T E E _
        S T E E E
        So we have 8 buckets, and we put all of the candidate words into the appropriate bucket. Suppose the bucket with the most words has n words in it. Then in the worst case, after guessing E, we will have n remaining candidates. So you can take n to be the worst-case score of guessing E. Now compute this score for every letter, and take the letter with the lowest score.

        Note that there might be other ways to score each possible next-letter guess. The number of non-empty buckets comes to mind as an "average case" measure (to be maximized). Again, this is all assuming we're minimizing the total number of guesses; that way, all of the possible outcomes (i.e., the guessed letter appears or doesn't appear in the word) are treated the same. To minimize the number of wrong guesses, you have to treat the "doesn't appear in the word" outcome differently and weight things in some better way.
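        A minimal sketch of that worst-case scoring (my reading of the description above, not blokhead's code; @narrowed is the candidate list from the OP's script):

        sub worst_case_score {
            my ($letter, @candidates) = @_;
            my %bucket;
            for my $cand (@candidates) {
                # Bucket key is the exact set of positions holding $letter;
                # the empty key collects words without the letter at all.
                my @pos = grep { substr($cand, $_, 1) eq $letter } 0 .. length($cand) - 1;
                $bucket{ join '-', @pos }++;
            }
            my ($max) = sort { $b <=> $a } values %bucket;
            return $max; # Size of the biggest bucket = worst-case survivors
        }

        my %score = map { $_ => worst_case_score($_, @narrowed) } 'a' .. 'z';
        my ($pick) = sort { $score{$a} <=> $score{$b} } keys %score;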

        blokhead

      The goal of hangman isn't to minimize the number of guesses (if it was, your approach would make sense), but to minimize the number of wrong guesses. Or, to be more specific, have no more than a set number of incorrect guesses.

      That actually means that the hardest hangman games are where you have to guess short words. Given that /usr/share/dict/words has 23 three-letter words ending in "at" (only "aat", "iat" and "uat" are missing), only luck determines whether you "win" guessing a word like "mat", "cat" or "hat".
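      (For what it's worth, that count is easy to check with a one-liner, assuming the same dictionary; the 23 comes from the claim above:)

      $ perl -lne 'print if /^[a-z]at$/' /usr/share/dict/words | wc -l
      23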

Re: Hangman Assistant
by jwkrahn (Abbot) on Jul 12, 2009 at 06:32 UTC
    while (<WORD>) {
        chomp;
        next if /[^a-z]/; # Lazy way out~
        my @chars = split(//, $_);
        push @{$wordlist{$#chars}}, $_;
    }

    You appear to have an off-by-one error: @chars contains the characters of the word in $_, so $#chars will be one less than the number of characters in $_. Perhaps you meant:

    while ( <WORD> ) {
        chomp;
        next if /[^a-z]/; # Lazy way out~
        push @{ $wordlist{ length() } }, $_;
    }

      I thought about that when writing it. I decided that it didn't really matter as long as I was consistent. Though I do like the look of yours more than mine.

      I don't mind occasionally having to reinvent a wheel; I don't even mind using someone's reinvented wheel occasionally. But it helps a lot if it is symmetric, contains no fewer than ten sides, and has the axle centered. I do tire of trapezoidal wheels with offset axles. --Joseph Newcomer

Re: Hangman Assistant
by Jorge_de_Burgos (Beadle) on Jul 12, 2009 at 11:22 UTC
    An easy internationalization step that won't trouble you Americans a lot ;). Just change

    /usr/share/dict/american-english

    to

    /usr/share/dict/words

    This will make your program use the default user dictionary instead of hardcoding the American English one. At least under Debian.

      Ah, good call.

      Now what to do about the next if /[^a-z]/;...

      I don't mind occasionally having to reinvent a wheel; I don't even mind using someone's reinvented wheel occasionally. But it helps a lot if it is symmetric, contains no fewer than ten sides, and has the axle centered. I do tire of trapezoidal wheels with offset axles. --Joseph Newcomer

        A good line for a Perl program aimed at running under various languages on modern Linux systems would be this:

        use encoding ':locale';

        That way, all calculations of string length (among others) are based on characters and not on bytes. For example, many Spanish characters under a UTF-8 system such as mine (Ubuntu 9.04) are coded in a 2-byte scheme, but this should be transparent to the programmer. So length('año') yields 4 before adding use encoding ':locale'; and yields 3 after adding it.
Re: Hangman Assistant
by Lawliet (Curate) on Jul 12, 2009 at 05:54 UTC

    Updated code:

    #!/usr/bin/perl
    # Todo:
    # Narrow possibilities by eliminating words with repeat letters when I have
    # guessed one of the repeated letters but the letter I guessed is either not a
    # repeat in the target word or is in a different position.
    # Example:
    # "rustlers" is a possible word, I guess 'r' and am presented with
    # "r _ _ _ r _ _ _", meaning the word has two r's, just in the wrong spot,
    # and therefore rustlers should be eliminated.

    use warnings;
    use strict;
    use 5.010;

    # Simple instructions:
    # perl $0 "w _ r d" "previousfailedguesses"

    say $ARGV[0];
    my @word = split(/ /, $ARGV[0]);
    my $guessed = $ARGV[1] ? join('|', split(//, $ARGV[1])) : "0";
    say $guessed;

    my %wordlist; # Hash of word-length arrays
    open(WORD, '<', '/usr/share/dict/words') or die $!; # Edited to /words as per request
    while (<WORD>) {
        chomp;
        next if /[^a-z]/; # Lazy way out~
        my @chars = split(//, $_);
        push @{$wordlist{$#chars}}, $_;
    }
    close WORD;

    my @narrowed = @{$wordlist{$#word}}; # Narrowed possible answers by size

    OUTER: for (my $i = 0; $i <= $#narrowed; $i++) {
        my @chars = split(//, $narrowed[$i]);

        # Narrowed by previous guesses
        if ($narrowed[$i] =~ /$guessed/) {
            splice(@narrowed, $i, 1);
            $i--; # Decrement counter now that word has been removed
            next OUTER;
        }

        # Narrowed by matching characters
        for (my $pos = 0; $pos <= $#word; $pos++) {
            next if $word[$pos] eq '_';
            if ($word[$pos] ne $chars[$pos]) {
                splice(@narrowed, $i, 1);
                $i--;
                next OUTER;
            }
        }
    }

    # %alphabet holds the number of words in which each letter occurs
    # %seen holds the number of times a letter occurs in one word
    my %alphabet;
    $alphabet{$_} = 0 foreach ('a'..'z');
    foreach my $word (@narrowed) {
        my %seen;
        $seen{$_} = 0 foreach ('a'..'z');
        my @chars = split(//, $word);
        foreach my $char (@chars) {
            $alphabet{$char}++ if $seen{$char} == 0; # Count each letter at most once per word
            $seen{$char}++;
        }
        undef %seen;
    }

    say $#narrowed + 1;
    if ($#narrowed <= 10) {
        say $_ foreach @narrowed; # Word list
        say sort { $alphabet{$b} <=> $alphabet{$a} } keys %alphabet; # Most common letter, including ones already guessed
    }
    else {
        # Find how close each letter is to half of the total word possibilities
        # to ensure maximum gain from every guess after sorting
        foreach my $occur (keys %alphabet) {
            $alphabet{$occur} = abs($#narrowed/2 - abs($alphabet{$occur} - $#narrowed + 1));
        }
        say sort { $alphabet{$a} <=> $alphabet{$b} } keys %alphabet;
    }

    Updated example:

    $ perl hangman.pl "_ _ _ _ _ _ _ _" ""
    _ _ _ _ _ _ _ _
    0
    10588
    rantislodecgupmhbyfkwvxzqj

    $ perl hangman.pl "_ _ _ _ _ _ _ _" "r"
    _ _ _ _ _ _ _ _
    r
    5252
    atlnsieodcgumhpbyfkwvxzqjr

    $ perl hangman.pl "_ _ _ _ _ _ _ _" "ra"
    _ _ _ _ _ _ _ _
    r|a
    2761
    tolnsdgueichpmbfykwvxzqjra

    $ perl hangman.pl "_ _ _ _ _ t _ _" "ra"
    _ _ _ _ _ t _ _
    r|a
    165
    isncdolupmghbfykvxejqwtraz

    $ perl hangman.pl "_ _ _ _ _ t i _" "ra"
    _ _ _ _ _ t i _
    r|a
    17
    slhpodungmxytieqbwrajkfvcz

    $ perl hangman.pl "_ _ _ _ _ t i _" "ras"
    _ _ _ _ _ t i _
    r|a|s
    9
    bulletin dietetic eclectic ecliptic elliptic eutectic hypnotic phonetic quixotic
    ticelpunohxdyqbwrajkgfvmsz

    $ perl hangman.pl "_ c _ _ c t i c" "ras"
    _ c _ _ c t i c
    r|a|s
    1
    eclectic
    tielcwraxdjyukhgfnvmspqbzo

    Same number of guesses as before, but a better way to get there (I think).

    I don't mind occasionally having to reinvent a wheel; I don't even mind using someone's reinvented wheel occasionally. But it helps a lot if it is symmetric, contains no fewer than ten sides, and has the axle centered. I do tire of trapezoidal wheels with offset axles. --Joseph Newcomer

Re: Hangman Assistant
by Limbic~Region (Chancellor) on Jul 13, 2009 at 01:55 UTC
    Lawliet,
    Here is some proof of concept code I wrote for an algorithm I outlined here. It is rather naive and I didn't spend a lot of time making it efficient but I figured I would share anyway.

    Cheers - L~R

      Thanks for the effort!

      I think your algorithm is the same as the one I initially used, suggesting the letter that is most common throughout the words. I think further investigation should be done, testing all the words and then seeing the data produced for each algorithm.

      I don't mind occasionally having to reinvent a wheel; I don't even mind using someone's reinvented wheel occasionally. But it helps a lot if it is symmetric, contains no fewer than ten sides, and has the axle centered. I do tire of trapezoidal wheels with offset axles. --Joseph Newcomer

        Lawliet,
        ...testing all the words and then seeing the data produced for each algorithm

        I suggested that very thing elsewhere in the thread. I can easily modify this code to produce only "lose" if it is unable to guess the word in the allowed number of wrong guesses, or "win <total_guesses> <wrong_guesses>" if it does win. I haven't looked at your code, so I think it would be easier for you to modify yours accordingly.

        Cheers - L~R

Re: Hangman Assistant
by ambrus (Abbot) on Jul 13, 2009 at 10:38 UTC

    Great tool!

    I did grep wordlists when I played hangsnoot, but I didn't have a sophisticated tool like this for it.

Re: Hangman Assistant
by Limbic~Region (Chancellor) on Jul 14, 2009 at 04:00 UTC
    Lawliet,
    I outlined a new approach here that may be worth pursuing if your goal is to minimize the total number of guesses. It may seem counterintuitive that there is a better approach than a binary search, but I outlined an example where you have a better than 50% chance of guessing correctly and a 100% chance of pruning more than 50% of the remaining candidates. I was starting to work on it when I realized I had missed an opportunity for pruning in my original. This now guesses 'eclectic' with only 2 incorrect guesses using a dictionary of 60,388 words (9,638 of them being the same length). Here is that modified code:

    As a result of the code above, I didn't bother finishing the weighted solution that considers the probability of guessing correctly and the percentage of words pruned (whether right or wrong). If you are interested, I can give you my code up to that point. Why did I lose interest?

    length  total   won     lost    percent_won     ave_wrong_guess_from_wins
    4       2360    1495    865     63.35           4.07
    5       4479    3732    747     83.32           3.67
    6       6954    6550    404     94.19           3.07
    7       9222    9031    191     97.93           2.45
    8       9639    9623    16      99.83           1.74
    9       8687    8687    0       100.00          1.19
    10      6999    6999    0       100.00          0.83
    11      4884    4884    0       100.00          0.58
    12      3135    3135    0       100.00          0.44
    13      1800    1800    0       100.00          0.30
    14      861     861     0       100.00          0.19
    15      413     413     0       100.00          0.16
    16      165     165     0       100.00          0.10
    17      83      83      0       100.00          0.11
    18      25      25      0       100.00          0.12
    19      9       9       0       100.00          0.00
    20      6       6       0       100.00          0.17
    21      1       1       0       100.00          0.00
    tot     59722   57499   2223    96.28           1.74

    Cheers - L~R

      Oohh, definitely interesting results! It seems you are still using the 'find most common letter' method of finding the next best letter though. Is the only thing you changed the way you prune the wordlist?

      A few tweaks here and there, the insertion of perl capabilities into a contact lens, and I think we may have a viable cheating method at our disposal!

      I don't mind occasionally having to reinvent a wheel; I don't even mind using someone's reinvented wheel occasionally. But it helps a lot if it is symmetric, contains no fewer than ten sides, and has the axle centered. I do tire of trapezoidal wheels with offset axles. --Joseph Newcomer

        Lawliet,
        Let me back into the pruning opportunity by explaining where I was going with the other (now abandoned) method. In my discussion with blokhead, I explained that it should be possible both to guarantee a better than 50/50 chance of guessing correctly and to prune the remaining candidates by more than 50%. I was setting out to do just that.

        Now let's assume we were going with 'eclectic'. There were 9,638 words in my dictionary with a length of 8. The letter that appeared in the most of those words was 'e', at 6,666. Note that I didn't count total occurrences of 'e', only 1 per word. This equated to 69%. What I set out to do was map the different ways the letter 'e' appeared across those 6,666 words. In the word 'eclectic' it appears in positions 1 and 4, whereas in 'envelope' it appears in positions 1, 4 and 8. After guessing and seeing which positions were filled in, I could eliminate words containing the letter 'e' even if they shared a common position, because they didn't share all positions. That last part (all positions) was the piece I hadn't considered in my previous exercise. So here is the mind-blowing part: the most common set of positions across the 6,666 words with the letter 'e' still had fewer than 1,600 possible words. This means that by selecting the letter 'e' (a 69% chance of being right) I will reduce the candidate list from 9,638 to fewer than 1,600 (and probably a lot further). It seemed pointless, then, to come up with weights for choosing a letter based on the probability of being correct and the degree to which the candidate list is reduced, because the "dumb" method was still doing a superb job.
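        A rough sketch of that all-positions pruning (illustrative only, not L~R's actual code; $letter stands for the correctly guessed letter, with @word and @narrowed as in the OP's script):

        # Keep only candidates whose complete set of $letter positions
        # matches what the board has revealed.
        my $board_at = join ',', grep { $word[$_] eq $letter } 0 .. $#word;
        @narrowed = grep {
            my $cand = $_;
            my $cand_at = join ',',
                grep { substr($cand, $_, 1) eq $letter } 0 .. length($cand) - 1;
            $cand_at eq $board_at; # Must match at *all* positions, not just some
        } @narrowed;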

        I do have one last revision to make. I choose the letter that appears in the most candidate words, but I don't break ties; currently the winner is just whatever sort happens to produce. I plan to add total count as a secondary, tie-breaking condition to see if that improves results. I should post something later today.

        Cheers - L~R
