In case no one has seen it yet, there is a Threes clone called 1024 available. You work in multiples of 2 instead, and there is a stationary "0" on the play field that you have to work around. Not as good as Threes, but worth checking out.
The developer of said clone is quite impudent. He made his clone deliberately free, quote: "FREE FOR A LIMITED TIME!! No need to pay for ThreesGames.", copy/pasted two lines almost verbatim from Threes' app description, and recreated several screenshots. This clone is as cocky as it gets. To quote Charles Caleb Colton: "Imitation is the sincerest form of flattery." Edit: I won't support this clone by downloading it to take a closer look, but so far I'd wager that it's a pretty dull experience. There is no 1/2-merging mechanic, and the dead 0 looks like an artificial drag added to increase the difficulty somewhat to compensate for that.
Had an odd experience that maybe someone can clarify for me. I have Threes! on both my iPhone and iPad mini, though usually I'm playing on the phone. As mentioned before, I was finding that my 6 lowest scores on the iPhone didn't seem to be replaced by higher scores. I do think some higher scores were replaced when I did better, but I never tracked them carefully. So, last night I was playing on the iPad, and after a while I checked and I was down to just 2 scores below 3000! I noticed that the iPad was asking me to enter my name each time, so I suspected that maybe it hadn't been updating. A closer check showed that it had, which left me with the question of whether turning on auto sign-in affects the registration of scores. Went back to my iPhone and was surprised to see I had to sign in each time. I haven't had much time to experiment further, but does anyone else have any experience with this?
Played some more on the iPhone and it wouldn't replace the two scores below 3000. Switched to the iPad and it did. Very strange.
Downloaded 1024 to have a go and see what it was like (posted this on that thread as well). Yes, it is a clone, but not entirely:
1 - A swipe moves you as far as you can go in the direction of the swipe, until you hit a wall or another card. If the card is the same, you merge.
2 - New cards appear randomly on the board, so planning is very difficult.
3 - There is no preview of what happens with your move.
4 - There is no GC at all, so there is no way to know if my score is any good!
5 - Scoring is the face value of the cards added together.
6 - There is nothing really to aim for whilst playing: no cards to 'unlock', and there doesn't seem to be any high score recorded. Basically I have no reason to play this again.
7 - ADS. I know it is free, but on my 4S they block the top portion of the screen so I can't see what's actually going on up there. I think it just shows the score, but who knows.
The best way of describing this game is that it's sort of like an alpha version of Threes! The basic gameplay is similar, but Threes! has it polished and (in my opinion) perfected. Personally, I can't see myself playing this game again.
Guess I'll have to now... Edit - got a 256 on go 2. That's a bit like getting a 1536 I guess, so this game seems to be easier. I also found the GC leaderboard, so there's that. After go two I enjoyed it more than the first go. I don't think this interest will last as long as with Threes, but I'll give it a couple more goes...
Hey all, I've been working on my own AI to play Threes. Here are the stats for how it plays so far, and I'm steadily working on improving it:
Move search depth: 6
Card count depth: 3
100 games completed! Total time: 02:13:08.2477053
Low score: 10041
Median score: 88653
High score: 774996
% of games with at least a 384: 100%
% of games with at least a 768: 98%
% of games with at least a 1536: 92%
% of games with at least a 3072: 27%
% of games with at least a 6144: 3%
I'm considering open-sourcing the code so that the whole community can help to improve it. Would this be of interest to anyone?
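For anyone wondering what bots like this typically do under the hood: search over your own four possible swipes, then average over where the next card can land (expectimax). Here's a heavily simplified Python sketch of that idea - not my actual code; it ignores bonus cards, the next-card hint, deck counting, and the rule that new cards enter on the swiped edge, and the heuristic is just a stand-in:

```python
def can_merge(a, b):
    # Threes merge rule: 1+2 make 3, equal tiles of 3 or more pair up;
    # 1+1 and 2+2 do not merge.
    return bool(a and b and (a + b == 3 or (a == b and a >= 3)))

def slide_left(row):
    # One swipe shifts each tile at most one cell toward the wall.
    row = row[:]
    for i in range(3):
        if row[i] == 0:                          # slide into the gap
            row[i], row[i + 1] = row[i + 1], 0
        elif can_merge(row[i], row[i + 1]):      # merge into the leading tile
            row[i], row[i + 1] = row[i] + row[i + 1], 0
    return row

def rotate_cw(board):
    return [list(r) for r in zip(*board[::-1])]

def move(board, direction):
    # Rotate so the swipe becomes "left", slide, rotate back.
    # Returns None if the swipe changes nothing (an illegal move).
    turns = {"left": 0, "down": 1, "right": 2, "up": 3}[direction]
    b = board
    for _ in range(turns):
        b = rotate_cw(b)
    b = [slide_left(r) for r in b]
    for _ in range((4 - turns) % 4):
        b = rotate_cw(b)
    return b if b != board else None

def heuristic(board):
    # Placeholder evaluation: just count empty cells.
    return sum(cell == 0 for row in board for cell in row)

def expectimax(board, depth):
    # Max over the player's swipes, expectation over the new card.
    if depth == 0:
        return heuristic(board)
    best = None
    for direction in ("left", "right", "up", "down"):
        after = move(board, direction)
        if after is None:
            continue
        empties = [(r, c) for r in range(4) for c in range(4) if after[r][c] == 0]
        if not empties:
            value = heuristic(after)
        else:
            total = 0.0
            for r, c in empties:
                for card in (1, 2, 3):           # uniform stand-in for the deck
                    after[r][c] = card
                    total += expectimax(after, depth - 1)
                    after[r][c] = 0
            value = total / (3 * len(empties))
        best = value if best is None else max(best, value)
    return best if best is not None else heuristic(board)   # no legal move left
```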
Holy fuzz, that's amazing!! Very impressive results. Open-sourcing is a great idea! You could build a teaching tool that analyzes real-world games and shows people their 10 "worst" moves (by some metric) and what the corresponding "best" moves are, according to the AI. I'm curious how different your AI's results would be if you ran it without counting cards, and/or without the '+' hints? Also, it would be interesting to go head-to-head against Nicola's bot in sort of a "Duplicate Scrabble" style and see where the bots differ the most. Incidentally, your high score (774996) in base 3 is 1110101002120, which means that the board had { 6144, 3072, 1536, 384, 96 } plus 69 points' worth of smaller numbers at the time of your demise. (Or possibly three 48's instead of a 96.) Wow! Edit: Base-3 conversion calculator here: http://www.unitconversion.org/numbers/base-10-to-base-3-conversion.html
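Second edit: for anyone who'd rather skip the calculator, here's a little Python snippet that reads the tiles straight out of a score, using the rule that a tile of face value 3 * 2^k is worth 3^(k+1) points:

```python
def tiles_from_score(score):
    # A tile of face value 3 * 2**k scores 3**(k + 1) points, so the
    # base-3 digits of the score count the tiles at each level (1s and
    # 2s are worth nothing).  This is the fewest-tiles reading; e.g.
    # three 48's contribute the same 729 points as one 96.
    tiles = []
    position = 0
    while score:
        score, digit = divmod(score, 3)
        if position >= 1:
            tiles += [3 * 2 ** (position - 1)] * digit
        position += 1
    return sorted(tiles, reverse=True)

print(tiles_from_score(774996))
# [6144, 3072, 1536, 384, 96, 12, 12, 6, 3, 3]
```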
Update on 1024 (and likely my last one unless there are specific requests): it has surprised me how much I am actually enjoying it. Not half as much as Threes, but still enough to keep trying. The algorithms are definitely less complex, making the game simpler. CloudPuff put it nicely, saying that 1024 is definitely aimed at a younger audience. I will keep playing it until I get a '1024', though...
Thanks! There's a surprising amount of information encoded in there. The lowest possible score in Threes is 12, corresponding to e.g. a diagonal wall of four 3's separating six 1's from six 2's. From there on up, every multiple of 3 is achievable in principle, until you reach (spoiler, for those who want to figure it out themselves): 39363. A score of 39363 (base 3: 1222222220) is impossible, because it would require 17 tiles: { 768, 384, 384, 192, 192, 96, 96, 48, 48, 24, 24, 12, 12, 6, 6, 3, 3 }, and the board only has 16 cells. In a nutshell, any score with a base-3 "popcount" (digit sum) of 17 or more is unachievable.
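If you want to check that claim mechanically, here's a quick snippet. Note it only tests the necessary conditions (a multiple of 3, at least the minimum score of 12, and a base-3 digit sum of at most 16); it doesn't prove that every passing score actually comes up in play:

```python
def min_tiles(score):
    # Base-3 digit sum = the fewest tiles whose point values sum to `score`
    # (splitting a tile into three of the level below only adds tiles).
    total = 0
    while score:
        score, digit = divmod(score, 3)
        total += digit
    return total

def achievable(score):
    return score >= 12 and score % 3 == 0 and min_tiles(score) <= 16

# First multiple of 3 at or above 12 that fails the tile-count test:
print(next(s for s in range(12, 10**6, 3) if not achievable(s)))  # 39363
```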
My nine-year-old loves it. She's had the occasional play of Threes, but because she didn't get high matches (and therefore a high score) she lost interest quickly. She's asked for 1024 on her device, though, and has a play almost every day; she thinks the animations and the symbols on the characters, such as the Minnie Mouse ears, are fun. I think because you progress quicker in 1024 she feels like she is achieving something, whereas in Threes it was rare for her to get a 96. My husband and I both love Threes the best, though.
Hi holyfuzz, good job with your bot. Good that you revived the thread, because I also have some new results from the latest version of my bot. I spent some time optimising the code to make it as fast as possible, following the suggestions that nneonneo made a few pages back (store the board state in a 64-bit integer, use lookup tables for the moves, etc.), and also tried to further improve the scoring function. I think I have now reached a plateau where increasing the search depth doesn't seem to give significantly better results.

First, let's clarify the terminology so that we are all on the same page. When I describe the search depth of my bot I think of it as "1+X+Y", where it looks at:
- 1 time: all possible moves of the player, followed by all possible placements of the next card (whose value is known, if it's not a bonus card; if it's a bonus card, try all possible values)
- X times: all possible moves of the player, followed by all possible placements of all possible normal cards (I ignore bonus cards at this stage), weighted by their probability according to the 12-card deck model
- Y times: all possible moves of the player, NOT followed by any new card

The scoring function used to evaluate the final position evaluates one row/column at a time and gives points for:
- empty cells (encourages leaving space on the board)
- cards that can be merged with a neighbor (encourages making pairs)
- cards that are twice the value of a neighbor (encourages making ladders)
and subtracts points for:
- cards that are lower than both neighbors (discourages making checkerboard patterns)
The exact point values were fine-tuned by running a simplified version of the simulation with random parameters and picking the ones that maximised the median score. The median score in this simplified version, however, was only around 8,000 points, so the scoring function might not be optimal for game states where there are many high cards. (A rough sketch of what this evaluation looks like in code follows at the end of this post.)

The best results I had with my bot were with 1+3+2 and 1+4+2 search depths. Given the above description, if I understand the usual game-theory terminology, "1+3+2" should be equivalent to "10 plies", and "1+4+2" to "12 plies" (though the last two plies are two consecutive moves from the player). I made one run of 100 games with both configurations; 1+3+2 took less than an hour to complete, while 1+4+2 took about 15 hours, so still less than 10 minutes per game on average. However, I should also note that there's some problem with the heat sink of my CPU, which overheats while running the simulation; if it didn't have to throttle down to avoid thermal damage, it might run a bit faster.

The results were:

          1+3+2     1+4+2
384:      100%      100%
768:      99%       100%
1536:     85%       88%
3072:     27%       34%
6144:     1%        5%
min:      11,808    29,553
median:   88,608    89,235
max:      563,034   733,119

Comparing my results (both scores and run time) with holyfuzz's, I'd say that his bot seems to be roughly equivalent to mine in the 1+3+2 configuration. The important thing to note here is that after increasing the search depth of my bot by one level, the run time increased 15+ times, but the median score didn't improve significantly (the small change could be just statistical variation), and there was no significant change in the number of 1536's and 3072's reached. The increase in the number of 6144's could be significant, but it comes at a high cost in terms of computing power needed.
So it seems that with my algorithm the sweet spot is 9 plies, and going above that is not particularly effective. I speculate that the randomness of the game becomes dominant at that point, making it very difficult to formulate a strategy based only on the next card.
One interesting piece of information to add: I have done many tests using the same random seed, so that different versions of the bot played with the same sequence of cards, and it was apparent that even when one version statistically played better than the other, it was very common for the worse bot to get the higher score in a few of the games. This is, I think, further proof that the randomness of the game is dominant, so that even a move which is statistically better than another can turn out to be worse if you are served the wrong cards. It might have been better if the game actively played against you. That way, you would know that you couldn't rely on luck and would need to make your moves accordingly. The way it is, all you can do is pick the statistically best move and hope.
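As promised, here's a rough Python sketch of the row/column evaluation I described above. It's list-based for readability (my real implementation works on the 64-bit board with lookup tables), and the weights are placeholders, not my tuned values:

```python
W_EMPTY, W_PAIR, W_LADDER, W_TRAPPED = 2.0, 1.0, 1.0, 2.0   # placeholder weights

def can_merge(a, b):
    # Threes merge rule: 1+2, or two equal cards of 3 and above.
    return bool(a and b and (a + b == 3 or (a == b and a >= 3)))

def score_line(line):
    # Score a single row or column (4 cells, 0 = empty).
    s = W_EMPTY * line.count(0)                  # empty cells: keep space open
    for a, b in zip(line, line[1:]):
        if can_merge(a, b):
            s += W_PAIR                          # mergeable neighbors: ready pairs
        elif a and b and (a == 2 * b or b == 2 * a):
            s += W_LADDER                        # doubled neighbors: ladders
    for left, mid, right in zip(line, line[1:], line[2:]):
        if left and mid and right and mid < left and mid < right:
            s -= W_TRAPPED                       # low card trapped between higher ones
    return s

def evaluate(board):
    lines = [list(row) for row in board] + [list(col) for col in zip(*board)]
    return sum(score_line(line) for line in lines)
```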
I've shown this game to some of my friends who have Androids. Are there plans to port this to the Play Store?