Nice job. Unfortunately I'd say (just like with AI generated crosswords) we're not quite at the state where we can automate this without any manual supervision.
Case in point:
The image clue from "Alexandrian who found area from mere sides" is more likely to confuse the player than help them.
Same with the image clue from "Fewer candles to count" - it looks to me like a woodcut maze and led me completely in the wrong direction.
If you're pre-generating these, I'd suggest using a more powerful image model such as NB Pro.
Fair points, thank you for the very thoughtful feedback! "Alexandrian who found area from mere sides" is a bad hint because it is fairly obscure, and really just a history trivia check. And the image is a bit random. If you are curious about the model's reasoning, you can view the explanation for a hint by clicking the ? button in the upper right corner after a word is completed.
As a bit of an explanation: I generated these before Nano Banana Pro came out, and at the time I made a large comparison grid across various image and text models. For this style, Qwen Image performed very well. LLM-wise I started with 5.1 and updated to 5.2. Of course, with the rate of model releases my choices are pretty much already obsolete... Expense is also a factor for a hobby project, and NB Pro is 7.5x more expensive than Qwen Image.