It’s nearly Thanksgiving, and you know what that means: stuffing your face to the point of a food coma. Well, that and entertaining all the unsavory relatives you manage to keep at bay the rest of the year.
For those of us lucky (unlucky?) enough to be tasked with fixing this year’s feast, there isn’t much time left to settle on a menu. It’s always a tough decision, what to prepare. So, here’s a clever idea: ask a chatbot.
Yes, yes, it’s been done before — turning to AI for desperate Thanksgiving assistance. (The New York Times tried ChatGPT recipes in 2022.) The results have generally been middling. But perhaps the prompts were the issue.
Curiosity got the best of me. So I asked two of the more popular chatbots, ChatGPT and Claude, for a Thanksgiving menu “so unique it’d wow positively any family member.” That’d do the trick, I reckoned.
Let me tell you, reader, the AI didn’t disappoint.
AI Recommendations for Thanksgiving Menu
ChatGPT recommended starting with a cocktail hour — fancy! — featuring whipped sweet potato and goat cheese crostini. Claude, meanwhile, shot for the moon, suggesting an appetizer — “butternut squash bisque with sage foam” — that certainly checked the “unique” box.
“Pumpkin soup shooters with cinnamon crème fraîche” sound good? That’s what ChatGPT proposed for the appetizer, followed by a main course of miso-butter turkey with a ginger-soy glaze. Claude, once again the wild card, suggested “lavender and fennel dry-brined turkey with a honey-thyme glaze.” The chatbot described it as an herbaceous departure from classic roast turkey. Indeed.
What about sides? ChatGPT recommended a chili-lime cornbread and pistachio risotto. Claude said to whip out the fine liquor for a “wild mushroom and chestnut stuffing with aged sherry.”
For the big finish, both chatbots would have you stick to staples: pie, cheesecake, and healthy scoops of ice cream. The twist? The ice cream is saffron-flavored, and the cheesecake is chai-spiced.
“This menu takes familiar Thanksgiving flavors and elevates them through unexpected ingredients, techniques, and combinations,” Claude writes of its creations. “Each dish tells a story and invites conversation, making the meal not just about food, but about shared experience and creativity.”
News
– OpenAI’s Sora leaks
– Amazon backs Anthropic, again
– AI app connectors
– OpenAI funds “AI morality” research
– YouTube gets AI backgrounds
– Brave adds AI chat
– Ai2 open sources Tülu 3
– Crusoe raises cash
– Threads tests AI summaries
Research Paper of the Week
DeepMind, Google’s AI research org, has developed a new AI system called AlphaQubit that it claims can accurately identify errors inside quantum computers.
Quantum computers are potentially far more powerful than conventional machines for particular workloads. But they’re also more prone to “noise,” the random errors that can corrupt a calculation’s results.
AlphaQubit identifies these errors so that they can be mitigated and corrected for, helping make quantum computers more reliable.
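To make the idea of “identifying errors” concrete, here’s a minimal toy sketch: a classical simulation of a three-bit repetition code, where parity checks flag that a bit flipped and a majority vote corrects it. This is purely illustrative and says nothing about how AlphaQubit itself works; DeepMind describes it as a machine-learning decoder for far more complex quantum codes.

```python
import random

random.seed(0)

def encode(bit):
    # Toy repetition code: store one logical bit in three physical bits
    return [bit, bit, bit]

def apply_noise(codeword, p):
    # Flip each physical bit independently with probability p (the "noise")
    return [b ^ (random.random() < p) for b in codeword]

def syndrome(codeword):
    # Parity checks between neighboring bits signal that something flipped --
    # this is the "identify the error" step a decoder has to perform
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def decode(codeword):
    # Majority vote recovers the logical bit as long as at most one bit flipped
    return int(sum(codeword) >= 2)

noisy = apply_noise(encode(0), p=0.3)
print("syndrome:", syndrome(noisy), "decoded bit:", decode(noisy))

# Redundancy plus decoding pushes the logical error rate below the physical one
p, trials = 0.05, 10_000
failures = sum(decode(apply_noise(encode(0), p)) != 0 for _ in range(trials))
print(f"logical error rate ~{failures / trials:.4f} vs. physical error rate {p}")
```

Real quantum error correction works on the same principle at much larger scale, and AlphaQubit’s job, per DeepMind, is the decoding step: reading streams of these syndrome-style measurements and inferring which errors actually occurred.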
Model of the Week
Runway, a startup building AI tools for content creators, has released a new image-generation model that the company claims offers better stylistic control than most.
Called Frames, the model is slowly rolling out to users of Runway’s Gen-3 Alpha video generator, and it can reliably create images that stay true to a particular aesthetic, Runway says.
Now, it’s worth noting that Runway may be playing fast and loose with copyright rules. A 404 Media report earlier this year suggested the company scraped YouTube footage from channels belonging to Disney and creators like MKBHD without permission to train its models.
When reached for comment, a Runway spokesperson declined to reveal the source of Frames’ training data.
Like many generative AI companies, Runway asserts its data-scraping practices are protected under fair use doctrine. That theory is being tested in a number of courtroom battles, including a class action suit filed against Runway and several of its art-generator rivals.
Grab Bag
Nvidia has unveiled a model it’s calling “the world’s most flexible sound machine.”
Dubbed Fugatto, the chip giant’s model can create a mix of music, voices, and sounds from a text description and a collection of audio files. For example, Fugatto can generate a music snippet from a prompt, add instruments to or remove them from a song, and change the accent or emotion in a vocal performance.
Trained on millions of openly licensed sounds and songs, Fugatto can even generate things that don’t exist in the real world, Nvidia claims.
“For instance, Fugatto can make a trumpet bark or a saxophone meow,” the company wrote in a blog post. “With fine-tuning and small amounts of singing data, researchers found it could handle tasks it was not [trained] on, like generating a high-quality singing voice from a text prompt.”
Nvidia hasn’t released Fugatto, fearing it might be misused. But according to Reuters, the company is considering how it might launch the model “responsibly” if it ever decides to make it available.
