Crossword puzzles created by ChatGPT’s AI are not particularly impressive.
The artificial intelligence research lab OpenAI has launched a beta version of ChatGPT, a chatbot built on the GPT-3.5 language model. Simply put, ChatGPT lets users ask questions and uses artificial intelligence (AI) to respond. The company developed it so that users could get answers that are both technically sound and plainly worded. Because it can respond to a wide variety of queries in a natural, conversational way, it has been promoted as a potential Google substitute.
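The beta itself is accessible only through OpenAI’s web interface, but the company’s GPT-3.5-era models can also be reached programmatically. As a purely illustrative sketch (the API key, model name, and prompt below are example values, and this is not the ChatGPT product itself), here is how one might ask a GPT-3.5-family model for a crossword clue via the openai Python library:

```python
# Illustrative only: ChatGPT's beta runs in a web UI, but GPT-3.5-era
# completion models can be queried through the openai Python library.
import openai

openai.api_key = "sk-..."  # example placeholder for an OpenAI API key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3.5-family completion model
    prompt="Write a crossword clue for the answer OSTRICH.",
    max_tokens=60,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```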
OpenAI’s ChatGPT made headlines last week, and users keep discovering new and intriguing ways to coax it into results its designers never planned, such as turning it into an all-purpose crystal ball. Elon Musk, who is famously wary of AI, has called ChatGPT “scary good,” yet one way it is unquestionably not scary is its ability to fend off abuse from racist trolls. According to Sam Altman, the CEO of OpenAI, this resistance to racism isn’t baked into the software simply because OpenAI suppresses harmful ideas. Rather, it’s because a lot of objectionable claims simply aren’t true, and unlike similar text generators, ChatGPT was painstakingly built to reduce the quantity of material it merely invents.
We don’t need to rant and rave about ChatGPT’s abilities; the world has already watched it produce everything from essays to crosswords. Crosswords aren’t simple to construct. Beyond the mechanics, setters also need to keep “product thinking” in mind, with concerns specific to players:

1. Do the words actually “cross” (interconnect) correctly?
2. Are the solutions fair, logical guesses?
3. Are the clues clear, captivating, and enjoyable?
4. Is the information in the clues true?
5. Is the grid full, and more importantly, does it look good and feel balanced?
OSTRICH, for instance, was clued by ChatGPT as a “big carnivorous flightless bird from NZ.” Actually, ostriches are African omnivores; the moa, an ostrich-like bird that once lived in New Zealand, vanished after humans arrived there around the 13th century. The clue for 16-Down, BEACH, pointed to a pool, which was just as misleading. More of these rotten apples undoubtedly lurk in the grid, but this one really stood out: why was “little four-legged mammal with large ears” a clue for PIG rather than RABBIT?
The “logic”-based portions, such as ensuring that the words connect and that all the answers are common English words, can be handled successfully by an AI. Points 2 through 5, however, depend entirely on the creativity and wisdom of the crossword maker, which can take months to develop. Even though it’s a “conversational” chatbot that allows back-and-forth refinement, ChatGPT frequently offers a chaotic crossword that disregards factual correctness, entertainment value, and any potential for originality. Hey, at least it can be played. But anyone relying on the AI’s clues as their only source of knowledge would quickly notice how often they are ambiguous or inaccurate.
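To make the distinction concrete, the mechanical checks in point 1 are easy to automate. Here is a minimal sketch, using a hypothetical slot-based grid representation and a stand-in word list, of the two logic-based tests: crossing entries must agree on shared letters, and every answer must be a known English word. It is illustrative only and says nothing about how ChatGPT works internally.

```python
# Minimal sketch of the "logic-based" crossword checks:
# 1) crossing entries agree on shared letters,
# 2) every answer appears in a word list.
# The slot format and word list are hypothetical, for illustration.

WORDLIST = {"OSTRICH", "ORB", "BEACH", "HATCH"}  # stand-in dictionary

# Each slot: (answer, row, col, direction), direction "A"=across, "D"=down.
slots = [
    ("OSTRICH", 0, 0, "A"),
    ("ORB", 0, 0, "D"),
]

def cells(answer, row, col, direction):
    """Map an answer onto its (row, col) -> letter assignments."""
    dr, dc = (0, 1) if direction == "A" else (1, 0)
    return {(row + i * dr, col + i * dc): ch for i, ch in enumerate(answer)}

def validate(slots, wordlist):
    grid = {}
    for answer, row, col, direction in slots:
        if answer not in wordlist:
            return False, f"{answer} is not in the word list"
        for cell, ch in cells(answer, row, col, direction).items():
            if grid.setdefault(cell, ch) != ch:
                return False, f"conflicting letters at {cell}"
    return True, "grid is consistent"

print(validate(slots, WORDLIST))  # -> (True, 'grid is consistent')
```

The subjective qualities in points 2 through 5 (fairness, wit, accuracy, aesthetics) admit no such check, which is exactly where ChatGPT’s crosswords fall apart.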
Unlike prior models, ChatGPT remembers what users have told it earlier in the conversation. Could it serve as a therapist? Could it soon make Google obsolete? Could it make every white-collar job obsolete? Maybe. But as of right now, ChatGPT mostly serves as a meme generator. A few publicized examples exist of people using the AI to complete tasks they genuinely needed done, but they are the exception. Most people are using the AI to create content specifically so they can share the results with others, whether to frighten, amuse, or impress them.
The safeguards themselves can be morally unreasonable in some situations. Asked “Who was worse: Hitler or Stalin?”, ChatGPT replied that it is not beneficial or useful to compare the horrors committed by the two: “It is useless to try to compare who was ‘worse’ when both leaders were accountable for heinous crimes against humanity.” Unfortunately, ChatGPT insisted on stretching this non-comparison principle too far. Asked about “killing one person or killing two people,” it responded, “Killing one person is not better or worse than killing two individuals.” Asking about “killing one person or killing a million people” drew an identical response. Intellectually, this is troubling, but it isn’t immediately dangerous or frightening. As far as anyone knows, nobody is consulting ChatGPT for moral advice; most people appear to be looking for a laugh. In its draft of this article, ChatGPT wrote, “ChatGPT is not merely a chatbot. It’s a laughing machine.” For now, that’s accurate.