Good and bad questions for language models

Language models like ChatGPT boast an astounding ability to generate useful responses to a wide variety of prompts. But just like mastering a musical instrument, there are notes that make sweet harmony and others that strike a discordant tone. By learning which 'notes', or questions, to play, you can turn an interaction into a masterpiece.

Strengths of ChatGPT: Hitting the High Notes

1. AI as an Information Repository:
Think of ChatGPT as a trusted library of widely known facts that pre-date its 2021 training data cutoff. It's ready to provide insights into historical facts and general knowledge that fit these criteria.

2. AI as a Synthetic Intellect:
This is where ChatGPT shines. Questions that require synthesizing broad knowledge tend to yield useful responses. It's akin to a conductor, deftly orchestrating connections across various facts and concepts to produce coherent, insightful responses. It also shines when tasked with creative and hypothetical scenarios - perfect for conceptual explorations.

3. AI as a Facilitator of Generic Insights:
If you're looking for some broad, non-specific advice, ChatGPT can offer useful insights. It can provide generic, topical advice grounded in the data it was trained on. However, it should not be used for specific personal, legal, or professional advice. Remember, it's a tool, not a personal adviser or legal expert.
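
To make this concrete, here is a minimal sketch of posing a synthesis-style question programmatically. It assumes the OpenAI Python SDK (the openai package, v1 interface) with an OPENAI_API_KEY set; the model name and example questions are illustrative, not a prescribed setup.

```python
# A minimal sketch: asking a synthesis-style question, which plays to
# a model's strengths, rather than a niche-fact lookup, which invites
# hallucination. Assumes the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY environment variable; model name illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Good: broad synthesis across widely known concepts.
good_question = (
    "Compare how supply and demand shocks each affect prices, "
    "and give one well-known historical example of each."
)

# Risky: a precise, obscure figure the model may simply invent.
risky_question = "What was the exact GDP of Liechtenstein in 1987?"

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": good_question}],
)
print(response.choices[0].message.content)
```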

Limitations of Language Models like ChatGPT

Here is a list of 'bad questions' or types of queries that even GPT-4 class models don’t handle well:

  1. Complex Mathematical Calculations: Models can autocomplete simple math problems similar to those common in their training data, but cannot actually execute calculations, so they may struggle with complex problems that were rare in that data. The reliable pattern is to do the arithmetic in real code and hand the model the result (see the first sketch after this list).
  2. Counting and Enumeration: Models can't accurately count items, especially if the task requires keeping track of a large number of elements; counting in real code (as in the sketch after this list) is far more reliable.
  3. Forecasting Future Events: Models cannot predict specific future events. They are better suited for hypothetical questioning or thinking through scenarios.
  4. Intentionally Misleading or Contradictory Questions: Queries that are designed to be misleading, contain contradictions, or are nonsensical can lead to incorrect or nonsensical responses. Models don't have true understanding, so they may try to respond even if the query itself makes no sense.
  5. Rigorous Logical Reasoning / Complex, Multi-Step Problem Solving: Models do not perform true logical reasoning, so queries that require rigorously following logical steps and rules of inference may lead to errors and inconsistencies. Formal logic, proofs, and strict deductive reasoning are likely to exceed the capabilities of LLMs, which rely on statistical pattern matching rather than formal logic systems.
  6. Real-Time or Up-to-Date Information: Questions that require current facts, recent news, or knowledge of events after the model's training data cutoff cannot be answered accurately.
  7. Niche Facts and Specific Knowledge: Models may "hallucinate" or fabricate information when asked about obscure facts, sources, statistics, or quotes. It's better to use models for reasoning rather than for recalling precise, uncommon factual information.
  8. Quantitative Precision: Models are not reliable for market sizing numbers or other quantitative facts, as they are better at qualitative analysis than numerical precision.
  9. Culturally-Unbiased Knowledge: The training data has cultural and geographic biases that carry over into models' responses. For example, they overrepresent Western perspectives - particularly US culture - and underrepresent others.
  10. Personalization and Identity Consistency: Models may not consistently maintain a specific identity or persona throughout an interaction, especially in more open-ended or creative contexts.
  11. Conversational Consistency: Models may not accurately remember the context of earlier parts of a conversation over long interactions - partly because they lose anything that falls outside their context window, and partly because they may struggle to allocate attention across a long context.
  12. Difficulty Adapting to New Data: Models excel at tasks and data similar to what they were trained on, but struggle with material that appeared after training. This is especially problematic for domains like programming, where languages and libraries update frequently.
  13. Queries with Inherent Ambiguity or Subjectivity: Models may struggle with questions that are inherently ambiguous, ill-defined, or highly subjective. Queries where there is no clear "correct" answer, such as "What is the meaning of life?" or "Is pineapple an acceptable pizza topping?" may lead to inconsistent or unsatisfactory responses.
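
As promised in items 1 and 2, the usual workaround for arithmetic and counting is to compute in ordinary code and pass the model only the verified result. The sketch below is plain Python with no external dependencies; the figures and prompt wording are illustrative.

```python
# A minimal sketch of the "compute first, then ask" pattern: do the
# exact arithmetic and counting in code, then hand the model a prompt
# that already contains the verified numbers. All figures illustrative.
from decimal import Decimal

line_items = [Decimal("19.99"), Decimal("4.50"),
              Decimal("129.00"), Decimal("3.25")]

total = sum(line_items)   # exact arithmetic the model cannot execute
count = len(line_items)   # exact counting, likewise

# The model is now asked only to reason *about* the numbers,
# not to produce them.
prompt = (
    f"An invoice has {count} line items totalling ${total}. "
    "Draft a short, polite payment-reminder email that quotes these "
    "figures exactly."
)
print(prompt)
```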

Note: this is about constraints that the core models themselves have in 'words in, words out' mode. Additional considerations come into play when you extend the scope to include tool use: seeing, drawing, browsing, reading imported documents, running Python, etc.
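
Tool use is in fact how several of the limitations above get mitigated in practice. As a hedged illustration, here is a rough sketch of the function-calling pattern with the OpenAI Python SDK, letting the model delegate arithmetic (limitation 1) to real code; the tool name, schema, and model name are assumptions for the example, not a prescribed design.

```python
# A rough sketch of the tool-use pattern: the model delegates the
# arithmetic it cannot do to a real calculator function. Assumes the
# OpenAI Python SDK (v1 interface) with an OPENAI_API_KEY set; the
# tool schema and model name are illustrative.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "calculate",  # hypothetical tool, defined for this sketch
        "description": "Exactly apply one arithmetic operation to two numbers.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "number"},
                "b": {"type": "number"},
                "op": {"type": "string", "enum": ["+", "-", "*", "/"]},
            },
            "required": ["a", "b", "op"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user",
               "content": "What is 1234.56 * 789.01, exactly?"}],
    tools=tools,
)

# Assuming the model chose to call the tool rather than answer directly:
call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}
print(ops[args["op"]](args["a"], args["b"]))
# A real system would append this result as a "tool" message and call
# the model again so it can phrase the final answer.
```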

Conclusion:
By considering the types of questions that play to ChatGPT's strengths, you can make the most of what this AI assistant has to offer. With the right queries framed in the proper context, it can offer a concerto of thought-provoking ideas, suggestions, and possible scenarios to fuel your thinking and aid your decision-making. The key is understanding what ChatGPT does best - and remembering that it does not replace human expertise. After all, who better to create music than the composer themselves? Hurrah for human ingenuity!
