This appendix details how researchers screened English-speaking participants, piloted survey designs, and compared Google and OpenAI language models (LaMDA, PaLM, Flan-PaLM, GPT-3.5, GPT-4) under different prompt conditions. Findings show consistent model performance across prompt types, with GPT-4 and Flan-PaLM outperforming others on reasoning and factual tasks. The study highlights methodological challenges, such as token biases and API differences, while emphasizing fair human-to-AI comparison.

How AI Models Are Evaluated for Language Understanding


Abstract and 1. Introduction

2. Related work

3. Materials and method

3.1 Procedures

3.2 Dataset creation

4. Results

4.1 ToM task performance

4.2 Factual task performance

4.3 Comparing performance on ToM and factual tasks and 4.4 Anchoring effect

5. Discussion

6. Limitations

7. Future research

8. Conclusion, Acknowledgments and Disclosure of Funding, and References

Appendix

A Appendix

A.1 English language screener

Our screening criteria for human participants were English as a first language and English as the most commonly used language. We did not use the concept or term ‘native speaker’ because it can be exclusionary and tends to conflate the true factor of interest (linguistic proficiency) with irrelevant factors such as socio-cultural identity, age, and the order or context of acquisition [Cheng et al., 2021]. We wanted participants for whom English was a first language, defined as the language, or one of the languages, that they first learnt as a child. This is because first languages are known to shape one’s understanding of grammar, and we wanted to minimise the chance that the grammatical complexity of our statements was a confounding factor in performance. We also wanted English to be the language participants use on a day-to-day basis, to screen out those who learnt English as a first language but now primarily use another language and may therefore be less fluent in English.

A.2 Human pilot study

We ran a pilot study through Qualtrics to validate the procedure and detect ambiguities, errors, and irregularities in the stimuli based on participant performance and explanations. We ran the unmoderated survey with 1,440 participants, which equates to 10 responses per statement. The median response time for the first 50 participants was one minute, suggesting that they were rushing, so we disabled the ‘Next’ button on the survey for 60 seconds for the remaining 1,390 participants to ensure they had time to read the story twice. We retained this timer for the final survey. We analysed participant performance on ToM and factual statements on a story-by-story basis and identified performance outliers. In total we observed 17 statements on which people performed relatively poorly. We re-examined the statements and used participants’ open-ended responses to identify ambiguities in either the story or the statement that could be responsible for the low performance. We found ambiguity in 15 out of 17 cases, and resolved it by changing the wording of 14 statements and 1 story. The remaining two cases of poor performance were an order-4 statement and an order-2 statement, for which open-ended responses suggested that participants had not paid attention. After reviewing both statements we did not make any changes.
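The outlier-screening step above can be sketched in a few lines. This is a hypothetical reconstruction, not the authors' code: the statement identifiers, response data, and accuracy threshold are all illustrative assumptions.

```python
# Hypothetical sketch of the pilot analysis: with ~10 responses per
# statement, compute per-statement accuracy and flag low performers
# for manual re-examination. Data and threshold are invented.

def flag_low_performers(responses, threshold=0.6):
    """responses maps statement id -> list of booleans (correct/incorrect).
    Returns (statement_id, accuracy) pairs below threshold, worst first."""
    flagged = []
    for statement_id, answers in responses.items():
        accuracy = sum(answers) / len(answers)
        if accuracy < threshold:
            flagged.append((statement_id, accuracy))
    return sorted(flagged, key=lambda pair: pair[1])

pilot = {
    "story3_tom_order2": [True] * 9 + [False],       # 90% correct
    "story7_fact_order1": [True] * 4 + [False] * 6,  # 40% correct
}
print(flag_low_performers(pilot))
```

Flagged statements would then be cross-checked against participants' open-ended explanations, as described above, before any wording changes were made.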

A.3 LLM prompt conditions

Table 3 presents the exact text that LLMs received in each of the 4 conditions we tested.

A.4 Details of the LLMs tested

LaMDA stands for Language Models for Dialog Applications, a family of Transformer-based neural models developed by Google and specialised for dialog in English [Thoppilan et al., 2022]. LaMDA

Table 3: The two screens presented to human participants at the beginning of the survey to screen for English language proficiency. Those who did not answer ‘Yes’ to the first question and ‘English’ to the second question were screened out of the survey.

is pre-trained on 1.56T words of public data and web text, including 1.12B dialogs from public forums (50% of the dataset), Colossal Clean Crawled Corpus data (12.5%), code documents (12.5%), Wikipedia English articles (12.5%), and a smaller proportion of non-English documents. It is optimised for safety and factual grounding. This study uses a version of LaMDA with 35B parameters without fine-tuning.

PaLM, which stands for Pathways Language Model, is a larger family of models developed by Google. It relies on the Pathways architecture, which enables training of a single model across thousands of accelerator chips more efficiently than LaMDA. We use a version of PaLM with 540B parameters trained on a smaller corpus of 780B words drawn from a mixture of social media conversations (50%), filtered webpages (27%), books in English (13%), and the code, Wikipedia, and news articles used to train both LaMDA and GLaM [Chowdhery et al., 2023]. We decided to evaluate PaLM’s capabilities as it has been shown to perform better than LaMDA and other large models on Winograd-style tasks, in-context comprehension tasks, common-sense reasoning tasks, and natural language inference tasks [Chowdhery et al., 2023].

Flan-PaLM is a version of PaLM 540B fine-tuned on a collection of over 1.8K natural language tasks phrased in a natural language instruction format, including the type of instructions used with human subjects detailed above [Chung et al., 2024]. Fine-tuning language models on datasets phrased as instructions has been shown to improve performance when instructions are provided, enabling the model to better understand tasks and reducing the need for few-shot exemplars [Ouyang et al., 2022, Sanh et al., 2021].

GPT-3.5 Turbo was developed by OpenAI and released in March 2022. GPT-3.5 Turbo is trained on a large database of text and code, the majority of which comes from Common Crawl, WebText2, two internet-based book collections called ‘Books1’ and ‘Books2’, and Wikipedia [Brown et al., 2020]. The parameter size of GPT-3.5 Turbo is undisclosed by OpenAI. This study uses the ‘GPT 3.5 Turbo Instruct’ model, which has training data up to September 2021, a context window of 4,096 tokens, and is fine-tuned for following instructions [Ouyang et al., 2022].

Table 4: Prompt and question condition combinations for LLMs

GPT-4 was developed by OpenAI and released in March 2023 [Achiam et al., 2023]. GPT-4 is multimodal: it was pre-trained on both image and text data, can take images and text as input, and can output text. As with GPT-3.5, the size of the model has not been made public, but estimates place it at approximately 1.7T parameters [McGuiness, 2023]. GPT-4 was pre-trained on third-party and public data, then underwent reinforcement learning from human feedback (RLHF) [Achiam et al., 2023]. OpenAI reported significant performance improvements between GPT-3.5 and GPT-4 on a range of professional and academic human benchmarks and on factuality and safety tasks, in particular based upon the addition of RLHF.

A.5 LLM procedure

The experimental design needed to be adapted slightly according to differences between the APIs. When testing LaMDA, PaLM and Flan-PaLM, the scoring APIs allowed us to send a list of tokens in natural language (maximum four per set) and receive the logprobs for those tokens only, as a subset of the entire vector of logprobs produced for all tokens. We did not need to set any additional parameters in order to retrieve the logprobs.

In order to retrieve log probabilities for our candidates from the GPT-3.5 and GPT-4 models, we first had to tokenise the candidates using the OpenAI tokenizer and then send those tokens within the ‘logit bias’ parameter to ensure those tokens appeared in the response. The logit bias has a range of -100 to 100: applying a negative logit bias to a token forces the LLM to downweight it, while applying a positive logit bias forces the LLM to upweight it. As a result, applying a logit bias of 100 to a candidate effectively ensures that it will appear in the output, so we applied a bias of 100 to all of our candidates. We also set the ‘max tokens’ parameter to 1 to restrict the GPT-3.5 and GPT-4 outputs to the length of the single tokens we had selected.
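A minimal sketch of this request setup might look as follows. The token ids and prompt are stand-ins for the output of the OpenAI tokenizer, and the helper only assembles the request parameters; the network call itself is shown as a comment so the sketch stays self-contained.

```python
# Hedged sketch of the logit-bias procedure described above. The `logit_bias`
# parameter maps token-id strings to a bias of 100 (forcing the candidates
# into the output), and max_tokens=1 restricts the response to one token.

def build_params(prompt, candidate_token_ids, model="gpt-3.5-turbo-instruct"):
    """Assemble completion parameters that force the candidate tokens."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": 1,
        "logprobs": 5,  # request per-token log probabilities
        "logit_bias": {str(tid): 100 for tid in candidate_token_ids},
    }

# Illustrative (not real) token ids for 'true'/'false'-type candidates:
params = build_params("Story... Statement... True or false?", [1904, 9562])
print(params["logit_bias"])
# The actual request would then be something like:
# client.completions.create(**params)
```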

The methodological differences between the Google and OpenAI models were inescapable given that LLM API development still lacks standardised formats or conventions. However, given that our metric is the relative probability of semantically equivalent tokens for ‘true’ vs semantically equivalent tokens for ‘false’, we do not believe these differences prohibit fair comparison between the performance of the models.
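The comparison metric can be sketched concretely. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the logprob values are invented, and two plausible decision rules are shown, one averaging over candidates and one taking the highest-scoring candidate.

```python
# Sketch of deciding 'true' vs 'false' from candidate log probabilities.
# true_logprobs / false_logprobs hold the logprobs of semantically
# equivalent candidates (e.g. 'true', 'True', 'yes' vs 'false', 'False', 'no').

def judge_average(true_logprobs, false_logprobs):
    """Compare the mean logprob of each candidate set."""
    avg_true = sum(true_logprobs) / len(true_logprobs)
    avg_false = sum(false_logprobs) / len(false_logprobs)
    return "true" if avg_true > avg_false else "false"

def judge_highscore(true_logprobs, false_logprobs):
    """Compare only the single best-scoring candidate on each side."""
    return "true" if max(true_logprobs) > max(false_logprobs) else "false"

print(judge_average([-0.4, -0.9, -2.1], [-3.0, -2.6, -4.2]))  # -> true
```

Because both rules depend only on the relative ordering of logprobs, they are insensitive to how each API exposes its probability vector, which is the basis of the fairness argument above.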


Table 5: Number of responses correct based on the average of true/false logprobs over candidates vs. the candidate high score

A.6 Additional analyses

A.6.1 Story and prompt conditions

According to an independent samples test of proportions, the LLM prompt conditions had no significant effect on the proportion of ToM or factual statements answered correctly by any of the LLMs.

LaMDA’s performance on ToM statements in the human prompt condition (M = 50%) was not significantly different from the simplified prompt condition (M = 50%), N = 280, Z = .000, p = 1.000, nor was its performance on factual statements in the human prompt condition (M = 50%) different from its performance in the simplified prompt condition (M = 50%), N = 280, Z = .000, p = 1.000.

PaLM’s performance on ToM statements in the human prompt condition (M = 58.6%) was not significantly different from the simplified prompt condition (M = 60%), N = 280, Z = −.243, p = .808, nor was its performance on factual statements in the human prompt condition (M = 57.9%) different from its performance in the simplified prompt condition (M = 61.4%), N = 280, Z = −.609, p = .542.

Flan-PaLM’s performance on ToM statements in the human prompt condition (M = 85%) was not significantly different from the simplified prompt condition (M = 83.6%), N = 280, Z = −.328, p = .743, nor was its performance on factual statements in the human prompt condition (M = 94.3%) different from its performance in the simplified prompt condition (M = 92.9%), N = 280, Z = −.487, p = .626.

GPT-3.5’s performance on ToM statements in the human prompt condition (M = 53.6%) was not significantly different from the simplified prompt condition (M = 51.4%), N = 280, Z = .359, p = .720, nor was its performance on factual statements in the human prompt condition (M = 62.1%) different from its performance in the simplified prompt condition (M = 63.6%), N = 280, Z = −.247, p = .805.

And finally, GPT-4’s performance on ToM statements in the human prompt condition (M = 87.9%) was not significantly different from the simplified prompt condition (M = 89.3%), N = 280, Z = −.376, p = .707, nor was its performance on factual statements in the human prompt condition (M = 94.3%) different from its performance in the simplified prompt condition (M = 94.3%), N = 280, Z = .000, p = 1.000.

According to an independent samples test of proportions, the story condition had no effect on the proportion of ToM statements answered correctly by humans (‘no story’ condition (M = 88.6%), ‘with story’ condition (M = 92.1%), N = 280, Z = −1.012, p = .311) or factual statements answered correctly (‘no story’ condition (M = 95.7%), ‘with story’ condition (M = 99.3%), N = 280, Z = −1.914, p = .056).
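The independent samples test of proportions used throughout can be reconstructed as a standard two-proportion z-test. The sketch below assumes two groups of 140 trials each (N = 280), consistent with the figures reported above; the counts are back-calculated from the reported percentages.

```python
# Two-proportion z-test (pooled), as used for the prompt-condition
# comparisons above. Counts reproduce the PaLM ToM comparison:
# 58.6% (82/140) vs 60.0% (84/140), reported as Z = -.243.
import math

def two_proportion_z(successes1, n1, successes2, n2):
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(82, 140, 84, 140)
print(round(z, 3))  # -> -0.243
```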

Table 6: Human, LaMDA, PaLM, Flan-PaLM, GPT-3.5 and GPT-4 performance on ToM tasks up to order 6. We report Wilson intervals (Wilson, 1927) in lieu of the traditional confidence interval (CI), as these have been shown to have better coverage than CIs based on the normal approximation (Newcombe, 1998). Note that Wilson intervals are asymmetric unless the point estimate is 0.5, and have the beneficial property of being bounded between 0 and 1. This is particularly relevant as some LLMs achieve extremely high accuracy at certain orders.

Table 7: LLM and human performance on ToM vs factual tasks, evaluated using an independent samples test of proportions


:::info Authors:

(1) Winnie Street, Google Research;

(2) John Oliver Siy, Google Research;

(3) Geoff Keeling, Google Research;

(4) Adrien Baranes, Google DeepMind;

(5) Benjamin Barnett, Google Research;

(6) Michael Mckibben, Applied Physics Lab, Johns Hopkins University;

(7) Tatenda Kanyere, Work done at Google Research via Harvey Nash;

(8) Alison Lentz, Google Research;

(9) Blaise Aguera y Arcas, Google Research;

(10) Robin I. M. Dunbar, Department of Experimental Psychology, University of Oxford (istreet@google.com).

:::


:::info This paper is available on arxiv under CC BY 4.0 license.

:::

