Fortune | FORTUNE, September 17
ChatGPT Usage Study Reveals User Behavior and AI's Future Impact

A study of ChatGPT usage shows the tool has a massive user base: 700 million weekly active users, or nearly 10% of the global population. The majority (70%) of messages are for non-work purposes, concentrated in practical guidance, writing help, and information seeking. In work contexts, ChatGPT is used most by highly educated professionals in high-paying occupations. The study also finds that work usage skews toward having ChatGPT "do" tasks rather than "ask" questions, highlighting AI's automation potential but raising concerns that AI could deepen economic inequality. In addition, the study looks at the disappearance of the gender gap in ChatGPT usage and at how ChatGPT differs from other AI tools (such as Claude.ai) on coding tasks.

👥 **ChatGPT's broad user base and usage patterns**: The study shows ChatGPT has some 700 million weekly active users, roughly 10% of the global population. Of their messages, 70% are for non-work purposes, concentrated in practical guidance, writing help, and information seeking. Within the practical-guidance category, more than a third of messages relate to teaching or tutoring; notably, nearly half of all messages come from young users (under 26), hinting at the tool's potential impact on education.

💼 **Professionals and AI at work**: In work settings, ChatGPT is used mainly by highly educated, high-earning professionals. Unlike the "asking" pattern of non-work use, work users more often ask ChatGPT to "do" specific tasks, which make up 56% of work-related messages. This pattern of "automation" rather than "augmentation" could have far-reaching effects on the job market and may deepen economic inequality, widening the gap between highly skilled workers and everyone else.

💡 **AI's potential socioeconomic impact and trajectory**: The research outlines two visions of AI's future: one in which AI acts as a leveling force, helping less-skilled workers do more; another in which AI deepens economic inequality, giving the most skilled an even greater edge. The ChatGPT data appears to point toward the latter. The study also finds that although AI companies compete fiercely on coding ability, coding is a relatively small share of consumer ChatGPT use (4.2%), whereas it is far more prominent on Claude.ai (39%).

📈 **Other observations and data insights**: The research also surfaces some interesting findings: for example, the gender gap among ChatGPT users has essentially disappeared. And although AI companies heavily promote their models' coding abilities, users at work lean toward having AI automate tasks rather than provide decision support or expert advice, which could have major implications for economic structure and employment.

The ChatGPT study confirmed the huge reach OpenAI has, with 700 million active weekly users, or almost 10% of the global population, exchanging some 18 billion messages with the chatbot every week. And the majority of those messages—70%—were classified by the study’s authors as “non-work” queries. Of these, about 80% of the messages fell into three big categories: practical guidance, writing help, and seeking information. Within practical guidance, teaching or tutoring queries accounted for more than a third of messages. How many of these were students using ChatGPT to “help” with homework or class assignments was unclear—but ChatGPT has a young user base, with nearly half of all messages coming from those under the age of 26.
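
For a rough sense of scale, here is the back-of-envelope arithmetic implied by those headline figures (a sketch only: the per-user rate and category volumes below are averages derived from the study's reported totals, not numbers the study states directly):

```python
# Back-of-envelope arithmetic from the study's headline figures.
weekly_users = 700e6     # ~700 million weekly active users
weekly_messages = 18e9   # ~18 billion messages exchanged per week
non_work_share = 0.70    # share of messages classified as non-work
top3_share = 0.80        # share of non-work messages in the top three categories

messages_per_user = weekly_messages / weekly_users    # ~25.7 messages per user per week
non_work_messages = weekly_messages * non_work_share  # ~12.6 billion per week
top3_messages = non_work_messages * top3_share        # ~10.1 billion per week

print(f"Implied messages per user per week: {messages_per_user:.1f}")
print(f"Non-work messages per week: {non_work_messages / 1e9:.1f} billion")
print(f"Top-three-category messages per week: {top3_messages / 1e9:.1f} billion")
```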

Educated professionals more likely to be using ChatGPT for work

When ChatGPT was used for work, it was most likely to be used by highly educated users working in high-paid professions. While this is perhaps not surprising, it is a bit depressing.

There is a vision of our AI future, one which I outline in my book, Mastering AI, in which the technology becomes a leveling force. With the help of AI copilots and decision-support systems, people with fewer qualifications or experience could take on some of the work currently performed by more skilled and experienced professionals. They might not earn as much as those more qualified individuals, but they could still earn a good middle-class income. To some extent, this already happens in law, with paralegals, and in medicine, with nurse practitioners. But this model could be extended to other professions, such as accounting and finance—democratizing access to professional advice and helping shore up the middle class.

There’s another vision of our AI future, however, where the technology only makes economic inequality worse, with the most educated and credentialed using AI to become even more productive, while everyone else falls farther behind. I fear that, as this ChatGPT data suggests, that’s the way things may be heading.

While there’s been a lot of discussion lately of the benefits and dangers of using chatbots for companionship, or even romance, OpenAI’s research showed messages classified as being about relationships constituted just 2.4% of messages, personal reflection 1.9%, and role-playing and games 0.4%.

Interestingly, given how fiercely all the leading AI companies—including OpenAI—compete with one another on coding benchmarks and tout the coding performance of their models, coding was a relatively small use case for ChatGPT, constituting just 4.2% of the messages the researchers analyzed. (One big caveat here is that the research only looked at the consumer versions of ChatGPT—its free, premium, and pro tiers—but not usage of the OpenAI API or enterprise ChatGPT subscriptions, which is how many business users may access ChatGPT for professional use cases.)

Meanwhile, coding constituted 39% of Claude.ai’s usage. Software development tasks also dominated the use of Anthropic’s API.

Automation rather than augmentation dominates work usage

Read together, both studies also hinted at an intriguing contrast in how people were using chatbots in work contexts, compared to more personal ones.

ChatGPT messages classified as non-work related were more about what the researchers called “asking”—which involved seeking information or advice—as opposed to “doing” prompts, where the chatbot was asked to complete a task for the user. But in work-related messages, “doing” prompts were more common, constituting 56% of message traffic.

For Anthropic, where work-related messages seemed more dominant to begin with, there was a clear trend for users to ask the chatbot to complete tasks for them; in fact, the majority of Anthropic's API usage (some 77%) was classified as automation requests. Anthropic's research also indicated that many of the tasks most popular with business users of Claude were also those that were most expensive to run, suggesting that companies are probably finding—despite some other survey and anecdotal evidence to the contrary—that automating tasks with AI is indeed worth the money.

The studies also indicate that in business contexts people increasingly want AI models to automate tasks for them, not necessarily offer decision support or expert advice. This could have significant implications for economies as a whole: If companies mostly use the technology to automate tasks, the negative effect of AI on jobs is likely to be far greater.

There were lots of other interesting tidbits in the two studies. For instance, whereas previous usage data had shown a significant gender gap, with men far more likely than women to be using ChatGPT, the new study shows that gap has now disappeared. Anthropic’s research shows interesting geographic divergence in Claude usage too—usage is concentrated on the coasts, which is to be expected, but there are also hotspots in Utah and Nevada.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

FORTUNE ON AI

China says Nvidia violated antitrust laws as it ratchets up pressure ahead of U.S. trade talks—by Jeremy Kahn

AI chatbots are harming young people. Regulators are scrambling to keep up.—by Beatrice Nolan

OpenAI’s deal with Microsoft could pave the way for a potential IPO—by Beatrice Nolan

EYE ON AI NEWS

Alphabet announces $6.8 billion investment in U.K.-based AI initiatives, as other tech companies also announce U.K. investments alongside Trump's state visit. Google’s parent company announced a £5 billion ($6.8 billion) investment in the U.K. over the next two years, funding AI infrastructure, a new $1 billion AI data center that is set to open this week, and more funding for research at Google DeepMind, its advanced AI lab that continues to be headquartered in London. The BBC reports that the investments were unveiled ahead of President Trump’s state visit to Britain. Many other big U.S. tech companies are expected to make similar investments over the next few days. For instance, Nvidia, OpenAI and U.K. data center provider Nscale also announced a multi-billion-dollar data center project this week. More on that here from Bloomberg. Meanwhile, Salesforce said it was increasing a previously announced package of investments in the U.K., much of it around AI, from $4 billion to $6 billion.

FTC launches inquiry into AI chatbot effects on children amid safety concerns. The U.S. Federal Trade Commission has started an inquiry into how AI chatbots affect children, sending detailed questionnaires to six major companies including OpenAI, Alphabet, Meta, Snap, xAI, and Character.AI. Regulators are seeking information on issues such as sexually themed responses, safeguards for minors, monetization practices, and how companies disclose risks to parents. The move follows rising concerns over children’s exposure to inappropriate or harmful content from chatbots, lawsuits and congressional scrutiny, and comes as firms like OpenAI have pledged new parental controls. Read more here from the New York Times.

Salesforce backtracks, reinstates team that helped customers adopt AI agents. The team, called Well-Architected, had displeased Salesforce CEO Marc Benioff by suggesting to customers that deploying AI agents successfully would take extensive planning and significant work, a position that contradicted Benioff’s own pitch to customers that, with Salesforce, deploying AI agents was a cinch. Now, according to a story in The Information, the software company has had to reconstitute the team, which provided advisory and consulting help to companies implementing Agentforce. The company is finding Agentforce adoption is lagging its expectations—with fewer than 5% of its 150,000 clients currently paying for the AI agent product, the publication reported—amid complaints that the product is too expensive, too difficult to implement, and too prone to accuracy issues and errors. Having invested heavily in the pivot to Agentforce, Benioff is now under pressure from investors to deliver.

Humanoid robotics startup Figure AI valued at $39 billion in new funding deal. Figure AI, a startup developing humanoid robots, has raised over $1 billion in a new funding round that values the company at $39 billion, making it one of the world’s most valuable startups, Bloomberg reports. The round was led by Parkway Venture Capital with participation from major backers including Nvidia, Salesforce, Brookfield, Intel, and Qualcomm, alongside earlier supporters like Microsoft, OpenAI, and Jeff Bezos. Founded in 2022, Figure aims to build general-purpose humanoid robots, though Fortune’s Jason del Rey questioned whether the company was exaggerating the extent to which its robots were being deployed with BMW.

EYE ON AI RESEARCH

Can AI replace my job? Journalists are certainly worried about what AI is doing to the profession. Mostly, though, after initial concerns that AI would directly replace journalists, the worry has shifted to fears that AI will further undermine the business models that fund good journalism (see Brain Food below). But recently a group of AI researchers in Japan and Taiwan created a benchmark called NEWSAGENT to see how well LLMs can do at actually taking source material and composing accurate news stories. It turned out that the models could, in many cases, do an OK job.

But the most interesting thing about the research is how the scientists, none of whom were journalists, characterized the results. They found that Alibaba’s open-weight model, Qwen3 32B, did best stylistically, but that GPT-4o did better on metrics like objectivity and factual accuracy. And they write that human-written stories did not consistently outperform those drafted by the AI models in overall win rates, but that the human-written stories “emphasize factual accuracy.” The human-written stories were also often judged to be more objective than the AI-written ones.

The problem here is that in the real world, factual accuracy is the bedrock of journalism, and objectivity would be a close second. If the models fall down on accuracy, they should lose in every case to the human-written stories, even if evaluators preferred the AI-written ones stylistically.

This is why computer scientists should not be left to create benchmarks for real-world professional tasks without deferring to expert advice from people working in those professions. Otherwise you get distorted views of what AI models can and can’t do. You can read the NEWSAGENT research here on arxiv.org.

AI CALENDAR

Oct. 6-10: World AI Week, Amsterdam

Oct. 21-22: TedAI San Francisco

Nov. 10-13: Web Summit, Lisbon

Nov. 26-27: World AI Congress, London

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

BRAIN FOOD

Is Google the most malevolent AI actor? A lot of publishing execs are starting to say so. At Fortune Brainstorm Tech in Deer Valley, Utah, last week, Neil Vogel, the CEO of magazine publisher People Inc., said that Google was “the worst” when it came to using publishers’ content without permission to train AI models. The problem, Vogel said, is that Google used the same web crawlers to index sites for Google Search as it did to scrape content to feed its Gemini AI models. While other AI vendors have increasingly been cutting multi-million-dollar annual licensing deals to pay for publishers’ content, Google has refused to do so. And publishers can’t block Google’s bots without losing the search traffic on which they currently depend for revenue.
You can read more on Vogel’s comments here.
