VentureBeat
Market Research Embraces AI: Efficiency Gains Alongside Reliability Challenges

A new industry survey shows market researchers adopting artificial intelligence at an unprecedented pace: 98% of professionals have already integrated AI tools into their daily work, and 72% use them daily or even more often. The survey of 219 U.S. market research professionals, conducted in August 2025 by QuestDIY, a research platform owned by The Harris Poll, reveals AI's enormous potential for boosting efficiency, with more than half of researchers saving at least five hours per week. But it also flags reliability problems: nearly four in ten respondents report an increase in technology errors, 37% say AI has introduced data quality risks, and 31% must spend more time validating AI outputs. This tension between productivity gains and trust concerns is reshaping how the market research industry works, and practitioners who enjoy AI's conveniences must also stay alert to, and guard against, its potential failures.

🚀 AI adoption in market research is surging: a full 98% of professionals already use AI tools, and 72% use them daily or more often, showing how quickly and deeply the technology has permeated the industry.

⏱️ AI delivers significant efficiency gains: more than half (56%) of researchers say AI tools save them at least five hours per week, underscoring AI's strength in processing large volumes of data and automating tasks.

⚠️ Reliability concerns persist: despite the efficiency gains, nearly four in ten (39%) researchers report increased reliance on error-prone technology, 37% say AI has introduced new risks around data quality and accuracy, and 31% must invest extra effort in validating AI outputs, showing that reliability remains a major challenge for the industry.

🤝 Human-AI collaboration is the emerging model: industry experts expect future market research to be a team effort in which AI accelerates tasks and surfaces preliminary findings while researchers focus on ensuring quality and providing high-level consultative insights, underscoring that human judgment remains essential to AI deployment.

🔒 Data privacy and security are the biggest barriers to adoption: 33% of researchers cite privacy and security concerns as the top factor limiting AI use at work, reflecting the industry's caution in handling sensitive data and its unease about how AI systems process it.

Market researchers have embraced artificial intelligence at a staggering pace, with 98% of professionals now incorporating AI tools into their work and 72% using them daily or more frequently, according to a new industry survey that reveals both the technology's transformative promise and its persistent reliability problems.

The findings, based on responses from 219 U.S. market research and insights professionals surveyed in August 2025 by QuestDIY, a research platform owned by The Harris Poll, paint a picture of an industry caught between competing pressures: the demand to deliver faster business insights and the burden of validating everything AI produces to ensure accuracy.

While more than half of researchers — 56% — report saving at least five hours per week using AI tools, nearly four in ten say they've experienced "increased reliance on technology that sometimes produces errors." An additional 37% report that AI has "introduced new risks around data quality or accuracy," and 31% say the technology has "led to more work re-checking or validating AI outputs."

The disconnect between productivity gains and trustworthiness has created what amounts to a grand bargain in the research industry: professionals accept time savings and enhanced capabilities in exchange for constant vigilance over AI's mistakes, a dynamic that may fundamentally reshape how insights work gets done.

How market researchers went from AI skeptics to daily users in less than a year

The numbers suggest AI has moved from experiment to infrastructure in record time. Among those using AI daily, 39% deploy it once per day, while 33% use it "several times per day or more," according to the survey conducted between August 15-19, 2025. Adoption is accelerating: 80% of researchers say they're using AI more than they were six months ago, and 71% expect to increase usage over the next six months. Only 8% anticipate their usage will decline.

"While AI provides excellent assistance and opportunities, human judgment will remain vital," Erica Parker, Managing Director of Research Products at The Harris Poll, told VentureBeat. "The future is a teamwork dynamic where AI will accelerate tasks and quickly unearth findings, while researchers will ensure quality and provide high level consultative insights."

The top use cases reflect AI's strength in handling data at scale: 58% of researchers use it for analyzing multiple data sources, 54% for analyzing structured data, 50% for automating insight reports, 49% for analyzing open-ended survey responses, and 48% for summarizing findings. These tasks — traditionally labor-intensive and time-consuming — now happen in minutes rather than hours.
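To make the open-ended analysis use case concrete, here is a minimal sketch of how such coding might look in practice. The survey does not name specific tools, so this is an illustration only: it assumes the OpenAI Python SDK with an API key in the environment, and the model name, prompt, and four-label codebook are all hypothetical.

```python
# Minimal sketch: coding open-ended survey responses into themes with an LLM.
# Illustrative only; assumes the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY in the environment. The codebook and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()

THEMES = ["price", "quality", "support", "other"]  # hypothetical codebook

def code_response(text: str) -> str:
    """Ask the model to assign exactly one theme from a fixed codebook."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Classify the survey response into exactly one of: "
                        f"{', '.join(THEMES)}. Reply with the label only."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    label = resp.choices[0].message.content.strip().lower()
    # Guard against off-codebook output -- the kind of check that creates
    # the validation work respondents describe.
    return label if label in THEMES else "other"

for answer in ["Too expensive for what you get", "Support fixed my issue fast"]:
    print(answer, "->", code_response(answer))
```

The final guard, which falls back to "other" whenever the model strays off the codebook, is a small instance of the validation overhead the survey's respondents report.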

Beyond time savings, researchers report tangible quality improvements. Some 44% say AI improves accuracy, 43% report it helps surface insights they might otherwise have missed, 43% cite increased speed of insights delivery, and 39% say it sparks creativity. The overwhelming majority — 89% — say AI has made their work lives better, with 25% describing the improvement as "significant."

The productivity paradox: saving time while creating new validation work

Yet the same survey reveals deep unease about the technology's reliability. The list of concerns is extensive: 39% of researchers report increased reliance on error-prone technology, 37% cite new risks around data quality or accuracy, 31% describe additional validation work, 29% report uncertainty about job security, and 28% say AI has raised concerns about data privacy and ethics.

The report notes that "accuracy is the biggest frustration with AI experienced by researchers when asked on an open-ended basis." One researcher captured the tension succinctly: "The faster we move with AI, the more we need to check if we're moving in the right direction."

This paradox — saving time while simultaneously creating new work — reflects a fundamental characteristic of current AI systems, which can produce outputs that appear authoritative but contain what researchers call "hallucinations," or fabricated information presented as fact. The challenge is particularly acute in a profession where credibility depends on methodological rigor and where incorrect data can lead clients to make costly business decisions.

"Researchers view AI as a junior analyst, capable of speed and breadth, but needing oversight and judgment," said Gary Topiol, Managing Director at QuestDIY, in the report.

That metaphor — AI as junior analyst — captures the industry's current operating model. Researchers treat AI outputs as drafts requiring senior review rather than finished products, a workflow that provides guardrails but also underscores the technology's limitations.

Why data privacy fears are the biggest obstacle to AI adoption in research

When asked what would limit AI use at work, researchers identified data privacy and security concerns as the greatest barrier, cited by 33% of respondents. This concern isn't abstract: researchers handle sensitive customer data, proprietary business information, and personally identifiable information subject to regulations like GDPR and CCPA. Sharing that data with AI systems — particularly cloud-based large language models — raises legitimate questions about who controls the information and whether it might be used to train models accessible to competitors.

Other significant barriers include time to experiment and learn new tools (32%), training (32%), integration challenges (28%), internal policy restrictions (25%), and cost (24%). An additional 31% cited lack of transparency in AI use as a concern, which could complicate explaining results to clients and stakeholders.

The transparency issue is particularly thorny. When an AI system produces an analysis or insight, researchers often cannot trace how the system arrived at its conclusion — a problem that conflicts with the scientific method's emphasis on replicability and clear methodology. Some clients have responded by including no-AI clauses in their contracts, forcing researchers to either avoid the technology entirely or use it in ways that don't technically violate contractual terms but may blur ethical lines.

"Onboarding beats feature bloat," Parker said in the report. "The biggest brakes are time to learn and train. Packaged workflows, templates, and guided setup all unlock usage faster than piling on capabilities."

Inside the new workflow: treating AI like a junior analyst who needs constant supervision

Despite these challenges, researchers aren't abandoning AI — they're developing frameworks to use it responsibly. The consensus model, according to the survey, is "human-led research supported by AI," where AI handles repetitive tasks like coding, data cleaning, and report generation while humans focus on interpretation, strategy, and business impact.
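The shape of that consensus model can be sketched in a few lines. This is not QuestDIY's implementation, just a minimal illustration of the pattern under one assumption: AI output enters the workflow as a draft, and nothing ships without explicit human sign-off. The draft function below is a stub standing in for any AI tool.

```python
# Minimal sketch of "human-led research supported by AI": AI output is a
# draft that must pass human review before it reaches a client. The AI call
# is stubbed; the workflow shape, not the model, is the point.
from dataclasses import dataclass, field

@dataclass
class Draft:
    task: str
    content: str
    approved: bool = False
    reviewer_notes: list[str] = field(default_factory=list)

def ai_draft(task: str) -> Draft:
    """Stub for an AI-generated deliverable (coded data, report section, etc.)."""
    return Draft(task=task, content=f"[AI draft for: {task}]")

def human_review(draft: Draft, ok: bool, note: str = "") -> Draft:
    """Researcher sign-off: nothing ships without approved=True."""
    draft.approved = ok
    if note:
        draft.reviewer_notes.append(note)
    return draft

def publish(draft: Draft) -> None:
    if not draft.approved:
        raise ValueError(f"Unreviewed AI output blocked: {draft.task}")
    print(f"Shipped: {draft.task}")

d = human_review(ai_draft("Q3 brand tracker summary"), ok=True, note="Checked base sizes")
publish(d)  # passes; an unapproved draft would raise instead
```

The hard gate in publish() is the "junior analyst" metaphor made literal: speed and breadth from the machine, but release authority stays with the researcher.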

Nearly three in ten researchers (29%) describe their current workflow as "human-led with significant AI support," while 31% characterize it as "mostly human with some AI help." Looking ahead to 2030, 61% envision AI as a "decision-support partner" with expanded capabilities including generative features for drafting surveys and reports (56%), AI-driven synthetic data generation (53%), automation of core processes like project setup and coding (48%), predictive analytics (44%), and deeper cognitive insights (43%).

The report describes an emerging division of labor where researchers become "Insight Advocates" — professionals who validate AI outputs, connect findings to stakeholder challenges, and translate machine-generated analysis into strategic narratives that drive business decisions. In this model, technical execution becomes less central to the researcher's value proposition than judgment, context, and storytelling.

"AI can surface missed insights — but it still needs a human to judge what really matters," Topiol said in the report.

What other knowledge workers can learn from the research industry's AI experiment

The market research industry's AI adoption may presage similar patterns in other knowledge work professions where the technology promises to accelerate analysis and synthesis. The experience of researchers — early AI adopters who have integrated the technology into daily workflows — offers lessons about both opportunities and pitfalls.

First, speed genuinely matters. One boutique agency research lead quoted in the report described watching survey results accumulate in real-time after fielding: "After submitting it for fielding, I literally watched the survey count climb and finish the same afternoon. It was a remarkable turnaround." That velocity enables researchers to respond to business questions within hours rather than weeks, making insights actionable while decisions are still being made rather than after the fact.

Second, the productivity gains are real but uneven. Saving five hours per week represents meaningful efficiency for individual contributors, but those savings can disappear if spent validating AI outputs or correcting errors. The net benefit depends on the specific task, the quality of the AI tool, and the user's skill in prompting and reviewing the technology's work.

Third, the skills required for research are changing. The report identifies future competencies including cultural fluency, strategic storytelling, ethical stewardship, and what it calls "inquisitive insight advocacy" — the ability to ask the right questions, validate AI outputs, and frame insights for maximum business impact. Technical execution, while still important, becomes less differentiating as AI handles more of the mechanical work.

The strange phenomenon of using technology intensively while questioning its reliability

The survey's most striking finding may be the persistence of trust issues despite widespread adoption. In most technology adoption curves, trust builds as users gain experience and tools mature. But with AI, researchers appear to be using tools intensively while simultaneously questioning their reliability — a dynamic driven by the technology's pattern of performing well most of the time but failing unpredictably.

This creates a verification burden that has no obvious endpoint. Unlike traditional software bugs that can be identified and fixed, AI systems' probabilistic nature means they may produce different outputs for the same inputs, making it difficult to develop reliable quality assurance processes.
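One pragmatic response to that probabilistic behavior, not a method named in the report but a common pattern when working with nondeterministic systems, is to sample the same query several times and accept the answer only when the runs agree. A minimal sketch, with a stub in place of a real model:

```python
# Consistency-check sketch for nondeterministic outputs (illustrative only):
# run the same query several times and accept the majority answer only if
# agreement clears a threshold. `ask_model` is a stub; in practice it would
# wrap an LLM call.
import random
from collections import Counter

def ask_model(question: str) -> str:
    """Stub for a probabilistic model: usually right, occasionally not."""
    return random.choice(["42%", "42%", "42%", "24%"])

def consistent_answer(question: str, runs: int = 5, threshold: float = 0.8) -> str | None:
    """Return the majority answer only if it clears the agreement threshold."""
    answers = [ask_model(question) for _ in range(runs)]
    top, count = Counter(answers).most_common(1)[0]
    return top if count / runs >= threshold else None  # None => escalate to a human

result = consistent_answer("What share of respondents chose brand A?")
print(result or "Low agreement: flag for human review")
```

The None path is the human escalation the article describes: low agreement becomes a signal to review rather than to trust, though it raises cost, since every accepted answer now requires multiple model calls.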

The data privacy concerns — cited by 33% as the biggest barrier to adoption — reflect a different dimension of trust. Researchers worry not just about whether AI produces accurate outputs but also about what happens to the sensitive data they feed into these systems. QuestDIY's approach, according to the report, is to build AI directly into a research platform with ISO/IEC 27001 certification rather than requiring researchers to use general-purpose tools like ChatGPT that may store and learn from user inputs.

"The center of gravity is analysis at scale — fusing multiple sources, handling both structured and unstructured data, and automating reporting," Topiol said in the report, describing where AI delivers the most value.

The future of research work: elevation or endless verification?

The report positions 2026 as an inflection point when AI moves from being a tool researchers use to something more like a team member — what the authors call a "co-analyst" that participates in the research process rather than merely accelerating specific tasks.

This vision assumes continued improvement in AI capabilities, particularly in areas where researchers currently see the technology as underdeveloped. Although only 41% currently use AI for survey design, 37% for programming, and 30% for proposal creation, most researchers consider these appropriate use cases, which suggests significant room for growth once the tools become more reliable or the workflows more structured.

The human-led model appears likely to persist. "The future is human-led, with AI as a trusted co-analyst," Parker said in the report. But what "human-led" means in practice may shift. If AI handles most analytical tasks and researchers focus on validation and strategic interpretation, the profession may come to resemble editorial work more than scientific analysis — curating and contextualizing machine-generated insights rather than producing them from scratch.

"AI gives researchers the space to move up the value chain – from data gatherers to Insight Advocates, focused on maximising business impact," Topiol said in the report.

Whether this transformation marks an elevation of the profession or a deskilling depends partly on how the technology evolves. If AI systems become more transparent and reliable, the verification burden may decrease and researchers can focus on higher-order thinking. If they remain opaque and error-prone, researchers may find themselves trapped in an endless cycle of checking work produced by tools they cannot fully trust or explain.

The survey data suggests researchers are navigating this uncertainty by developing a form of professional muscle memory — learning which tasks AI handles well, where it tends to fail, and how much oversight each type of output requires. This tacit knowledge, accumulated through daily use and occasional failures, may become as important to the profession as statistical literacy or survey design principles.

Yet the fundamental tension remains unresolved. Researchers are moving faster than ever, delivering insights in hours instead of weeks, and handling analytical tasks that would have been impossible without AI. But they're doing so while shouldering a new responsibility that previous generations never faced: serving as the quality control layer between powerful but unpredictable machines and business leaders making million-dollar decisions.

The industry has made its bet. Now comes the harder part: proving that human judgment can keep pace with machine speed — and that the insights produced by this uneasy partnership are worth the trust clients place in them.
