AWS Machine Learning Blog (May 16)
How Qualtrics built Socrates: An AI platform powered by Amazon SageMaker and Amazon Bedrock

 

Qualtrics is a software company in the Experience Management (XM) category, focused on helping businesses improve customer and employee experiences. This article describes how Qualtrics built its AI platform, Socrates, on Amazon SageMaker and Amazon Bedrock to empower researchers, scientists, engineers, and knowledge workers to harness the power of AI and ML. Through components such as the Science Workbench, AI Data Infrastructure, and AI Playground, the Socrates platform streamlines model training, deployment, and inference, and provides a Unified GenAI Gateway for convenient access to a range of LLMs, ultimately integrating advanced AI and ML capabilities seamlessly into Qualtrics's workflows and decision-making processes.

🧠 At the core of Qualtrics AI is the Socrates platform, which uses managed AWS services such as SageMaker and Amazon Bedrock to support the entire ML lifecycle and serve users in a range of roles, including researchers, scientists, engineers, and knowledge workers.

🧪 The Socrates Science Workbench, designed for Qualtrics data and knowledge workers, provides a JupyterLab interface, supports multiple programming languages, and integrates with SageMaker to deliver secure, scalable infrastructure for model training and hyperparameter optimization.

🗄️ Socrates's AI Data Infrastructure provides a comprehensive end-to-end ML data ecosystem with secure, scalable data storage integrated with the Science Workbench, so users can store, manage, and share datasets, with support for anonymization, schematization, and aggregation.

🎮 The AI Playground is a user-friendly interface that gives Socrates users direct access to the powerful language models and other generative AI capabilities hosted on the platform, letting them experiment and rapidly prototype new ideas without extensive coding or technical expertise.

⚙️ The Socrates platform also provides model deployment infrastructure that supports hosting models on a variety of hardware options, hides the complexity of model hosting, and offers a Unified GenAI Gateway that simplifies access to LLMs and embedding models.

This post is co-authored by Jay Kshirsagar and Ronald Quan from Qualtrics. The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

Qualtrics, founded in 2002, is a pioneering software company that has spent over two decades creating exceptional frontline experiences, building high-performing teams, and designing products that people love. As the creators and stewards of the Experience Management (XM) category, Qualtrics serves over 20,000 clients globally, bringing humanity, connection, and empathy back to businesses across various industries, including retail, government, and healthcare.

Qualtrics’s comprehensive XM platform enables organizations to consistently understand, measure, and improve the experiences they deliver for customers, employees, and the broader market. With its three core product suites—XM for Customer Experience, XM for Employee Experience, and XM for Research & Strategy—Qualtrics provides actionable insights and purpose-built solutions that empower companies to deliver exceptional experiences.

Qualtrics harnesses the power of generative AI, cutting-edge machine learning (ML), and the latest in natural language processing (NLP) to provide new purpose-built capabilities that are precision-engineered for experience management (XM). These AI capabilities are purpose-built to help organizations of all sizes deeply understand and address the needs of every customer, employee, and stakeholder—driving stronger connections, increased loyalty, and sustainable growth.

In this post, we share how Qualtrics built an AI platform powered by Amazon SageMaker and Amazon Bedrock.

AI at Qualtrics

Qualtrics has a deep history of using advanced ML to power its industry-leading experience management platform. In early 2020, with the push for deep learning and transformer models, Qualtrics created its first enterprise-level ML platform, called Socrates. Built on top of SageMaker, this new platform enabled ML scientists to efficiently build, test, and deliver new AI-powered capabilities for the Qualtrics XM suite. This strong foundation in ML and AI has been a key driver of Qualtrics’s innovation in experience management.

Qualtrics AI, a powerful engine that sits at the heart of the company’s XM platform, harnesses the latest advances in ML, NLP, and AI. Trained on Qualtrics’s expansive database of human sentiment and experience data, Qualtrics AI unlocks richer, more personalized connections between organizations and their customers, employees, and stakeholders. Qualtrics’s unwavering commitment to innovation and customer success has solidified its position as the global leader in experience management.

To learn more about how AI is transforming experience management, visit this blog from Qualtrics.

Socrates platform: Powering AI at Qualtrics

Qualtrics AI is powered by a custom-built ML platform, a synergistic suite of tools and services designed to enable a diverse set of Qualtrics personae—researchers, scientists, engineers, and knowledge workers—to harness the transformative power of AI and ML. Qualtrics refers to it internally as the “Socrates” platform. It uses managed AWS services like SageMaker and Amazon Bedrock to enable the entire ML lifecycle. Knowledge workers can source, explore, and analyze Qualtrics data using Socrates’s ML workbenches and AI Data Infrastructure. Scientists and researchers are enabled to conduct research, prototype, develop, and train models using a host of SageMaker features. ML engineers can test, productionize, and monitor a heterogeneous set of ML models possessing a wide range of capabilities, inference modes, and production traffic patterns. Partner application teams are provided with an abstracted model inference interface that makes the integration of an ML model into the Qualtrics product a seamless engineering experience. This holistic approach enables internal teams to seamlessly integrate advanced AI and ML capabilities into their workflows and decision-making processes.

Science Workbench

The Socrates Science Workbench, purpose-built for Qualtrics Data and Knowledge Workers, provides a powerful platform for model training and hyperparameter optimization (HPO) with a JupyterLab interface, support for a range of programming languages, and secure, scalable infrastructure through SageMaker integration, giving users the flexibility and reliability to focus on their core ML tasks. Users can take advantage of the robust and reliable infrastructure of SageMaker to maintain the confidentiality and integrity of their data and models, while also taking advantage of the scalability that SageMaker provides to handle even the most demanding ML workloads.
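As a sketch of the kind of HPO workflow such a workbench can drive (the job name, metric, and hyperparameter ranges below are illustrative placeholders, not Qualtrics's actual configuration), the helper assembles the request body that SageMaker's `CreateHyperParameterTuningJob` API expects; in practice this dict would be passed to `boto3`'s `create_hyper_parameter_tuning_job`, or built with the SageMaker Python SDK's `HyperparameterTuner` instead.

```python
def build_hpo_config(job_name, image_uri, role_arn, max_jobs=20, max_parallel=4):
    """Assemble a CreateHyperParameterTuningJob request body.

    The objective metric and ranges are placeholders; a real job would
    use the training image's actual hyperparameters and emitted metrics.
    """
    return {
        "HyperParameterTuningJobName": job_name,
        "HyperParameterTuningJobConfig": {
            "Strategy": "Bayesian",
            "HyperParameterTuningJobObjective": {
                "Type": "Maximize",
                "MetricName": "validation:f1",
            },
            "ResourceLimits": {
                "MaxNumberOfTrainingJobs": max_jobs,
                "MaxParallelTrainingJobs": max_parallel,
            },
            "ParameterRanges": {
                "ContinuousParameterRanges": [
                    # Log scale suits learning rates spanning orders of magnitude.
                    {"Name": "learning_rate", "MinValue": "1e-5",
                     "MaxValue": "1e-3", "ScalingType": "Logarithmic"},
                ],
                "IntegerParameterRanges": [
                    {"Name": "batch_size", "MinValue": "16", "MaxValue": "128"},
                ],
            },
        },
        "TrainingJobDefinition": {
            "AlgorithmSpecification": {"TrainingImage": image_uri,
                                       "TrainingInputMode": "File"},
            "RoleArn": role_arn,
            "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
        },
    }

# Hypothetical image URI and role ARN, for illustration only.
cfg = build_hpo_config("sentiment-hpo-demo", "<training-image-uri>", "<role-arn>")
```

Separating config construction from submission like this keeps the tuning setup reviewable and testable before any compute is spent.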

AI Data Infrastructure

Socrates’s AI Data Infrastructure is a comprehensive and cohesive end-to-end ML data ecosystem. It features a secure and scalable data store integrated with the Socrates Science Workbench, enabling users to effortlessly store, manage, and share datasets with capabilities for anonymization, schematization, and aggregation. The AI Data Infrastructure also provides scientists with interfaces for distributed compute, data pulls and enrichment, and ML processing.
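To make the anonymization and aggregation capabilities concrete, here is a minimal pure-Python sketch (the field names, salt, and toy records are hypothetical, and a production pipeline would use a managed secret for the salt and run at dataset scale): PII fields are replaced with a salted one-way hash so records remain joinable but not traceable, and scores are then aggregated per group.

```python
import hashlib


def anonymize(records, pii_fields=("email", "user_id"), salt="demo-salt"):
    """Replace PII fields with a salted one-way hash. Records can still
    be joined on the pseudonym, but not traced back to a person."""
    out = []
    for rec in records:
        rec = dict(rec)  # copy so the caller's data is untouched
        for field in pii_fields:
            if field in rec:
                digest = hashlib.sha256((salt + str(rec[field])).encode()).hexdigest()
                rec[field] = digest[:16]
        out.append(rec)
    return out


def aggregate_mean(records, group_by, value):
    """Group records by a key field and return the per-group mean of `value`."""
    sums, counts = {}, {}
    for rec in records:
        key = rec[group_by]
        sums[key] = sums.get(key, 0.0) + rec[value]
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}


# Toy survey records, for illustration only.
records = [
    {"email": "a@example.com", "team": "billing", "score": 4},
    {"email": "b@example.com", "team": "billing", "score": 3},
    {"email": "c@example.com", "team": "support", "score": 5},
]
safe = anonymize(records)
means = aggregate_mean(safe, group_by="team", value="score")
```

The point of the sketch is the ordering: anonymize first, then share and aggregate, so downstream consumers never see raw identifiers.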

AI Playground

The AI Playground is a user-friendly interface that provides Socrates users with direct access to the powerful language models and other generative AI capabilities hosted on the Socrates platform using backend tools like SageMaker Inference, Amazon Bedrock, and OpenAI GPT, allowing them to experiment and rapidly prototype new ideas without extensive coding or technical expertise. By continuously integrating the latest models, the AI Playground empowers Socrates users to stay at the forefront of advancements in large language models (LLMs) and other cutting-edge generative AI technologies, exploring their potential and discovering new ways to drive innovation.

Model deployment for inference

The Socrates platform features a sophisticated model deployment infrastructure that is essential for the scalable implementation of ML and AI models. This infrastructure allows users to host models across the variety of hardware options available for SageMaker endpoints, providing the flexibility to select a deployment environment that optimally meets their specific needs for inference, whether those needs are related to performance optimization, cost-efficiency, or particular hardware requirements.

One of the defining characteristics of the Socrates model deployment infrastructure is its capability to simplify the complexities of model hosting. This allows users to concentrate on the essential task of deploying their models for inference within the larger Socrates ecosystem. Users benefit from an efficient and user-friendly interface that enables them to effortlessly package their models, adjust deployment settings, and prepare them for inference use.
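The packaging step can be pictured as assembling the two request bodies SageMaker's hosting APIs take, `CreateModel` and `CreateEndpointConfig`. The sketch below builds them as plain data (the instance type, names, and URIs are placeholders, not Qualtrics's deployment settings); a real deployment would pass these dicts to `boto3` and then create the endpoint.

```python
def build_deployment(model_name, image_uri, model_data_url, role_arn,
                     instance_type="ml.g5.xlarge", instance_count=1):
    """Assemble CreateModel and CreateEndpointConfig request bodies.

    Kept as pure data so deployment settings can be validated and
    reviewed before any AWS call is made.
    """
    model_req = {
        "ModelName": model_name,
        "PrimaryContainer": {
            "Image": image_uri,            # inference container
            "ModelDataUrl": model_data_url  # S3 location of model artifacts
        },
        "ExecutionRoleArn": role_arn,
    }
    endpoint_config_req = {
        "EndpointConfigName": f"{model_name}-config",
        "ProductionVariants": [{
            "VariantName": "primary",
            "ModelName": model_name,
            "InstanceType": instance_type,
            "InitialInstanceCount": instance_count,
            "InitialVariantWeight": 1.0,
        }],
    }
    return model_req, endpoint_config_req


# Hypothetical names and URIs, for illustration only.
model_req, endpoint_config_req = build_deployment(
    "feedback-classifier", "<inference-image-uri>",
    "s3://<bucket>/model.tar.gz", "<role-arn>")
```

Because the variant list is explicit, swapping hardware or splitting traffic across variants becomes an edit to data rather than to deployment code.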

By offering an adaptable model deployment solution, the Socrates platform makes sure ML models created within the system are smoothly integrated into real-world applications and workflows. This integration not only speeds up the transition to production but also maximizes the usage of Qualtrics’s AI-driven features, fostering innovation and providing significant business value to its customers.

Model capacity management

Model capacity management is a critical component that offers efficient and reliable delivery of ML models to Qualtrics users by providing oversight of model access and the allocation of computing resources across multiple consumers. The Socrates team closely monitors resource usage and sets up rate limiting and auto scaling policies, where applicable, to meet the evolving demands of each use case.
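Rate limiting of the kind described is commonly implemented as a token bucket; the sketch below is a generic, single-process illustration of that pattern (not Qualtrics's implementation, which would need distributed state across gateway instances).

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: a consumer gets `rate` requests per
    second on average, with bursts of up to `capacity` requests."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# A burst of 4 back-to-back calls against a bucket allowing 1 req/s
# with a burst capacity of 2: the first two pass, the rest are limited.
bucket = TokenBucket(rate=1, capacity=2)
allowed = [bucket.allow() for _ in range(4)]
```

Per-consumer buckets keyed by use case give exactly the oversight the paragraph describes: one noisy consumer exhausts its own bucket without starving the others.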

Unified GenAI Gateway

The Socrates platform’s Unified GenAI Gateway simplifies and streamlines access to LLMs and embedding models across the Qualtrics ecosystem. The Unified GenAI Gateway is an API that provides a common interface for consumers to interact with all of the platform-supported LLMs and embedding models, regardless of their underlying providers or hosting environments. This means that Socrates users can use the power of cutting-edge language models without having to worry about the complexities of integrating with multiple vendors or managing self-hosted models.
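The "common interface regardless of provider" idea can be sketched as a small routing layer (a generic illustration with stubbed provider adapters and hypothetical model names, not the actual gateway): callers name a model, and the gateway resolves which adapter serves it and tags the call for cost attribution.

```python
class GenAIGateway:
    """Minimal unified-gateway sketch: one `complete` API in front of
    many providers. Real adapters would call SageMaker endpoints or
    Amazon Bedrock; here they are stub functions."""

    def __init__(self):
        self._routes = {}  # model name -> (provider name, invoke callable)

    def register(self, model, provider, invoke_fn):
        self._routes[model] = (provider, invoke_fn)

    def complete(self, model, prompt, billing_code):
        if model not in self._routes:
            raise KeyError(f"model not registered: {model}")
        provider, invoke_fn = self._routes[model]
        # The billing code travels with every call, enabling the cost
        # attribution the gateway is responsible for.
        return {"model": model, "provider": provider,
                "billing_code": billing_code,
                "completion": invoke_fn(prompt)}


# Stub adapters standing in for real provider integrations.
gw = GenAIGateway()
gw.register("demo-llm", "sagemaker", lambda p: f"[demo-llm] {p}")
gw.register("demo-claude", "bedrock", lambda p: f"[demo-claude] {p}")
resp = gw.complete("demo-claude", "Summarize this survey.", billing_code="team-xm")
```

Swapping a model's hosting environment then becomes a one-line change to the route table, invisible to every consumer.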

The standout feature of the Unified GenAI Gateway is its centralized integration with inference platforms like SageMaker Inference and Amazon Bedrock, which allows the Socrates team to handle the intricate details of model access, authentication, and attribution on behalf of users. This not only simplifies the user experience but also enables cost attribution and control mechanisms, making sure the consumption of these powerful AI resources is carefully monitored and aligned with specific use cases and billing codes. Furthermore, the Unified GenAI Gateway boasts capabilities like rate-limiting support, making sure the system’s resources are efficiently allocated, and an upcoming semantic caching feature that will further optimize model inference and enhance overall performance.
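Semantic caching, as opposed to exact-match caching, returns a stored completion when a new prompt is merely *close* in embedding space. The sketch below illustrates the idea with a deliberately toy bag-of-words embedding (a real cache would call an embedding model, and the threshold would be tuned per use case).

```python
import math


def toy_embed(text, dim=32):
    """Stand-in embedding: hash each word into a small vector.
    A real implementation would call an embedding model instead."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class SemanticCache:
    """Serve a cached completion when a new prompt's embedding is
    within `threshold` cosine similarity of one already answered."""

    def __init__(self, embed_fn, threshold=0.9):
        self.embed_fn = embed_fn
        self.threshold = threshold
        self.entries = []  # list of (embedding, completion)

    def get(self, prompt):
        q = self.embed_fn(prompt)
        for emb, completion in self.entries:
            if cosine(q, emb) >= self.threshold:
                return completion
        return None

    def put(self, prompt, completion):
        self.entries.append((self.embed_fn(prompt), completion))


cache = SemanticCache(toy_embed, threshold=0.9)
miss = cache.get("what drives our nps score")        # cold cache
cache.put("what drives our nps score", "Drivers: support wait time, pricing.")
hit = cache.get("what drives our nps score")
reordered = cache.get("score nps our drives what")   # same words, new order
```

Even this toy version shows the payoff: paraphrase-level matches (here, reorderings) skip a model call entirely, which is where the inference-cost savings come from.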

Managed Inference APIs (powered by SageMaker Inference)

The Socrates Managed Inference APIs provide a comprehensive suite of services that simplify the integration of advanced ML and AI capabilities into Qualtrics applications. This infrastructure, built on top of SageMaker Inference, handles the complexities of model deployment, scaling, and maintenance, boasting a growing catalog of production-ready models.

Managed Inference APIs offer both asynchronous and synchronous modes to accommodate a wide range of application use cases. Importantly, these managed APIs come with guaranteed production-level SLAs, providing reliable performance and cost-efficiency as usage scales. With readily available pre-trained Qualtrics models for inference, the Socrates platform empowers Qualtrics application teams to focus on delivering exceptional user experiences, without the burden of building and maintaining AI infrastructure.
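The two modes can be sketched as one client with a blocking call and a submit-then-poll call (a generic illustration with a stubbed model function; the real APIs sit on SageMaker Inference, where asynchronous results land in an S3 output location rather than an in-memory dict).

```python
import asyncio


class ManagedInferenceClient:
    """Sketch of sync vs. async inference modes. `model_fn` stands in
    for a call to a hosted endpoint."""

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self._jobs = {}  # job_id -> result, standing in for an output store

    def invoke(self, payload):
        # Synchronous mode: the caller blocks until the result is ready,
        # suited to low-latency interactive features.
        return self.model_fn(payload)

    async def invoke_async(self, job_id, payload):
        # Asynchronous mode: the result is stored under a job ID that
        # the caller polls later, suited to long-running or batch work.
        await asyncio.sleep(0)  # yield control, as a real network call would
        self._jobs[job_id] = self.model_fn(payload)

    def poll(self, job_id):
        return self._jobs.get(job_id)  # None until the job completes


# Stub model, for illustration only.
client = ManagedInferenceClient(
    lambda payload: {"sentiment": "positive", "input": payload["text"]})
sync_result = client.invoke({"text": "Great support experience"})
asyncio.run(client.invoke_async("job-1", {"text": "Long survey export"}))
async_result = client.poll("job-1")
```

Offering both shapes behind one client lets application teams pick the mode per call site without switching infrastructure.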

GenAI Orchestration Framework

Socrates’s GenAI Orchestration Framework is a collection of tools and patterns designed to streamline the development and deployment of LLM-powered applications within the Qualtrics ecosystem. The framework consists of tools and frameworks such as:

These tools all contribute to the overall reliability, scalability, and performance of the LLM-powered applications built upon it. Capabilities of the Socrates AI App Framework are anticipated to grow and evolve alongside the rapid advancements in the field of LLMs. This means that Qualtrics users always have access to the latest and most cutting-edge AI capabilities from generative AI inference platforms like SageMaker Inference and Amazon Bedrock, empowering them to harness the transformative power of these technologies with greater ease and confidence.
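The orchestration pattern such frameworks share can be boiled down to a prompt template plus a chain of steps, each feeding the next. The sketch below is a generic illustration with a stubbed LLM call, not the actual Socrates framework.

```python
class PromptTemplate:
    """Render a prompt from a template with named slots."""

    def __init__(self, template):
        self.template = template

    def render(self, **kwargs):
        return self.template.format(**kwargs)


class Chain:
    """Run a sequence of steps, passing each step's output to the next.
    Steps are plain callables: templates, LLM calls, parsers, etc."""

    def __init__(self, *steps):
        self.steps = steps

    def run(self, value):
        for step in self.steps:
            value = step(value)
        return value


template = PromptTemplate(
    "Summarize this customer feedback in one line: {feedback}")

# Stub LLM that echoes part of the prompt; a real step would call the
# Unified GenAI Gateway or a hosted endpoint.
stub_llm = lambda prompt: "SUMMARY: " + prompt.split(": ", 1)[1][:40]

chain = Chain(lambda text: template.render(feedback=text), stub_llm)
result = chain.run("The checkout flow was confusing on mobile.")
```

Keeping each step a plain callable is what makes the framework composable: retries, caching, or a different model slot in as new steps without touching the application code around them.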

Ongoing enhancements to the Socrates platform using SageMaker Inference

As the Socrates platform continues to evolve, Qualtrics is continuously integrating the latest advancements in SageMaker Inference to further enhance the capabilities of their AI-powered ecosystem:

“By seamlessly integrating SageMaker Inference into our Socrates platform, we’re able to deliver advancements in AI inference to our global customer base. The generative AI inference capabilities in SageMaker, like inference components, faster auto scaling, easy LLM deployment, and the optimization toolkit, have been a game changer for Qualtrics in reducing the cost and improving the performance of our generative AI workloads. The level of sophistication and ease of use that SageMaker Inference brings to the table is remarkable.”

– James Argyropoulos, Sr AI/ML Engineer at Qualtrics.

Partnership with SageMaker Inference

Since adopting SageMaker Inference, the Qualtrics Socrates team has been a key collaborator in the development of AI capabilities in SageMaker Inference. Building on its expertise in serving Socrates users, Qualtrics has worked closely with the SageMaker Inference team to enhance and expand the platform’s generative AI functionalities. From the early stages of generative AI, the team offered invaluable insights and expertise to the SageMaker team. This has enabled the introduction of several new features and optimizations that have strengthened the platform’s generative AI offerings, including:

The partnership between Qualtrics and the SageMaker Inference team has been instrumental in advancing the state of the art in generative AI within the AWS ecosystem. Qualtrics’s deep domain knowledge and technical proficiency have played a crucial role in shaping the evolution of this rapidly developing field on SageMaker Inference.

“Our partnership with the SageMaker Inference product team has been instrumental in delivering incredible performance and cost benefits for Socrates platform consumers running AI Inference workloads. By working hand in hand with the SageMaker team, we’ve been able to introduce game changing optimizations that have reduced AI inference costs multiple folds for some of our use cases. We look forward to continued innovation through valuable partnership to improve state-of-the-art AI inference capabilities.”

–  Jay Kshirsagar, Senior Manager, Machine Learning

Conclusion

The Socrates platform underscores Qualtrics’s commitment to advancing innovation in experience management by seamlessly integrating advanced AI and ML technologies. Thanks to a strong partnership with the SageMaker Inference team, the platform has seen enhancements that boost performance, reduce costs, and increase the accessibility of AI-driven features within the Qualtrics XM suite. As AI technology continues to develop rapidly, the Socrates platform is geared to empower Qualtrics’s AI teams to innovate and deliver exceptional customer experiences.


About the Authors

Jay Kshirsagar is a seasoned ML leader driving GenAI innovation and scalable AI infrastructure at Qualtrics. He has built high-impact ML teams and delivered enterprise-grade LLM solutions that power key product features.

Ronald Quan is a Staff Engineering Manager for the Data Intelligence Platform team within Qualtrics. The team’s charter is to enable, expedite and evolve AI and Agentic developments on the Socrates platform. He focuses on the team’s technical roadmap and strategic alignment with the business needs.

Saurabh Trikande is a Senior Product Manager for Amazon Bedrock and SageMaker Inference. He is passionate about working with customers and partners, motivated by the goal of democratizing AI. He focuses on core challenges related to deploying complex AI applications, inference with multi-tenant models, cost optimizations, and making the deployment of Generative AI models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch, and spending time with his family.

Michael Nguyen is a Senior Startup Solutions Architect at AWS, specializing in using AI/ML to drive innovation and develop business solutions on AWS. Michael holds 12 AWS certifications and has a BS/MS in Electrical/Computer Engineering and an MBA from Penn State University, Binghamton University, and the University of Delaware.

Ranga Malaviarachchi is a Sr. Customer Solutions Manager in the ISV Strategic Accounts organization at AWS. He has been closely associated with Qualtrics over the past 4 years in supporting their AI initiatives. Ranga holds a BS in Computer Science and Engineering and an MBA from Imperial College London.
