Kavita Ganesan · September 25

AI Ethics: Responsibility and Risk

This article examines the core concepts of AI ethics and stresses the importance of using AI responsibly. It analyzes four key areas: data etiquette, explainability, usage etiquette, and development risks, noting that these questions are especially acute in fields such as law and healthcare. It also discusses how AI tools fit into workflows and the risks they carry, with the aim of helping businesses enjoy the benefits of AI while avoiding its potential harms.

📊 Data etiquette: An AI system's performance is heavily shaped by its training data, so data sources must be transparent and representative, and user privacy must be respected. Whether a model is built in-house or supplied by a third party, data acquisition and use must meet ethical standards.

🤔 Explainability: In high-stakes domains such as healthcare and law enforcement, an AI system's decision process must be transparent in order to build trust. For example, when an AI predicts a patient's disease risk, it should be able to provide the basis for that prediction so professionals can judge its reliability.

🔧 Usage etiquette: How an AI tool is used requires care. Automatic email filtering can misclassify messages, while medical diagnosis calls for AI as an assistant rather than a sole decision maker. Model accuracy and risk level together determine where a model is fit for use; high-risk scenarios demand more reliable models.

🛠️ Development risks: Building an AI tool can have unintended consequences; a password-cracking tool, for instance, can be abused. Developers must consider how a tool will be distributed and what impact it could have, ensuring it is used only for legitimate purposes.

🌐 Big picture: AI ethics spans data, decision-making, development, and more. Businesses need to assess risks holistically to ensure AI is applied safely and responsibly.

As AI continues to become more prevalent in our lives, it is crucial to consider the ethical implications of its use. Although AI can augment and revolutionize how we live, work, and interact with each other, it can also cause harm if not used or developed correctly.

People can be wrongly imprisoned when facial recognition systems fail in law enforcement and the judicial system. People can be killed if self-driving cars fail to recognize them as pedestrians on the road. Things can go terribly wrong if we fail to think through the implications of how we use and develop these AI-powered tools.

This article is part of a series that will explore what AI ethics means, its implications for society, and how businesses can start leading the way by doing AI responsibly while also reaping its benefits.

In this article, we’ll focus on what ethics means in the context of AI.

What is AI Ethics?

Ethics in the context of AI is all about doing AI responsibly, keeping the questions outlined below in mind:

DATA ETIQUETTE

AI systems today are data-dependent; that's how machine-learning-driven AI systems learn. The underlying data you use can significantly impact how the model behaves. The question is: are we using this data ethically?

How you source, combine, and use data to train models will impact how those models behave downstream. We've seen time and again how the underlying data used by algorithms shapes model behavior.

Not to forget, if you're using third-party models, the same applies: how the vendor sources, combines, and uses data to train its models impacts YOUR downstream applications.

Bottom line: data etiquette applies to both custom-developed and third-party models you’re building on.
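To make the data question concrete, here is a minimal sketch, in Python, of auditing whether a field in your training data is representatively covered before you train on it. The records and field names are hypothetical, purely for illustration.

```python
from collections import Counter

def audit_field(records, field):
    """Report the share of each value of `field`, so skewed or
    unrepresentative data sources surface before model training."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical patient records; field names are illustrative only.
records = [
    {"age_group": "18-30"}, {"age_group": "18-30"},
    {"age_group": "31-60"}, {"age_group": "18-30"},
]
print(audit_field(records, "age_group"))
# {'18-30': 0.75, '31-60': 0.25} -> the 60+ group is entirely missing
```

A skew like this is exactly the kind of data-etiquette problem that silently propagates into model behavior downstream.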

EXPLAINABILITY 

AI explainability is the ability of an AI system to provide the reasoning behind a particular decision, prediction, or suggestion. Explainability may not be critical for many AI applications, such as email spam filtering, grammar correction, and product recommendation systems.

However, in domains such as healthcare, law enforcement, and any domain where a person’s life, livelihood, and safety are at stake, evidence and explainability are crucial for building trust with AI systems.

For example, if an AI system predicts that a patient has a high risk of lung cancer, why did it arrive at that prediction? The insights into the AI’s decision-making will help the physician decide if the recommendations are trustworthy.   

When it comes to AI explainability, you need to ask whether your system must be explainable and, if so, whether you can gain a glimpse into its reasoning.
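As a sketch of what "a glimpse into its reasoning" can look like, the following assumes a simple scikit-learn logistic regression over hypothetical lung-cancer risk features, and approximates a per-patient explanation by scaling each coefficient by that patient's inputs. Real systems typically use dedicated attribution tools (e.g., SHAP or LIME); everything here, including the features and labels, is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "pack_years", "family_history"]  # hypothetical inputs
X = np.array([[55, 30, 1], [40, 0, 0], [67, 45, 1], [35, 5, 0]])
y = np.array([1, 0, 1, 0])  # 1 = flagged as high risk (toy labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Rough per-patient attribution: coefficient * feature value.
patient = np.array([60, 25, 1])
contributions = model.coef_[0] * patient
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.3f}")
```

Even this crude breakdown gives a physician something to interrogate: which inputs pushed the prediction toward "high risk", and whether those inputs make clinical sense.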

USAGE ETIQUETTE

AI usage etiquette relates to how you integrate AI into a workflow. Is it the sole decision maker, a human assistant, or a second opinion? How you employ AI can make a massive difference in the risks to users and society when the AI gets it wrong.

For example, when you use AI to sift through emails and automatically filter out spammy ones, it's the sole decision maker. Letting AI decide alone in this scenario is low-risk even when it gets a message wrong: spam may end up in your inbox, or valid emails may be filtered out as spam, but you can still relabel specific emails or browse through your spam folder.

However, in an application area such as medical diagnosis and treatment planning, asking an AI system to decide a patient's cancer treatment plan on its own is a HUGE risk. If a treatment plan adopted solely on the AI's recommendation proves ineffective, who is to blame? The physician? The AI tool, or the hospital that decided to employ AI in the first place?

Further, usage etiquette is also a function of model accuracy. Deploying a low-accuracy model in a high-risk setting is far more dangerous than deploying a high-accuracy model in a low-risk one; the higher the stakes, the more accurate and reliable the model needs to be.

Considering all of this, the question to ask here is: what are the risks of employing your AI tool in the way you envision, given its current performance? Ideally, it should have as few negative implications as possible for people's safety (physical and cyber) and livelihood. If the risks are high, ask whether they are worth taking.
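One way to encode this usage etiquette directly in a workflow is to route each prediction by stakes and confidence: let the model act alone only in low-risk, high-confidence cases, and escalate everything else to a person. Below is a minimal sketch; the thresholds and risk labels are assumptions, not recommendations.

```python
def route(confidence: float, risk_level: str) -> str:
    """Decide whether the AI acts alone or a human makes the final call."""
    if risk_level == "high":
        return "human_review"   # e.g., treatment planning: AI assists only
    if confidence >= 0.90:
        return "auto"           # e.g., spam filtering: wrong calls are cheap to undo
    return "human_review"       # uncertain low-risk cases still escalate

print(route(0.97, "low"))   # auto: the spam filter may act as sole decision maker
print(route(0.97, "high"))  # human_review: the final call stays with the physician
```

The point is not the specific thresholds but that the sole-decision-maker question becomes an explicit, auditable line of code rather than an accident of deployment.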

DEVELOPMENT RISKS 

In some cases, the development of an AI tool can itself cause unwanted trouble, even when no harm is intended. For example, developing an AI tool that can guess login passwords, framed as an interesting R&D problem, can have undesirable consequences once it lands in the hands of a bad actor.

It’s one thing if law enforcement develops such a “dangerous” tool in a constrained environment to catch predators. It’s another thing altogether if the development team intends to open-source the tool, essentially distributing it to the public. The latter can have many unintended consequences, and developers are responsible not just for how they use a tool but also for how they share it.

The question to ask when it comes to development risks is: have you considered the risks of developing your AI tool, given how you intend to distribute it?

Last Word

As AI becomes increasingly integrated into our lives, we must consider its ethical ramifications. In this article, we specifically explored what AI ethics means in creating and using AI-powered tools and the questions to consider for each ethical element.    

In summary, there are four broad considerations when it comes to AI ethics: data etiquette, explainability, usage etiquette, and development risks.

Each of these considerations focuses on a different angle of how AI can potentially cause harm. In a future article, we’ll explore some of the common ethical challenges of AI systems.


Keep Learning & Succeed With AI

- Join my AI Integrated newsletter, which clears the AI confusion and teaches you how to successfully integrate AI to achieve profitability and growth in your business.
- Read The Business Case for AI to learn applications, strategies, and best practices to be successful with AI (select companies using the book: government agencies, automakers like Mercedes Benz, beverage makers, and e-commerce companies such as Flipkart).
- Work directly with me to improve AI understanding in your organization, accelerate AI strategy development, and get meaningful outcomes from every AI initiative.
