This tutorial demonstrates how to build an autonomous agent in Colab with open-source Hugging Face models, one that keeps its behavior aligned with ethical and organizational values. By pairing a "policy" model that proposes actions with an "ethics judge" model that evaluates and adjusts them, we show how value alignment can be achieved without relying on any APIs. The process generates several candidate actions, has the ethics judge evaluate them, aligns them with organizational values, and finally selects the most ethically sound plan. This offers a practical approach to building safer, fairer, and more trustworthy agentic systems.
🤖 **A value-alignment framework for autonomous agents**: The tutorial presents a framework for building an autonomous agent whose decision-making stays aligned with predefined ethical and organizational values. At its core, a "policy" model that generates action proposals is combined with an "ethics judge" model that evaluates whether those actions conform to the stated values. This modular design lets the agent self-correct and make value-guided decisions.
💡 **Local model execution and model selection**: To implement value alignment, the tutorial uses locally run open-source Hugging Face models: distilgpt2 as the policy model (generating actions) and google/flan-t5-small as the ethics judge. This choice avoids dependence on external APIs, runs efficiently in environments such as Colab, and keeps both experimentation and deployment flexible.
⚖️ **Action generation, evaluation, and alignment pipeline**: The agent first generates several candidate actions from the goal and context (propose_actions). The ethics judge model then assesses each action's risk level, potential issues, and whether it needs modification (judge_action). Finally, an alignment step (align_action) revises the action according to the reviewer's feedback and the organizational values, so that the selected action is both effective and ethical.
✅ **Decision report and value transparency**: The tutorial ends with a detailed decision report that lays out the agent's goal, context, organizational values, the evaluation of every candidate action, the final selected plan, and the rationale behind that choice. This lets users follow the agent's decision logic, verify that value alignment was actually achieved, and increases the system's transparency and trustworthiness.
In this tutorial, we explore how we can build an autonomous agent that aligns its actions with ethical and organizational values. We use open-source Hugging Face models running locally in Colab to simulate a decision-making process that balances goal achievement with moral reasoning. Through this implementation, we demonstrate how we can integrate a “policy” model that proposes actions and an “ethics judge” model that evaluates and aligns them, allowing us to see value alignment in practice without depending on any APIs. Check out the FULL CODES here.
```python
!pip install -q transformers torch accelerate sentencepiece

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AutoModelForCausalLM

def generate_seq2seq(model, tokenizer, prompt, max_new_tokens=128):
    # Encode the prompt and place the tensors on the same device as the model.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            top_p=0.9,
            temperature=0.7,
            pad_token_id=tokenizer.eos_token_id if tokenizer.eos_token_id is not None else tokenizer.pad_token_id,
        )
    # Seq2seq models return only the generated answer, so decode it directly.
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def generate_causal(model, tokenizer, prompt, max_new_tokens=128):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            top_p=0.9,
            temperature=0.7,
            pad_token_id=tokenizer.eos_token_id if tokenizer.eos_token_id is not None else tokenizer.pad_token_id,
        )
    full_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    # Causal models echo the prompt, so strip it and return only the continuation.
    return full_text[len(prompt):].strip()
```
We begin by setting up our environment and importing essential libraries from Hugging Face. We define two helper functions that generate text using sequence-to-sequence and causal models. This allows us to easily produce both reasoning-based and creative outputs later in the tutorial. Check out the FULL CODES here.
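If you want to verify the helpers before moving on, a minimal sanity check like the sketch below works, assuming distilgpt2 downloads in your environment; it creates a throwaway model and tokenizer just for this check, separate from the ones loaded in the next step.

```python
# Minimal sanity check (a sketch): exercise generate_causal with a throwaway
# distilgpt2 instance, independent of the models loaded in the next step.
from transformers import AutoTokenizer, AutoModelForCausalLM

_tok = AutoTokenizer.from_pretrained("distilgpt2")
_lm = AutoModelForCausalLM.from_pretrained("distilgpt2")

# generate_causal returns only the newly generated continuation; the prompt itself is stripped.
print(generate_causal(_lm, _tok, "Goal: improve onboarding.\nAction:", max_new_tokens=20))
```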
policy_model_name = "distilgpt2"judge_model_name = "google/flan-t5-small"policy_tokenizer = AutoTokenizer.from_pretrained(policy_model_name)policy_model = AutoModelForCausalLM.from_pretrained(policy_model_name)judge_tokenizer = AutoTokenizer.from_pretrained(judge_model_name)judge_model = AutoModelForSeq2SeqLM.from_pretrained(judge_model_name)device = "cuda" if torch.cuda.is_available() else "cpu"policy_model = policy_model.to(device)judge_model = judge_model.to(device)if policy_tokenizer.pad_token is None: policy_tokenizer.pad_token = policy_tokenizer.eos_tokenif judge_tokenizer.pad_token is None: judge_tokenizer.pad_token = judge_tokenizer.eos_token
We load two small open-source models—distilgpt2 as our action generator and flan-t5-small as our ethics reviewer. We prepare both models and tokenizers for CPU or GPU execution, ensuring smooth performance in Colab. This setup provides the foundation for the agent’s reasoning and ethical evaluation. Check out the FULL CODES here.
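Before building the agent, it can help to confirm the setup with a quick smoke test. The sketch below simply prints the active device and a short sample from each model; the prompts are placeholders, not part of the original tutorial.

```python
# Optional quick check (a sketch): confirm device placement and that both
# models generate text before wiring them into the agent.
print("Running on:", device)
print("Policy sample:", generate_causal(policy_model, policy_tokenizer, "Action:", max_new_tokens=15))
print("Judge sample:", generate_seq2seq(judge_model, judge_tokenizer,
                                        "Rate the risk of emailing customers at night. RiskLevel:",
                                        max_new_tokens=15))
```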
```python
class EthicalAgent:
    def __init__(self, policy_model, policy_tok, judge_model, judge_tok):
        self.policy_model = policy_model
        self.policy_tok = policy_tok
        self.judge_model = judge_model
        self.judge_tok = judge_tok

    def propose_actions(self, user_goal, context, n_candidates=3):
        base_prompt = (
            "You are an autonomous operations agent. "
            "Given the goal and context, list a specific next action you will take:\n\n"
            f"Goal: {user_goal}\nContext: {context}\nAction:"
        )
        candidates = []
        for _ in range(n_candidates):
            action = generate_causal(self.policy_model, self.policy_tok, base_prompt, max_new_tokens=40)
            action = action.split("\n")[0]
            candidates.append(action.strip())
        return list(dict.fromkeys(candidates))

    def judge_action(self, action, org_values):
        judge_prompt = (
            "You are the Ethics & Compliance Reviewer.\n"
            "Evaluate the proposed agent action.\n"
            "Return fields:\n"
            "RiskLevel (LOW/MED/HIGH),\n"
            "Issues (short bullet-style text),\n"
            "Recommendation (approve / modify / reject).\n\n"
            f"ORG_VALUES:\n{org_values}\n\n"
            f"ACTION:\n{action}\n\n"
            "Answer in this format:\n"
            "RiskLevel: ...\nIssues: ...\nRecommendation: ..."
        )
        verdict = generate_seq2seq(self.judge_model, self.judge_tok, judge_prompt, max_new_tokens=128)
        return verdict.strip()

    def align_action(self, action, verdict, org_values):
        align_prompt = (
            "You are an Ethics Alignment Assistant.\n"
            "Your job is to FIX the proposed action so it follows ORG_VALUES.\n"
            "Keep it effective but safe, legal, and respectful.\n\n"
            f"ORG_VALUES:\n{org_values}\n\n"
            f"ORIGINAL_ACTION:\n{action}\n\n"
            f"VERDICT_FROM_REVIEWER:\n{verdict}\n\n"
            "Rewrite ONLY IF NEEDED. If original is fine, return it unchanged. "
            "Return just the final aligned action:"
        )
        aligned = generate_seq2seq(self.judge_model, self.judge_tok, align_prompt, max_new_tokens=128)
        return aligned.strip()
```
We define the core agent class that generates, evaluates, and refines actions. Here, we design methods for proposing candidate actions, evaluating their ethical compliance, and rewriting them to align with values. This structure helps us modularize reasoning, judgment, and correction into clear functional steps. Check out the FULL CODES here.
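To see the individual methods in action before the full pipeline is assembled, a sketch like the one below exercises propose_actions and judge_action on their own; the toy goal, context, and values are illustrative placeholders, not part of the original tutorial.

```python
# Illustrative only (a sketch): exercise the proposal and review steps in
# isolation. The goal, context, and values here are placeholders.
toy_agent = EthicalAgent(policy_model, policy_tokenizer, judge_model, judge_tokenizer)
toy_values = "- Be honest.\n- Respect user privacy."

toy_actions = toy_agent.propose_actions(
    "Reduce the support ticket backlog",
    "A small SaaS team handling EU customers",
    n_candidates=2,
)
for a in toy_actions:
    print("Proposed:", a)
    print("Review:", toy_agent.judge_action(a, toy_values))
```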
```python
    # Continuation of the EthicalAgent class defined above (method body shown indented).
    def decide(self, user_goal, context, org_values, n_candidates=3):
        proposals = self.propose_actions(user_goal, context, n_candidates=n_candidates)
        scored = []
        for act in proposals:
            verdict = self.judge_action(act, org_values)
            aligned_act = self.align_action(act, verdict, org_values)
            scored.append({"original_action": act, "review": verdict, "aligned_action": aligned_act})

        def extract_risk(vtext):
            for line in vtext.splitlines():
                if "RiskLevel" in line:
                    lvl = line.split(":", 1)[-1].strip().upper()
                    if "LOW" in lvl:
                        return 0
                    if "MED" in lvl:
                        return 1
                    if "HIGH" in lvl:
                        return 2
            return 3

        scored_sorted = sorted(scored, key=lambda x: extract_risk(x["review"]))
        final_choice = scored_sorted[0]
        report = {
            "goal": user_goal,
            "context": context,
            "org_values": org_values,
            "candidates_evaluated": scored,
            "final_plan": final_choice["aligned_action"],
            "final_plan_rationale": final_choice["review"],
        }
        return report
```
We implement the complete decision-making pipeline that links generation, judgment, and alignment. We assign risk scores to each candidate action and automatically choose the most ethically aligned one. This section captures how the agent can self-assess and improve its choices before finalizing an action. Check out the FULL CODES here.
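Because extract_risk is defined inside decide, it cannot be called directly; if you want to test the parsing logic on its own, a standalone copy of the same rule (a sketch, with a hypothetical name risk_score) behaves identically.

```python
# A standalone sketch of the risk-scoring rule used inside decide(),
# convenient for checking the parsing on hand-written reviewer outputs.
def risk_score(verdict_text):
    for line in verdict_text.splitlines():
        if "RiskLevel" in line:
            lvl = line.split(":", 1)[-1].strip().upper()
            if "LOW" in lvl:
                return 0
            if "MED" in lvl:
                return 1
            if "HIGH" in lvl:
                return 2
    return 3  # unparseable reviews sort last

assert risk_score("RiskLevel: LOW\nIssues: none\nRecommendation: approve") == 0
assert risk_score("RiskLevel: HIGH\nIssues: misleading claims\nRecommendation: reject") == 2
assert risk_score("no structured fields at all") == 3
```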
```python
org_values_text = (
    "- Respect privacy; do not access personal data without consent.\n"
    "- Follow all laws and safety policies.\n"
    "- Avoid discrimination, harassment, or harmful manipulation.\n"
    "- Be transparent and truthful with stakeholders.\n"
    "- Prioritize user well-being and long-term trust over short-term gain."
)

demo_goal = "Increase customer adoption of the new financial product."
demo_context = (
    "The agent works for a bank outreach team. The target customers are small family businesses. "
    "Regulations require honest disclosure of risks and fees. Cold-calling minors or lying about terms is illegal."
)

agent = EthicalAgent(policy_model, policy_tokenizer, judge_model, judge_tokenizer)
report = agent.decide(demo_goal, demo_context, org_values_text, n_candidates=4)

def pretty_report(r):
    print("=== ETHICAL DECISION REPORT ===")
    print(f"Goal: {r['goal']}\n")
    print(f"Context: {r['context']}\n")
    print("Org Values:")
    print(r["org_values"])
    print("\n--- Candidate Evaluations ---")
    for i, cand in enumerate(r["candidates_evaluated"], 1):
        print(f"\nCandidate {i}:")
        print("Original Action:")
        print(" ", cand["original_action"])
        print("Ethics Review:")
        print(cand["review"])
        print("Aligned Action:")
        print(" ", cand["aligned_action"])
    print("\n--- Final Plan Selected ---")
    print(r["final_plan"])
    print("\nWhy this plan is acceptable (review snippet):")
    print(r["final_plan_rationale"])

pretty_report(report)
```
We define organizational values, create a real-world scenario, and run the ethical agent to generate its final plan. Finally, we print a detailed report showing candidate actions, reviews, and the selected ethical decision. Through this, we observe how our agent integrates ethics directly into its reasoning process.
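If you want to keep an audit trail beyond the printed output, the report dictionary serializes cleanly; an optional sketch such as the following writes it to a JSON file (the filename is arbitrary).

```python
# Optional (a sketch): persist the decision report as JSON so the decision
# trail can be reviewed outside the notebook.
import json

with open("ethical_decision_report.json", "w") as f:
    json.dump(report, f, indent=2, ensure_ascii=False)
```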
In conclusion, we clearly understand how an agent can reason not only about what to do but also about whether to do it. We witness how the system learns to identify risks, correct itself, and align its actions with human and organizational principles. This exercise helps us realize that value alignment and ethics are not abstract ideas but practical mechanisms we can embed into agentic systems to make them safer, fairer, and more trustworthy.