LLM generation parameters
This commit is contained in:
parent 4212d47400
commit 7c93176d86
134 DMS_STAGE_LLM_USAGE.md (Normal file)
@@ -0,0 +1,134 @@
# DMS Stage Automatic LLM Data Generation Usage Guide

## 🎯 Overview

The DMS CRUD Scenario Stage now has built-in intelligent LLM data generation, with no complex command-line parameter configuration required. As long as an LLM API key is provided, the Stage automatically uses the large language model to generate high-quality test data.

## ✨ Features

- **🤖 Auto-enabled**: LLM support is built into the DMS Stage; no extra configuration is needed
- **🎯 Intelligent generation**: produces sensible test data based on the API schema and business rules
- **🔄 Automatic fallback**: falls back to traditional data generation when the LLM fails (see the sketch after this list)
- **📊 Composite primary key support**: fully supports data generation for composite primary keys
- **🚫 No impact on ordinary tests**: only the DMS Stage uses the LLM; ordinary test cases are unaffected
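
The fallback feature boils down to a guarded call around the LLM service. Below is a minimal sketch of that gating logic; `llm_service` and `enable_llm_data_generation` are the attributes introduced by this commit, while `_generate_with_llm` and `_generate_fallback` are hypothetical helper names used only for illustration:

```python
import logging
from typing import Any, Dict

logger = logging.getLogger("dms_stage_sketch")

def build_create_payload(stage: Any, create_schema: Dict[str, Any], pk_values: Dict[str, Any]) -> Dict[str, Any]:
    """Sketch of the gating used by the DMS Stage; helper names are illustrative."""
    payload = dict(pk_values)  # always keep the primary-key values in the payload
    if stage.llm_service and stage.enable_llm_data_generation:
        try:
            # Ask the LLM for schema- and business-rule-aware field values.
            payload.update(stage._generate_with_llm(create_schema))  # hypothetical helper
        except Exception as exc:
            # Any LLM failure degrades gracefully to the traditional generator.
            logger.warning("LLM data generation failed: %s, falling back to traditional generation", exc)
            payload.update(stage._generate_fallback(create_schema))  # hypothetical helper
    else:
        payload.update(stage._generate_fallback(create_schema))
    return payload
```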

## 🚀 Usage

### Basic usage

Just provide the basic LLM-related parameters and the DMS Stage will enable the LLM automatically:

```bash
python3 run_api_tests.py \
  --base-url "http://your-api-server" \
  --dms "your-dms-config.json" \
  --stages-dir "./custom_stages" \
  --llm-api-key "your-dashscope-api-key" \
  --llm-base-url "https://dashscope.aliyuncs.com/compatible-mode/v1" \
  --llm-model-name "qwen-plus"
```

### Environment variable approach

```bash
export OPENAI_API_KEY="your-dashscope-api-key"

python3 run_api_tests.py \
  --base-url "http://your-api-server" \
  --dms "your-dms-config.json" \
  --stages-dir "./custom_stages"
```

## 🔧 Advanced configuration

### Controlling LLM usage dynamically

If you need to turn the LLM off temporarily, you can modify the Stage class:

```python
# In custom_stages/dms_crud_scenario_stage.py
class DmsCrudScenarioStage(BaseAPIStage):
    # Set to False to disable the LLM
    enable_llm_data_generation = False
```

### Runtime control

```python
# Toggle dynamically on a Stage instance
stage.enable_llm_data_generation = False  # disable the LLM
stage.enable_llm_data_generation = True   # enable the LLM
```

## 📋 Log output

With the LLM enabled, you will see log lines similar to:

```
🤖 DMS Stage automatically enabled intelligent LLM data generation, endpoint: /api/dms/test/v1/resource
🔧 LLM model: qwen-plus
LLM successfully generated intelligent test data: {...}
```

If the LLM fails, you will see the fallback log:

```
LLM data generation failed: xxx, falling back to traditional data generation
```

## 🆚 Comparison: before and after simplification

### Before (complex)

```bash
python3 run_api_tests.py \
  --base-url "..." \
  --dms "..." \
  --stages-dir "./custom_stages" \
  --llm-api-key "..." \
  --stage-use-llm-for-request-body \
  --stage-use-llm-for-path-params \
  --stage-use-llm-for-query-params
```

### After (simple)

```bash
python3 run_api_tests.py \
  --base-url "..." \
  --dms "..." \
  --stages-dir "./custom_stages" \
  --llm-api-key "..."
```

## 🎯 Best practices

1. **API key security**: store the API key in an environment variable
2. **Model choice**: `qwen-plus` or `qwen-max` are recommended
3. **Network configuration**: make sure the Tongyi Qianwen (DashScope) API is reachable
4. **Log monitoring**: watch the LLM generation logs to verify data quality

## 🔍 Troubleshooting

### LLM not working?

1. Check that the API key is correct
2. Check the network connection
3. Look for error messages in the logs
4. Confirm that the model name is correct (a quick connectivity check is sketched below)
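
Since the LLM endpoint in the examples above is the DashScope OpenAI-compatible API, a quick way to verify steps 1–4 in one go is a standalone call with the `openai` client. This is a minimal sketch, assuming the `openai` Python package is installed and using the same key, base URL, and model name you pass to `run_api_tests.py`:

```python
# Standalone connectivity check for the LLM configuration (independent of the test framework).
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],  # or the value passed via --llm-api-key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # value of --llm-base-url
)

resp = client.chat.completions.create(
    model="qwen-plus",  # value of --llm-model-name
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
)
print(resp.choices[0].message.content)  # any reply means key, network, and model name are working
```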

### Data quality issues?

1. Check whether the API schema is complete
2. Review whether the data generated by the LLM is reasonable
3. Temporarily disable the LLM and compare with traditional generation

## 💡 How it works

The DMS Stage implements intelligent data generation through the following steps:

1. **Schema analysis**: parse the API's JSON Schema
2. **Business rule extraction**: identify business rules in the field descriptions
3. **LLM prompt construction**: build a prompt that carries the business context
4. **Intelligent generation**: call the LLM to generate data that follows the business logic
5. **Automatic fallback**: fall back to traditional data generation on failure

This keeps the data intelligent while keeping the system stable.
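
As a concrete illustration of steps 1–3, a prompt builder along these lines turns the schema's field descriptions into business rules for the model. This is only a sketch under an assumed schema layout; the real `_build_business_rules_prompt` in `custom_stages/dms_crud_scenario_stage.py` may differ:

```python
# Simplified sketch of steps 1-3: turn a JSON Schema into an LLM prompt.
from typing import Any, Dict

def build_business_rules_prompt(create_schema: Dict[str, Any], pk_name: str, pk_value: Any) -> str:
    """Collect field descriptions as business rules and embed them in the prompt."""
    rules = []
    for field, spec in create_schema.get("properties", {}).items():
        # Field descriptions are treated as the business rules for that field.
        rules.append(f"- {field} ({spec.get('type', 'string')}): {spec.get('description', '')}")
    return (
        "Generate a JSON object for a create request.\n"
        f"The primary key {pk_name} must equal {pk_value!r}.\n"
        "Respect these field rules:\n" + "\n".join(rules)
    )
```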
2 Makefile
@@ -23,7 +23,7 @@ run_dms:
	--dms ./assets/doc/dms/domain.json \
	--stages-dir ./custom_stages \
	--custom-test-cases-dir ./custom_testcases \
	-v \
	-v \
	-o ./test_reports/ >log_dms1.txt 2>&1
@@ -175,6 +175,9 @@ class DmsCrudScenarioStage(BaseAPIStage):
    tags = ["dms", "crud", "scenario"]
    continue_on_failure = False

    # DMS Stage specific configuration: automatically enable intelligent LLM data generation
    enable_llm_data_generation = True  # if the LLM service is available, automatically use the LLM to generate test data

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # scenarios will be populated by is_applicable_to_api_group
@@ -272,9 +275,11 @@ class DmsCrudScenarioStage(BaseAPIStage):
        create_payload = all_pk_values.copy()  # base payload containing all primary-key values

        if create_schema:
            # Try to generate data intelligently with the LLM (if available)
            if self.llm_service:
                self.logger.info(f"Using the LLM to generate intelligent test data for the CRUD Stage, endpoint: {create_op.path}")
            # The DMS Stage enables the LLM directly (if the LLM service is available and enabled)
            # Controlled by the class attribute; no extra command-line parameter is needed
            if self.llm_service and self.enable_llm_data_generation:
                self.logger.info(f"🤖 DMS Stage automatically enabled intelligent LLM data generation, endpoint: {create_op.path}")
                self.logger.info(f"🔧 LLM model: {getattr(self.llm_service, 'model_name', 'unknown')}")

                # Build a prompt targeting the business rules
                business_rules_prompt = self._build_business_rules_prompt(create_schema, pk_name, pk_value)
@@ -225,7 +225,8 @@ class BaseAPIStage:
                 apis_in_group: List[Union[YAPIEndpoint, SwaggerEndpoint]],  # MODIFIED TYPE HINT
                 llm_service: Optional[LLMService] = None,
                 global_api_spec: Optional[ParsedAPISpec] = None,  # <--- modified type annotation
                 operation_keywords: Optional[Dict[str, List[str]]] = None):
                 operation_keywords: Optional[Dict[str, List[str]]] = None,
                 stage_llm_config: Optional[Dict[str, bool]] = None):
        self.logger = logging.getLogger(f"{__name__}.{self.__class__.__name__}")
        self.current_api_group_metadata = api_group_metadata
        self.api_group_metadata = api_group_metadata  # Let's ensure this common name is also available.
@@ -233,6 +234,14 @@ class BaseAPIStage:
        self.global_api_spec: ParsedAPISpec = global_api_spec
        self.llm_service = llm_service

        # Stage-specific LLM configuration
        self.stage_llm_config = stage_llm_config or {
            "use_llm_for_request_body": False,
            "use_llm_for_path_params": False,
            "use_llm_for_query_params": False,
            "use_llm_for_headers": False,
        }

        self._operation_keywords = operation_keywords if operation_keywords is not None else {}  # CORRECTED assignment
        self._matched_endpoints: Dict[str, Dict[str, Any]] = {}  # stores the endpoint definitions matched per operation type
@@ -435,6 +435,7 @@ class APITestOrchestrator:
                 use_llm_for_path_params: bool = False,
                 use_llm_for_query_params: bool = False,
                 use_llm_for_headers: bool = False,
                 stage_llm_config: Optional[Dict[str, bool]] = None,
                 output_dir: Optional[str] = None,
                 strictness_level: Optional[str] = None,
                 ignore_ssl: bool = False
@@ -468,6 +469,7 @@ class APITestOrchestrator:
        self.stage_registry: Optional[StageRegistry] = None

        self.llm_service: Optional[LLMService] = None
        # LLM configuration for ordinary test cases
        self.llm_config = {
            "use_for_request_body": use_llm_for_request_body,
            "use_for_path_params": use_llm_for_path_params,
@@ -475,6 +477,14 @@ class APITestOrchestrator:
            "use_for_headers": use_llm_for_headers,
        }

        # Stage-specific LLM configuration
        self.stage_llm_config = stage_llm_config or {
            "use_llm_for_request_body": False,
            "use_llm_for_path_params": False,
            "use_llm_for_query_params": False,
            "use_llm_for_headers": False,
        }

        if llm_api_key and llm_base_url and LLMService:  # <-- MODIFIED: Added check for llm_base_url
            try:
                self.llm_service = LLMService(api_key=llm_api_key, base_url=llm_base_url, model_name=llm_model_name)
@@ -2471,7 +2481,8 @@ class APITestOrchestrator:
            api_group_metadata={"name": "_template_check", "description": "template instance used for pre-checking"},  # Provide a default dict
            apis_in_group=[],
            global_api_spec=parsed_spec,
            llm_service=self.llm_service
            llm_service=self.llm_service,
            stage_llm_config=self.stage_llm_config
        )

        for current_api_group_name in api_groups_to_iterate:
@@ -2547,7 +2558,8 @@ class APITestOrchestrator:
            api_group_metadata=current_group_metadata,
            apis_in_group=current_apis_in_group_for_stage,
            global_api_spec=parsed_spec,
            llm_service=self.llm_service
            llm_service=self.llm_service,
            stage_llm_config=self.stage_llm_config
        )

        try:
43 test_stage_llm_config.py (Normal file)
@@ -0,0 +1,43 @@
#!/usr/bin/env python3
"""
Script for testing the Stage-specific LLM configuration
"""

import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from custom_stages.dms_crud_scenario_stage import DmsCrudScenarioStage

def test_stage_llm_config():
    """Test the LLM configuration built into the DMS Stage"""

    print("=== Testing the built-in LLM configuration of the DMS Stage ===")

    # Test 1: check the class attribute
    print(f"DMS Stage LLM enabled: {DmsCrudScenarioStage.enable_llm_data_generation}")
    assert DmsCrudScenarioStage.enable_llm_data_generation == True
    print("✅ Class attribute test passed")

    # Test 2: instantiate the Stage
    stage = DmsCrudScenarioStage(
        api_group_metadata={},
        apis_in_group=[],
        llm_service=None  # simulate the absence of an LLM service
    )

    print(f"Stage instance LLM enabled: {stage.enable_llm_data_generation}")
    assert stage.enable_llm_data_generation == True
    print("✅ Instance attribute test passed")

    # Test 3: the LLM can be turned off dynamically
    stage.enable_llm_data_generation = False
    print(f"After dynamic disable: {stage.enable_llm_data_generation}")
    assert stage.enable_llm_data_generation == False
    print("✅ Dynamic control test passed")

    print("\n🎉 DMS Stage built-in LLM configuration tests passed!")
    print("💡 Now you only need to provide an LLM API key and the DMS Stage will automatically use the LLM to generate data")

if __name__ == "__main__":
    test_stage_llm_config()