commit 2b1afbf47e ("llm"), parent 00de3a880a

New file: Framework_And_TestCase_Guide.md (170 lines)
@@ -0,0 +1,170 @@
## Technical Architecture Overview

This API compliance test framework consists of the following core components, which work together to define, discover, execute, and report tests:

1. **Command-line interface (`run_api_tests.py`)**:
    * Entry point for test execution.
    * Parses the arguments passed on the command line, such as the base URL of the API service, the path to the API specification file (YAPI or Swagger), the test-case directory, report output options, and LLM-related settings.
    * Initializes and drives the `APITestOrchestrator`.
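As a sketch of how such a CLI maps arguments onto the orchestrator, the snippet below uses `argparse`. The flag names are illustrative assumptions, not the script's actual interface; check `run_api_tests.py --help` for the real ones.

```python
import argparse

def build_arg_parser() -> argparse.ArgumentParser:
    # Hypothetical flag names; the real script may differ.
    parser = argparse.ArgumentParser(description="API compliance test runner")
    parser.add_argument("--base-url", required=True)
    parser.add_argument("--spec-file", required=True)
    parser.add_argument("--custom-test-cases-dir")
    parser.add_argument("--llm-api-key")
    parser.add_argument("--llm-base-url")
    parser.add_argument("--llm-model-name")
    parser.add_argument("--use-llm-for-request-body", action="store_true")
    return parser

args = build_arg_parser().parse_args([
    "--base-url", "http://localhost:8080",
    "--spec-file", "assets/doc/swagger.json",
    "--custom-test-cases-dir", "custom_testcases/",
])
print(args.base_url)  # http://localhost:8080
```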
2. **Test orchestrator (`APITestOrchestrator` in `ddms_compliance_suite/test_orchestrator.py`)**:
    * **Core controller**: the command center of the whole test flow.
    * **Component initialization**: initializes and manages the other key components, such as `InputParser` (API specification parser), `APICaller` (HTTP request executor), `TestCaseRegistry` (test-case registry), and the optional `LLMService` (large-language-model service).
    * **Test flow management**:
        * Calls `InputParser` to parse the given API specification file and obtain the definitions of all endpoints.
        * Filters the endpoints to test according to user-specified filters (e.g. YAPI categories or Swagger tags).
        * For each selected API endpoint:
            * Fetches all applicable custom test-case classes from the `TestCaseRegistry`.
            * Instantiates each test-case class.
            * Calls `_prepare_initial_request_data` to prepare the initial request data (path parameters, query parameters, headers, request body). Based on the global configuration and the test case's own configuration, this method decides whether to use an LLM for data generation, relying on `LLMService` and dynamically created Pydantic models (`_create_pydantic_model_from_schema`). If the LLM is disabled or not applicable, it falls back to the conventional schema-based generation logic (`_generate_params_from_list`, `_generate_data_from_schema`). This stage also implements an endpoint-level cache for LLM-generated parameters.
            * Invokes the `generate_*` methods defined on the test-case instance, letting the test case modify the generated request data.
            * Invokes the `validate_request_*` methods defined on the test-case instance to pre-validate the request about to be sent.
            * Sends the final API request via `APICaller`.
            * After receiving the response, invokes the test case's `validate_response` and `check_performance` methods to validate the response in detail.
    * **Result aggregation**: collects the execution result of each test case (`ExecutedTestCaseResult`), aggregates them into per-endpoint results (`TestResult`), and finally produces a summary of the whole run (`TestSummary`).
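The per-endpoint loop described above can be sketched in miniature. Here `MiniOrchestrator`, `EchoCase`, and the plain tuples are simplified stand-ins for the real `APITestOrchestrator`, `BaseAPITestCase`, and `ValidationResult` types:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MiniOrchestrator:
    """Simplified stand-in for APITestOrchestrator's per-endpoint loop."""
    send: Callable[[dict], dict]          # stand-in for APICaller
    results: list = field(default_factory=list)

    def run_endpoint(self, endpoint: dict, test_cases: list) -> None:
        for case in test_cases:
            # 1. let the test case shape the request, 2. send it, 3. validate.
            body = case.generate_request_body({"endpoint": endpoint["path"]})
            response = self.send({"method": endpoint["method"], "body": body})
            self.results.extend(case.validate_response(response))

class EchoCase:
    def generate_request_body(self, body):    # hook: may modify request data
        body["probe"] = True
        return body
    def validate_response(self, response):    # hook: assert on the response
        return [("status_ok", response["status"] == 200)]

orch = MiniOrchestrator(send=lambda req: {"status": 200, "echo": req})
orch.run_endpoint({"method": "GET", "path": "/ping"}, [EchoCase()])
print(orch.results)  # [('status_ok', True)]
```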
3. **Test-case registry (`TestCaseRegistry` in `ddms_compliance_suite/test_case_registry.py`)**:
    * **Dynamic discovery**: scans the user-specified directory (`custom_test_cases_dir`) and dynamically loads every test-case file ending in `.py`.
    * **Class identification and registration**: from the loaded modules, identifies all classes that inherit from `BaseAPITestCase` and registers them by their `id` attribute.
    * **Execution ordering**: after discovering all test-case classes, sorts them by each class's `execution_order` attribute (primary key, ascending) and then by class name `__name__` (secondary key, alphabetical).
    * **Applicability filtering**: exposes `get_applicable_test_cases`, which filters the sorted classes by the endpoint's HTTP method and path (matched via regular expression) and returns the applicable list to the orchestrator.
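The two-level ordering can be reproduced with a plain list sort keyed on `(execution_order, __name__)`, mirroring the sort key the registry uses, with `100` as the fallback default:

```python
class A:                    # no explicit order: falls back to the default of 100
    pass

class B:
    execution_order = 10

class C:
    execution_order = 10    # tie with B, broken alphabetically by class name

classes = [A, C, B]
classes.sort(key=lambda tc: (getattr(tc, "execution_order", 100), tc.__name__))
print([tc.__name__ for tc in classes])  # ['B', 'C', 'A']
```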
4. **Test framework core (`test_framework_core.py`)**:
    * **`BaseAPITestCase`**: the base class for all custom test cases. It defines the metadata a test case should carry (e.g. `id`, `name`, `description`, `severity`, `tags`, `execution_order`, `applicable_methods`, `applicable_paths_regex`, plus the LLM usage flags) and a set of lifecycle hook methods (`generate_*`, `validate_*`).
    * **`APIRequestContext` / `APIResponseContext`**: data classes that encapsulate the request and response context respectively, passed between a test case's hook methods.
    * **`ValidationResult`**: data class representing the outcome of a single validation point (pass/fail, message, details).
    * **`TestSeverity`**: enum defining a test case's severity level.

5. **API specification parser (`InputParser` in `ddms_compliance_suite/input_parser/parser.py`)**:
    * Reads and parses YAPI (JSON) or Swagger/OpenAPI (JSON or YAML) API specification files.
    * Converts the raw specification into structured objects the framework can easily work with (e.g. `ParsedYAPISpec`, `YAPIEndpoint`, `ParsedSwaggerSpec`, `SwaggerEndpoint`).
6. **API caller (`APICaller` in `ddms_compliance_suite/api_caller/caller.py`)**:
    * Encapsulates the actual HTTP request logic.
    * Takes an `APIRequest` object (method, URL, parameters, headers, body), executes the request with a library such as `requests`, and returns an `APIResponse` object (status code, response headers, body content, etc.).

7. **LLM service (`LLMService` in `ddms_compliance_suite/llm_utils/llm_service.py`)** (optional):
    * When an LLM service is configured (e.g. Tongyi Qianwen's OpenAI-compatible API), this component handles the interaction with the LLM API.
    * Mainly used to intelligently generate complex request parameters or request bodies from Pydantic models (dynamically created from JSON Schema).
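A minimal sketch of the caller's role, with the HTTP transport injected as a callable so the example runs without a network. The real `APIRequest`/`APIResponse` classes may carry more fields than shown here:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Optional

@dataclass
class APIRequest:                  # minimal stand-in; real fields may differ
    method: str
    url: str
    headers: dict = field(default_factory=dict)
    body: Optional[Any] = None

@dataclass
class APIResponse:
    status_code: int
    headers: dict
    text: str

class APICallerSketch:
    """Wraps a transport callable so a fake can replace requests in tests."""
    def __init__(self, transport: Callable[[APIRequest], APIResponse]):
        self._transport = transport

    def call(self, request: APIRequest) -> APIResponse:
        return self._transport(request)

fake = lambda req: APIResponse(200, {"X-Trace-ID": "abc"}, f"echo {req.method} {req.url}")
resp = APICallerSketch(fake).call(APIRequest("GET", "http://example.test/ping"))
print(resp.status_code)  # 200
```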
This architecture aims to provide a flexible, extensible API testing framework in which users define complex validation logic by writing custom Python test cases.

## Guide to Writing a Custom `APITestCase` (Updated)

This guide helps you create custom `APITestCase` classes to extend the testing capabilities of the DDMS compliance verification software. The core idea is **tests as code**.

(For a more detailed original guide, see `docs/APITestCase_Development_Guide.md` in the project; the content below is based on that guide with the new features added.)
### 1. Creating a Custom Test Case

1. **Create a Python file**: add a new `.py` file under your custom test-case directory (e.g. `custom_testcases/`).
2. **Inherit from `BaseAPITestCase`**: define one or more classes that inherit from `ddms_compliance_suite.test_framework_core.BaseAPITestCase`.
3. **Define metadata (class attributes)**:
    * `id: str`: globally unique identifier of the test case (e.g. `"TC-MYFEATURE-001"`).
    * `name: str`: human-readable name.
    * `description: str`: detailed description.
    * `severity: TestSeverity`: severity level (e.g. `TestSeverity.CRITICAL`, `TestSeverity.HIGH`, etc.).
    * `tags: List[str]`: classification tags (e.g. `["smoke", "regression"]`).
    * **`execution_order: int` (new)**: controls the order in which test cases run. **Smaller values run before larger ones.** If several test cases share the same value, they are further sorted alphabetically by class name. The default is `100`.
```python
class MyFirstCheck(BaseAPITestCase):
    execution_order = 10
    # ... other metadata

class MySecondCheck(BaseAPITestCase):
    execution_order = 20
    # ... other metadata
```
    * `applicable_methods: Optional[List[str]]`: restricts the applicable HTTP methods (e.g. `["POST", "PUT"]`). `None` means all methods.
    * `applicable_paths_regex: Optional[str]`: restricts the applicable API paths (a Python regular expression). `None` means all paths.
    * **LLM usage flags (optional)**: these flags let a test case override the global LLM configuration.
        * `use_llm_for_body: bool = False`
        * `use_llm_for_path_params: bool = False`
        * `use_llm_for_query_params: bool = False`
        * `use_llm_for_headers: bool = False`

        (If a test case does not set these, the global LLM switches passed to `run_api_tests.py` apply.)
4. **Implement the validation logic**: override one or more of the `generate_*` or `validate_*` methods of `BaseAPITestCase`.
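The precedence rule can be shown as a standalone sketch: an explicitly set per-test-case flag wins, otherwise the global switch applies. This mirrors the decision made in the orchestrator's `_should_use_llm_for_param_type`:

```python
from typing import Optional

def should_use_llm(global_flag: bool, tc_flag: Optional[bool]) -> bool:
    # An explicit per-test-case setting wins; otherwise the global switch applies.
    return tc_flag if tc_flag is not None else global_flag

print(should_use_llm(True, None))    # True  (falls back to the global switch)
print(should_use_llm(True, False))   # False (test case explicitly opts out)
print(should_use_llm(False, True))   # True  (test case explicitly opts in)
```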
### 2. Core `BaseAPITestCase` Methods

* **`__init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any])`**:
    * Constructor. `endpoint_spec` holds the API definition of the endpoint under test; `global_api_spec` holds the full API specification.
    * The base class initializes `self.logger`, which you can use for logging.

* **Request generation and modification methods**: called before the API request is sent, to modify or generate request data.
    * `generate_query_params(self, current_query_params: Dict[str, Any]) -> Dict[str, Any]`
    * `generate_headers(self, current_headers: Dict[str, str]) -> Dict[str, str]`
    * `generate_request_body(self, current_body: Optional[Any]) -> Optional[Any]`
    * (If needed, you can also try defining a `generate_path_params` method to customize path-parameter generation, following the same pattern.)
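A minimal sketch of a header-generation hook. `StubBase` stands in for `BaseAPITestCase` here so the example runs on its own:

```python
class StubBase:
    """Stand-in for BaseAPITestCase: default hook returns the data unchanged."""
    def generate_headers(self, current_headers):
        return current_headers

class AuthHeaderCase(StubBase):
    def generate_headers(self, current_headers: dict) -> dict:
        headers = dict(current_headers)       # copy: don't mutate the input
        headers["Authorization"] = "Bearer test-token"
        return headers

out = AuthHeaderCase().generate_headers({"Accept": "application/json"})
print(out)
```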
* **Request pre-validation methods**: called after the request data is fully built but before it is sent, for static checks. Return `List[ValidationResult]`.
    * `validate_request_url(self, url: str, request_context: APIRequestContext) -> List[ValidationResult]`
    * `validate_request_headers(self, headers: Dict[str, str], request_context: APIRequestContext) -> List[ValidationResult]`
    * `validate_request_body(self, body: Optional[Any], request_context: APIRequestContext) -> List[ValidationResult]`

* **Response validation methods**: called after the API response is received; this is the main validation stage. Return `List[ValidationResult]`.
    * `validate_response(self, response_context: APIResponseContext, request_context: APIRequestContext) -> List[ValidationResult]`
        * Check that the status code, response headers, and body content match expectations.
        * Make business-logic assertions.

* **Performance check method (optional)**:
    * `check_performance(self, response_context: APIResponseContext, request_context: APIRequestContext) -> List[ValidationResult]`
    * Typically used to check the response time, `response_context.elapsed_time`.
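A sketch of such a performance hook, assuming `elapsed_time` is in seconds and using plain `(passed, message)` tuples in place of `ValidationResult`:

```python
class SlowResponseCheck:
    threshold_seconds = 2.0   # illustrative budget, not a framework default

    def check_performance(self, elapsed_time: float):
        if elapsed_time <= self.threshold_seconds:
            return [(True, f"response took {elapsed_time:.2f}s")]
        return [(False, f"response took {elapsed_time:.2f}s, over the {self.threshold_seconds}s budget")]

print(SlowResponseCheck().check_performance(0.35))
```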
### 3. Core Helper Classes

* **`ValidationResult(passed: bool, message: str, details: Optional[Dict[str, Any]] = None)`**:
    * Encapsulates the outcome of a single validation point. All `validate_*` and `check_*` methods should return a list of these objects.
* **`APIRequestContext`**: holds the details of the current request (method, URL, parameters, headers, body, endpoint specification).
* **`APIResponseContext`**: holds the details of the API response (status code, headers, JSON content, text content, elapsed time, raw response object, and the associated request context).
### 4. Example (Showing `execution_order`)

Using `custom_testcases/basic_checks.py` from your project as a reference, you can add `execution_order` like this:
```python
# In custom_testcases/status_and_header_checks.py
from ddms_compliance_suite.test_framework_core import BaseAPITestCase, TestSeverity, ValidationResult, APIRequestContext, APIResponseContext

class StatusCodeCheck(BaseAPITestCase):
    id = "TC-STATUS-001"
    name = "Status code check"
    description = "Validates the API response status code."
    severity = TestSeverity.CRITICAL
    tags = ["status", "smoke"]
    execution_order = 10  # should run before HeaderCheck below

    def validate_response(self, response_context: APIResponseContext, request_context: APIRequestContext) -> list[ValidationResult]:
        results = []
        if response_context.status_code == 200:
            results.append(ValidationResult(passed=True, message="Response status code is 200 OK."))
        else:
            results.append(ValidationResult(passed=False, message=f"Expected status code 200, got {response_context.status_code}."))
        return results

class EssentialHeaderCheck(BaseAPITestCase):
    id = "TC-HEADER-ESSENTIAL-001"
    name = "Essential header X-Trace-ID presence check"
    description = "Validates that the response contains X-Trace-ID."
    severity = TestSeverity.HIGH
    tags = ["header"]
    execution_order = 20  # runs after the status code check

    def validate_response(self, response_context: APIResponseContext, request_context: APIRequestContext) -> list[ValidationResult]:
        results = []
        if "X-Trace-ID" in response_context.headers:
            results.append(ValidationResult(passed=True, message="Response headers include X-Trace-ID."))
        else:
            results.append(ValidationResult(passed=False, message="Response headers are missing X-Trace-ID."))
        return results
```
### 5. Best Practices

* **Single responsibility**: keep each `APITestCase` focused on one specific validation goal.
* **Clear naming**: use descriptive text for classes, IDs, and names.
* **Make good use of `endpoint_spec`**: consult the API definition to test precisely.
* **Detailed `ValidationResult`s**: on failure, provide plenty of context.
* **Logging**: use `self.logger` to record important information and problems during the test.

We hope this updated architecture overview and writing guide is useful! With `execution_order`, you can better control the execution flow of test cases in complex scenarios.
New file: assets/doc/井筒API示例swagger_fixed_simple.json (1243 lines; file diff suppressed because it is too large)
Binary file not shown.
```diff
@@ -8,10 +8,13 @@ class StatusCode200Check(BaseAPITestCase):
     description = "验证 API 响应状态码是否为 200 OK。"
     severity = TestSeverity.CRITICAL
     tags = ["status_code", "smoke_test"]
-    use_llm_for_body = False
     # 适用于所有方法和路径 (默认)
     # applicable_methods = None
     # applicable_paths_regex = None
+    use_llm_for_body: bool = True
+    use_llm_for_path_params: bool = True
+    use_llm_for_query_params: bool = True
+    use_llm_for_headers: bool = True

     def __init__(self, endpoint_spec: dict, global_api_spec: dict):
         super().__init__(endpoint_spec, global_api_spec)
```
Binary files not shown.
```diff
@@ -112,7 +112,7 @@ class LLMService:
     def generate_parameters_from_schema(
         self,
         pydantic_model_class: type[BaseModel],
-        prompt_instructions: Optional[str] = None,
+        prompt_instruction: Optional[str] = None,
         max_tokens: int = 1024,
         temperature: float = 0.7
     ) -> Optional[Dict[str, Any]]:
```
````diff
@@ -133,8 +133,8 @@ class LLMService:
             "不包含任何额外的解释、注释或Markdown标记。"
         )
         user_prompt_content = f"请为以下JSON Schema生成一个有效的JSON对象实例:\n\n```json\n{schema_str}\n```\n"
-        if prompt_instructions:
-            user_prompt_content += f"\n请遵循以下额外指令:\n{prompt_instructions}"
+        if prompt_instruction:
+            user_prompt_content += f"\n请遵循以下额外指令:\n{prompt_instruction}"
         messages = [
             {"role": "system", "content": system_prompt},
             {"role": "user", "content": user_prompt_content}
````
```diff
@@ -194,7 +194,7 @@ if __name__ == '__main__':
         logger.info("\n--- 测试 SampleUserProfile 参数生成 ---")
         generated_profile = llm_service_instance.generate_parameters_from_schema(
             pydantic_model_class=SampleUserProfile,
-            prompt_instructions="请生成一个表示非活跃用户的配置文件,用户名包含 \"test_user\" 字样,城市为上海,并包含至少一个兴趣爱好。"
+            prompt_instruction="请生成一个表示非活跃用户的配置文件,用户名包含 \"test_user\" 字样,城市为上海,并包含至少一个兴趣爱好。"
         )

         if generated_profile:
```
```diff
@@ -210,7 +210,7 @@ if __name__ == '__main__':
         logger.info("\n--- 测试 SampleUserAddress 参数生成 ---")
         generated_address = llm_service_instance.generate_parameters_from_schema(
             pydantic_model_class=SampleUserAddress,
-            prompt_instructions="生成一个位于北京市朝阳区的地址,邮编以1000开头。"
+            prompt_instruction="生成一个位于北京市朝阳区的地址,邮编以1000开头。"
         )
         if generated_address:
             logger.info(f"成功生成的 UserAddress:\n{json.dumps(generated_address, indent=2, ensure_ascii=False)}")
```
```diff
@@ -66,7 +66,16 @@ class TestCaseRegistry:
             except Exception as e:
                 self.logger.error(f"处理文件 '{file_path}' 时发生未知错误: {e}", exc_info=True)

-        self.logger.info(f"测试用例发现完成。总共注册了 {len(self._registry)} 个独特的测试用例 (基于ID)。发现 {found_count} 个测试用例类。")
+        # 根据 execution_order 对收集到的测试用例类进行排序
+        try:
+            self._test_case_classes.sort(key=lambda tc_class: (getattr(tc_class, 'execution_order', 100), tc_class.__name__))
+            self.logger.info(f"已根据 execution_order (主要) 和类名 (次要) 对 {len(self._test_case_classes)} 个测试用例类进行了排序。")
+        except AttributeError as e_sort:
+            self.logger.error(f"对测试用例类进行排序时发生 AttributeError (可能部分类缺少 execution_order): {e_sort}", exc_info=True)
+        except Exception as e_sort_general:
+            self.logger.error(f"对测试用例类进行排序时发生未知错误: {e_sort_general}", exc_info=True)
+
+        self.logger.info(f"测试用例发现完成。总共注册了 {len(self._registry)} 个独特的测试用例 (基于ID)。发现并排序了 {len(self._test_case_classes)} 个测试用例类。")

     def get_test_case_by_id(self, case_id: str) -> Optional[Type[BaseAPITestCase]]:
         """根据ID获取已注册的测试用例类。"""
```
```diff
@@ -83,7 +83,15 @@ class BaseAPITestCase:

     applicable_methods: Optional[List[str]] = None
     applicable_paths_regex: Optional[str] = None
-    use_llm_for_body: Optional[bool] = None # 新增属性:控制此测试用例是否使用LLM生成请求体
+
+    # 新增:测试用例执行顺序 (数值越小越先执行)
+    execution_order: int = 100
+
+    # LLM 生成控制属性 (默认为 False,表示不使用LLM,除非显式开启)
+    use_llm_for_body: bool = False
+    use_llm_for_path_params: bool = False
+    use_llm_for_query_params: bool = False
+    use_llm_for_headers: bool = False

     def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any]):
         """
```
```diff
@@ -311,11 +311,14 @@ class APITestOrchestrator:
     """API测试编排器"""

     def __init__(self, base_url: str,
-                 custom_test_cases_dir: Optional[str] = None, # 新的自定义测试用例目录路径
+                 custom_test_cases_dir: Optional[str] = None,
                  llm_api_key: Optional[str] = None,
                  llm_base_url: Optional[str] = None,
                  llm_model_name: Optional[str] = None,
-                 use_llm_for_request_body: bool = False
+                 use_llm_for_request_body: bool = False,
+                 use_llm_for_path_params: bool = False,
+                 use_llm_for_query_params: bool = False,
+                 use_llm_for_headers: bool = False
                  ):
         """
         初始化API测试编排器
```
```diff
@@ -326,7 +329,10 @@ class APITestOrchestrator:
             llm_api_key: 大模型服务的API Key。
             llm_base_url: 大模型服务的兼容OpenAI的基础URL。
             llm_model_name: 要使用的具体模型名称。
-            use_llm_for_request_body: 是否使用LLM生成请求体,默认为False。
+            use_llm_for_request_body: 是否全局启用LLM生成请求体。
+            use_llm_for_path_params: 是否全局启用LLM生成路径参数。
+            use_llm_for_query_params: 是否全局启用LLM生成查询参数。
+            use_llm_for_headers: 是否全局启用LLM生成头部参数。
         """
         self.base_url = base_url.rstrip('/')
         self.logger = logging.getLogger(__name__)
```
```diff
@@ -336,7 +342,6 @@ class APITestOrchestrator:
         self.api_caller = APICaller()
         self.validator = JSONSchemaValidator() # JSON Schema 验证器,可能会被测试用例内部使用

-        # 初始化 (新) 测试用例注册表
         self.test_case_registry: Optional[TestCaseRegistry] = None
         if custom_test_cases_dir:
             self.logger.info(f"初始化 TestCaseRegistry,扫描目录: {custom_test_cases_dir}")
```
```diff
@@ -348,35 +353,85 @@ class APITestOrchestrator:
         else:
             self.logger.info("未提供 custom_test_cases_dir,不加载自定义 APITestCase。")

-        # 初始化 LLM 服务 (如果配置了)
-        self.llm_service: Optional[LLMService] = None
+        # LLM 全局配置开关
         self.use_llm_for_request_body = use_llm_for_request_body
+        self.use_llm_for_path_params = use_llm_for_path_params
+        self.use_llm_for_query_params = use_llm_for_query_params
+        self.use_llm_for_headers = use_llm_for_headers

-        if LLMService is None: # 检查导入是否成功
+        self.llm_service: Optional[LLMService] = None
+        if LLMService is None:
             self.logger.warning("LLMService 类未能导入,LLM 相关功能将完全禁用。")
-            self.use_llm_for_request_body = False # 强制禁用
-        elif self.use_llm_for_request_body: # 只有当用户希望使用且类已导入时才尝试初始化
-            if llm_api_key and llm_base_url and llm_model_name:
+            # 强制所有LLM使用为False,并确保服务实例为None
+            self.use_llm_for_request_body = False
+            self.use_llm_for_path_params = False
+            self.use_llm_for_query_params = False
+            self.use_llm_for_headers = False
+        elif llm_api_key and llm_base_url and llm_model_name: # 直接检查配置是否完整
             try:
                 self.llm_service = LLMService(
                     api_key=llm_api_key,
                     base_url=llm_base_url,
                     model_name=llm_model_name
                 )
-                self.logger.info(f"LLMService 已成功初始化,模型: {llm_model_name}。将尝试使用LLM生成请求体。")
-            except ValueError as ve: # LLMService init might raise ValueError for bad args
-                self.logger.error(f"LLMService 初始化失败 (参数错误): {ve}。将回退到非LLM请求体生成。")
-                self.llm_service = None
-                self.use_llm_for_request_body = False # 初始化失败,禁用LLM使用
+                self.logger.info(f"LLMService 已成功初始化,模型: {llm_model_name}。")
+            except ValueError as ve:
+                self.logger.error(f"LLMService 初始化失败 (参数错误): {ve}。LLM相关功能将不可用。")
+                self.llm_service = None # 确保初始化失败时服务为None
             except Exception as e:
-                self.logger.error(f"LLMService 初始化时发生未知错误: {e}。将回退到非LLM请求体生成。", exc_info=True)
-                self.llm_service = None
-                self.use_llm_for_request_body = False # 初始化失败,禁用LLM使用
+                self.logger.error(f"LLMService 初始化时发生未知错误: {e}。LLM相关功能将不可用。", exc_info=True)
+                self.llm_service = None # 确保初始化失败时服务为None
         else:
-            self.logger.warning("希望使用LLM生成请求体,但未提供完整的LLM配置 (api_key, base_url, model_name)。将回退到非LLM请求体生成。")
-            self.use_llm_for_request_body = False # 配置不全,禁用LLM使用
-        elif not self.use_llm_for_request_body:
-            self.logger.info("配置为不使用LLM生成请求体。")
+            # 如果LLMService类存在,但配置不完整
+            if LLMService:
+                self.logger.warning("LLMService 类已找到,但未提供完整的LLM配置 (api_key, base_url, model_name)。LLM相关功能将不可用。")
+            # self.llm_service 默认就是 None,无需额外操作
+
+        # 新增:端点级别的LLM生成参数缓存
+        self.llm_endpoint_params_cache: Dict[str, Dict[str, Any]] = {}

+    def _should_use_llm_for_param_type(
+        self,
+        param_type_key: str, # 例如 "path_params", "query_params", "headers", "body"
+        test_case_instance: Optional[BaseAPITestCase]
+    ) -> bool:
+        """
+        判断是否应为特定参数类型尝试使用LLM。
+        结合全局配置和测试用例特定配置。
+        """
+        if not self.llm_service: # 如果LLM服务本身就不可用,则肯定不用
+            return False
+
+        global_flag = False
+        tc_specific_flag: Optional[bool] = None
+
+        if param_type_key == "body":
+            global_flag = self.use_llm_for_request_body
+            if test_case_instance:
+                tc_specific_flag = test_case_instance.use_llm_for_body
+        elif param_type_key == "path_params":
+            global_flag = self.use_llm_for_path_params
+            if test_case_instance:
+                tc_specific_flag = test_case_instance.use_llm_for_path_params
+        elif param_type_key == "query_params":
+            global_flag = self.use_llm_for_query_params
+            if test_case_instance:
+                tc_specific_flag = test_case_instance.use_llm_for_query_params
+        elif param_type_key == "headers":
+            global_flag = self.use_llm_for_headers
+            if test_case_instance:
+                tc_specific_flag = test_case_instance.use_llm_for_headers
+        else:
+            self.logger.warning(f"未知的参数类型键 '{param_type_key}' 在 _should_use_llm_for_param_type 中检查。")
+            return False
+
+        # 决定最终是否使用LLM的逻辑:
+        # 1. 如果测试用例明确设置了 (tc_specific_flag is not None),则以测试用例的设置为准。
+        # 2. 否则,使用全局设置。
+        final_decision = tc_specific_flag if tc_specific_flag is not None else global_flag
+
+        # self.logger.debug(f"LLM决策 for '{param_type_key}': TC specific='{tc_specific_flag}', Global='{global_flag}', Final='{final_decision}')
+        return final_decision
+
     def _create_pydantic_model_from_schema(
         self,
```
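The endpoint-level cache introduced above (`llm_endpoint_params_cache`, keyed by `f"{method}_{path}"`) is ordinary memoization; a self-contained sketch:

```python
# Generated parameters are memoized under a "METHOD_path" key, so repeated
# test cases against the same endpoint reuse one (potentially expensive)
# LLM generation instead of regenerating each time.
cache: dict[str, dict] = {}
calls = {"n": 0}

def expensive_generate(method: str, path: str) -> dict:
    calls["n"] += 1                     # stands in for an LLM round-trip
    return {"query_params": {"page": 1}, "body": None}

def get_params(method: str, path: str) -> dict:
    key = f"{method.upper()}_{path}"
    if key not in cache:
        cache[key] = expensive_generate(method, path)
    return cache[key]

get_params("get", "/wells/{id}")
get_params("GET", "/wells/{id}")
print(calls["n"])  # 1  (second call is served from the cache)
```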
```diff
@@ -573,40 +628,37 @@ class APITestOrchestrator:
         test_case_instance: Optional[BaseAPITestCase] = None

         endpoint_spec_dict: Dict[str, Any]
-        # 确保 endpoint_spec 转换为字典,以便在测试用例和请求上下文中统一使用
         if hasattr(endpoint_spec, 'to_dict') and callable(endpoint_spec.to_dict):
             endpoint_spec_dict = endpoint_spec.to_dict()
-        elif isinstance(endpoint_spec, (YAPIEndpoint, SwaggerEndpoint)):
+        elif isinstance(endpoint_spec, dict): # 如果它已经是字典 (例如从 OpenAPI 解析器直接过来)
+            endpoint_spec_dict = endpoint_spec
+        elif isinstance(endpoint_spec, (YAPIEndpoint, SwaggerEndpoint)): # 作为后备,从特定类型提取
             self.logger.debug(f"Manually converting endpoint_spec of type {type(endpoint_spec).__name__} to dict.")
             endpoint_spec_dict = {
                 "method": getattr(endpoint_spec, 'method', 'UNKNOWN_METHOD'),
                 "path": getattr(endpoint_spec, 'path', 'UNKNOWN_PATH'),
                 "title": getattr(endpoint_spec, 'title', getattr(endpoint_spec, 'summary', '')),
                 "summary": getattr(endpoint_spec, 'summary', ''),
-                "description": getattr(endpoint_spec, 'description', ''), # 确保description也被传递
+                "description": getattr(endpoint_spec, 'description', ''),
                 "operationId": getattr(endpoint_spec, 'operation_id',
                                        f"{getattr(endpoint_spec, 'method', '').upper()}_{getattr(endpoint_spec, 'path', '').replace('/', '_')}"),
+                # 尝试提取参数和请求体 (简化版)
+                "parameters": getattr(endpoint_spec, 'parameters', []) if isinstance(endpoint_spec, SwaggerEndpoint) else (getattr(endpoint_spec, 'req_query', []) + getattr(endpoint_spec, 'req_headers', [])),
+                "requestBody": getattr(endpoint_spec, 'request_body', None) if isinstance(endpoint_spec, SwaggerEndpoint) else getattr(endpoint_spec, 'req_body_other', None),
                 "_original_object_type": type(endpoint_spec).__name__
             }
-            if isinstance(endpoint_spec, YAPIEndpoint):
-                for attr_name in dir(endpoint_spec):
-                    if not attr_name.startswith('_') and not callable(getattr(endpoint_spec, attr_name)):
-                        try:
-                            json.dumps({attr_name: getattr(endpoint_spec, attr_name)})
-                            endpoint_spec_dict[attr_name] = getattr(endpoint_spec, attr_name)
-                        except (TypeError, OverflowError):
-                            pass
-            elif isinstance(endpoint_spec, SwaggerEndpoint):
-                if hasattr(endpoint_spec, 'parameters'): endpoint_spec_dict['parameters'] = endpoint_spec.parameters
-                if hasattr(endpoint_spec, 'request_body'): endpoint_spec_dict['request_body'] = endpoint_spec.request_body
-                if hasattr(endpoint_spec, 'responses'): endpoint_spec_dict['responses'] = endpoint_spec.responses
         else:
-            endpoint_spec_dict = endpoint_spec if isinstance(endpoint_spec, dict) else {}
-            if not endpoint_spec_dict:
+            endpoint_spec_dict = {}
             self.logger.warning(f"endpoint_spec无法转换为字典,实际类型: {type(endpoint_spec)}")

         global_api_spec_dict: Dict[str, Any]
         if hasattr(global_api_spec, 'to_dict') and callable(global_api_spec.to_dict):
             global_api_spec_dict = global_api_spec.to_dict()
+        elif isinstance(global_api_spec, dict):
+            global_api_spec_dict = global_api_spec
         else:
-            global_api_spec_dict = global_api_spec if isinstance(global_api_spec, dict) else {}
-            if not global_api_spec_dict:
+            global_api_spec_dict = {}
             self.logger.warning(f"global_api_spec无法转换为字典,实际类型: {type(global_api_spec)}")
```
```diff
@@ -618,24 +670,32 @@ class APITestOrchestrator:
             test_case_instance.logger.info(f"开始执行测试用例 '{test_case_instance.id}' for endpoint '{endpoint_spec_dict.get('method')} {endpoint_spec_dict.get('path')}'")

-            # 调用 _prepare_initial_request_data 时传递 test_case_instance
-            initial_request_data = self._prepare_initial_request_data(endpoint_spec, test_case_instance=test_case_instance)
+            # 并直接解包返回的元组
+            method, path_params_data, query_params_data, headers_data, body_data = \
+                self._prepare_initial_request_data(endpoint_spec_dict, test_case_instance=test_case_instance)

-            current_q_params = test_case_instance.generate_query_params(initial_request_data['query_params'])
-            current_headers = test_case_instance.generate_headers(initial_request_data['headers'])
-            current_body = test_case_instance.generate_request_body(initial_request_data['body'])
+            # 让测试用例有机会修改这些生成的数据
+            # 注意: BaseAPITestCase 中的 generate_* 方法现在需要传入 endpoint_spec_dict
+            # 因为它们可能需要原始的端点定义来进行更复杂的逻辑
+            current_q_params = test_case_instance.generate_query_params(query_params_data)
+            current_headers = test_case_instance.generate_headers(headers_data)
+            current_body = test_case_instance.generate_request_body(body_data)
+            # 路径参数通常由编排器根据路径模板和数据最终确定,但如果测试用例要覆盖,可以提供 generate_path_params
+            # 这里我们使用从 _prepare_initial_request_data 返回的 path_params_data 作为基础
+            current_path_params = test_case_instance.generate_path_params(path_params_data) if hasattr(test_case_instance, 'generate_path_params') and callable(getattr(test_case_instance, 'generate_path_params')) and getattr(test_case_instance, 'generate_path_params').__func__ != BaseAPITestCase.generate_path_params else path_params_data

-            current_path_params = initial_request_data['path_params']
-
-            final_url = self.base_url + endpoint_spec_dict.get('path', '')
+            final_url_template = endpoint_spec_dict.get('path', '')
+            final_url = self.base_url + final_url_template
             for p_name, p_val in current_path_params.items():
                 placeholder = f"{{{p_name}}}"
-                if placeholder in final_url:
+                if placeholder in final_url_template: # 替换基础路径模板中的占位符
                     final_url = final_url.replace(placeholder, str(p_val))
                 else:
                     self.logger.warning(f"路径参数 '{p_name}' 在路径模板 '{endpoint_spec_dict.get('path')}' 中未找到占位符。")
+            # 注意: 如果 _prepare_initial_request_data 填充的 final_url 已经包含了 base_url,这里的拼接逻辑需要调整
+            # 假设 final_url_template 只是 path string e.g. /users/{id}

             api_request_context = APIRequestContext(
-                method=endpoint_spec_dict.get('method', 'GET').upper(),
+                method=method, # 使用从 _prepare_initial_request_data 获取的 method
                 url=final_url,
                 path_params=current_path_params,
                 query_params=current_q_params,
```
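The placeholder-substitution step in this hunk can be exercised standalone; the URL and parameter values here are hypothetical:

```python
# Each {name} placeholder in the path template is replaced with its generated
# value; parameters without a matching placeholder are skipped (the real code
# logs a warning for those).
base_url = "http://localhost:8080"
path_template = "/wells/{well_id}/logs/{log_id}"
path_params = {"well_id": "W-001", "log_id": 7, "unused": "x"}

final_url = base_url + path_template
for name, value in path_params.items():
    placeholder = f"{{{name}}}"
    if placeholder in path_template:
        final_url = final_url.replace(placeholder, str(value))

print(final_url)  # http://localhost:8080/wells/W-001/logs/7
```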
@ -733,199 +793,328 @@ class APITestOrchestrator:
|
||||
duration=tc_duration
|
||||
)
|
||||
|
||||
def _prepare_initial_request_data(self, endpoint_spec: Union[YAPIEndpoint, SwaggerEndpoint], test_case_instance: Optional[BaseAPITestCase] = None) -> Dict[str, Any]:
|
||||
def _prepare_initial_request_data(
|
||||
self,
|
||||
endpoint_spec: Dict[str, Any],
|
||||
test_case_instance: Optional[BaseAPITestCase] = None
|
||||
) -> Tuple[str, Dict[str, Any], Dict[str, Any], Dict[str, Any], Optional[Any]]:
|
||||
"""
|
||||
根据端点规格准备一个初始的请求数据结构。
|
||||
返回一个包含 'path_params', 'query_params', 'headers', 'body' 的字典。
|
||||
Args:
|
||||
endpoint_spec: 当前端点的规格。
|
||||
test_case_instance: (可选) 当前正在执行的测试用例实例,用于细粒度控制LLM使用。
|
||||
根据OpenAPI端点规格和测试用例实例准备初始请求数据。
|
||||
包含端点级别的LLM参数缓存逻辑。
|
||||
"""
|
||||
self.logger.debug(f"Preparing initial request data for: {endpoint_spec.method} {endpoint_spec.path}")
|
||||
method = endpoint_spec.get("method", "get").upper()
|
||||
operation_id = endpoint_spec.get("operationId", f"{method}_{endpoint_spec.get('path', '')}")
|
||||
endpoint_cache_key = f"{method}_{endpoint_spec.get('path', '')}"
|
||||
|
||||
path_params_spec_list: List[Dict[str, Any]] = []
|
||||
query_params_spec_list: List[Dict[str, Any]] = []
|
||||
headers_spec_list: List[Dict[str, Any]] = []
|
||||
body_schema_dict: Optional[Dict[str, Any]] = None
|
||||
path_str = getattr(endpoint_spec, 'path', '')
|
||||
self.logger.info(f"[{operation_id}] 开始为端点 {endpoint_cache_key} 准备初始请求数据 (TC: {test_case_instance.id if test_case_instance else 'N/A'})")
|
||||
|
||||
if isinstance(endpoint_spec, YAPIEndpoint):
|
||||
query_params_spec_list = endpoint_spec.req_query or []
|
||||
headers_spec_list = endpoint_spec.req_headers or []
|
||||
if endpoint_spec.req_body_type == 'json' and endpoint_spec.req_body_other:
|
||||
try:
|
||||
body_schema_dict = json.loads(endpoint_spec.req_body_other) if isinstance(endpoint_spec.req_body_other, str) else endpoint_spec.req_body_other
|
||||
except json.JSONDecodeError:
|
||||
self.logger.warning(f"YAPI req_body_other for {path_str} is not valid JSON: {endpoint_spec.req_body_other}")
|
||||
# 尝试从缓存加载参数
|
||||
if endpoint_cache_key in self.llm_endpoint_params_cache:
|
||||
cached_params = self.llm_endpoint_params_cache[endpoint_cache_key]
|
||||
self.logger.info(f"[{operation_id}] 从缓存加载了端点 '{endpoint_cache_key}' 的LLM参数。")
|
||||
# 直接从缓存中获取各类参数,如果存在的话
|
||||
path_params_data = cached_params.get("path_params", {})
|
||||
query_params_data = cached_params.get("query_params", {})
|
||||
headers_data = cached_params.get("headers", {})
|
||||
body_data = cached_params.get("body") # Body可能是None
|
||||
|
||||
elif isinstance(endpoint_spec, SwaggerEndpoint):
|
||||
# 优先尝试 OpenAPI 3.0+ 的 requestBody
|
||||
if endpoint_spec.request_body and 'content' in endpoint_spec.request_body:
|
||||
json_content_spec = endpoint_spec.request_body['content'].get('application/json', {})
|
||||
if 'schema' in json_content_spec:
|
||||
body_schema_dict = json_content_spec['schema']
|
||||
self.logger.debug("从 Swagger 3.0+ 'requestBody' 中提取到 body schema。")
|
||||
# 即使从缓存加载,仍需确保默认头部(如Accept, Content-Type)存在或被正确设置
|
||||
# Content-Type应基于body_data是否存在来决定
|
||||
default_headers = {"Accept": "application/json"}
|
||||
if body_data is not None and method not in ["GET", "DELETE", "HEAD", "OPTIONS"]:
|
||||
default_headers["Content-Type"] = "application/json"
|
||||
|
||||
# 如果没有从 requestBody 中找到,再尝试 Swagger 2.0 的 in: "body" 参数
|
||||
if not body_schema_dict and endpoint_spec.parameters:
|
||||
for param_spec in endpoint_spec.parameters:
|
||||
if param_spec.get('in') == 'body':
|
||||
if 'schema' in param_spec:
|
||||
body_schema_dict = param_spec['schema']
|
||||
self.logger.debug(f"从 Swagger 2.0 'in: body' 参数 '{param_spec.get('name')}' 中提取到 body schema (作为回退)。")
|
||||
break # 找到一个 body 参数就足够了
|
||||
headers_data = {**default_headers, **headers_data} # 合并,缓存中的优先
|
||||
|
||||
# 处理 path, query, header 参数 (这部分逻辑需要保留并放在正确的位置)
|
||||
if endpoint_spec.parameters:
|
||||
for param_spec in endpoint_spec.parameters:
|
||||
param_in = param_spec.get('in')
|
||||
if param_in == 'path':
|
||||
path_params_spec_list.append(param_spec)
|
||||
elif param_in == 'query':
|
||||
query_params_spec_list.append(param_spec)
|
||||
elif param_in == 'header':
|
||||
headers_spec_list.append(param_spec)
|
||||
self.logger.debug(f"[{operation_id}] (缓存加载) 准备的请求数据: method={method}, path_params={path_params_data}, query_params={query_params_data}, headers={list(headers_data.keys())}, body_type={type(body_data).__name__}")
|
||||
return method, path_params_data, query_params_data, headers_data, body_data
|
||||
|
||||
        # Cache miss: the parameters need to be generated
        self.logger.info(f"[{operation_id}] Parameters for endpoint '{endpoint_cache_key}' not found in cache; generating them now.")
        generated_params_for_endpoint: Dict[str, Any] = {}

        headers_data_generated: Dict[str, Any] = {}  # Generated by the LLM or by rules; default headers excluded
        body_data: Optional[Any] = None

        # Extract the spec lists for each parameter kind
        path_params_spec_list = [p for p in endpoint_spec.get("parameters", []) if p.get("in") == "path"]
        query_params_spec_list = [p for p in endpoint_spec.get("parameters", []) if p.get("in") == "query"]
        headers_spec_list = [p for p in endpoint_spec.get("parameters", []) if p.get("in") == "header"]
        request_body_spec = endpoint_spec.get("requestBody", {}).get("content", {}).get("application/json", {}).get("schema")

        # --- 1. Path parameters ---
        param_type_key = "path_params"
        if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and path_params_spec_list:
            self.logger.info(f"[{operation_id}] Trying to generate path parameters with the LLM.")
            object_schema, model_name = self._build_object_schema_for_params(path_params_spec_list, f"DynamicPathParamsFor_{operation_id}")
            if object_schema and model_name:
                try:
                    PydanticModel = self._create_pydantic_model_from_schema(object_schema, model_name)
                    if PydanticModel:
                        llm_generated = self.llm_service.generate_parameters_from_schema(
                            PydanticModel,
                            prompt_instruction=f"Generate valid path parameters for API operation: {operation_id}. Description: {endpoint_spec.get('description', '') or endpoint_spec.get('summary', 'N/A')}"
                        )
                        if isinstance(llm_generated, dict):
                            path_params_data = llm_generated
                            self.logger.info(f"[{operation_id}] LLM generated path parameters: {path_params_data}")
                        else:
                            self.logger.warning(f"[{operation_id}] LLM returned a non-dict for path parameters: {type(llm_generated)}. Falling back to rule-based generation.")
                            path_params_data = self._generate_params_from_list(path_params_spec_list, operation_id, "path")
                    else:
                        path_params_data = self._generate_params_from_list(path_params_spec_list, operation_id, "path")
                except Exception as e:
                    self.logger.error(f"[{operation_id}] LLM generation of path parameters failed: {e}. Falling back to rule-based generation.", exc_info=True)
                    path_params_data = self._generate_params_from_list(path_params_spec_list, operation_id, "path")
            else:  # _build_object_schema_for_params returned None
                path_params_data = self._generate_params_from_list(path_params_spec_list, operation_id, "path")
        else:  # LLM not in use or unavailable, or path_params_spec_list is empty; path_params_data must still be assigned
            if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and not path_params_spec_list:
                self.logger.info(f"[{operation_id}] LLM is enabled for path parameters, but no path parameter specs are defined.")
            # Whether the LLM is disabled or inapplicable, or the spec list is empty, run rule-based generation (logged only when the spec list is non-empty)
            if path_params_spec_list and not self._should_use_llm_for_param_type(param_type_key, test_case_instance):
                self.logger.info(f"[{operation_id}] Generating path parameters with the rule-based method (LLM not enabled).")
            path_params_data = self._generate_params_from_list(path_params_spec_list, operation_id, "path")
        generated_params_for_endpoint[param_type_key] = path_params_data

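The path-parameter section above follows a try-LLM-then-fall-back shape that the query, header, and body sections repeat. A condensed sketch of that control flow, with hypothetical callables standing in for the LLM service and `_generate_params_from_list`:

```python
import logging
from typing import Any, Callable, Dict, Optional

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orchestrator-sketch")

def generate_with_fallback(llm_generate: Optional[Callable[[], Any]],
                           rule_based_generate: Callable[[], Dict[str, Any]]) -> Dict[str, Any]:
    """Try the LLM first; fall back to rule-based generation when the LLM is
    unavailable (None), raises, or returns something that is not a dict."""
    if llm_generate is not None:
        try:
            candidate = llm_generate()
            if isinstance(candidate, dict):
                return candidate
            logger.warning("LLM returned a non-dict (%s); falling back.", type(candidate).__name__)
        except Exception as exc:
            logger.error("LLM generation failed (%s); falling back.", exc)
    return rule_based_generate()

# A failing LLM call falls through to the rule-based generator:
result = generate_with_fallback(
    llm_generate=lambda: (_ for _ in ()).throw(RuntimeError("LLM timeout")),
    rule_based_generate=lambda: {"id": "example_id"},
)
```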
        # --- 2. Query parameters ---
        param_type_key = "query_params"
        if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and query_params_spec_list:
            self.logger.info(f"[{operation_id}] Trying to generate query parameters with the LLM.")
            object_schema, model_name = self._build_object_schema_for_params(query_params_spec_list, f"DynamicQueryParamsFor_{operation_id}")
            if object_schema and model_name:
                try:
                    PydanticModel = self._create_pydantic_model_from_schema(object_schema, model_name)
                    if PydanticModel:
                        llm_generated = self.llm_service.generate_parameters_from_schema(
                            PydanticModel,
                            prompt_instruction=f"Generate valid query parameters for API operation: {operation_id}. Description: {endpoint_spec.get('description', '') or endpoint_spec.get('summary', 'N/A')}"
                        )
                        if isinstance(llm_generated, dict):
                            query_params_data = llm_generated
                            self.logger.info(f"[{operation_id}] LLM generated query parameters: {query_params_data}")
                        else:
                            self.logger.warning(f"[{operation_id}] LLM returned a non-dict for query parameters: {type(llm_generated)}. Falling back to rule-based generation.")
                            query_params_data = self._generate_params_from_list(query_params_spec_list, operation_id, "query")
                    else:
                        query_params_data = self._generate_params_from_list(query_params_spec_list, operation_id, "query")
                except Exception as e:
                    self.logger.error(f"[{operation_id}] LLM generation of query parameters failed: {e}. Falling back to rule-based generation.", exc_info=True)
                    query_params_data = self._generate_params_from_list(query_params_spec_list, operation_id, "query")
            else:  # _build_object_schema_for_params returned None
                query_params_data = self._generate_params_from_list(query_params_spec_list, operation_id, "query")
        else:  # LLM not in use or unavailable, or query_params_spec_list is empty
            if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and not query_params_spec_list:
                self.logger.info(f"[{operation_id}] LLM is enabled for query parameters, but no query parameter specs are defined.")
            if query_params_spec_list and not self._should_use_llm_for_param_type(param_type_key, test_case_instance):
                self.logger.info(f"[{operation_id}] Generating query parameters with the rule-based method (LLM not enabled).")
            query_params_data = self._generate_params_from_list(query_params_spec_list, operation_id, "query")
        generated_params_for_endpoint[param_type_key] = query_params_data

        # --- 3. Header parameters ---
        param_type_key = "headers"
        if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and headers_spec_list:
            self.logger.info(f"[{operation_id}] Trying to generate header parameters with the LLM.")
            object_schema, model_name = self._build_object_schema_for_params(headers_spec_list, f"DynamicHeadersFor_{operation_id}")
            if object_schema and model_name:
                try:
                    PydanticModel = self._create_pydantic_model_from_schema(object_schema, model_name)
                    if PydanticModel:
                        llm_generated = self.llm_service.generate_parameters_from_schema(
                            PydanticModel,
                            prompt_instruction=f"Generate valid HTTP headers for API operation: {operation_id}. Description: {endpoint_spec.get('description', '') or endpoint_spec.get('summary', 'N/A')}"
                        )
                        if isinstance(llm_generated, dict):
                            headers_data_generated = llm_generated  # Store LLM generated ones separately first
                            self.logger.info(f"[{operation_id}] LLM generated header parameters: {headers_data_generated}")
                        else:
                            self.logger.warning(f"[{operation_id}] LLM returned a non-dict for header parameters: {type(llm_generated)}. Falling back to rule-based generation.")
                            headers_data_generated = self._generate_params_from_list(headers_spec_list, operation_id, "header")
                    else:
                        headers_data_generated = self._generate_params_from_list(headers_spec_list, operation_id, "header")
                except Exception as e:
                    self.logger.error(f"[{operation_id}] LLM generation of header parameters failed: {e}. Falling back to rule-based generation.", exc_info=True)
                    headers_data_generated = self._generate_params_from_list(headers_spec_list, operation_id, "header")
            else:  # _build_object_schema_for_params returned None
                headers_data_generated = self._generate_params_from_list(headers_spec_list, operation_id, "header")
        else:  # LLM not in use or unavailable, or headers_spec_list is empty
            if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and not headers_spec_list:
                self.logger.info(f"[{operation_id}] LLM is enabled for header parameters, but no header parameter specs are defined.")
            if headers_spec_list and not self._should_use_llm_for_param_type(param_type_key, test_case_instance):
                self.logger.info(f"[{operation_id}] Generating header parameters with the rule-based method (LLM not enabled).")
            headers_data_generated = self._generate_params_from_list(headers_spec_list, operation_id, "header")
        generated_params_for_endpoint[param_type_key] = headers_data_generated

        # --- 4. Request body ---
        param_type_key = "body"
        if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and request_body_spec:
            self.logger.info(f"[{operation_id}] Trying to generate the request body with the LLM.")
            model_name = f"DynamicBodyFor_{operation_id}"
            try:
                PydanticModel = self._create_pydantic_model_from_schema(request_body_spec, model_name)
                if PydanticModel:
                    llm_generated_body = self.llm_service.generate_parameters_from_schema(
                        PydanticModel,
                        prompt_instruction=f"Generate a valid JSON request body for API operation: {operation_id}. Description: {endpoint_spec.get('description', '') or endpoint_spec.get('summary', 'N/A')}. Schema: {json.dumps(request_body_spec, indent=2)}"
                    )
                    if isinstance(llm_generated_body, dict):
                        try:
                            body_data = PydanticModel(**llm_generated_body).model_dump(by_alias=True)
                            self.logger.info(f"[{operation_id}] LLM generated a request body that passed validation.")
                        except ValidationError as ve:
                            self.logger.error(f"[{operation_id}] The LLM-generated request body failed Pydantic model validation: {ve}. Falling back to rule-based generation.")
                            body_data = self._generate_data_from_schema(request_body_spec, "requestBody", operation_id)
                    elif isinstance(llm_generated_body, BaseModel):  # the LLM service returned a model instance directly
                        body_data = llm_generated_body.model_dump(by_alias=True)
                        self.logger.info(f"[{operation_id}] LLM generated a request body (model instance).")
                    else:
                        self.logger.warning(f"[{operation_id}] LLM returned an unexpected type for the request body: {type(llm_generated_body)}. Falling back to rule-based generation.")
                        body_data = self._generate_data_from_schema(request_body_spec, "requestBody", operation_id)
                else:  # _create_pydantic_model_from_schema returned None
                    self.logger.warning(f"[{operation_id}] Could not create a Pydantic model for the request body. Falling back to rule-based generation.")
                    body_data = self._generate_data_from_schema(request_body_spec, "requestBody", operation_id)
            except Exception as e:
                self.logger.error(f"[{operation_id}] LLM generation of the request body failed: {e}. Falling back to rule-based generation.", exc_info=True)
                body_data = self._generate_data_from_schema(request_body_spec, "requestBody", operation_id)
        elif request_body_spec:  # LLM not enabled or not applicable, but a body spec exists
            self.logger.info(f"[{operation_id}] Generating the request body with the rule-based method (LLM not enabled or not applicable).")
            body_data = self._generate_data_from_schema(request_body_spec, "requestBody", operation_id)
        else:  # no requestBody is defined
            self.logger.info(f"[{operation_id}] The endpoint does not define a request body.")
            body_data = None  # explicitly None
        generated_params_for_endpoint[param_type_key] = body_data

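The validation round-trip above — build a dynamic model, feed it the LLM output, reject on `ValidationError` — can be sketched with Pydantic v2's `create_model`. The model and field names here are invented for the demo, and the snippet assumes pydantic v2 is installed:

```python
from typing import Optional

from pydantic import ValidationError, create_model

# Dynamically build a model, the way _create_pydantic_model_from_schema builds
# one from a JSON schema ('DeviceCreateBody' and its fields are hypothetical).
BodyModel = create_model("DeviceCreateBody", name=(str, ...), count=(int, 0))

def validate_llm_body(candidate: dict) -> Optional[dict]:
    """Round-trip an LLM-proposed body through the dynamic model; None means rejected."""
    try:
        return BodyModel(**candidate).model_dump()
    except ValidationError:
        return None

accepted = validate_llm_body({"name": "sensor-01", "count": 3})
rejected = validate_llm_body({"count": "not-an-int"})  # 'name' missing and 'count' not an int
```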
        # Merge the final headers (default headers + generated headers)
        final_headers = {"Accept": "application/json"}
        if body_data is not None and method not in ["GET", "DELETE", "HEAD", "OPTIONS"]:
            final_headers["Content-Type"] = "application/json"
        final_headers.update(headers_data_generated)  # headers_data_generated comes from the LLM or rule-based generation

        # Store everything generated in this run in the cache
        self.llm_endpoint_params_cache[endpoint_cache_key] = generated_params_for_endpoint
        self.logger.info(f"[{operation_id}] Parameters for endpoint '{endpoint_cache_key}' were generated and cached.")

        # Ensure path parameter values are all strings (URL segments must be strings)
        path_params_data_str = {k: str(v) if v is not None else "" for k, v in path_params_data.items()}

        self.logger.debug(f"[{operation_id}] (freshly generated) Prepared request data: method={method}, path_params={path_params_data_str}, query_params={query_params_data}, headers={list(final_headers.keys())}, body_type={type(body_data).__name__}")
        return method, path_params_data_str, query_params_data, final_headers, body_data

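The endpoint-level cache above stores everything generated for an endpoint under `endpoint_cache_key` so later test cases for the same endpoint skip regeneration. A minimal sketch of that idea — the `"METHOD path"` key format is an assumption, since the real key construction is not shown in this hunk:

```python
from typing import Any, Dict, Optional

class EndpointParamCache:
    """Per-endpoint parameter cache, a simplified stand-in for
    llm_endpoint_params_cache (hypothetical class, not part of the suite)."""

    def __init__(self) -> None:
        self._cache: Dict[str, Dict[str, Any]] = {}

    def key(self, method: str, path: str) -> str:
        # Assumed key shape; normalizes the HTTP method's case
        return f"{method.upper()} {path}"

    def get(self, method: str, path: str) -> Optional[Dict[str, Any]]:
        return self._cache.get(self.key(method, path))

    def store(self, method: str, path: str, params: Dict[str, Any]) -> None:
        self._cache[self.key(method, path)] = params

cache = EndpointParamCache()
first = cache.get("post", "/devices")  # miss: the orchestrator would generate and store
cache.store("post", "/devices", {"path_params": {}, "body": {"name": "sensor-01"}})
hit = cache.get("POST", "/devices")    # later test cases for the endpoint reuse the entry
```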
    def _build_object_schema_for_params(self, params_spec_list: List[Dict[str, Any]], model_name_base: str) -> Tuple[Optional[Dict[str, Any]], str]:
        """
        Convert a parameter list (e.g. the path or query parameter specs) into a single
        "type: object" JSON schema so that a Pydantic model can be created from it.
        Adapts parameter definitions that lack a nested 'schema' field but carry a top-level 'type'.
        """
        if not params_spec_list:
            return None, model_name_base

        properties = {}
        required_params = []

        parameter_names = []
        for param_spec in params_spec_list:
            param_name = param_spec.get("name")
            if not param_name:
                self.logger.warning(f"Parameter definition is missing the 'name' field: {param_spec}. Skipped.")
                continue
            parameter_names.append(param_name)

            param_schema = param_spec.get("schema")

            # ---- adaptation start ----
            if not param_schema and param_spec.get("type"):
                self.logger.debug(f"Parameter '{param_name}' has no nested 'schema' field; building a temporary schema from its top-level 'type'. Param spec: {param_spec}")
                temp_schema = {"type": param_spec.get("type")}
                # Copy other relevant top-level fields from param_spec into temp_schema
                for key in ["format", "default", "example", "description", "enum",
                            "minimum", "maximum", "minLength", "maxLength", "pattern",
                            "items"]:  # 'items' handles arrays defined at the top level
                    if key in param_spec:
                        temp_schema[key] = param_spec[key]
                param_schema = temp_schema
            # ---- adaptation end ----

            if not param_schema:  # still no schema after the adaptation
                self.logger.warning(f"Parameter '{param_name}' has no 'schema' definition and none could be built from the top level: {param_spec}. Skipped.")
                continue

            # Handle $ref (simple case; assumes the ref points into components.schemas).
            # Resolving more complex $refs would require access to the full OpenAPI document.
            if isinstance(param_schema, dict) and "$ref" in param_schema:  # make sure param_schema is a dict before checking for $ref
                ref_path = param_schema["$ref"]
                # Very simplified $ref handling; real resolution may need the whole document
                self.logger.warning(f"The schema of parameter '{param_name}' contains $ref '{ref_path}', which is not resolved automatically. Please make sure the schema is inlined.")
                # Fall back to a very basic schema; alternatively skip the parameter or let _generate_data_from_schema handle it
                properties[param_name] = {"type": "string", "description": f"Reference to {ref_path}"}
            elif isinstance(param_schema, dict):  # make sure param_schema is a dict
                properties[param_name] = param_schema
            else:
                self.logger.warning(f"The schema of parameter '{param_name}' is not a valid dict: {param_schema}. Skipped.")
                continue

            if param_spec.get("required", False):
                required_params.append(param_name)

        if not properties:  # all parameters were invalid
            return None, model_name_base

        model_name = f"{model_name_base}_{'_'.join(sorted(parameter_names))}"  # make the model name more unique

        object_schema = {
            "type": "object",
            "properties": properties,
        }
        if required_params:
            object_schema["required"] = required_params

        self.logger.debug(f"[{model_name_base}] Final object schema built for parameters {parameter_names}: {json.dumps(object_schema, indent=2)}, model name: {model_name}")
        return object_schema, model_name

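Stripped of logging and the `$ref`/top-level-`type` handling, the folding that `_build_object_schema_for_params` performs can be sketched as a pure function:

```python
from typing import Any, Dict, List, Optional

def build_object_schema(params: List[Dict[str, Any]]) -> Optional[Dict[str, Any]]:
    """Fold individual parameter specs into one 'type: object' JSON schema,
    the shape _build_object_schema_for_params produces (simplified sketch:
    no $ref handling and no top-level-'type' adaptation)."""
    properties: Dict[str, Any] = {}
    required: List[str] = []
    for spec in params:
        name, schema = spec.get("name"), spec.get("schema")
        if not name or not isinstance(schema, dict):
            continue  # the real method logs and skips invalid entries
        properties[name] = schema
        if spec.get("required", False):
            required.append(name)
    if not properties:
        return None
    object_schema: Dict[str, Any] = {"type": "object", "properties": properties}
    if required:
        object_schema["required"] = required
    return object_schema

schema = build_object_schema([
    {"name": "deviceId", "in": "path", "required": True, "schema": {"type": "string"}},
    {"name": "limit", "in": "query", "schema": {"type": "integer", "minimum": 1}},
])
```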
    def _generate_params_from_list(self, params_spec_list: List[Dict[str, Any]], operation_id: str, param_type: str) -> Dict[str, Any]:
        """
        Iterate over a list of parameter definitions and generate data for each one
        with _generate_data_from_schema.
        Adapts parameter definitions that lack a nested 'schema' field but carry a top-level 'type'.
        """
        generated_params: Dict[str, Any] = {}
        if not params_spec_list:
            self.logger.info(f"[{operation_id}] No {param_type} parameters are defined.")
            return generated_params

        self.logger.info(f"[{operation_id}] Generating {param_type} parameters with the rule-based method.")
        for param_spec in params_spec_list:
            param_name = param_spec.get("name")
            param_schema = param_spec.get("schema")

            # ---- adaptation start ----
            if not param_schema and param_spec.get("type"):
                self.logger.debug(f"Parameter '{param_name}' ({param_type}) has no nested 'schema' field; building a temporary schema from its top-level 'type' for rule-based generation. Param spec: {param_spec}")
                temp_schema = {"type": param_spec.get("type")}
                # Copy other relevant top-level fields from param_spec into temp_schema
                for key in ["format", "default", "example", "description", "enum",
                            "minimum", "maximum", "minLength", "maxLength", "pattern",
                            "items"]:  # 'items' handles arrays defined at the top level
                    if key in param_spec:
                        temp_schema[key] = param_spec[key]
                param_schema = temp_schema
            # ---- adaptation end ----

            if param_name and param_schema and isinstance(param_schema, dict):  # param_schema must be a dict
                generated_value = self._generate_data_from_schema(
                    param_schema,
                    context_name=f"{param_type} parameter '{param_name}'",
                    operation_id=operation_id
                )
                if generated_value is not None:
                    generated_params[param_name] = generated_value
                elif param_spec.get("required"):
                    self.logger.warning(f"[{operation_id}] Could not generate data for required {param_type} parameter '{param_name}' (schema: {param_schema}); its schema may lack a usable default or example.")
            else:
                self.logger.warning(f"[{operation_id}] Skipping invalid {param_type} parameter definition (name: {param_name}, schema: {param_schema}): {param_spec}")
        self.logger.info(f"[{operation_id}] Rule-based {param_type} parameters: {generated_params}")
        return generated_params

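The "adaptation" blocks in both helpers handle YAPI/Swagger-2.0-style parameters that carry `type` at the top level instead of under a nested `schema`. A sketch of just that step (the helper name is invented):

```python
from typing import Any, Dict

# Top-level fields worth carrying into the temporary schema, mirroring the list above
CARRIED_KEYS = ["format", "default", "example", "description", "enum",
                "minimum", "maximum", "minLength", "maxLength", "pattern", "items"]

def adapt_flat_param(param_spec: Dict[str, Any]) -> Dict[str, Any]:
    """Return the parameter's schema, building a temporary one from a
    top-level 'type' when no nested 'schema' field exists."""
    schema = param_spec.get("schema")
    if not schema and param_spec.get("type"):
        schema = {"type": param_spec["type"]}
        for key in CARRIED_KEYS:
            if key in param_spec:
                schema[key] = param_spec[key]
    return schema or {}

flat = {"name": "status", "in": "query", "type": "string",
        "enum": ["on", "off"], "default": "on"}
adapted = adapt_flat_param(flat)
```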
    def run_test_for_endpoint(self, endpoint: Union[YAPIEndpoint, SwaggerEndpoint],
                              global_api_spec: Union[ParsedYAPISpec, ParsedSwaggerSpec]

@@ -1062,64 +1251,91 @@ class APITestOrchestrator:
        summary.finalize_summary()
        return summary

    def _generate_data_from_schema(self, schema: Dict[str, Any],
                                   context_name: Optional[str] = None,
                                   operation_id: Optional[str] = None) -> Any:
        """
        Generate test data from a JSON Schema (this method is largely unchanged and may be
        used by test cases or internally by the orchestrator).
        context_name and operation_id were added for more detailed logging.
        """
        log_prefix = f"[{operation_id}] " if operation_id else ""
        context_log = f" (context: {context_name})" if context_name else ""

        if not schema or not isinstance(schema, dict):
            self.logger.debug(f"{log_prefix}_generate_data_from_schema: the provided schema is invalid or empty{context_log}: {schema}")
            return None

        schema_type = schema.get('type')

        if 'example' in schema:
            self.logger.debug(f"{log_prefix}Using the schema's 'example' value for{context_log}: {schema['example']}")
            return schema['example']
        if 'default' in schema:
            self.logger.debug(f"{log_prefix}Using the schema's 'default' value for{context_log}: {schema['default']}")
            return schema['default']

        if schema_type == 'object':
            result = {}
            properties = schema.get('properties', {})
            self.logger.debug(f"{log_prefix}Generating object data for{context_log}. Properties: {list(properties.keys())}")
            for prop_name, prop_schema in properties.items():
                # Pass the context into the recursive call, extending context_name slightly
                nested_context = f"{context_name}.{prop_name}" if context_name else prop_name
                result[prop_name] = self._generate_data_from_schema(prop_schema, nested_context, operation_id)
            return result if result else {}

        elif schema_type == 'array':
            items_schema = schema.get('items', {})
            min_items = schema.get('minItems', 1 if schema.get('default') is None and schema.get('example') is None else 0)
            self.logger.debug(f"{log_prefix}Generating array data for{context_log}. Items schema: {items_schema}, minItems: {min_items}")
            if min_items == 0 and (schema.get('default') == [] or schema.get('example') == []):
                return []

            num_items_to_generate = max(1, min_items)
            generated_array = []
            for i in range(num_items_to_generate):
                item_context = f"{context_name}[{i}]" if context_name else f"array_item[{i}]"
                generated_array.append(self._generate_data_from_schema(items_schema, item_context, operation_id))
            return generated_array

        elif schema_type == 'string':
            string_format = schema.get('format', '')
            val = None
            if 'enum' in schema and schema['enum']:
                val = schema['enum'][0]
            elif string_format == 'date': val = '2023-01-01'
            elif string_format == 'date-time': val = datetime.datetime.now().isoformat()
            elif string_format == 'email': val = 'test@example.com'
            elif string_format == 'uuid': import uuid; val = str(uuid.uuid4())
            else: val = 'example_string'
            self.logger.debug(f"{log_prefix}Generated string data ('{string_format}') for{context_log}: {val}")
            return val

        elif schema_type == 'number' or schema_type == 'integer':
            val_to_return = schema.get('default', schema.get('example'))
            if val_to_return is not None:
                self.logger.debug(f"{log_prefix}Using the number/integer default/example value for{context_log}: {val_to_return}")
                return val_to_return

            minimum = schema.get('minimum')
            # maximum = schema.get('maximum') # Not used yet for generation, but could be
            if minimum is not None:
                val_to_return = minimum
            else:
                val_to_return = 0 if schema_type == 'integer' else 0.0
            self.logger.debug(f"{log_prefix}Generated number/integer data for{context_log}: {val_to_return}")
            return val_to_return

        elif schema_type == 'boolean':
            val = schema.get('default', schema.get('example', False))
            self.logger.debug(f"{log_prefix}Generated boolean data for{context_log}: {val}")
            return val

        elif schema_type == 'null':
            self.logger.debug(f"{log_prefix}Generated null data for{context_log}")
            return None

        self.logger.debug(f"{log_prefix}_generate_data_from_schema: unknown or unsupported schema type '{schema_type}' for{context_log}. Schema: {schema}")
        return None

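A miniature of `_generate_data_from_schema`'s strategy — `example`/`default` win outright, then data is synthesized per `type` — covering only the branches needed for the demo:

```python
from typing import Any, Dict

def generate_from_schema(schema: Dict[str, Any]) -> Any:
    """Tiny, non-logging cousin of _generate_data_from_schema (a sketch,
    not the orchestrator's implementation)."""
    if not isinstance(schema, dict):
        return None
    if 'example' in schema:
        return schema['example']
    if 'default' in schema:
        return schema['default']
    t = schema.get('type')
    if t == 'object':
        return {name: generate_from_schema(prop)
                for name, prop in schema.get('properties', {}).items()}
    if t == 'array':
        count = max(1, schema.get('minItems', 1))
        return [generate_from_schema(schema.get('items', {})) for _ in range(count)]
    if t == 'string':
        if schema.get('enum'):
            return schema['enum'][0]
        return {'date': '2023-01-01', 'email': 'test@example.com'}.get(
            schema.get('format', ''), 'example_string')
    if t == 'integer':
        return schema.get('minimum', 0)
    if t == 'number':
        return schema.get('minimum', 0.0)
    if t == 'boolean':
        return False
    return None

body = generate_from_schema({
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string", "enum": ["alpha"]}, "minItems": 2},
        "count": {"type": "integer", "minimum": 5},
    },
})
```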
@@ -66,8 +66,20 @@ def parse_args():
                           help='Name of the LLM model to use (e.g. "gpt-3.5-turbo", "gpt-4").')
    llm_group.add_argument('--use-llm-for-request-body',
                           action='store_true',
                           default=False,  # do not use the LLM for request bodies by default
                           help='Enable LLM generation of request body data for API requests.')
    llm_group.add_argument('--use-llm-for-path-params',
                           action='store_true',
                           default=False,
                           help='Enable LLM generation of path parameters for API requests.')
    llm_group.add_argument('--use-llm-for-query-params',
                           action='store_true',
                           default=False,
                           help='Enable LLM generation of query parameters for API requests.')
    llm_group.add_argument('--use-llm-for-headers',
                           action='store_true',
                           default=False,
                           help='Enable LLM generation of header parameters for API requests.')

    return parser.parse_args()

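All four new toggles are opt-in `store_true` flags that default to `False`; a self-contained sketch showing the resulting parsing behavior:

```python
import argparse

# Rebuild just the LLM toggles: each flag is a store_true switch that stays
# False unless explicitly passed on the command line.
parser = argparse.ArgumentParser()
llm_group = parser.add_argument_group('llm')
for flag in ('--use-llm-for-request-body', '--use-llm-for-path-params',
             '--use-llm-for-query-params', '--use-llm-for-headers'):
    llm_group.add_argument(flag, action='store_true', default=False)

off = parser.parse_args([])
on = parser.parse_args(['--use-llm-for-request-body', '--use-llm-for-headers'])
```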
@@ -212,11 +224,14 @@ def main():
    # Pass the custom_test_cases_dir argument to the APITestOrchestrator constructor
    orchestrator = APITestOrchestrator(
        base_url=args.base_url,
        custom_test_cases_dir=args.custom_test_cases_dir,
        llm_api_key=args.llm_api_key,
        llm_base_url=args.llm_base_url,
        llm_model_name=args.llm_model_name,
        use_llm_for_request_body=args.use_llm_for_request_body,
        use_llm_for_path_params=args.use_llm_for_path_params,
        use_llm_for_query_params=args.use_llm_for_query_params,
        use_llm_for_headers=args.use_llm_for_headers
    )

    # Run the tests

13435
test_report.json
File diff suppressed because it is too large