This commit is contained in:
gongwenxin 2025-05-26 15:38:37 +08:00
parent 4180a0ce81
commit 6dde4d73e0
29 changed files with 3552 additions and 1265 deletions

View File

@@ -3,11 +3,12 @@
This API compliance testing framework is built from the following core components, which work together to define, discover, execute, and report tests:
1. **Command-line interface (`run_api_tests.py`)**:
    * The entry point for test execution.
    * Parses the arguments the user passes on the command line, such as the base URL of the API service, the API spec file path (YAPI or Swagger), the test case directory, output report settings, and LLM-related configuration.
    * Initializes and drives the `APITestOrchestrator`.
2. **Test orchestrator (`APITestOrchestrator` in `ddms_compliance_suite/test_orchestrator.py`)**:
    * **Core controller**: the command center of the entire test flow.
    * **Component initialization**: initializes and manages the other key components, such as `InputParser` (the API spec parser), `APICaller` (the API request caller), `TestCaseRegistry` (the test case registry), and the optional `LLMService` (the LLM service).
    * **Test flow management**:
@@ -22,28 +23,28 @@
        * Uses `APICaller` to send the final, fully constructed API request.
        * After receiving the API response, calls the `validate_response` and `check_performance` methods defined on the test case instance to validate the response in detail.
    * **Result aggregation**: collects each test case's execution result (`ExecutedTestCaseResult`), aggregates them into per-endpoint results (`TestResult`), and finally produces a summary of the entire run (`TestSummary`).
3. **Test case registry (`TestCaseRegistry` in `ddms_compliance_suite/test_case_registry.py`)**:
    * **Dynamic discovery**: scans the user-specified directory (`custom_test_cases_dir`) and dynamically loads every test case file ending in `.py`.
    * **Class identification and registration**: identifies all classes in the loaded modules that inherit from `BaseAPITestCase` and registers them by their `id` attribute.
    * **Execution ordering**: after all test case classes are discovered, sorts them by each class's `execution_order` attribute (primary sort key, ascending) and by class name `__name__` (secondary sort key, alphabetically ascending).
    * **Applicability filtering**: provides the `get_applicable_test_cases` method, which filters the sorted test case classes by an endpoint's HTTP method and path (matched via regular expressions) and returns the applicable list to the orchestrator.
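The ordering and applicability-filtering rules of the registry can be sketched in a few lines. This is an illustrative stand-in only (`MiniRegistry`, `StatusCheck`, and `BodyCheck` are hypothetical names, not part of the framework); the real logic lives in `ddms_compliance_suite/test_case_registry.py`:

```python
import re

# Minimal stand-in for TestCaseRegistry's ordering and applicability
# filtering; the attribute names follow the description above.
class MiniRegistry:
    def __init__(self):
        self._classes = []

    def register(self, cls):
        self._classes.append(cls)

    def get_applicable_test_cases(self, method, path):
        applicable = []
        for cls in self._classes:
            methods = getattr(cls, "applicable_methods", None)
            if methods and method.upper() not in methods:
                continue  # HTTP method not applicable
            pattern = getattr(cls, "applicable_paths_regex", None)
            if pattern and not re.search(pattern, path):
                continue  # path does not match the regex
            applicable.append(cls)
        # Primary key: execution_order (ascending); secondary: class name.
        return sorted(applicable, key=lambda c: (c.execution_order, c.__name__))

class StatusCheck:  # hypothetical test case class
    execution_order = 10
    applicable_methods = ["GET", "POST"]
    applicable_paths_regex = r"/v1/"

class BodyCheck:  # hypothetical test case class
    execution_order = 5
    applicable_methods = ["POST"]
    applicable_paths_regex = None

registry = MiniRegistry()
registry.register(StatusCheck)
registry.register(BodyCheck)
ordered = registry.get_applicable_test_cases("POST", "/api/dms/demo/v1/cd_geo_unit")
print([c.__name__ for c in ordered])  # ['BodyCheck', 'StatusCheck']
```

Note that `BodyCheck` sorts first despite being registered second, because its `execution_order` (5) is lower.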
4. **Test framework core (`test_framework_core.py`)**:
    * **`BaseAPITestCase`**: the base class for all custom test cases. It defines the metadata a test case should carry (such as `id`, `name`, `description`, `severity`, `tags`, `execution_order`, `applicable_methods`, `applicable_paths_regex`, and the LLM usage flags) as well as a set of lifecycle hook methods (such as `generate_*` and `validate_*`).
    * **`APIRequestContext` / `APIResponseContext`**: data classes that encapsulate the context of an API request and response, respectively, passed between a test case's hook methods.
    * **`ValidationResult`**: a data class representing the outcome of a single validation point (passed/failed, message, details).
    * **`TestSeverity`**: an enum defining a test case's severity level.
5. **API spec parser (`InputParser` in `ddms_compliance_suite/input_parser/parser.py`)**:
    * Reads and parses API specification files in YAPI (JSON) or Swagger/OpenAPI (JSON or YAML) format.
    * Converts the raw spec data into structured objects the framework can process easily (such as `ParsedYAPISpec`, `YAPIEndpoint`, `ParsedSwaggerSpec`, `SwaggerEndpoint`).
6. **API caller (`APICaller` in `ddms_compliance_suite/api_caller/caller.py`)**:
    * Encapsulates the actual HTTP request-sending logic.
    * Accepts an `APIRequest` object (method, URL, parameters, headers, request body), executes the request using a library such as `requests`, and returns an `APIResponse` object (status code, response headers, response body content, and so on).
7. **LLM service (`LLMService` in `ddms_compliance_suite/llm_utils/llm_service.py`)** (optional):
    * If an LLM service is configured (e.g. Tongyi Qianwen's OpenAI-compatible API), this component handles the interaction with the LLM API.
    * Mainly used to intelligently generate complex request parameters or request bodies from Pydantic models created dynamically from JSON Schema.
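In outline, the framework-core data types might be modeled like this minimal sketch (field names and enum values follow the description above, not the actual source in `test_framework_core.py`, which may differ):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Dict, Optional

# Illustrative stand-ins for the framework-core data types described
# above; the real definitions live in test_framework_core.py.
class TestSeverity(Enum):
    CRITICAL = "critical"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class ValidationResult:
    passed: bool
    message: str
    details: Optional[Dict[str, Any]] = None

@dataclass
class APIResponseContext:
    status_code: int
    headers: Dict[str, str] = field(default_factory=dict)
    body: Any = None
    elapsed_time: float = 0.0  # seconds

result = ValidationResult(passed=True, message="status code matched")
print(result.passed, TestSeverity.MEDIUM.value)  # True medium
```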
@@ -59,6 +60,7 @@
1. **Create a Python file**: create a new `.py` file in your custom test case directory (for example `custom_testcases/`).
2. **Inherit from `BaseAPITestCase`**: define one or more classes that inherit from `ddms_compliance_suite.test_framework_core.BaseAPITestCase`.
3. **Define metadata (class attributes)**:
    * `id: str`: a globally unique identifier for the test case (for example `"TC-MYFEATURE-001"`).
    * `name: str`: a human-readable name.
    * `description: str`: a detailed description.
@@ -82,32 +84,31 @@
    * `use_llm_for_query_params: bool = False`
    * `use_llm_for_headers: bool = False`
    (If a test case does not set these flags, the global LLM switches passed in via `run_api_tests.py` apply.)
4. **Implement validation logic**: override one or more of the `generate_*` and `validate_*` methods of `BaseAPITestCase`.
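Putting these steps together, a skeleton custom test case could look like the following sketch. The base class here is a self-contained stand-in so the example runs on its own; in real use you would inherit from `ddms_compliance_suite.test_framework_core.BaseAPITestCase` and return `ValidationResult` objects rather than plain dicts. `ContentTypeHeaderCase` is a hypothetical example, not part of the framework:

```python
from typing import Any, Dict, List

# Self-contained stand-in base class so the sketch runs by itself.
class BaseAPITestCase:
    id = "TC-BASE"
    name = "base"
    description = ""
    tags: List[str] = []
    execution_order = 100

    def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any]):
        self.endpoint_spec = endpoint_spec
        self.global_api_spec = global_api_spec

    def validate_response(self, response_context: Dict[str, Any],
                          request_context: Dict[str, Any]) -> List[Dict[str, Any]]:
        return []

class ContentTypeHeaderCase(BaseAPITestCase):
    """Hypothetical case: the response should declare a JSON content type."""
    id = "TC-MYFEATURE-001"
    name = "Response Content-Type check"
    description = "Verify that the response Content-Type is application/json."
    tags = ["header"]
    execution_order = 20
    applicable_methods = ["GET", "POST"]

    def validate_response(self, response_context, request_context):
        content_type = response_context.get("headers", {}).get("Content-Type", "")
        passed = content_type.startswith("application/json")
        return [{"passed": passed, "message": f"Content-Type is '{content_type}'"}]

case = ContentTypeHeaderCase({"method": "GET", "path": "/v1/demo"}, {})
results = case.validate_response({"headers": {"Content-Type": "application/json"}}, {})
print(results[0]["passed"])  # True
```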
### 2. `BaseAPITestCase` core methods
* **`__init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any])`**:
    * The constructor. `endpoint_spec` contains the API definition of the endpoint under test; `global_api_spec` contains the complete API specification.
    * The base class initializes `self.logger`, which can be used for logging.
* **Request generation and mutation methods**: called before the API request is sent, to modify or generate request data.
    * `generate_query_params(self, current_query_params: Dict[str, Any]) -> Dict[str, Any]`
    * `generate_headers(self, current_headers: Dict[str, str]) -> Dict[str, str]`
    * `generate_request_body(self, current_body: Optional[Any]) -> Optional[Any]`
    * (If needed, you can also try defining a `generate_path_params` method to customize how path parameters are generated; it follows the same pattern as the methods above.)
* **Request pre-validation methods**: called after the request data is fully built but before it is sent, for static checks. Return `List[ValidationResult]`.
    * `validate_request_url(self, url: str, request_context: APIRequestContext) -> List[ValidationResult]`
    * `validate_request_headers(self, headers: Dict[str, str], request_context: APIRequestContext) -> List[ValidationResult]`
    * `validate_request_body(self, body: Optional[Any], request_context: APIRequestContext) -> List[ValidationResult]`
* **Response validation methods**: called after the API response is received; this is the main validation stage. Return `List[ValidationResult]`.
    * `validate_response(self, response_context: APIResponseContext, request_context: APIRequestContext) -> List[ValidationResult]`
        * Check that the status code, response headers, and response body content match expectations.
        * Make assertions tied to business logic.
* **Performance check method (optional)**:
    * `check_performance(self, response_context: APIResponseContext, request_context: APIRequestContext) -> List[ValidationResult]`
        * Typically used to check the response time, `response_context.elapsed_time`.
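As a rough sketch of the performance hook, assuming a hypothetical 2-second budget (the threshold is illustrative, not prescribed by the framework) and plain dicts standing in for the context objects:

```python
from typing import Any, Dict, List

# Hypothetical response-time budget; the framework does not prescribe one.
MAX_ELAPSED_SECONDS = 2.0

def check_performance(response_context: Dict[str, Any],
                      request_context: Dict[str, Any]) -> List[Dict[str, Any]]:
    """Sketch of a check_performance hook over dict-based contexts."""
    elapsed = response_context.get("elapsed_time", 0.0)
    passed = elapsed <= MAX_ELAPSED_SECONDS
    return [{
        "passed": passed,
        "message": f"response took {elapsed:.2f}s (budget {MAX_ELAPSED_SECONDS:.1f}s)",
    }]

fast = {"status_code": 200, "elapsed_time": 0.12}
slow = {"status_code": 200, "elapsed_time": 3.50}
print(check_performance(fast, {})[0]["passed"], check_performance(slow, {})[0]["passed"])  # True False
```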

View File

@@ -0,0 +1,430 @@
[
{
"index": 0,
"name": "公共分类",
"desc": "公共分类",
"add_time": 1730355210,
"up_time": 1730355210,
"list": [
{
"query_path": {
"path": "/api/dms/{dms_instance_code}/v1/message/push/{schema}/{version}",
"params": []
},
"edit_uid": 0,
"status": "undone",
"type": "var",
"req_body_is_json_schema": true,
"res_body_is_json_schema": true,
"api_opened": false,
"index": 0,
"tag": [],
"_id": 135716,
"method": "POST",
"title": "数据推送接口",
"desc": "",
"path": "/api/dms/{dms_instance_code}/v1/message/push/{schema}/{version}",
"req_params": [
{
"_id": "67ff3dc5335acf6754926ab7",
"name": "dms_instance_code",
"desc": "注册实例code\n井筒中心 well_kd_wellbore_ideas01"
},
{
"_id": "67ff3dc5335acf0829926ab6",
"name": "schema",
"desc": ""
},
{
"_id": "67ff3dc5335acf485f926ab5",
"name": "version",
"desc": ""
}
],
"req_body_form": [],
"req_headers": [
{
"required": "1",
"_id": "67ff3dc5335acf11b4926aba",
"name": "Content-Type",
"value": "application/json"
},
{
"required": "1",
"_id": "67ff3dc5335acfae3f926ab9",
"name": "tenant-id",
"desc": "tenant-id (Only:undefined)"
},
{
"required": "1",
"_id": "67ff3dc5335acfad4f926ab8",
"name": "Authorization",
"desc": "Authorization (Only:undefined)"
}
],
"req_query": [],
"req_body_type": "json",
"res_body_type": "json",
"res_body": "{\"$schema\":\"http://json-schema.org/draft-04/schema#\",\"type\":\"object\",\"properties\":{\"code\":{\"type\":\"number\"},\"message\":{\"type\":\"string\"},\"data\":{\"type\":\"object\",\"properties\":{\"total\":{\"type\":\"number\"},\"list\":{\"type\":\"array\",\"items\":{\"type\":\"object\",\"properties\":{\"dsid\":{\"type\":\"string\"},\"dataRegion\":{\"type\":\"string\"},\"gasReleaseMon\":{\"type\":\"null\"},\"gasReleaseYear\":{\"type\":\"null\"},\"releaseGasCum\":{\"type\":\"null\"}}}}}}}}",
"req_body_other": "{\"properties\":{\"isSearchCount\":{\"default\":true,\"description\":\"是否统计总条数\",\"type\":\"boolean\"},\"query\":{\"description\":\"查询条件\",\"properties\":{\"dataRegions\":{\"description\":\"数据域JD、DG、TL\",\"items\":{\"description\":\"数据域JD、DG、TL\",\"type\":\"string\"},\"type\":\"array\"},\"fields\":{\"description\":\"查询的字段\",\"items\":{\"description\":\"查询的字段\",\"type\":\"string\"},\"type\":\"array\"},\"filter\":{\"description\":\"筛选器\",\"properties\":{\"key\":{\"description\":\"条件项\",\"type\":\"string\"},\"logic\":{\"description\":\"逻辑操作符可选值为AND、OR\",\"type\":\"string\"},\"realValue\":{\"description\":\"条件值\",\"items\":{\"description\":\"条件值\",\"type\":\"object\",\"properties\":{}},\"type\":\"array\"},\"singleValue\":{\"type\":\"object\",\"writeOnly\":true,\"properties\":{}},\"subFilter\":{\"description\":\"子条件\",\"items\":{\"$ref\":\"#/components/schemas/FilterVO\",\"type\":\"string\"},\"type\":\"array\"},\"symbol\":{\"description\":\"运算符,可选值为:>、>=、<、<=、>、<、=、<>、!=、IN、NOTIN、LIKE、IS当运算符为 IS 时条件值只能为NULL或NOT NULL\",\"type\":\"string\"}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/FilterVO\"},\"groupFields\":{\"description\":\"分组字段group by\",\"items\":{\"description\":\"分组字段group by\",\"type\":\"string\"},\"type\":\"array\"},\"groupFilter\":{\"description\":\"筛选器\",\"properties\":{\"key\":{\"description\":\"条件项\",\"type\":\"string\"},\"logic\":{\"description\":\"逻辑操作符可选值为AND、OR\",\"type\":\"string\"},\"realValue\":{\"description\":\"条件值\",\"items\":{\"description\":\"条件值\",\"type\":\"object\",\"properties\":{}},\"type\":\"array\"},\"singleValue\":{\"type\":\"object\",\"writeOnly\":true,\"properties\":{}},\"subFilter\":{\"description\":\"子条件\",\"items\":{\"$ref\":\"#/components/schemas/FilterVO\",\"type\":\"string\"},\"type\":\"array\"},\"symbol\":{\"description\":\"运算符,可选值为:>、>=、<、<=、>、<、=、<>、!=、IN、NOTIN、LIKE、IS当运算符为 IS 时条件值只能为NULL或NOT 
NULL\",\"type\":\"string\"}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/FilterVO\"},\"sort\":{\"additionalProperties\":{\"description\":\"排序字段key=字段名value=排序方式ASC、DESC\",\"type\":\"string\"},\"description\":\"排序字段key=字段名value=排序方式ASC、DESC\",\"type\":\"object\",\"properties\":{}}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/QueryVO\"}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/RdbQueryPageInput\"}",
"project_id": 1193,
"catid": 17672,
"markdown": "",
"uid": 808,
"add_time": 1744780583,
"up_time": 1744780741,
"__v": 0
}
]
},
{
"index": 0,
"name": "地质单元",
"desc": null,
"add_time": 1745736888,
"up_time": 1745736888,
"list": [
{
"query_path": {
"path": "/api/dms/{dms_instance_code}/v1/cd_geo_unit/{version}",
"params": []
},
"edit_uid": 0,
"status": "undone",
"type": "var",
"req_body_is_json_schema": true,
"res_body_is_json_schema": true,
"api_opened": false,
"index": 0,
"tag": [],
"_id": 135751,
"method": "POST",
"title": "地质单元列表查询",
"desc": "",
"path": "/api/dms/{dms_instance_code}/v1/cd_geo_unit/{version}",
"req_params": [
{
"_id": "680dd55a335acff12e926d18",
"name": "dms_instance_code",
"desc": "注册实例code\n井筒中心 well_kd_wellbore_ideas01"
},
{
"_id": "680dd55a335acfc43d926d17",
"name": "version",
"example": "1.0.0",
"desc": "交换模型版本"
}
],
"req_body_form": [],
"req_headers": [
{
"required": "1",
"_id": "680dd55a335acfb51b926d1d",
"name": "Content-Type",
"value": "application/json"
},
{
"required": "1",
"_id": "680dd55a335acf6226926d1c",
"name": "tenant-id",
"desc": "tenant-id (Only:undefined)"
},
{
"required": "1",
"_id": "680dd55a335acf39a0926d1b",
"name": "Authorization",
"desc": "Authorization (Only:undefined)"
}
],
"req_query": [
{
"required": "0",
"_id": "680dd55a335acfabfa926d1a",
"name": "pageNo",
"desc": "页码(从1开始)"
},
{
"required": "0",
"_id": "680dd55a335acf831b926d19",
"name": "pageSize",
"desc": "分页大小(最大值200)"
}
],
"req_body_type": "json",
"res_body_type": "json",
"res_body": "{\"$schema\":\"http://json-schema.org/draft-04/schema#\",\"type\":\"object\",\"properties\":{\"code\":{\"type\":\"number\"},\"message\":{\"type\":\"string\"},\"data\":{\"type\":\"object\",\"properties\":{\"total\":{\"type\":\"number\"},\"list\":{\"type\":\"array\",\"items\":{\"type\":\"object\",\"properties\":{\"dsid\":{\"type\":\"string\"},\"dataRegion\":{\"type\":\"string\"},\"gasReleaseMon\":{\"type\":\"null\"},\"gasReleaseYear\":{\"type\":\"null\"},\"releaseGasCum\":{\"type\":\"null\"}}}}}}}}",
"req_body_other": "{\"properties\":{\"isSearchCount\":{\"default\":true,\"description\":\"是否统计总条数\",\"type\":\"boolean\"},\"query\":{\"description\":\"查询条件\",\"properties\":{\"dataRegions\":{\"description\":\"数据域JD、DG、TL\",\"items\":{\"description\":\"数据域JD、DG、TL\",\"type\":\"string\"},\"type\":\"array\"},\"fields\":{\"description\":\"查询的字段\",\"items\":{\"description\":\"查询的字段\",\"type\":\"string\"},\"type\":\"array\"},\"filter\":{\"description\":\"筛选器\",\"properties\":{\"key\":{\"description\":\"条件项\",\"type\":\"string\"},\"logic\":{\"description\":\"逻辑操作符可选值为AND、OR\",\"type\":\"string\"},\"realValue\":{\"description\":\"条件值\",\"items\":{\"description\":\"条件值\",\"type\":\"object\",\"properties\":{}},\"type\":\"array\"},\"singleValue\":{\"type\":\"object\",\"writeOnly\":true,\"properties\":{}},\"subFilter\":{\"description\":\"子条件\",\"items\":{\"$ref\":\"#/components/schemas/FilterVO\",\"type\":\"string\"},\"type\":\"array\"},\"symbol\":{\"description\":\"运算符,可选值为:>、>=、<、<=、>、<、=、<>、!=、IN、NOTIN、LIKE、IS当运算符为 IS 时条件值只能为NULL或NOT NULL\",\"type\":\"string\"}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/FilterVO\"},\"groupFields\":{\"description\":\"分组字段group by\",\"items\":{\"description\":\"分组字段group by\",\"type\":\"string\"},\"type\":\"array\"},\"groupFilter\":{\"description\":\"筛选器\",\"properties\":{\"key\":{\"description\":\"条件项\",\"type\":\"string\"},\"logic\":{\"description\":\"逻辑操作符可选值为AND、OR\",\"type\":\"string\"},\"realValue\":{\"description\":\"条件值\",\"items\":{\"description\":\"条件值\",\"type\":\"object\",\"properties\":{}},\"type\":\"array\"},\"singleValue\":{\"type\":\"object\",\"writeOnly\":true,\"properties\":{}},\"subFilter\":{\"description\":\"子条件\",\"items\":{\"$ref\":\"#/components/schemas/FilterVO\",\"type\":\"string\"},\"type\":\"array\"},\"symbol\":{\"description\":\"运算符,可选值为:>、>=、<、<=、>、<、=、<>、!=、IN、NOTIN、LIKE、IS当运算符为 IS 时条件值只能为NULL或NOT 
NULL\",\"type\":\"string\"}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/FilterVO\"},\"sort\":{\"additionalProperties\":{\"description\":\"排序字段key=字段名value=排序方式ASC、DESC\",\"type\":\"string\"},\"description\":\"排序字段key=字段名value=排序方式ASC、DESC\",\"type\":\"object\",\"properties\":{}}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/QueryVO\"}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/RdbQueryPageInput\"}",
"project_id": 1193,
"catid": 18705,
"markdown": "",
"uid": 1103,
"add_time": 1745736910,
"up_time": 1745737050,
"__v": 0
},
{
"query_path": {
"path": "/api/dms/{dms_instance_code}/v1/cd_geo_unit",
"params": []
},
"edit_uid": 0,
"status": "undone",
"type": "var",
"req_body_is_json_schema": true,
"res_body_is_json_schema": true,
"api_opened": false,
"index": 0,
"tag": [],
"_id": 135749,
"method": "PUT",
"title": "地质单元数据修改",
"desc": "",
"path": "/api/dms/{dms_instance_code}/v1/cd_geo_unit",
"req_params": [
{
"_id": "680dd510335acf19c3926cec",
"name": "dms_instance_code",
"desc": "注册实例code\n井筒中心 well_kd_wellbore_ideas01"
}
],
"req_body_form": [],
"req_headers": [
{
"required": "1",
"_id": "680dd510335acf7bb3926cf0",
"name": "Content-Type",
"value": "application/json"
},
{
"required": "1",
"_id": "680dd510335acf6953926cef",
"name": "tenant-id",
"desc": "tenant-id (Only:undefined)"
},
{
"required": "1",
"_id": "680dd510335acfd86b926cee",
"name": "Authorization",
"desc": "Authorization (Only:undefined)"
}
],
"req_query": [
{
"required": "1",
"_id": "680dd510335acf1ed1926ced",
"name": "id",
"example": "dsid",
"desc": "主键id的key"
}
],
"req_body_type": "json",
"res_body_type": "json",
"res_body": "{\"$schema\":\"http://json-schema.org/draft-04/schema#\",\"type\":\"object\",\"properties\":{\"code\":{\"type\":\"number\"},\"message\":{\"type\":\"string\"},\"data\":{\"type\":\"boolean\"}}}",
"req_body_other": "{\"$schema\":\"http://json-schema.org/draft-04/schema#\",\"type\":\"object\",\"properties\":{\"version\":{\"type\":\"string\"},\"data\":{\"type\":\"array\",\"items\":{\"type\":\"object\",\"properties\":{\"bsflag\":{\"type\":\"number\"},\"wellCommonName\":{\"type\":\"string\"},\"wellId\":{\"type\":\"string\"},\"dataRegion\":{\"type\":\"string\"}},\"required\":[\"bsflag\",\"wellCommonName\",\"wellId\",\"dataRegion\"]}}}}",
"project_id": 1193,
"catid": 18705,
"markdown": "",
"uid": 1103,
"add_time": 1745736907,
"up_time": 1745736976,
"__v": 0
},
{
"query_path": {
"path": "/api/dms/{dms_instance_code}/v1/cd_geo_unit",
"params": []
},
"edit_uid": 0,
"status": "undone",
"type": "var",
"req_body_is_json_schema": true,
"res_body_is_json_schema": true,
"api_opened": false,
"index": 0,
"tag": [],
"_id": 135750,
"method": "DELETE",
"title": "地质单元数据删除",
"desc": "",
"path": "/api/dms/{dms_instance_code}/v1/cd_geo_unit",
"req_params": [
{
"_id": "680dd51b335acfe93a926cf1",
"name": "dms_instance_code",
"desc": "注册实例code\n井筒中心 well_kd_wellbore_ideas01"
}
],
"req_body_form": [],
"req_headers": [
{
"required": "1",
"_id": "680dd51b335acfb316926cf5",
"name": "Content-Type",
"value": "application/json"
},
{
"required": "1",
"_id": "680dd51b335acf45f4926cf4",
"name": "tenant-id",
"desc": "tenant-id (Only:undefined)"
},
{
"required": "1",
"_id": "680dd51b335acf0cfa926cf3",
"name": "Authorization",
"desc": "Authorization (Only:undefined)"
}
],
"req_query": [
{
"required": "1",
"_id": "680dd51b335acf7441926cf2",
"name": "id",
"example": "dsid",
"desc": "主键id的key"
}
],
"req_body_type": "json",
"res_body_type": "json",
"res_body": "{\"$schema\":\"http://json-schema.org/draft-04/schema#\",\"type\":\"object\",\"properties\":{\"code\":{\"type\":\"number\"},\"message\":{\"type\":\"string\"},\"data\":{\"type\":\"boolean\"}}}",
"req_body_other": "{\"$schema\":\"http://json-schema.org/draft-04/schema#\",\"type\":\"object\",\"properties\":{\"version\":{\"type\":\"string\",\"title\":\"版本号\"},\"data\":{\"type\":\"array\",\"items\":{\"type\":\"string\"},\"title\":\"主键id数据集\"}}}",
"project_id": 1193,
"catid": 18705,
"markdown": "",
"uid": 1103,
"add_time": 1745736908,
"up_time": 1745736987,
"__v": 0
},
{
"query_path": {
"path": "/api/dms/{dms_instance_code}/v1/cd_geo_unit",
"params": []
},
"edit_uid": 0,
"status": "done",
"type": "var",
"req_body_is_json_schema": true,
"res_body_is_json_schema": true,
"api_opened": false,
"index": 0,
"tag": [],
"_id": 135748,
"method": "POST",
"title": "地质单元数据添加",
"desc": "",
"path": "/api/dms/{dms_instance_code}/v1/cd_geo_unit",
"req_params": [
{
"_id": "680dd4ff335acf81b3926ce8",
"name": "dms_instance_code",
"desc": "注册实例code\n井筒中心 well_kd_wellbore_ideas01"
}
],
"req_body_form": [],
"req_headers": [
{
"required": "1",
"_id": "680dd4ff335acfc5c7926ceb",
"name": "Content-Type",
"value": "application/json"
},
{
"required": "1",
"_id": "680dd4ff335acff16f926cea",
"name": "tenant-id",
"desc": "tenant-id (Only:undefined)"
},
{
"required": "1",
"_id": "680dd4ff335acf1a5c926ce9",
"name": "Authorization",
"desc": "Authorization (Only:undefined)"
}
],
"req_query": [],
"req_body_type": "json",
"res_body_type": "json",
"res_body": "{\"$schema\":\"http://json-schema.org/draft-04/schema#\",\"type\":\"object\",\"properties\":{\"code\":{\"type\":\"number\"},\"message\":{\"type\":\"string\"},\"data\":{\"type\":\"boolean\"}}}",
"req_body_other": "{\"$schema\":\"http://json-schema.org/draft-04/schema#\",\"type\":\"object\",\"properties\":{\"version\":{\"type\":\"string\",\"title\":\"交换模型版本号\"},\"data\":{\"type\":\"array\",\"items\":{\"type\":\"object\",\"properties\":{\"bsflag\":{\"type\":\"number\",\"title\":\"必填字段删除标记\"},\"wellCommonName\":{\"type\":\"string\"},\"wellId\":{\"type\":\"string\"},\"dataRegion\":{\"type\":\"string\"}},\"required\":[\"bsflag\",\"wellCommonName\",\"wellId\",\"dataRegion\"]},\"title\":\"交换模型数据集\"}}}",
"project_id": 1193,
"catid": 18705,
"markdown": "",
"uid": 1103,
"add_time": 1745736907,
"up_time": 1745736959,
"__v": 0
},
{
"query_path": {
"path": "/api/dms/{dms_instance_code}/v1/cd_geo_unit/{version}/{id}",
"params": []
},
"edit_uid": 0,
"status": "done",
"type": "var",
"req_body_is_json_schema": true,
"res_body_is_json_schema": true,
"api_opened": false,
"index": 0,
"tag": [],
"_id": 135752,
"method": "GET",
"title": "地质单元查询详情",
"desc": "",
"path": "/api/dms/{dms_instance_code}/v1/cd_geo_unit/{version}/{id}",
"req_params": [
{
"_id": "680dd53d335acf42db926d05",
"name": "dms_instance_code",
"desc": "注册实例code\n井筒中心 well_kd_wellbore_ideas01"
},
{
"_id": "680dd53d335acf2105926d04",
"name": "version",
"example": "1.0.0",
"desc": "交换模型版本"
},
{
"_id": "680dd53d335acf7970926d03",
"name": "id",
"desc": ""
}
],
"req_body_form": [],
"req_headers": [
{
"required": "1",
"_id": "680dd53d335acf68e4926d08",
"name": "Content-Type",
"value": "application/json"
},
{
"required": "1",
"_id": "680dd53d335acf9fb4926d07",
"name": "tenant-id",
"desc": "tenant-id (Only:undefined)"
},
{
"required": "1",
"_id": "680dd53d335acf7a8d926d06",
"name": "Authorization",
"desc": "Authorization (Only:undefined)"
}
],
"req_query": [],
"req_body_type": "json",
"res_body_type": "json",
"res_body": "{\"$schema\":\"http://json-schema.org/draft-04/schema#\",\"type\":\"object\",\"properties\":{\"code\":{\"type\":\"number\"},\"message\":{\"type\":\"string\"},\"data\":{\"type\":\"object\",\"properties\":{\"total\":{\"type\":\"number\"},\"list\":{\"type\":\"array\",\"items\":{\"type\":\"object\",\"properties\":{\"dsid\":{\"type\":\"string\"},\"dataRegion\":{\"type\":\"string\"},\"gasReleaseMon\":{\"type\":\"null\"},\"gasReleaseYear\":{\"type\":\"null\"},\"releaseGasCum\":{\"type\":\"null\"}}}}}}}}",
"req_body_other": "{\"properties\":{\"isSearchCount\":{\"default\":true,\"description\":\"是否统计总条数\",\"type\":\"boolean\"},\"query\":{\"description\":\"查询条件\",\"properties\":{\"dataRegions\":{\"description\":\"数据域JD、DG、TL\",\"items\":{\"description\":\"数据域JD、DG、TL\",\"type\":\"string\"},\"type\":\"array\"},\"fields\":{\"description\":\"查询的字段\",\"items\":{\"description\":\"查询的字段\",\"type\":\"string\"},\"type\":\"array\"},\"filter\":{\"description\":\"筛选器\",\"properties\":{\"key\":{\"description\":\"条件项\",\"type\":\"string\"},\"logic\":{\"description\":\"逻辑操作符可选值为AND、OR\",\"type\":\"string\"},\"realValue\":{\"description\":\"条件值\",\"items\":{\"description\":\"条件值\",\"type\":\"object\",\"properties\":{}},\"type\":\"array\"},\"singleValue\":{\"type\":\"object\",\"writeOnly\":true,\"properties\":{}},\"subFilter\":{\"description\":\"子条件\",\"items\":{\"$ref\":\"#/components/schemas/FilterVO\",\"type\":\"string\"},\"type\":\"array\"},\"symbol\":{\"description\":\"运算符,可选值为:>、>=、<、<=、>、<、=、<>、!=、IN、NOTIN、LIKE、IS当运算符为 IS 时条件值只能为NULL或NOT NULL\",\"type\":\"string\"}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/FilterVO\"},\"groupFields\":{\"description\":\"分组字段group by\",\"items\":{\"description\":\"分组字段group by\",\"type\":\"string\"},\"type\":\"array\"},\"groupFilter\":{\"description\":\"筛选器\",\"properties\":{\"key\":{\"description\":\"条件项\",\"type\":\"string\"},\"logic\":{\"description\":\"逻辑操作符可选值为AND、OR\",\"type\":\"string\"},\"realValue\":{\"description\":\"条件值\",\"items\":{\"description\":\"条件值\",\"type\":\"object\",\"properties\":{}},\"type\":\"array\"},\"singleValue\":{\"type\":\"object\",\"writeOnly\":true,\"properties\":{}},\"subFilter\":{\"description\":\"子条件\",\"items\":{\"$ref\":\"#/components/schemas/FilterVO\",\"type\":\"string\"},\"type\":\"array\"},\"symbol\":{\"description\":\"运算符,可选值为:>、>=、<、<=、>、<、=、<>、!=、IN、NOTIN、LIKE、IS当运算符为 IS 时条件值只能为NULL或NOT 
NULL\",\"type\":\"string\"}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/FilterVO\"},\"sort\":{\"additionalProperties\":{\"description\":\"排序字段key=字段名value=排序方式ASC、DESC\",\"type\":\"string\"},\"description\":\"排序字段key=字段名value=排序方式ASC、DESC\",\"type\":\"object\",\"properties\":{}}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/QueryVO\"}},\"type\":\"object\",\"$$ref\":\"#/components/schemas/RdbQueryPageInput\"}",
"project_id": 1193,
"catid": 18705,
"markdown": "",
"uid": 1103,
"add_time": 1745736911,
"up_time": 1745737021,
"__v": 0
}
]
}
]

Binary file not shown.

View File

@@ -13,13 +13,13 @@ class StatusCode200Check(BaseAPITestCase):
# applicable_methods = None
# applicable_paths_regex = None
execution_order = 10 # 示例执行顺序
# use_llm_for_body: bool = True
# use_llm_for_path_params: bool = True
# use_llm_for_query_params: bool = True
# use_llm_for_headers: bool = True
def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None, llm_service: Optional[Any] = None):
super().__init__(endpoint_spec, global_api_spec, json_schema_validator=json_schema_validator, llm_service=llm_service)
self.logger.info(f"测试用例 {self.id} ({self.name}) 已针对端点 '{self.endpoint_spec.get('method')} {self.endpoint_spec.get('path')}' 初始化。")
def validate_response(self, response_context: APIResponseContext, request_context: APIRequestContext) -> list[ValidationResult]:
@@ -50,42 +50,3 @@ class StatusCode200Check(BaseAPITestCase):
)
self.logger.warning(f"状态码验证失败: 期望 {expected_status_code}, 实际 {actual_status_code} for {request_context.url}")
return results
class HeaderExistenceCheck(BaseAPITestCase):
id = "TC-HEADER-001"
name = "检查响应中是否存在 'X-Request-ID'"
description = "验证 API 响应是否包含 'X-Request-ID' 头。"
severity = TestSeverity.MEDIUM
tags = ["header", "observability"]
execution_order = 10 # 示例执行顺序
use_llm_for_body = False
EXPECTED_HEADER = "X-Request-ID" # 示例,可以根据实际需要修改
def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None):
super().__init__(endpoint_spec, global_api_spec, json_schema_validator=json_schema_validator)
self.logger.info(f"测试用例 {self.id} ({self.name}) 已初始化 for endpoint {self.endpoint_spec.get('path')}")
def validate_response(self, response_context: APIResponseContext, request_context: APIRequestContext) -> list[ValidationResult]:
results = []
if self.EXPECTED_HEADER in response_context.headers:
results.append(
ValidationResult(
passed=True,
message=f"响应头中找到了期望的 '{self.EXPECTED_HEADER}'"
)
)
self.logger.info(f"请求头 '{self.EXPECTED_HEADER}' 存在于 {request_context.url} 的响应中。")
else:
results.append(
ValidationResult(
passed=False,
message=f"响应头中未找到期望的 '{self.EXPECTED_HEADER}'",
details={
"expected_header": self.EXPECTED_HEADER,
"actual_headers": list(response_context.headers.keys())
}
)
)
self.logger.warning(f"请求头 '{self.EXPECTED_HEADER}' 未在 {request_context.url} 的响应中找到。")
return results

View File

@@ -13,8 +13,8 @@ class ResponseSchemaValidationCase(BaseAPITestCase):
# This test is generally applicable, especially for GET requests or successful POST/PUT.
# It might need refinement based on specific endpoint characteristics (e.g., no response body for DELETE)
def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None, llm_service: Optional[Any] = None):
super().__init__(endpoint_spec, global_api_spec, json_schema_validator, llm_service=llm_service)
self.logger.info(f"测试用例 '{self.id}' 已为端点 '{self.endpoint_spec.get('method')} {self.endpoint_spec.get('path')}' 初始化。")
def validate_response(self, response_context: APIResponseContext, request_context: APIRequestContext) -> List[ValidationResult]:

View File

@@ -11,69 +11,20 @@ class MissingRequiredFieldBodyCase(BaseAPITestCase):
tags = ["error-handling", "appendix-b", "4003", "required-fields", "request-body"]
execution_order = 210 # Before query, same as original combined
def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None, llm_service: Optional[Any] = None):
super().__init__(endpoint_spec, global_api_spec, json_schema_validator, llm_service=llm_service)
self.logger = logging.getLogger(f"testcase.{self.id}")
self.target_field_path: Optional[List[str]] = None
self.original_value_at_path: Any = None
self.removed_field_path: Optional[List[str]] = None # Path to the removed field, e.g., ['level1', 'level2_field'] self.removed_field_path: Optional[List[str]] = None # Path to the removed field, e.g., ['level1', 'level2_field']
self.original_body_schema: Optional[Dict[str, Any]] = None self.original_body_schema: Optional[Dict[str, Any]] = None
self._try_find_removable_body_field() self._try_find_removable_body_field()
def _resolve_ref_if_present(self, schema_to_resolve: Dict[str, Any]) -> Dict[str, Any]:
+# 根据用户进一步要求,方法体简化为直接返回,不进行任何 $ref/$$ref 的检查。
+# self.logger.debug(f"_resolve_ref_if_present called. Returning schema as-is per new configuration.")
+return schema_to_resolve
-ref_value = None
-if isinstance(schema_to_resolve, dict):
if "$ref" in schema_to_resolve:
ref_value = schema_to_resolve["$ref"]
elif "$$ref" in schema_to_resolve:
ref_value = schema_to_resolve["$$ref"]
if ref_value:
self.logger.debug(f"发现引用 '{ref_value}',尝试解析...")
try:
actual_global_spec_dict = None
if hasattr(self.global_api_spec, 'spec') and isinstance(self.global_api_spec.spec, dict):
actual_global_spec_dict = self.global_api_spec.spec
elif isinstance(self.global_api_spec, dict):
actual_global_spec_dict = self.global_api_spec
if not actual_global_spec_dict:
self.logger.warning(f"无法从 self.global_api_spec (类型: {type(self.global_api_spec)}) 获取用于解析引用的字典。")
return schema_to_resolve
resolved_schema = None
if ref_value.startswith("#/components/schemas/"):
schema_name = ref_value.split("/")[-1]
components = actual_global_spec_dict.get("components")
if components and isinstance(components.get("schemas"), dict):
resolved_schema = components["schemas"].get(schema_name)
if resolved_schema and isinstance(resolved_schema, dict):
self.logger.info(f"成功从 #/components/schemas/ 解析引用 '{ref_value}'")
return resolved_schema
else:
self.logger.warning(f"解析引用 '{ref_value}' (路径: #/components/schemas/) 失败:未找到或找到的不是字典: {schema_name}")
else:
self.logger.warning(f"尝试从 #/components/schemas/ 解析引用 '{ref_value}' 失败:无法找到 'components.schemas' 结构。")
# 如果从 #/components/schemas/ 未成功解析,尝试 #/definitions/
if not resolved_schema and ref_value.startswith("#/definitions/"):
schema_name = ref_value.split("/")[-1]
definitions = actual_global_spec_dict.get("definitions")
if definitions and isinstance(definitions, dict):
resolved_schema = definitions.get(schema_name)
if resolved_schema and isinstance(resolved_schema, dict):
self.logger.info(f"成功从 #/definitions/ 解析引用 '{ref_value}'")
return resolved_schema
else:
self.logger.warning(f"解析引用 '{ref_value}' (路径: #/definitions/) 失败:未找到或找到的不是字典: {schema_name}")
else:
self.logger.warning(f"尝试从 #/definitions/ 解析引用 '{ref_value}' 失败:无法找到 'definitions' 结构。")
if not resolved_schema:
self.logger.warning(f"最终未能通过任一已知路径 (#/components/schemas/ 或 #/definitions/) 解析引用 '{ref_value}'")
except Exception as e:
self.logger.error(f"解析引用 '{ref_value}' 时发生错误: {e}", exc_info=True)
return schema_to_resolve # 返回原始 schema 如果不是 ref 或者所有解析尝试都失败
def _find_required_field_in_schema_recursive(self, current_schema: Dict[str, Any], current_path: List[str]) -> Optional[List[str]]:
"""递归查找第一个可移除的必填字段的路径。
现在也会查找数组内对象中必填的字段。"""
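上面的 docstring 提到递归查找也会深入数组内的对象。下面是该遍历思路的一个独立简化示意(`find_required_field` 为假设性的辅助函数,并非该类方法本身;真实实现还会解析 `$ref` 并记录日志):

```python
from typing import Any, Dict, List, Optional

def find_required_field(schema: Dict[str, Any],
                        path: Optional[List[str]] = None) -> Optional[List[str]]:
    """返回第一个必填字段的路径;数组层级用 "[]" 占位,未找到时返回 None。"""
    path = path or []
    if schema.get("type") == "object":
        props = schema.get("properties", {})
        # 优先返回当前层级声明的第一个必填字段
        for name in schema.get("required", []):
            if name in props:
                return path + [name]
        # 否则递归深入各属性继续查找
        for name, sub in props.items():
            if isinstance(sub, dict):
                found = find_required_field(sub, path + [name])
                if found:
                    return found
    elif schema.get("type") == "array" and isinstance(schema.get("items"), dict):
        # 深入数组项中的对象,对应 docstring 中"数组内对象"的情形
        return find_required_field(schema["items"], path + ["[]"])
    return None
```

返回路径中的 "[]" 只是数组层级的占位写法,便于后续在构造请求体时定位数组项。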


@@ -10,10 +10,11 @@ class MissingRequiredFieldQueryCase(BaseAPITestCase):
tags = ["error-handling", "appendix-b", "4003", "required-fields", "query-parameters"]
execution_order = 211 # After body, before original combined one might have been
-def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None):
+def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None, llm_service: Optional[Any] = None):
-super().__init__(endpoint_spec, global_api_spec, json_schema_validator)
+super().__init__(endpoint_spec, global_api_spec, json_schema_validator=json_schema_validator, llm_service=llm_service)
-self.removed_field_name: Optional[str] = None
+self.target_param_name: Optional[str] = None
self._try_find_removable_query_param()
+self.logger.info(f"测试用例 {self.id} ({self.name}) 已针对端点 '{self.endpoint_spec.get('method')} {self.endpoint_spec.get('path')}' 初始化。Target param to remove: {self.target_param_name}")
def _try_find_removable_query_param(self):
query_params_spec_list = self.endpoint_spec.get("parameters", [])
@@ -23,8 +24,8 @@ class MissingRequiredFieldQueryCase(BaseAPITestCase):
if isinstance(param_spec, dict) and param_spec.get("in") == "query" and param_spec.get("required") is True:
field_name = param_spec.get("name")
if field_name:
-self.removed_field_name = field_name
+self.target_param_name = field_name
-self.logger.info(f"必填字段缺失测试的目标字段 (查询参数): '{self.removed_field_name}'")
+self.logger.info(f"必填字段缺失测试的目标字段 (查询参数): '{self.target_param_name}'")
return
self.logger.info('在此端点规范中未找到可用于测试 "必填查询参数缺失" 的字段。')
@@ -34,20 +35,20 @@ class MissingRequiredFieldQueryCase(BaseAPITestCase):
return current_body
def generate_query_params(self, current_query_params: Dict[str, Any]) -> Dict[str, Any]:
-if self.removed_field_name and isinstance(current_query_params, dict):
+if self.target_param_name and isinstance(current_query_params, dict):
-if self.removed_field_name in current_query_params:
+if self.target_param_name in current_query_params:
new_params = copy.deepcopy(current_query_params)
-original_value = new_params.pop(self.removed_field_name) # 移除参数
+original_value = new_params.pop(self.target_param_name) # 移除参数
-self.logger.info(f"为进行必填查询参数缺失测试,已从查询参数中移除 '{self.removed_field_name}' (原值: '{original_value}')。")
+self.logger.info(f"为进行必填查询参数缺失测试,已从查询参数中移除 '{self.target_param_name}' (原值: '{original_value}')。")
return new_params
else:
-self.logger.warning(f"计划移除的查询参数 '{self.removed_field_name}' 在当前查询参数中未找到。")
+self.logger.warning(f"计划移除的查询参数 '{self.target_param_name}' 在当前查询参数中未找到。")
return current_query_params
def validate_response(self, response_context: APIResponseContext, request_context: APIRequestContext) -> List[ValidationResult]:
results = []
-if not self.removed_field_name:
+if not self.target_param_name:
results.append(self.passed("跳过测试在API规范中未找到合适的必填查询参数用于移除测试。"))
self.logger.info("由于未识别到可移除的必填查询参数,跳过此测试用例。")
return results
@@ -58,7 +59,7 @@ class MissingRequiredFieldQueryCase(BaseAPITestCase):
expected_status_codes = [400, 422]
specific_error_code_from_appendix_b = "4003"
-msg_prefix = f"当移除必填查询参数 '{self.removed_field_name}' 时,"
+msg_prefix = f"当移除必填查询参数 '{self.target_param_name}' 时,"
if status_code in expected_status_codes:
status_msg = f"{msg_prefix}API响应了预期的错误状态码 {status_code}。"
@@ -77,8 +78,8 @@ class MissingRequiredFieldQueryCase(BaseAPITestCase):
else:
results.append(self.failed(
message=f"{msg_prefix}期望API返回状态码 {expected_status_codes} 中的一个,但实际收到 {status_code}。",
-details={"status_code": status_code, "response_body": json_content, "removed_field": f"query.{self.removed_field_name}"}
+details={"status_code": status_code, "response_body": json_content, "removed_field": f"query.{self.target_param_name}"}
))
-self.logger.warning(f"必填查询参数缺失测试失败:期望状态码 {expected_status_codes},实际为 {status_code}。移除的参数:'{self.removed_field_name}'")
+self.logger.warning(f"必填查询参数缺失测试失败:期望状态码 {expected_status_codes},实际为 {status_code}。移除的参数:'{self.target_param_name}'")
return results
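上述 generate_query_params 的变异逻辑(深拷贝后移除目标必填参数)可以抽象为如下独立示意(`drop_required_param` 为假设性的演示函数名,并非框架 API

```python
import copy
from typing import Any, Dict, Optional

def drop_required_param(params: Dict[str, Any], target: Optional[str]) -> Dict[str, Any]:
    """深拷贝查询参数后移除目标必填参数;目标为 None 或不存在时原样返回。"""
    if target and target in params:
        mutated = copy.deepcopy(params)
        mutated.pop(target)
        return mutated
    return params

print(drop_required_param({"well_id": "W-001", "page": 1}, "well_id"))  # → {'page': 1}
```

深拷贝保证不会污染编排器持有的原始参数字典,这也是测试用例中使用 copy.deepcopy 的原因。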


@@ -11,15 +11,20 @@ class TypeMismatchBodyCase(BaseAPITestCase):
tags = ["error-handling", "appendix-b", "4001", "request-body"]
execution_order = 202 # Slightly after query param one
-def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None):
+def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None, llm_service: Optional[Any] = None):
-super().__init__(endpoint_spec, global_api_spec, json_schema_validator)
+super().__init__(endpoint_spec, global_api_spec, json_schema_validator, llm_service=llm_service)
self.logger.setLevel(logging.DEBUG)
self.target_field_path: Optional[List[str]] = None
self.original_field_type: Optional[str] = None
# Location is always 'body' for this class
self.target_field_location: str = "body"
self.target_field_schema: Optional[Dict[str, Any]] = None
self.json_schema_validator = json_schema_validator
self.original_value_at_path: Any = None
self.mismatched_value: Any = None
self._try_find_mismatch_target_in_body()
def _try_find_mismatch_target_in_body(self):
self.logger.critical(f"{self.id} __INIT__ >>> STARTED")
self.logger.debug(f"开始为端点 {self.endpoint_spec.get('method')} {self.endpoint_spec.get('path')} 初始化请求体类型不匹配测试的目标字段查找。")
@@ -64,58 +69,8 @@ class TypeMismatchBodyCase(BaseAPITestCase):
self.logger.info(f"最终,在端点 {self.endpoint_spec.get('method')} {self.endpoint_spec.get('path')} 的请求体中,均未找到可用于测试类型不匹配的字段。")
def _resolve_ref_if_present(self, schema_to_resolve: Dict[str, Any]) -> Dict[str, Any]:
+# 根据用户进一步要求,方法体简化为直接返回,不进行任何 $ref/$$ref 的检查。
+# self.logger.debug(f"_resolve_ref_if_present called. Returning schema as-is per new configuration.")
+return schema_to_resolve
-ref_value = None
-if isinstance(schema_to_resolve, dict):
if "$ref" in schema_to_resolve:
ref_value = schema_to_resolve["$ref"]
elif "$$ref" in schema_to_resolve:
ref_value = schema_to_resolve["$$ref"]
if ref_value:
self.logger.debug(f"发现引用 '{ref_value}',尝试解析...")
try:
actual_global_spec_dict = None
if hasattr(self.global_api_spec, 'spec') and isinstance(self.global_api_spec.spec, dict):
actual_global_spec_dict = self.global_api_spec.spec
elif isinstance(self.global_api_spec, dict):
actual_global_spec_dict = self.global_api_spec
if not actual_global_spec_dict:
self.logger.warning(f"无法从 self.global_api_spec (类型: {type(self.global_api_spec)}) 获取用于解析引用的字典。")
return schema_to_resolve
resolved_schema = None
if ref_value.startswith("#/components/schemas/"):
schema_name = ref_value.split("/")[-1]
components = actual_global_spec_dict.get("components")
if components and isinstance(components.get("schemas"), dict):
resolved_schema = components["schemas"].get(schema_name)
if resolved_schema and isinstance(resolved_schema, dict):
self.logger.info(f"成功从 #/components/schemas/ 解析引用 '{ref_value}'")
return resolved_schema
else:
self.logger.warning(f"解析引用 '{ref_value}' (路径: #/components/schemas/) 失败:未找到或找到的不是字典: {schema_name}")
else:
self.logger.warning(f"尝试从 #/components/schemas/ 解析引用 '{ref_value}' 失败:无法找到 'components.schemas' 结构。")
if not resolved_schema and ref_value.startswith("#/definitions/"):
schema_name = ref_value.split("/")[-1]
definitions = actual_global_spec_dict.get("definitions")
if definitions and isinstance(definitions, dict):
resolved_schema = definitions.get(schema_name)
if resolved_schema and isinstance(resolved_schema, dict):
self.logger.info(f"成功从 #/definitions/ 解析引用 '{ref_value}'")
return resolved_schema
else:
self.logger.warning(f"解析引用 '{ref_value}' (路径: #/definitions/) 失败:未找到或找到的不是字典: {schema_name}")
else:
self.logger.warning(f"尝试从 #/definitions/ 解析引用 '{ref_value}' 失败:无法找到 'definitions' 结构。")
if not resolved_schema:
self.logger.warning(f"最终未能通过任一已知路径 (#/components/schemas/ 或 #/definitions/) 解析引用 '{ref_value}'")
except Exception as e:
self.logger.error(f"解析引用 '{ref_value}' 时发生错误: {e}", exc_info=True)
return schema_to_resolve
def _find_target_field_in_schema(self, schema_to_search: Dict[str, Any], base_path_for_log: str) -> bool:


@@ -11,14 +11,19 @@ class TypeMismatchQueryParamCase(BaseAPITestCase):
tags = ["error-handling", "appendix-b", "4001", "query-parameters"]
execution_order = 201 # Slightly after the combined one might have been
-def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None):
+def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None, llm_service: Optional[Any] = None):
-super().__init__(endpoint_spec, global_api_spec, json_schema_validator)
+super().__init__(endpoint_spec, global_api_spec, json_schema_validator, llm_service=llm_service)
self.logger.setLevel(logging.DEBUG)
self.target_field_path: Optional[List[str]] = None
self.original_field_type: Optional[str] = None
# Location is always 'query' for this class
self.target_field_location: str = "query"
self.target_field_schema: Optional[Dict[str, Any]] = None
self.original_value_at_path: Any = None
self.mismatched_value: Any = None
# 调用新方法来查找目标字段
self._try_find_mismatch_target_in_query()
self.logger.critical(f"{self.id} __INIT__ >>> STARTED")
self.logger.debug(f"开始为端点 {self.endpoint_spec.get('method')} {self.endpoint_spec.get('path')} 初始化查询参数类型不匹配测试的目标字段查找。")
@@ -78,59 +83,63 @@ class TypeMismatchQueryParamCase(BaseAPITestCase):
if not self.target_field_path:
self.logger.info(f"最终,在端点 {self.endpoint_spec.get('method')} {self.endpoint_spec.get('path')} 的查询参数中,均未找到可用于测试类型不匹配的字段。")
def _try_find_mismatch_target_in_query(self):
self.logger.critical(f"{self.id} _try_find_mismatch_target_in_query >>> STARTED")
self.logger.debug(f"开始为端点 {self.endpoint_spec.get('method')} {self.endpoint_spec.get('path')} 初始化查询参数类型不匹配测试的目标字段查找。")
parameters = self.endpoint_spec.get("parameters", [])
self.logger.critical(f"{self.id} _try_find_mismatch_target_in_query >>> Parameters to be processed: {parameters}")
self.logger.debug(f"传入的参数列表 (在 {self.id}中): {parameters}")
for param_spec in parameters:
if param_spec.get("in") == "query":
param_name = param_spec.get("name")
if not param_name:
self.logger.warning("发现一个没有名称的查询参数定义,已跳过。")
continue
self.logger.debug(f"检查查询参数: '{param_name}'")
param_type = param_spec.get("type")
param_schema = param_spec.get("schema")
# Scenario 1: Simple type directly in param_spec (e.g., type: string)
if param_type in ["string", "number", "integer", "boolean"]:
self.target_field_path = [param_name]
self.original_field_type = param_type
self.target_field_schema = param_spec
self.logger.info(f"目标字段(查询参数 - 简单类型): {param_name},原始类型: {self.original_field_type}")
break
# Scenario 2: Schema defined for the query parameter (OpenAPI 3.0 style, or complex objects in query)
elif isinstance(param_schema, dict):
self.logger.debug(f"查询参数 '{param_name}' 包含嵌套 schema尝试在其内部查找简单类型字段。")
resolved_param_schema = self._resolve_ref_if_present(param_schema)
if resolved_param_schema.get("type") == "object":
properties = resolved_param_schema.get("properties", {})
for prop_name, prop_details_orig in properties.items():
prop_details = self._resolve_ref_if_present(prop_details_orig)
if prop_details.get("type") in ["string", "number", "integer", "boolean"]:
self.target_field_path = [param_name, prop_name]
self.original_field_type = prop_details.get("type")
self.target_field_schema = prop_details
self.logger.info(f"目标字段(查询参数 - 对象属性): {param_name}.{prop_name},原始类型: {self.original_field_type}")
break
if self.target_field_path: break
elif resolved_param_schema.get("type") in ["string", "number", "integer", "boolean"]:
self.target_field_path = [param_name]
self.original_field_type = resolved_param_schema.get("type")
self.target_field_schema = resolved_param_schema
self.logger.info(f"目标字段(查询参数 - schema为简单类型): {param_name},原始类型: {self.original_field_type}")
break
else:
self.logger.debug(f"查询参数 '{param_name}' (type: {param_type}, schema: {param_schema}) 不是直接的简单类型,也无直接可用的对象型 schema 属性。")
if not self.target_field_path:
self.logger.info(f"最终,在端点 {self.endpoint_spec.get('method')} {self.endpoint_spec.get('path')} 的查询参数中,均未找到可用于测试类型不匹配的字段。")
def _resolve_ref_if_present(self, schema_to_resolve: Dict[str, Any]) -> Dict[str, Any]:
+# 根据用户进一步要求,方法体简化为直接返回,不进行任何 $ref/$$ref 的检查。
+# self.logger.debug(f"_resolve_ref_if_present called. Returning schema as-is per new configuration.")
+return schema_to_resolve
-ref_value = None
-if isinstance(schema_to_resolve, dict):
if "$ref" in schema_to_resolve:
ref_value = schema_to_resolve["$ref"]
elif "$$ref" in schema_to_resolve:
ref_value = schema_to_resolve["$$ref"]
if ref_value:
self.logger.debug(f"发现引用 '{ref_value}',尝试解析...")
try:
actual_global_spec_dict = None
if hasattr(self.global_api_spec, 'spec') and isinstance(self.global_api_spec.spec, dict):
actual_global_spec_dict = self.global_api_spec.spec
elif isinstance(self.global_api_spec, dict):
actual_global_spec_dict = self.global_api_spec
if not actual_global_spec_dict:
self.logger.warning(f"无法从 self.global_api_spec (类型: {type(self.global_api_spec)}) 获取用于解析引用的字典。")
return schema_to_resolve
resolved_schema = None
if ref_value.startswith("#/components/schemas/"):
schema_name = ref_value.split("/")[-1]
components = actual_global_spec_dict.get("components")
if components and isinstance(components.get("schemas"), dict):
resolved_schema = components["schemas"].get(schema_name)
if resolved_schema and isinstance(resolved_schema, dict):
self.logger.info(f"成功从 #/components/schemas/ 解析引用 '{ref_value}'")
return resolved_schema
else:
self.logger.warning(f"解析引用 '{ref_value}' (路径: #/components/schemas/) 失败:未找到或找到的不是字典: {schema_name}")
else:
self.logger.warning(f"尝试从 #/components/schemas/ 解析引用 '{ref_value}' 失败:无法找到 'components.schemas' 结构。")
if not resolved_schema and ref_value.startswith("#/definitions/"):
schema_name = ref_value.split("/")[-1]
definitions = actual_global_spec_dict.get("definitions")
if definitions and isinstance(definitions, dict):
resolved_schema = definitions.get(schema_name)
if resolved_schema and isinstance(resolved_schema, dict):
self.logger.info(f"成功从 #/definitions/ 解析引用 '{ref_value}'")
return resolved_schema
else:
self.logger.warning(f"解析引用 '{ref_value}' (路径: #/definitions/) 失败:未找到或找到的不是字典: {schema_name}")
else:
self.logger.warning(f"尝试从 #/definitions/ 解析引用 '{ref_value}' 失败:无法找到 'definitions' 结构。")
if not resolved_schema:
self.logger.warning(f"最终未能通过任一已知路径 (#/components/schemas/ 或 #/definitions/) 解析引用 '{ref_value}'")
except Exception as e:
self.logger.error(f"解析引用 '{ref_value}' 时发生错误: {e}", exc_info=True)
return schema_to_resolve
# No generate_request_body, or it simply returns current_body
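两个 TypeMismatch 用例在本次提交中都将 _resolve_ref_if_present 简化为直接返回。作为参考,被移除的逻辑本质上是对 #/components/schemas/ (OpenAPI 3) 与 #/definitions/ (Swagger 2) 两种引用路径的手工查找;下面是一个压缩后的独立示意(`resolve_ref` 为假设性的函数名,假设 spec 即规范字典本身,省略了原实现中的日志与 global_api_spec.spec 解包):

```python
from typing import Any, Dict

def resolve_ref(schema: Dict[str, Any], spec: Dict[str, Any]) -> Dict[str, Any]:
    """按 #/components/schemas/ 或 #/definitions/ 解析 $ref/$$ref失败时返回原 schema。"""
    ref = schema.get("$ref") or schema.get("$$ref")
    if not isinstance(ref, str):
        return schema
    if ref.startswith("#/components/schemas/"):
        pool = spec.get("components", {}).get("schemas", {})   # OpenAPI 3
    elif ref.startswith("#/definitions/"):
        pool = spec.get("definitions", {})                     # Swagger 2
    else:
        return schema
    resolved = pool.get(ref.split("/")[-1])
    # 解析结果必须是字典,否则视为解析失败,回退到原 schema
    return resolved if isinstance(resolved, dict) else schema
```

保留该示意有助于理解被删除的分支:两个查找路径都失败时,原实现同样回退为返回传入的 schema。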


@@ -0,0 +1,247 @@
import re
import json # 确保导入json
from typing import Dict, Any, Optional, List
from ddms_compliance_suite.test_framework_core import BaseAPITestCase, TestSeverity, ValidationResult, APIRequestContext
# LLMService的导入路径需要根据您的项目结构确认
# 假设 LLMService 在 ddms_compliance_suite.llm_utils.llm_service
try:
from ddms_compliance_suite.llm_utils.llm_service import LLMService
except ImportError:
LLMService = None
# print("LLMService not found, PathVerbNounCheckCase will be skipped or limited.")
class ComprehensiveURLCheckLLMCase(BaseAPITestCase):
id = "TC-NORMATIVE-URL-LLM-COMPREHENSIVE-001"
name = "综合URL规范与RESTful风格检查 (LLM)"
description = (
"使用LLM统一评估API路径是否符合以下规范\n"
"1. 路径参数命名 (例如,全小写蛇形命名法)。\n"
"2. URL路径结构 (例如,/{领域}/{版本号}/资源类型)。\n"
"3. URL版本号嵌入 (例如,包含 /v1/)。\n"
"4. RESTful风格与可读性 (名词表示资源HTTP方法表示动作易理解性)。"
)
severity = TestSeverity.MEDIUM # 综合性检查,可能包含不同严重级别的问题
tags = ["normative-spec", "url", "restful", "llm", "readability", "naming-convention", "structure", "versioning", "static-check"]
execution_order = 60 # 更新执行顺序
# 此测试用例可以覆盖所有路径但其有效性依赖LLM
# applicable_methods = None
# applicable_paths_regex = None
# 这个标志可以用来在测试用例级别控制是否实际调用LLM即使全局LLM服务可用
# 如果您希望总是尝试只要LLMService能初始化可以不设置这个或者在逻辑中检查 self.llm_service 是否存在
# use_llm_for_path_analysis: bool = True
def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None, llm_service: Optional[LLMService] = None):
super().__init__(endpoint_spec, global_api_spec, json_schema_validator)
self.llm_service = llm_service # 存储注入的 LLMService 实例
if not self.llm_service:
self.logger.warning(f"LLMService 未注入或初始化失败,测试用例 {self.id} 将无法执行LLM路径分析。")
def _get_llm_service_from_orchestrator(self) -> Optional[Any]:
# 在实际框架中测试用例可能无法直接访问编排器来获取LLM服务。
# 这种依赖注入通常在测试用例实例化时或方法调用时处理。
# 此处为一个占位符理想情况下APITestOrchestrator会将llm_service实例传给需要它的测试用例
# 或测试用例通过某种服务定位器获取。
# 暂时我们假设如果全局配置了LLM它就能用。
# 真实的实现需要APITestOrchestrator在执行此测试用例前将llm_service实例注入。
# 为了能运行我们先返回None并在下面逻辑中处理。
# 或者,修改 Orchestrator 将其注入到 self.global_api_spec 或 self.endpoint_spec (不推荐)
# 最好的方式是在 __init__ 中接收一个 llm_service: Optional[LLMService] 参数。
# 但这需要修改 BaseAPITestCase 和 APITestOrchestrator 的 __init__ 和调用逻辑。
# 临时的解决方法:依赖 APITestOrchestrator 初始化时是否成功创建了 LLMService。
# 这仍然是一个间接的检查。一个更直接的方式是在Orchestrator执行此测试用例时传入。
if hasattr(self, '_orchestrator_llm_service_instance') and self._orchestrator_llm_service_instance:
return self._orchestrator_llm_service_instance
# 如果没有明确注入我们只能依赖全局LLMService是否被加载
if LLMService is not None:
# 这里不能直接实例化一个新的LLMService因为它需要API Key等配置这些配置在Orchestrator那里。
# 这个测试用例需要依赖Orchestrator来提供一个已经配置好的LLMService实例。
# 此处返回一个指示如果LLM功能应该被使用则需要Orchestrator提供服务。
return "NEEDS_INJECTION"
return None
def _extract_path_param_names(self, path_template: str) -> List[str]:
"""从路径模板中提取路径参数名称。例如 /users/{user_id}/items/{item_id} -> ['user_id', 'item_id']"""
return re.findall(r'\{([^}]+)\}', path_template)
def validate_request_url(self, url: str, request_context: APIRequestContext) -> List[ValidationResult]:
results: List[ValidationResult] = []
path_template = self.endpoint_spec.get('path', '')
http_method = request_context.method.upper()
operation_id = self.endpoint_spec.get('operationId', self.endpoint_spec.get('title', '')) # 获取operationId或title
if not self.llm_service:
results.append(ValidationResult(
passed=True, # 标记为通过以避免阻塞,但消息表明跳过
message=f"路径 '{path_template}' 的LLM综合URL检查已跳过LLM服务不可用。",
details={"path_template": path_template, "http_method": http_method, "reason": "LLM Service not available or not injected."}
))
self.logger.warning(f"LLM综合URL检查已跳过对路径 '{path_template}' 的检查LLM服务不可用。")
return results
path_param_names = self._extract_path_param_names(path_template)
path_params_str = ", ".join(path_param_names) if path_param_names else ""
# - 接口名称 (OperationId 或 Title): {operation_id if operation_id else '请你自己从路径模板中提取'}
# 构建给LLM的Prompt要求JSON输出
prompt_instruction = f"""
请扮演一位资深的API设计评审员我将提供一个API端点的路径模板HTTP方法以及可能的接口名称
请根据以下石油行业API设计规范评估此API端点并以严格的JSON格式返回您的评估结果
JSON对象应包含一个名为 "assessments" 的键其值为一个对象列表每个对象代表对一个标准的评估包含 "standard_name" (字符串), "is_compliant" (布尔值), "reason" (字符串) 三个键
API端点信息:
- HTTP方法: {http_method}
- 路径模板: {path_template}
- 路径中提取的参数名: [{path_params_str}]
评估标准:
1. **接口名称规范 (接口名称需要你从路径模板中提取一般是路径中除了参数名以外的最后的一个单词)**:
- 规则: 采用'动词+名词'结构明确业务语义 (例如: GetWellLog, SubmitSeismicJob)
- standard_name: "interface_naming_convention"
2. **HTTP方法使用规范**:
- 规则: 遵循RESTful规范GET用于数据检索, POST用于创建资源, PUT用于更新资源, DELETE用于删除资源
- standard_name: "http_method_usage"
3. **URL路径结构规范**:
- 规则: 格式为 `<前缀>/<专业领域>/v<版本号>/<资源类型>` (例如: /logging/v1.2/wells, /seismicprospecting/v1.0/datasets)
- 前缀: 示例: /api/dms
- 专业领域: 专业领域示例: seismicprospecting, welllogging, reservoirevaluation
- 版本号: 语义化版本例如 v1, v1.0, v2.1.3
- 资源类型: 通常为名词复数
- standard_name: "url_path_structure"
4. **URL路径参数命名规范**:
- 规则: 路径参数(如果存在)必须使用全小写字母命名:可以是单个单词,多个单词时用下划线连接,并能反映资源的唯一标识 (例如: {{well_id}},{{version}},{{schema}})
- standard_name: "url_path_parameter_naming"
5. **资源命名规范 (在路径中)**:
- 规则: 资源集合应使用名词的复数形式表示 (例如 `/wells`, `/logs`)应优先使用石油行业的标准术语 (例如用 `trajectory` 而非 `path` 来表示井轨迹)
- standard_name: "resource_naming_in_path"
- standard_name: "resource"
- standard_name: "schema"
- standard_name: "version"
请确保您的输出是一个可以被 `json.loads()` 直接解析的JSON对象
例如:
{{
"assessments": [
{{
"standard_name": "interface_naming_convention",
"is_compliant": true,
"reason": "接口名称 'GetWellboreTrajectory' 符合动词+名词结构。"
}},
{{
"standard_name": "http_method_usage",
"is_compliant": true,
"reason": "GET方法用于检索资源符合规范。"
}}
// ... 其他标准的评估 ...
]
}}
"""
# 6. **路径可读性与整体RESTful风格**:
# - 规则: 路径整体是否具有良好的可读性、易于理解其功能并且符合RESTful设计原则综合评估可参考前面几点
# - standard_name: "general_readability_and_restfulness"
messages = [
{"role": "system", "content": "你是一位API设计评审专家专注于评估API的URL规范性和RESTful风格。你的输出必须是严格的JSON格式。"},
{"role": "user", "content": prompt_instruction}
]
self.logger.info(f"向LLM发送请求评估路径: {path_template} ({http_method})")
# 假设 _execute_chat_completion_request 支持 response_format={"type": "json_object"} (如果LLM API支持)
# 否则我们需要解析文本输出。为简化这里假设LLM会遵循JSON格式指令。
llm_response_str = self.llm_service._execute_chat_completion_request(
messages=messages,
max_tokens=1024, # 根据评估结果的复杂度调整
temperature=0.2 # 低温以获得更确定的、结构化的输出
)
if not llm_response_str:
results.append(ValidationResult(
passed=False, # 执行失败
message=f"未能从LLM获取对路径 '{path_template}' 的评估。",
details={"path_template": path_template, "http_method": http_method, "reason": "LLM did not return a response."}
))
self.logger.error(f"LLM对路径 '{path_template}' 的评估请求未返回任何内容。")
return results
self.logger.debug(f"LLM对路径 '{path_template}' 的原始响应: {llm_response_str}")
try:
# 尝试清理并解析LLM响应
# 有时LLM可能在JSON前后添加 "```json" 和 "```"
cleaned_response_str = llm_response_str.strip()
if cleaned_response_str.startswith("```json"):
cleaned_response_str = cleaned_response_str[7:]
if cleaned_response_str.endswith("```"):
cleaned_response_str = cleaned_response_str[:-3]
llm_assessment_data = json.loads(cleaned_response_str)
if "assessments" not in llm_assessment_data or not isinstance(llm_assessment_data["assessments"], list):
raise ValueError("LLM响应JSON中缺少 'assessments' 列表或格式不正确。")
found_assessments = False
for assessment in llm_assessment_data["assessments"]:
standard_name = assessment.get("standard_name", "未知标准")
is_compliant = assessment.get("is_compliant", False)
reason = assessment.get("reason", "LLM未提供原因。")
found_assessments = True
results.append(ValidationResult(
passed=is_compliant,
message=f"LLM评估 - {standard_name}: {reason}",
details={
"standard_name": standard_name,
"is_compliant_by_llm": is_compliant,
"llm_reason": reason,
"path_template": path_template,
"http_method": http_method
}
))
log_level = self.logger.info if is_compliant else self.logger.warning
log_level(f"LLM评估 - 标准 '{standard_name}' for '{path_template}': {'符合' if is_compliant else '不符合'}。原因: {reason}")
if not found_assessments:
results.append(ValidationResult(
passed=False,
message=f"LLM返回的评估结果中不包含任何有效的评估项。",
details={"path_template": path_template, "http_method": http_method, "raw_llm_response": llm_response_str}
))
except json.JSONDecodeError as e_json:
results.append(ValidationResult(
passed=False, # 执行失败
message=f"无法将LLM对路径 '{path_template}' 的评估响应解析为JSON: {e_json}",
details={"path_template": path_template, "http_method": http_method, "raw_llm_response": llm_response_str, "error": str(e_json)}
))
self.logger.error(f"LLM对路径 '{path_template}' 的响应JSON解析失败: {e_json}. Raw response: {llm_response_str}")
except ValueError as e_val: # 自定义错误,如缺少 'assessments'
results.append(ValidationResult(
passed=False, # 执行失败
message=f"LLM对路径 '{path_template}' 的评估响应JSON结构不符合预期: {e_val}",
details={"path_template": path_template, "http_method": http_method, "raw_llm_response": llm_response_str, "error": str(e_val)}
))
self.logger.error(f"LLM对路径 '{path_template}' 的响应JSON结构错误: {e_val}. Raw response: {llm_response_str}")
except Exception as e_generic:
results.append(ValidationResult(
passed=False, # 执行失败
message=f"处理LLM对路径 '{path_template}' 的评估响应时发生未知错误: {e_generic}",
details={"path_template": path_template, "http_method": http_method, "raw_llm_response": llm_response_str, "error": str(e_generic)}
))
self.logger.error(f"处理LLM对路径 '{path_template}' 的响应时发生未知错误: {e_generic}", exc_info=True)
return results
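validate_request_url 中对 LLM 响应的清洗与解析步骤(剥离 Markdown 代码围栏、json.loads、校验 assessments 列表)可以抽取为如下独立示意(`parse_llm_json` 为假设性的函数名):

```python
import json
from typing import Any, Dict

def parse_llm_json(raw: str) -> Dict[str, Any]:
    """剥离 LLM 输出外层的 Markdown 代码围栏后解析 JSON并校验 assessments 结构。"""
    fence = "`" * 3  # 三连反引号围栏
    cleaned = raw.strip()
    if cleaned.startswith(fence + "json"):
        cleaned = cleaned[len(fence) + 4:]
    if cleaned.endswith(fence):
        cleaned = cleaned[:-len(fence)]
    data = json.loads(cleaned)
    if "assessments" not in data or not isinstance(data["assessments"], list):
        raise ValueError("LLM响应JSON中缺少 'assessments' 列表或格式不正确。")
    return data
```

解析失败时由调用方分别捕获 json.JSONDecodeError 与 ValueError这与测试用例中区分"JSON 不合法"和"结构不符合预期"两类失败的做法一致。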


@@ -12,8 +12,8 @@ class HTTPSMandatoryCase(BaseAPITestCase):
# 此测试会修改URL为HTTP应适用于大多数端点。
-def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None):
+def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None, llm_service: Optional[Any] = None):
-super().__init__(endpoint_spec, global_api_spec, json_schema_validator)
+super().__init__(endpoint_spec, global_api_spec, json_schema_validator, llm_service=llm_service)
self.logger.info(f"测试用例 '{self.id}' 已为端点 '{self.endpoint_spec.get('method')} {self.endpoint_spec.get('path')}' 初始化。")
def modify_request_url(self, current_url: str) -> str:
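本片段只展示了 modify_request_url 的签名未展示方法体按该用例的意图HTTPS 强制检查),其行为大致是把请求 URL 降级为 HTTP 再发送,以验证服务端拒绝明文访问。一个假设性的示意(`downgrade_to_http` 并非原方法体,仅演示这一转换):

```python
def downgrade_to_http(url: str) -> str:
    """将 https:// 前缀替换为 http://,其余部分原样保留(演示用)。"""
    if url.startswith("https://"):
        return "http://" + url[len("https://"):]
    return url
```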


@@ -1,355 +1,481 @@
-"""Input Parser Module"""
import json
-import os
-from typing import Any, Dict, Optional, List, Union
-from pydantic import BaseModel # For defining the structure of parsed inputs
-from dataclasses import dataclass, field
import logging
-from typing import List, Dict, Any, Optional, Union # Ensure Union is imported
-logger = logging.getLogger("InputParser")
+logger = logging.getLogger(__name__)
-class ParsedOpenAPISpec(BaseModel):
-# Placeholder for OpenAPI spec details relevant to the compliance suite
-spec: Dict[str, Any]
-info: Dict[str, Any] # Swagger 'info' object with title, version, etc.
-paths: Dict[str, Dict[str, Any]] # API paths and their operations
-tags: Optional[List[Dict[str, str]]] = None # API tags
-basePath: Optional[str] = None # Base path for all APIs
-swagger_version: str # Swagger specification version
+class BaseEndpoint:
+"""所有端点对象的基类,可以包含一些通用属性或方法。"""
+def __init__(self, method: str, path: str):
+self.method = method
+self.path = path
+def to_dict(self) -> Dict[str, Any]:
+# 基类可以提供一个默认的 to_dict 实现或要求子类实现
+raise NotImplementedError("Subclasses must implement to_dict")
-@dataclass
-class YAPIEndpoint:
-"""YAPI API端点信息"""
-path: str
-method: str
-title: str = ""
-description: str = ""
-category_name: str = ""
-req_params: List[Dict[str, Any]] = field(default_factory=list)
-req_query: List[Dict[str, Any]] = field(default_factory=list)
-req_headers: List[Dict[str, Any]] = field(default_factory=list)
-req_body_type: str = ""
-req_body_other: str = ""
-res_body_type: str = ""
-res_body: str = ""
@dataclass class YAPIEndpoint(BaseEndpoint): # Inherit from BaseEndpoint
class ParsedYAPISpec: def __init__(self, data: Dict[str, Any], category_name: Optional[str] = None, category_id: Optional[int] = None):
"""解析后的YAPI规范""" super().__init__(method=data.get("method", "GET").upper(), path=data.get("path", ""))
endpoints: List[YAPIEndpoint] self._raw_data = data
categories: List[Dict[str, Any]] self.title: str = data.get("title", "")
total_count: int self.desc: Optional[str] = data.get("desc")
self._id: int = data.get("_id")
self.project_id: int = data.get("project_id")
self.catid: int = data.get("catid")
@dataclass self.req_params: List[Dict[str, Any]] = data.get("req_params", [])
class SwaggerEndpoint: self.req_query: List[Dict[str, Any]] = data.get("req_query", [])
"""Swagger API端点信息""" self.req_headers: List[Dict[str, Any]] = data.get("req_headers", [])
path: str self.req_body_form: List[Dict[str, Any]] = data.get("req_body_form", [])
method: str
summary: str = ""
description: str = ""
operation_id: str = ""
tags: List[str] = field(default_factory=list)
parameters: List[Dict[str, Any]] = field(default_factory=list)
responses: Dict[str, Any] = field(default_factory=dict)
consumes: List[str] = field(default_factory=list)
produces: List[str] = field(default_factory=list)
request_body: Dict[str, Any] = field(default_factory=dict)
@dataclass self.req_body_type: Optional[str] = data.get("req_body_type")
class ParsedSwaggerSpec: self.req_body_is_json_schema: bool = data.get("req_body_is_json_schema", False)
"""解析后的Swagger规范""" self.req_body_other: Optional[str] = data.get("req_body_other")
endpoints: List[SwaggerEndpoint]
info: Dict[str, Any]
swagger_version: str
host: str = ""
base_path: str = ""
schemes: List[str] = field(default_factory=list)
tags: List[Dict[str, Any]] = field(default_factory=list)
categories: List[Dict[str, Any]] = field(default_factory=list)
class ParsedBusinessLogic(BaseModel): self.res_body_type: Optional[str] = data.get("res_body_type")
# Placeholder for parsed business logic flow self.res_body_is_json_schema: bool = data.get("res_body_is_json_schema", False)
name: str self.res_body: Optional[str] = data.get("res_body")
steps: list # List of steps, each could be another Pydantic model
class InputParser: self.status: str = data.get("status", "undone")
""" self.api_opened: bool = data.get("api_opened", False)
Responsible for parsing DDMS supplier's input materials like API specs, etc. self.uid: int = data.get("uid")
"""
def __init__(self): self.category_name = category_name
pass self.category_id = category_id if category_id is not None else self.catid
def parse_openapi_spec(self, spec_path: str) -> Optional[ParsedOpenAPISpec]: self._parsed_req_body_schema: Optional[Dict[str, Any]] = None
""" if self.req_body_type == "json" and self.req_body_other and self.req_body_is_json_schema:
Parses an OpenAPI specification from a file path.
Args:
spec_path: The file path of the OpenAPI specification.
Returns:
A ParsedOpenAPISpec object containing the parsed specification,
or None if parsing fails.
"""
try: try:
# Check if file exists self._parsed_req_body_schema = json.loads(self.req_body_other)
if not os.path.exists(spec_path):
print(f"Error: File not found: {spec_path}")
return None
# Read and parse JSON file
with open(spec_path, 'r', encoding='utf-8') as f:
swagger_data = json.load(f)
# Extract basic information
swagger_version = swagger_data.get('swagger', swagger_data.get('openapi', 'Unknown'))
info = swagger_data.get('info', {})
paths = swagger_data.get('paths', {})
tags = swagger_data.get('tags', [])
base_path = swagger_data.get('basePath', '')
# Create and return ParsedOpenAPISpec
return ParsedOpenAPISpec(
spec=swagger_data,
info=info,
paths=paths,
tags=tags,
basePath=base_path,
swagger_version=swagger_version
)
except FileNotFoundError:
print(f"File not found: {spec_path}")
return None
except json.JSONDecodeError as e: except json.JSONDecodeError as e:
print(f"Error parsing JSON from {spec_path}: {e}") logger.error(f"YAPIEndpoint (ID: {self._id}, Title: {self.title}): Failed to parse req_body_other as JSON during init: {e}. Content: {self.req_body_other[:200]}")
return None
except Exception as e:
print(f"Error parsing OpenAPI spec from {spec_path}: {e}")
return None
def parse_yapi_spec(self, file_path: str) -> Optional[ParsedYAPISpec]:
"""
解析YAPI规范文件
Args:
file_path: YAPI JSON文件路径
Returns:
Optional[ParsedYAPISpec]: 解析后的YAPI规范如果解析失败则返回None
"""
if not os.path.isfile(file_path):
logger.error(f"文件不存在: {file_path}")
return None
self._parsed_res_body_schema: Optional[Dict[str, Any]] = None
if self.res_body_type == "json" and self.res_body and self.res_body_is_json_schema:
try: try:
with open(file_path, 'r', encoding='utf-8') as f: self._parsed_res_body_schema = json.loads(self.res_body)
yapi_data = json.load(f) except json.JSONDecodeError as e:
logger.error(f"YAPIEndpoint (ID: {self._id}, Title: {self.title}): Failed to parse res_body as JSON during init: {e}. Content: {self.res_body[:200]}")
if not isinstance(yapi_data, list): def to_dict(self) -> Dict[str, Any]:
logger.error(f"无效的YAPI文件格式: 顶层元素应该是数组") endpoint_dict = {
return None "method": self.method,
"path": self.path,
"title": self.title,
"summary": self.title,
"description": self.desc or "",
"operationId": f"{self.method.lower()}_{self.path.replace('/', '_').replace('{', '').replace('}', '')}_{self._id}",
"tags": [self.category_name or str(self.catid)],
"parameters": [],
"requestBody": None,
"responses": {},
"_source_format": "yapi",
"_yapi_id": self._id,
"_yapi_raw_data": self._raw_data # Keep raw data for debugging or deeper inspection if needed
}
endpoints = [] # Path parameters from req_params
categories = [] for p_spec in self.req_params:
param_name = p_spec.get("name")
# 处理分类 if not param_name: continue
for category_data in yapi_data: endpoint_dict["parameters"].append({
if not isinstance(category_data, dict): "name": param_name,
logger.warning(f"YAPI 分类条目格式不正确,应为字典类型,已跳过: {category_data}") "in": "path",
continue "required": True, # Path parameters are always required
"description": p_spec.get("desc", ""),
category_name = category_data.get('name', '') "schema": {"type": "string", "example": p_spec.get("example", f"example_{param_name}")}
category_desc = category_data.get('desc', '')
# 添加到分类列表
categories.append({
'name': category_name,
'desc': category_desc
}) })
# 处理API接口 # Query parameters from req_query
api_list = category_data.get('list', []) for q_spec in self.req_query:
if not isinstance(api_list, list): param_name = q_spec.get("name")
logger.warning(f"分类 '{category_name}' 中的 API列表 (list) 格式不正确,应为数组类型,已跳过。") if not param_name: continue
is_required = q_spec.get("required") == "1" # YAPI uses "1" for true
param_schema = {"type": "string"} # Default to string, YAPI doesn't specify types well here
if "example" in q_spec: param_schema["example"] = q_spec["example"]
# Add other fields from YAPI query spec if needed (e.g., desc)
endpoint_dict["parameters"].append({
"name": param_name,
"in": "query",
"required": is_required,
"description": q_spec.get("desc", ""),
"schema": param_schema
})
# Header parameters from req_headers
for h_spec in self.req_headers:
param_name = h_spec.get("name")
if not param_name or param_name.lower() == 'content-type': continue # Content-Type is handled by requestBody
is_required = h_spec.get("required") == "1"
default_value = h_spec.get("value") # YAPI uses 'value' for default/example header value
param_schema = {"type": "string"}
if default_value:
if is_required: # If required, it's more like an example of what's expected
param_schema["example"] = default_value
else: # If not required, it's a default value
param_schema["default"] = default_value
endpoint_dict["parameters"].append({
"name": param_name,
"in": "header",
"required": is_required,
"description": h_spec.get("desc", ""),
"schema": param_schema
})
# Request body
if self.req_body_type == "json" and self._parsed_req_body_schema:
endpoint_dict["requestBody"] = {
"content": {
"application/json": {
"schema": self._parsed_req_body_schema
}
}
}
elif self.req_body_type == "form" and self.req_body_form:
properties = {}
required_form_params = []
for form_param in self.req_body_form:
name = form_param.get("name")
if not name: continue
properties[name] = {
"type": "string", # YAPI form params are typically strings, file uploads are different
"description": form_param.get("desc","")
}
if form_param.get("example"): properties[name]["example"] = form_param.get("example")
if form_param.get("required") == "1": required_form_params.append(name)
endpoint_dict["requestBody"] = {
"content": {
"application/x-www-form-urlencoded": {
"schema": {
"type": "object",
"properties": properties,
"required": required_form_params if required_form_params else None # OpenAPI: omit if empty
}
}
# YAPI also supports req_body_type = 'file', which would map to multipart/form-data
# This example focuses on json and basic form.
}
}
# Add other req_body_types if necessary (e.g., raw, file)
# Responses
# YAPI has a simpler response structure. We'll map its res_body to a default success response (e.g., 200 or 201).
default_success_status = "200"
if self.method == "POST": default_success_status = "201" # Common practice for POST success
if self.res_body_type == "json" and self._parsed_res_body_schema:
endpoint_dict["responses"][default_success_status] = {
"description": "Successful Operation (from YAPI res_body)",
"content": {
"application/json": {
"schema": self._parsed_res_body_schema
}
}
}
elif self.res_body_type == "json" and not self._parsed_res_body_schema and self.res_body: # Schema parsing failed but text exists
endpoint_dict["responses"][default_success_status] = {
"description": "Successful Operation (Schema parsing error, raw text might be available)",
"content": {"application/json": {"schema": {"type": "object", "description": "Schema parsing failed for YAPI res_body."}}} # Placeholder
}
else: # No JSON schema, or other res_body_type
endpoint_dict["responses"][default_success_status] = {
"description": "Successful Operation (No specific schema provided in YAPI for this response)"
}
# Ensure there's always a default response if nothing specific was added
if not endpoint_dict["responses"]:
endpoint_dict["responses"]["default"] = {"description": "Default response from YAPI definition"}
return endpoint_dict
def __repr__(self):
return f"<YAPIEndpoint ID:{self._id} Method:{self.method} Path:{self.path} Title:'{self.title}'>"
class SwaggerEndpoint(BaseEndpoint): # Inherit from BaseEndpoint
def __init__(self, path: str, method: str, data: Dict[str, Any], global_spec: Dict[str, Any]):
super().__init__(method=method.upper(), path=path)
self._raw_data = data
self._global_spec = global_spec # Store for $ref resolution
self.summary: Optional[str] = data.get("summary")
self.description: Optional[str] = data.get("description")
self.operation_id: Optional[str] = data.get("operationId")
self.tags: List[str] = data.get("tags", [])
# Parameters, requestBody, responses are processed by to_dict
def _resolve_ref(self, ref_path: str) -> Optional[Dict[str, Any]]:
"""Resolves a $ref path within the global OpenAPI/Swagger spec."""
if not ref_path.startswith("#/"):
logger.warning(f"Unsupported $ref path: {ref_path}. Only local refs '#/...' are currently supported.")
return None
parts = ref_path[2:].split('/') # Remove '#/' and split
current_level = self._global_spec
try:
for part in parts:
# Decode URI component encoding if present (e.g. "~0" for "~", "~1" for "/")
part = part.replace("~1", "/").replace("~0", "~")
current_level = current_level[part]
# It's crucial to return a copy if the resolved ref will be modified,
# or ensure modifications happen on copies later.
# For now, returning as is, assuming downstream processing is careful or uses copies.
if isinstance(current_level, dict):
return current_level # Potentially json.loads(json.dumps(current_level)) for a deep copy
else: # Resolved to a non-dict, which might be valid for some simple refs but unusual for schemas
logger.warning(f"$ref '{ref_path}' resolved to a non-dictionary type: {type(current_level)}. Value: {str(current_level)[:100]}")
return {"type": "string", "description": f"Resolved $ref '{ref_path}' to non-dict: {str(current_level)[:100]}"} # Placeholder
except (KeyError, TypeError, AttributeError) as e:
logger.error(f"Failed to resolve $ref '{ref_path}': {e}", exc_info=True)
return None
def _process_schema_or_ref(self, schema_like: Any) -> Optional[Dict[str, Any]]:
"""
Processes a schema part, resolving $refs and recursively processing nested structures.
Returns a new dictionary with resolved refs, or None if resolution fails badly.
"""
if not isinstance(schema_like, dict):
if schema_like is None: return None
logger.warning(f"Expected a dictionary for schema processing, got {type(schema_like)}. Value: {str(schema_like)[:100]}")
return {"type": "string", "description": f"Schema was not a dict: {str(schema_like)[:100]}"} # Placeholder for non-dict schema
# If it's a $ref, resolve it.
if "$ref" in schema_like:
return self._resolve_ref(schema_like["$ref"]) # This will be the new base schema_like
# Create a copy to avoid modifying the original spec during processing
processed_schema = schema_like.copy()
# Recursively process 'properties' for object schemas
if "properties" in processed_schema and isinstance(processed_schema["properties"], dict):
new_properties = {}
for prop_name, prop_schema in processed_schema["properties"].items():
resolved_prop = self._process_schema_or_ref(prop_schema)
if resolved_prop is not None: # Only add if resolution was successful
new_properties[prop_name] = resolved_prop
# else: logger.warning(f"Failed to process property '{prop_name}' in {self.operation_id or self.path}")
processed_schema["properties"] = new_properties
# Recursively process 'items' for array schemas
if "items" in processed_schema and isinstance(processed_schema["items"], dict): # 'items' should be a schema object
resolved_items = self._process_schema_or_ref(processed_schema["items"])
if resolved_items is not None:
processed_schema["items"] = resolved_items
# else: logger.warning(f"Failed to process 'items' schema in {self.operation_id or self.path}")
# Handle allOf, anyOf, oneOf by trying to merge or process them (simplistic merge for allOf)
# This is a complex area of JSON Schema. This is a very basic attempt.
if "allOf" in processed_schema and isinstance(processed_schema["allOf"], list):
merged_all_of_props = {}
merged_all_of_required = set()
temp_schema_for_all_of = {"type": processed_schema.get("type", "object"), "properties": {}, "required": []}
for sub_schema_data in processed_schema["allOf"]:
resolved_sub_schema = self._process_schema_or_ref(sub_schema_data)
if resolved_sub_schema and isinstance(resolved_sub_schema, dict):
if "properties" in resolved_sub_schema:
temp_schema_for_all_of["properties"].update(resolved_sub_schema["properties"])
if "required" in resolved_sub_schema and isinstance(resolved_sub_schema["required"], list):
merged_all_of_required.update(resolved_sub_schema["required"])
# Copy other top-level keywords from the resolved_sub_schema if needed, e.g. description
for key, value in resolved_sub_schema.items():
if key not in ["properties", "required", "type", "$ref", "allOf", "anyOf", "oneOf"]:
if key not in temp_schema_for_all_of or temp_schema_for_all_of[key] is None: # prioritize existing
temp_schema_for_all_of[key] = value
if temp_schema_for_all_of["properties"]:
processed_schema["properties"] = {**processed_schema.get("properties",{}), **temp_schema_for_all_of["properties"]}
if merged_all_of_required:
current_required = set(processed_schema.get("required", []))
current_required.update(merged_all_of_required)
processed_schema["required"] = sorted(list(current_required))
del processed_schema["allOf"] # Remove allOf after processing
# Copy other merged attributes back to processed_schema
for key, value in temp_schema_for_all_of.items():
if key not in ["properties", "required", "type", "$ref", "allOf", "anyOf", "oneOf"]:
if key not in processed_schema or processed_schema[key] is None:
processed_schema[key] = value
# anyOf, oneOf are harder as they represent choices. For now, we might just list them or pick first.
# For simplicity in to_dict, we might not fully expand them but ensure refs inside are resolved.
for keyword in ["anyOf", "oneOf"]:
if keyword in processed_schema and isinstance(processed_schema[keyword], list):
processed_sub_list = []
for sub_item in processed_schema[keyword]:
resolved_sub = self._process_schema_or_ref(sub_item)
if resolved_sub:
processed_sub_list.append(resolved_sub)
if processed_sub_list: # only update if some were resolved
processed_schema[keyword] = processed_sub_list
return processed_schema
def to_dict(self) -> Dict[str, Any]:
endpoint_data = {
"method": self.method,
"path": self.path,
"summary": self.summary or "",
"title": self.summary or self.operation_id or "", # Fallback for title
"description": self.description or "",
"operationId": self.operation_id or f"{self.method.lower()}_{self.path.replace('/', '_').replace('{', '').replace('}', '')}",
"tags": self.tags,
"parameters": [],
"requestBody": None,
"responses": {},
"_source_format": "swagger/openapi",
"_swagger_raw_data": self._raw_data, # Keep raw for debugging
"_global_api_spec_for_resolution": self._global_spec # For test cases that might need to resolve further
}
# Process parameters
if "parameters" in self._raw_data and isinstance(self._raw_data["parameters"], list):
for param_data_raw in self._raw_data["parameters"]:
# Each param_data_raw could itself be a $ref or contain a schema that is a $ref
processed_param_container = self._process_schema_or_ref(param_data_raw)
if processed_param_container and isinstance(processed_param_container, dict):
# If the parameter itself was a $ref, processed_param_container is the resolved object.
# If it contained a schema that was a $ref, that nested schema should be resolved.
# We need to ensure 'schema' key exists if 'in' is path, query, header
if "schema" in processed_param_container and isinstance(processed_param_container["schema"], dict):
# schema was present, process it further (it might have been already by _process_schema_or_ref if it was a complex object)
# but if _process_schema_or_ref was called on param_data_raw which wasn't a ref itself,
# the internal 'schema' ref might not have been re-processed with full context.
# However, the recursive nature of _process_schema_or_ref should handle nested $refs.
pass # Assume it's processed by the main call to _process_schema_or_ref on param_data_raw
elif "content" in processed_param_container: # Parameter described by Content Object (OpenAPI 3.x)
pass # Content object schemas should have been resolved by _process_schema_or_ref
endpoint_data["parameters"].append(processed_param_container)
# Process requestBody
if "requestBody" in self._raw_data and isinstance(self._raw_data["requestBody"], dict):
processed_rb = self._process_schema_or_ref(self._raw_data["requestBody"])
if processed_rb:
endpoint_data["requestBody"] = processed_rb
# Process responses
if "responses" in self._raw_data and isinstance(self._raw_data["responses"], dict):
for status_code, resp_data_raw in self._raw_data["responses"].items():
processed_resp = self._process_schema_or_ref(resp_data_raw)
if processed_resp:
endpoint_data["responses"][status_code] = processed_resp
elif resp_data_raw: # If processing failed but raw exists, keep raw (though this is less ideal)
endpoint_data["responses"][status_code] = resp_data_raw
logger.warning(f"Kept raw response data for {status_code} due to processing failure for {self.operation_id or self.path}")
if not endpoint_data["responses"]: # Ensure default response if none processed
endpoint_data["responses"]["default"] = {"description": "Default response from Swagger/OpenAPI definition"}
return endpoint_data
def __repr__(self):
return f"<SwaggerEndpoint Method:{self.method} Path:{self.path} Summary:'{self.summary}'>"
class ParsedAPISpec:
"""解析后的API规范的通用基类"""
def __init__(self, spec_type: str, endpoints: List[Union[YAPIEndpoint, SwaggerEndpoint]], spec: Dict[str, Any]):
self.spec_type = spec_type
self.endpoints = endpoints
self.spec = spec # Store the original full spec dictionary, useful for $ref resolution if not pre-resolved
class ParsedYAPISpec(ParsedAPISpec):
"""解析后的YAPI规范"""
def __init__(self, endpoints: List[YAPIEndpoint], categories: List[Dict[str, Any]], spec: Dict[str, Any]):
super().__init__(spec_type="yapi", endpoints=endpoints, spec=spec)
self.categories = categories
class ParsedSwaggerSpec(ParsedAPISpec):
"""解析后的Swagger/OpenAPI规范"""
def __init__(self, endpoints: List[SwaggerEndpoint], tags: List[Dict[str, Any]], spec: Dict[str, Any]):
super().__init__(spec_type="swagger", endpoints=endpoints, spec=spec)
self.tags = tags
class InputParser:
"""负责解析输入如YAPI JSON并提取API端点信息"""
def __init__(self):
self.logger = logging.getLogger(__name__)
def parse_yapi_spec(self, file_path: str) -> Optional[ParsedYAPISpec]:
self.logger.info(f"Parsing YAPI spec from: {file_path}")
all_endpoints: List[YAPIEndpoint] = []
yapi_categories: List[Dict[str, Any]] = []
raw_spec_data_list: Optional[List[Dict[str, Any]]] = None # YAPI export is a list of categories
try:
with open(file_path, 'r', encoding='utf-8') as f:
raw_spec_data_list = json.load(f)
if not isinstance(raw_spec_data_list, list):
self.logger.error(f"YAPI spec file {file_path} does not contain a JSON list as expected for categories.")
return None
for category_data in raw_spec_data_list:
if not isinstance(category_data, dict):
self.logger.warning(f"Skipping non-dictionary item in YAPI spec list: {str(category_data)[:100]}")
continue continue
cat_name = category_data.get("name")
cat_id = category_data.get("_id", category_data.get("id")) # YAPI uses _id
yapi_categories.append({"name": cat_name, "description": category_data.get("desc"), "id": cat_id})
for api_item in api_list: for endpoint_data in category_data.get("list", []):
if not isinstance(api_item, dict): if not isinstance(endpoint_data, dict):
logger.warning(f"分类 '{category_name}' 中的 API条目格式不正确应为字典类型已跳过: {api_item}") self.logger.warning(f"Skipping non-dictionary endpoint item in category '{cat_name}': {str(endpoint_data)[:100]}")
continue continue
try:
yapi_endpoint = YAPIEndpoint(endpoint_data, category_name=cat_name, category_id=cat_id)
all_endpoints.append(yapi_endpoint)
except Exception as e_ep:
self.logger.error(f"Error processing YAPI endpoint data (ID: {endpoint_data.get('_id', 'N/A')}, Title: {endpoint_data.get('title', 'N/A')}). Error: {e_ep}", exc_info=True)
# 提取API信息 # The 'spec' for ParsedYAPISpec should be a dict representing the whole document.
path = api_item.get('path', '') # Since YAPI export is a list of categories, we wrap it.
if not path: yapi_full_spec_dict = {"yapi_categories": raw_spec_data_list}
logger.info(f"分类 '{category_name}' 中的 API条目缺少 'path',使用空字符串。 API: {api_item.get('title', '未命名')}") return ParsedYAPISpec(endpoints=all_endpoints, categories=yapi_categories, spec=yapi_full_spec_dict)
method = api_item.get('method', 'GET') except FileNotFoundError:
if api_item.get('method') is None: # 仅当原始数据中完全没有 method 字段时记录 self.logger.error(f"YAPI spec file not found: {file_path}")
logger.info(f"分类 '{category_name}' 中的 API条目 '{path}' 缺少 'method',使用默认值 'GET'") except json.JSONDecodeError as e:
title = api_item.get('title', '') self.logger.error(f"Error decoding JSON from YAPI spec file {file_path}: {e}")
if not title:
logger.info(f"分类 '{category_name}' 中的 API条目 '{path}' ({method}) 缺少 'title',使用空字符串。")
description = api_item.get('desc', '')
# 提取请求参数
req_params = api_item.get('req_params', [])
req_query = api_item.get('req_query', [])
req_headers = api_item.get('req_headers', [])
# 提取请求体信息
req_body_type = api_item.get('req_body_type', '')
req_body_other = api_item.get('req_body_other', '')
# 提取响应体信息
res_body_type = api_item.get('res_body_type', '')
res_body = api_item.get('res_body', '')
# 创建端点对象
endpoint = YAPIEndpoint(
path=path,
method=method,
title=title,
description=description,
category_name=category_name,
req_params=req_params,
req_query=req_query,
req_headers=req_headers,
req_body_type=req_body_type,
req_body_other=req_body_other,
res_body_type=res_body_type,
res_body=res_body
)
endpoints.append(endpoint)
return ParsedYAPISpec(
endpoints=endpoints,
categories=categories,
total_count=len(endpoints)
)
except Exception as e: except Exception as e:
logger.error(f"解析YAPI文件时出错: {str(e)}") self.logger.error(f"An unexpected error occurred while parsing YAPI spec {file_path}: {e}", exc_info=True)
return None return None
def parse_swagger_spec(self, file_path: str) -> Optional[ParsedSwaggerSpec]: def parse_swagger_spec(self, file_path: str) -> Optional[ParsedSwaggerSpec]:
""" self.logger.info(f"Parsing Swagger/OpenAPI spec from: {file_path}")
解析Swagger规范文件 all_endpoints: List[SwaggerEndpoint] = []
swagger_tags: List[Dict[str, Any]] = []
Args: raw_spec_data_dict: Optional[Dict[str, Any]] = None # Swagger/OpenAPI is a single root object
file_path: Swagger JSON文件路径
Returns:
Optional[ParsedSwaggerSpec]: 解析后的Swagger规范如果解析失败则返回None
"""
if not os.path.isfile(file_path):
logger.error(f"文件不存在: {file_path}")
return None
try: try:
with open(file_path, 'r', encoding='utf-8') as f: with open(file_path, 'r', encoding='utf-8') as f:
swagger_data = json.load(f) # TODO: Add YAML support if needed, e.g., using PyYAML
raw_spec_data_dict = json.load(f)
if not isinstance(swagger_data, dict): if not isinstance(raw_spec_data_dict, dict):
logger.error(f"无效的Swagger文件格式: 顶层元素应该是对象") self.logger.error(f"Swagger spec file {file_path} does not contain a JSON object as expected.")
return None return None
# 提取基本信息 swagger_tags = raw_spec_data_dict.get("tags", [])
swagger_version = swagger_data.get('swagger', swagger_data.get('openapi', '')) paths = raw_spec_data_dict.get("paths", {})
info = swagger_data.get('info', {})
host = swagger_data.get('host', '')
base_path = swagger_data.get('basePath', '')
schemes = swagger_data.get('schemes', [])
tags = swagger_data.get('tags', [])
# 创建分类列表 for path, path_item_obj in paths.items():
categories = [] if not isinstance(path_item_obj, dict): continue
for tag in tags: for method, operation_obj in path_item_obj.items():
categories.append({ # Common methods, can be extended
'name': tag.get('name', ''), if method.lower() not in ["get", "post", "put", "delete", "patch", "options", "head", "trace"]:
'desc': tag.get('description', '') continue # Skip non-standard HTTP methods or extensions like 'parameters' at path level
}) if not isinstance(operation_obj, dict): continue
try:
# 处理API路径 # Pass the full raw_spec_data_dict for $ref resolution within SwaggerEndpoint
paths = swagger_data.get('paths', {}) swagger_endpoint = SwaggerEndpoint(path, method, operation_obj, global_spec=raw_spec_data_dict)
endpoints = [] all_endpoints.append(swagger_endpoint)
except Exception as e_ep:
for path, path_item in paths.items(): self.logger.error(f"Error processing Swagger endpoint: {method.upper()} {path}. Error: {e_ep}", exc_info=True)
if not isinstance(path_item, dict):
continue
# 处理每个HTTP方法 (GET, POST, PUT, DELETE等)
for method, operation in path_item.items():
if method in ['get', 'post', 'put', 'delete', 'patch', 'options', 'head', 'trace']:
if not isinstance(operation, dict):
continue
# 提取操作信息
summary = operation.get('summary', '')
description = operation.get('description', '')
operation_id = operation.get('operationId', '')
operation_tags = operation.get('tags', [])
# 提取参数信息
parameters = operation.get('parameters', [])
# 提取响应信息
responses = operation.get('responses', {})
# 提取请求和响应的内容类型
consumes = operation.get('consumes', swagger_data.get('consumes', []))
produces = operation.get('produces', swagger_data.get('produces', []))
# 提取请求体信息 (OpenAPI 3.0 格式)
request_body = operation.get('requestBody', {})
# 创建端点对象
endpoint = SwaggerEndpoint(
path=path,
method=method.upper(),
summary=summary,
description=description,
operation_id=operation_id,
tags=operation_tags,
parameters=parameters,
responses=responses,
consumes=consumes,
produces=produces,
request_body=request_body
)
endpoints.append(endpoint)
# 创建返回对象
return ParsedSwaggerSpec(
endpoints=endpoints,
info=info,
swagger_version=swagger_version,
host=host,
base_path=base_path,
schemes=schemes,
tags=tags,
categories=categories
)
return ParsedSwaggerSpec(endpoints=all_endpoints, tags=swagger_tags, spec=raw_spec_data_dict)
except FileNotFoundError:
self.logger.error(f"Swagger spec file not found: {file_path}")
except json.JSONDecodeError as e:
self.logger.error(f"Error decoding JSON from Swagger spec file {file_path}: {e}")
except Exception as e: except Exception as e:
logger.error(f"解析Swagger文件时出错: {str(e)}") self.logger.error(f"An unexpected error occurred while parsing Swagger spec {file_path}: {e}", exc_info=True)
return None return None
def parse_business_logic_flow(self, flow_description: str) -> Optional[ParsedBusinessLogic]:
"""
Parses a business logic flow description.
The format of this description is TBD and this parser would need to be built accordingly.
Args:
flow_description: The string content describing the business logic flow.
Returns:
A ParsedBusinessLogic object or None if parsing fails.
"""
print(f"[InputParser] Placeholder: Parsing business logic flow. Content: {flow_description[:100]}...")
# Placeholder: Actual parsing logic will depend on the defined format.
return ParsedBusinessLogic(name="Example Flow", steps=["Step 1 API call", "Step 2 Validate Response"])
# Add other parsers as needed (e.g., for data object definitions)
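`SwaggerEndpoint._resolve_ref` above walks a local `#/…` reference through the loaded spec, unescaping the JSON Pointer tokens `~1` and `~0` (RFC 6901). Stripped of logging and class context, the walk is essentially the following sketch (a standalone function, not the project's actual API):

```python
from typing import Any, Dict, Optional

def resolve_local_ref(spec: Dict[str, Any], ref_path: str) -> Optional[Any]:
    """Follow a local '#/a/b/c' JSON Pointer through `spec`; None if unsupported or missing."""
    if not ref_path.startswith("#/"):
        return None  # only local refs are supported here
    node: Any = spec
    try:
        for part in ref_path[2:].split("/"):
            # RFC 6901 unescaping: "~1" -> "/", "~0" -> "~"
            node = node[part.replace("~1", "/").replace("~0", "~")]
    except (KeyError, TypeError):
        return None
    return node

spec = {"components": {"schemas": {"User": {"type": "object"}}}}
resolve_local_ref(spec, "#/components/schemas/User")  # -> {"type": "object"}
```

Note that, like the method it sketches, this returns the node itself rather than a copy, so callers who mutate the result would mutate the spec.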

View File

@@ -20,22 +20,22 @@ logger = logging.getLogger(__name__)
class ValidationResult:
    """Validation result container"""

    def __init__(self, passed: bool, errors: List[str] = None, warnings: List[str] = None):
        """
        Initialize a validation result

        Args:
            passed: Whether the data is valid according to the schema
            errors: List of error messages (if any)
            warnings: List of warning messages (if any)
        """
        self.passed = passed
        self.errors = errors or []
        self.warnings = warnings or []

    def __str__(self) -> str:
        """String representation of validation result"""
        status = "Valid" if self.passed else "Invalid"
        result = f"Validation Result: {status}\n"
        if self.errors:
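The rename from `is_valid` to `passed` ripples into every caller of `ValidationResult`. A minimal sketch of the renamed container and how calling code now reads it (the class body is re-declared inline here for illustration, with a simplified `__str__`):

```python
from typing import List

class ValidationResult:
    def __init__(self, passed: bool, errors: List[str] = None, warnings: List[str] = None):
        self.passed = passed
        self.errors = errors or []
        self.warnings = warnings or []

    def __str__(self) -> str:
        status = "Valid" if self.passed else "Invalid"
        return f"Validation Result: {status} ({len(self.errors)} error(s))"

result = ValidationResult(passed=False, errors=["'id' is a required property"])
# Callers must now check result.passed instead of the old result.is_valid
assert not result.passed
```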

View File

@@ -93,18 +93,20 @@ class BaseAPITestCase:
     use_llm_for_query_params: bool = False
     use_llm_for_headers: bool = False
-    def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None):
+    def __init__(self, endpoint_spec: Dict[str, Any], global_api_spec: Dict[str, Any], json_schema_validator: Optional[Any] = None, llm_service: Optional[Any] = None):
         """
         初始化测试用例
         Args:
             endpoint_spec: 当前被测API端点的详细定义 (来自YAPI/Swagger解析结果)
             global_api_spec: 完整的API规范文档 (来自YAPI/Swagger解析结果)
             json_schema_validator: APITestOrchestrator 传入的 JSONSchemaValidator 实例 (可选)
+            llm_service: APITestOrchestrator 传入的 LLMService 实例 (可选)
         """
         self.endpoint_spec = endpoint_spec
         self.global_api_spec = global_api_spec
         self.logger = logging.getLogger(f"testcase.{self.id}")
         self.json_schema_validator = json_schema_validator  # 存储传入的校验器实例
+        self.llm_service = llm_service  # 存储注入的 LLMService 实例
         self.logger.debug(f"Test case '{self.id}' initialized for endpoint: {self.endpoint_spec.get('method', '')} {self.endpoint_spec.get('path', '')}")
     # --- 1. 请求生成与修改阶段 ---
@@ -120,6 +122,20 @@ class BaseAPITestCase:
         self.logger.debug(f"Hook: generate_request_body, current body type: {type(current_body)}")
         return current_body
+    def generate_path_params(self, current_path_params: Dict[str, Any]) -> Dict[str, Any]:
+        """
+        允许测试用例修改或生成路径参数
+        这些参数将用于替换URL中的占位符例如 /users/{userId}
+        Args:
+            current_path_params: 从API规范或编排器默认逻辑生成的初始路径参数
+        Returns:
+            最终要使用的路径参数字典
+        """
+        self.logger.debug(f"Hook: generate_path_params, current: {current_path_params}")
+        return current_path_params
     # --- 1.5. 请求URL修改阶段 (新增钩子) ---
     def modify_request_url(self, current_url: str) -> str:
         """
@@ -180,26 +196,40 @@ class BaseAPITestCase:
             results.append(self.failed(f"{context_message_prefix} schema validation skipped: Validator not available."))
             return results
-        is_valid, errors = self.json_schema_validator.validate(data_to_validate, schema_definition)
-        if is_valid:
-            results.append(self.passed(f"{context_message_prefix} conforms to the JSON schema."))
+        # validator_result 是 JSONSchemaValidator 内部定义的 ValidationResult 对象
+        validator_result = self.json_schema_validator.validate(data_to_validate, schema_definition)
+        if validator_result.passed:
+            success_message = f"{context_message_prefix} conforms to the JSON schema."
+            # 可以选择性地将 validator 的警告信息添加到 details
+            current_details = {}
+            if validator_result.warnings:
+                current_details["schema_warnings"] = validator_result.warnings
+            # 如果 validator_result 有其他有用的成功信息,也可以加入 message 或 details
+            results.append(self.passed(success_message, details=current_details if current_details else None))
         else:
-            error_messages = []
-            if isinstance(errors, list):
-                for error in errors:  # jsonschema.exceptions.ValidationError 对象
-                    error_messages.append(f"- Path: '{list(error.path)}', Message: {error.message}")  # error.path 是一个deque
-            elif isinstance(errors, str):  # 兼容旧版或简单错误字符串
-                error_messages.append(errors)
-            full_message = f"{context_message_prefix} does not conform to the JSON schema. Errors:\n" + "\n".join(error_messages)
+            # 从 validator_result.errors 构建 message 和 details
+            error_reason = "Validation failed."
+            if validator_result.errors:
+                error_reason = "Errors:\n" + "\n".join([f"- {e}" for e in validator_result.errors])
+            full_message = f"{context_message_prefix} does not conform to the JSON schema. {error_reason}"
+            current_details = {
+                "schema_errors": validator_result.errors,
+                "validated_data_sample": str(data_to_validate)[:200]
+            }
+            if validator_result.warnings:
+                current_details["schema_warnings"] = validator_result.warnings
             results.append(self.failed(
                 message=full_message,
-                details={"schema_errors": error_messages, "validated_data_sample": str(data_to_validate)[:200]}
+                details=current_details
             ))
             self.logger.warning(f"{context_message_prefix} schema validation failed: {full_message}")
         return results
     # --- Helper to easily create a passed ValidationResult ---
     @staticmethod
     def passed(message: str, details: Optional[Dict[str, Any]] = None) -> ValidationResult:
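The newly added `generate_path_params` hook lets a custom test case override the orchestrator-supplied path parameters. A hedged sketch of a subclass using it (the `BaseAPITestCase` below is a trivial stub for illustration, not the suite's real base class):

```python
from typing import Any, Dict

class BaseAPITestCase:
    """Trivial stub of the real base class: the default hook passes params through."""
    id = "base"
    def generate_path_params(self, current_path_params: Dict[str, Any]) -> Dict[str, Any]:
        return current_path_params

class FixedUserIdTestCase(BaseAPITestCase):
    """Pins userId to a known record so the endpoint returns deterministic data."""
    id = "fixed-user-id"
    def generate_path_params(self, current_path_params: Dict[str, Any]) -> Dict[str, Any]:
        params = dict(current_path_params)  # do not mutate the orchestrator's dict
        params["userId"] = "42"
        return params

tc = FixedUserIdTestCase()
final_params = tc.generate_path_params({"userId": "llm_path_userId", "itemId": "abc"})
```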


@@ -640,21 +640,22 @@ class APITestOrchestrator:
         validation_results: List[ValidationResult] = []
         overall_status: ExecutedTestCaseResult.Status
         execution_message = ""
+        test_case_instance: Optional[BaseAPITestCase] = None  # Initialize to None
         # 将 endpoint_spec 转换为字典,如果它还不是的话
         endpoint_spec_dict: Dict[str, Any]
         if isinstance(endpoint_spec, dict):
             endpoint_spec_dict = endpoint_spec
-            self.logger.debug(f"endpoint_spec 已经是字典类型。")
+            # self.logger.debug(f"endpoint_spec 已经是字典类型。")
         elif hasattr(endpoint_spec, 'to_dict') and callable(endpoint_spec.to_dict):
             try:
                 endpoint_spec_dict = endpoint_spec.to_dict()
-                self.logger.debug(f"成功通过 to_dict() 方法将类型为 {type(endpoint_spec)} 的 endpoint_spec 转换为字典。")
+                # self.logger.debug(f"成功通过 to_dict() 方法将类型为 {type(endpoint_spec)} 的 endpoint_spec 转换为字典。")
                 if not endpoint_spec_dict:  # 如果 to_dict() 返回空字典
-                    self.logger.warning(f"endpoint_spec.to_dict() (类型: {type(endpoint_spec)}) 返回了一个空字典。")
+                    # self.logger.warning(f"endpoint_spec.to_dict() (类型: {type(endpoint_spec)}) 返回了一个空字典。")
                     # 尝试备用转换
                     if isinstance(endpoint_spec, (YAPIEndpoint, SwaggerEndpoint)):
-                        self.logger.debug(f"尝试从 {type(endpoint_spec).__name__} 对象的属性手动构建 endpoint_spec_dict。")
+                        # self.logger.debug(f"尝试从 {type(endpoint_spec).__name__} 对象的属性手动构建 endpoint_spec_dict。")
                         endpoint_spec_dict = {
                             "method": getattr(endpoint_spec, 'method', 'UNKNOWN_METHOD').upper(),
                             "path": getattr(endpoint_spec, 'path', 'UNKNOWN_PATH'),
@@ -667,7 +668,7 @@ class APITestOrchestrator:
                             "_original_object_type": type(endpoint_spec).__name__
                         }
                         if not any(endpoint_spec_dict.values()):  # 如果手动构建后仍基本为空
-                            self.logger.error(f"手动从属性构建 endpoint_spec_dict (类型: {type(endpoint_spec)}) 后仍然为空或无效。")
+                            # self.logger.error(f"手动从属性构建 endpoint_spec_dict (类型: {type(endpoint_spec)}) 后仍然为空或无效。")
                             endpoint_spec_dict = {}  # 重置为空,触发下方错误处理
             except Exception as e:
                 self.logger.error(f"调用 endpoint_spec (类型: {type(endpoint_spec)}) 的 to_dict() 方法时出错: {e}。尝试备用转换。")
@@ -691,10 +692,10 @@ class APITestOrchestrator:
                 endpoint_spec_dict = {}  # 转换失败
         elif hasattr(endpoint_spec, 'data') and isinstance(getattr(endpoint_spec, 'data'), dict):  # 兼容 YAPIEndpoint 结构
             endpoint_spec_dict = getattr(endpoint_spec, 'data')
-            self.logger.debug(f"使用了类型为 {type(endpoint_spec)} 的 endpoint_spec 的 .data 属性。")
+            # self.logger.debug(f"使用了类型为 {type(endpoint_spec)} 的 endpoint_spec 的 .data 属性。")
         else:  # 如果没有 to_dict, 也不是已知可直接访问 .data 的类型,则尝试最后的通用转换或手动构建
             if isinstance(endpoint_spec, (YAPIEndpoint, SwaggerEndpoint)):
-                self.logger.debug(f"类型为 {type(endpoint_spec).__name__} 的 endpoint_spec 没有 to_dict() 或 data尝试从属性手动构建。")
+                # self.logger.debug(f"类型为 {type(endpoint_spec).__name__} 的 endpoint_spec 没有 to_dict() 或 data尝试从属性手动构建。")
                 endpoint_spec_dict = {
                     "method": getattr(endpoint_spec, 'method', 'UNKNOWN_METHOD').upper(),
                     "path": getattr(endpoint_spec, 'path', 'UNKNOWN_PATH'),
@@ -795,7 +796,8 @@ class APITestOrchestrator:
             test_case_instance = test_case_class(
                 endpoint_spec=endpoint_spec_dict,
                 global_api_spec=global_spec_dict,
-                json_schema_validator=self.validator
+                json_schema_validator=self.validator,
+                llm_service=self.llm_service  # Pass the orchestrator's LLM service instance
             )
             self.logger.info(f"开始执行测试用例 '{test_case_instance.id}' ({test_case_instance.name}) for endpoint '{endpoint_spec_dict.get('method', 'N/A')} {endpoint_spec_dict.get('path', 'N/A')}'")
@@ -928,226 +930,199 @@ class APITestOrchestrator:
             )
         except Exception as e:
-            self.logger.error(f"执行测试用例 '{test_case_class.id if test_case_instance else test_case_class.__name__}' 时发生严重错误: {e}", exc_info=True)
+            self.logger.error(f"执行测试用例 '{test_case_class.id if hasattr(test_case_class, 'id') else test_case_class.__name__}' (在实例化阶段或之前) 时发生严重错误: {e}", exc_info=True)
+            # 如果 test_case_instance 在实例化时失败,它将是 None
+            tc_id_for_log = test_case_instance.id if test_case_instance else (test_case_class.id if hasattr(test_case_class, 'id') else "unknown_tc_id_instantiation_error")
+            tc_name_for_log = test_case_instance.name if test_case_instance else (test_case_class.name if hasattr(test_case_class, 'name') else test_case_class.__name__)
+            # 实例化失败严重性默认为CRITICAL
+            tc_severity_for_log = test_case_instance.severity if test_case_instance else TestSeverity.CRITICAL
             tc_duration = time.monotonic() - start_time
+            # validation_results 可能在此阶段为空,或包含来自先前步骤的条目(如果错误发生在实例化之后)
             return ExecutedTestCaseResult(
-                test_case_id=test_case_instance.id if test_case_instance else test_case_class.id if hasattr(test_case_class, 'id') else "unknown_tc_id",
-                test_case_name=test_case_instance.name if test_case_instance else test_case_class.name if hasattr(test_case_class, 'name') else "Unknown Test Case Name",
-                test_case_severity=test_case_instance.severity if test_case_instance else TestSeverity.CRITICAL,
+                test_case_id=tc_id_for_log,
+                test_case_name=tc_name_for_log,
+                test_case_severity=tc_severity_for_log,
                 status=ExecutedTestCaseResult.Status.ERROR,
-                validation_points=validation_results,
-                message=f"测试用例执行时发生内部错误: {str(e)}",
+                validation_points=validation_results,  # Ensure validation_results is defined (it is, at the start of the function)
+                message=f"测试用例执行时发生内部错误 (可能在实例化期间): {str(e)}",
                 duration=tc_duration
             )
     def _prepare_initial_request_data(
         self,
-        endpoint_spec: Dict[str, Any],
-        test_case_instance: Optional[BaseAPITestCase] = None
-    ) -> Tuple[str, Dict[str, Any], Dict[str, Any], Dict[str, Any], Optional[Any]]:
+        endpoint_spec: Dict[str, Any],  # 已经转换为字典
+        test_case_instance: Optional[BaseAPITestCase] = None  # 传入测试用例实例以便访问其LLM配置
+    ) -> APIRequestContext:  # 返回 APIRequestContext 对象
         """
-        根据OpenAPI端点规格和测试用例实例准备初始请求数据
-        包含端点级别的LLM参数缓存逻辑
+        根据API端点规范准备初始的请求数据包括URL模板路径参数查询参数头部和请求体
+        这些数据将作为测试用例中 generate_* 方法的输入
         """
-        method = endpoint_spec.get("method", "get").upper()
-        operation_id = endpoint_spec.get("operationId", f"{method}_{endpoint_spec.get('path', '')}")
-        endpoint_cache_key = f"{method}_{endpoint_spec.get('path', '')}"
-        self.logger.info(f"[{operation_id}] 开始为端点 {endpoint_cache_key} 准备初始请求数据 (TC: {test_case_instance.id if test_case_instance else 'N/A'})")
-        # 尝试从缓存加载参数
+        method = endpoint_spec.get('method', 'GET').upper()
+        path_template = endpoint_spec.get('path', '/')  # 这是路径模板, e.g., /users/{id}
+        operation_id = endpoint_spec.get('operationId') or f"{method}_{path_template.replace('/', '_').replace('{', '_').replace('}','')}"
+        initial_path_params: Dict[str, Any] = {}
+        initial_query_params: Dict[str, Any] = {}
+        initial_headers: Dict[str, str] = {}
+        initial_body: Optional[Any] = None
+        parameters = endpoint_spec.get('parameters', [])
-        if endpoint_cache_key in self.llm_endpoint_params_cache:
-            cached_params = self.llm_endpoint_params_cache[endpoint_cache_key]
-            self.logger.info(f"[{operation_id}] 从缓存加载了端点 '{endpoint_cache_key}' 的LLM参数。")
-            # 直接从缓存中获取各类参数,如果存在的话
-            path_params_data = cached_params.get("path_params", {})
-            query_params_data = cached_params.get("query_params", {})
-            headers_data = cached_params.get("headers", {})
-            body_data = cached_params.get("body")  # Body可能是None
-            # 即使从缓存加载仍需确保默认头部如Accept, Content-Type存在或被正确设置
-            # Content-Type应基于body_data是否存在来决定
-            default_headers = {"Accept": "application/json"}
-            if body_data is not None and method not in ["GET", "DELETE", "HEAD", "OPTIONS"]:
-                default_headers["Content-Type"] = "application/json"
-            headers_data = {**default_headers, **headers_data}  # 合并,缓存中的优先
-            self.logger.debug(f"[{operation_id}] (缓存加载) 准备的请求数据: method={method}, path_params={path_params_data}, query_params={query_params_data}, headers={list(headers_data.keys())}, body_type={type(body_data).__name__}")
-            return method, path_params_data, query_params_data, headers_data, body_data
+        # 1. 处理路径参数
+        path_param_specs = [p for p in parameters if p.get('in') == 'path']
+        for param_spec in path_param_specs:
+            name = param_spec.get('name')
+            if not name: continue
+            should_use_llm = self._should_use_llm_for_param_type("path_params", test_case_instance)
+            if should_use_llm and self.llm_service:
+                self.logger.info(f"Attempting LLM generation for path parameter '{name}' in '{operation_id}'")
+                # generated_value = self.llm_service.generate_data_for_parameter(param_spec, endpoint_spec, "path")
+                # initial_path_params[name] = generated_value if generated_value is not None else f"llm_placeholder_for_{name}"
+                initial_path_params[name] = f"llm_path_{name}"  # Placeholder
+            else:
+                if 'example' in param_spec:
+                    initial_path_params[name] = param_spec['example']
+                elif param_spec.get('schema') and 'example' in param_spec['schema']:
+                    initial_path_params[name] = param_spec['schema']['example']  # OpenAPI 3.0 `parameter.schema.example`
+                elif 'default' in param_spec.get('schema', {}):
+                    initial_path_params[name] = param_spec['schema']['default']
+                elif 'default' in param_spec:  # OpenAPI 2.0 `parameter.default`
+                    initial_path_params[name] = param_spec['default']
+                else:
+                    schema = param_spec.get('schema', {})
+                    param_type = schema.get('type', 'string')
+                    if param_type == 'integer': initial_path_params[name] = 123
+                    elif param_type == 'number': initial_path_params[name] = 1.23
+                    elif param_type == 'boolean': initial_path_params[name] = True
+                    elif param_type == 'string' and schema.get('format') == 'uuid': initial_path_params[name] = str(UUID(int=0))  # Example UUID
+                    elif param_type == 'string' and schema.get('format') == 'date': initial_path_params[name] = dt.date.today().isoformat()
+                    elif param_type == 'string' and schema.get('format') == 'date-time': initial_path_params[name] = dt.datetime.now().isoformat()
+                    else: initial_path_params[name] = f"param_{name}"
+            self.logger.debug(f"Initial path param for '{operation_id}': {name} = {initial_path_params.get(name)}")
+        # 2. 处理查询参数
+        query_param_specs = [p for p in parameters if p.get('in') == 'query']
-        # 缓存未命中,需要生成参数
-        self.logger.info(f"[{operation_id}] 端点 '{endpoint_cache_key}' 的参数未在缓存中找到,开始生成。")
-        generated_params_for_endpoint: Dict[str, Any] = {}
-        path_params_data: Dict[str, Any] = {}
-        query_params_data: Dict[str, Any] = {}
-        headers_data_generated: Dict[str, Any] = {}  # LLM或常规生成的不含默认
-        body_data: Optional[Any] = None
-        # 提取各类参数的定义列表
-        path_params_spec_list = [p for p in endpoint_spec.get("parameters", []) if p.get("in") == "path"]
-        query_params_spec_list = [p for p in endpoint_spec.get("parameters", []) if p.get("in") == "query"]
-        headers_spec_list = [p for p in endpoint_spec.get("parameters", []) if p.get("in") == "header"]
-        request_body_spec = endpoint_spec.get("requestBody", {}).get("content", {}).get("application/json", {}).get("schema")
+        for param_spec in query_param_specs:
+            name = param_spec.get('name')
+            if not name: continue
+            should_use_llm = self._should_use_llm_for_param_type("query_params", test_case_instance)
+            if should_use_llm and self.llm_service:
+                self.logger.info(f"Attempting LLM generation for query parameter '{name}' in '{operation_id}'")
+                initial_query_params[name] = f"llm_query_{name}"  # Placeholder
+            else:
+                if 'example' in param_spec:
+                    initial_query_params[name] = param_spec['example']
+                elif param_spec.get('schema') and 'example' in param_spec['schema']:
+                    initial_query_params[name] = param_spec['schema']['example']
+                elif 'default' in param_spec.get('schema', {}):
+                    initial_query_params[name] = param_spec['schema']['default']
+                elif 'default' in param_spec:
+                    initial_query_params[name] = param_spec['default']
+                else:
+                    initial_query_params[name] = f"query_val_{name}"  # Simplified default
+            self.logger.debug(f"Initial query param for '{operation_id}': {name} = {initial_query_params.get(name)}")
+        # 3. 处理请求头参数 (包括规范定义的和标准的 Content-Type/Accept)
+        header_param_specs = [p for p in parameters if p.get('in') == 'header']
+        for param_spec in header_param_specs:
+            name = param_spec.get('name')
+            if not name: continue
+            # 标准头 Content-Type 和 Accept 会在后面专门处理
+            if name.lower() in ['content-type', 'accept', 'authorization']:
+                self.logger.debug(f"Skipping standard header '{name}' in parameter processing for '{operation_id}'. It will be handled separately.")
+                continue
+            should_use_llm = self._should_use_llm_for_param_type("headers", test_case_instance)
+            if should_use_llm and self.llm_service:
+                self.logger.info(f"Attempting LLM generation for header '{name}' in '{operation_id}'")
+                initial_headers[name] = f"llm_header_{name}"  # Placeholder
+            else:
+                if 'example' in param_spec:
+                    initial_headers[name] = str(param_spec['example'])
+                elif param_spec.get('schema') and 'example' in param_spec['schema']:
+                    initial_headers[name] = str(param_spec['schema']['example'])
+                elif 'default' in param_spec.get('schema', {}):
+                    initial_headers[name] = str(param_spec['schema']['default'])
+                elif 'default' in param_spec:
+                    initial_headers[name] = str(param_spec['default'])
+                else:
+                    initial_headers[name] = f"header_val_{name}"
+            self.logger.debug(f"Initial custom header param for '{operation_id}': {name} = {initial_headers.get(name)}")
+        # 3.1 设置 Content-Type
+        # 优先从 requestBody.content 获取 (OpenAPI 3.x)
+        request_body_spec = endpoint_spec.get('requestBody', {})
+        if 'content' in request_body_spec:
+            content_types = list(request_body_spec['content'].keys())
-        # --- 1. 处理路径参数 ---
-        param_type_key = "path_params"
-        if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and path_params_spec_list:
-            self.logger.info(f"[{operation_id}] 尝试使用LLM生成路径参数。")
-            object_schema, model_name = self._build_object_schema_for_params(path_params_spec_list, f"DynamicPathParamsFor_{operation_id}")
-            if object_schema and model_name:
-                try:
-                    PydanticModel = self._create_pydantic_model_from_schema(object_schema, model_name)
-                    if PydanticModel:
-                        llm_generated = self.llm_service.generate_parameters_from_schema(
-                            PydanticModel,
-                            prompt_instruction=f"Generate valid path parameters for API operation: {operation_id}. Description: {endpoint_spec.get('description', '') or endpoint_spec.get('summary', 'N/A')}"
-                        )
+            if content_types:
+                # 优先选择 application/json 如果存在
+                initial_headers['Content-Type'] = next((ct for ct in content_types if 'json' in ct.lower()), content_types[0])
+        elif 'consumes' in endpoint_spec:  # 然后是 consumes (OpenAPI 2.0)
+            consumes = endpoint_spec['consumes']
+            if consumes:
+                initial_headers['Content-Type'] = next((c for c in consumes if 'json' in c.lower()), consumes[0])
+        elif method in ['POST', 'PUT', 'PATCH'] and not initial_headers.get('Content-Type'):
+            initial_headers['Content-Type'] = 'application/json'  # 默认对于这些方法
+        self.logger.debug(f"Initial Content-Type for '{operation_id}': {initial_headers.get('Content-Type')}")
+        # 3.2 设置 Accept
+        # 优先从 responses.<code>.content 获取 (OpenAPI 3.x)
+        responses_spec = endpoint_spec.get('responses', {})
+        accept_header_set = False
+        for code, response_def in responses_spec.items():
+            if 'content' in response_def:
+                accept_types = list(response_def['content'].keys())
+                if accept_types:
+                    initial_headers['Accept'] = next((at for at in accept_types if 'json' in at.lower() or '*/*' in at), accept_types[0])
+                    accept_header_set = True
+                    break
+        if not accept_header_set and 'produces' in endpoint_spec:  # 然后是 produces (OpenAPI 2.0)
+            produces = endpoint_spec['produces']
+            if produces:
+                initial_headers['Accept'] = next((p for p in produces if 'json' in p.lower() or '*/*' in p), produces[0])
+                accept_header_set = True
+        if not accept_header_set and not initial_headers.get('Accept'):
+            initial_headers['Accept'] = 'application/json, */*'  # 更通用的默认值
+        self.logger.debug(f"Initial Accept header for '{operation_id}': {initial_headers.get('Accept')}")
+        # 4. 处理请求体 (Body)
+        request_body_schema: Optional[Dict[str, Any]] = None
+        # 确定请求体 schema 的来源,优先 OpenAPI 3.x 的 requestBody
+        content_type_for_body_schema = initial_headers.get('Content-Type', 'application/json').split(';')[0].strip()
+        if 'content' in request_body_spec and content_type_for_body_schema in request_body_spec['content']:
+            request_body_schema = request_body_spec['content'][content_type_for_body_schema].get('schema')
+        elif 'parameters' in endpoint_spec:  # OpenAPI 2.0 (Swagger) body parameter
+            body_param = next((p for p in parameters if p.get('in') == 'body'), None)
+            if body_param and 'schema' in body_param:
+                request_body_schema = body_param['schema']
+        if request_body_schema:
+            should_use_llm_for_body = self._should_use_llm_for_param_type("body", test_case_instance)
+            if should_use_llm_for_body and self.llm_service:
+                self.logger.info(f"Attempting LLM generation for request body of '{operation_id}' with schema...")
+                initial_body = self.llm_service.generate_data_from_schema(request_body_schema, endpoint_spec, "requestBody")
+                if initial_body is None:
+                    self.logger.warning(f"LLM failed to generate request body for '{operation_id}'. Falling back to default schema generator.")
+                    initial_body = self._generate_data_from_schema(request_body_schema, context_name=f"{operation_id}_body", operation_id=operation_id)
+            else:
+                initial_body = self._generate_data_from_schema(request_body_schema, context_name=f"{operation_id}_body", operation_id=operation_id)
+            self.logger.debug(f"Initial request body generated for '{operation_id}' (type: {type(initial_body)})")
+        else:
+            self.logger.debug(f"No request body schema found or applicable for '{operation_id}' with Content-Type '{content_type_for_body_schema}'. Initial body is None.")
+        # 构造并返回APIRequestContext
+        return APIRequestContext(
+            method=method,
+            url=path_template,  # 传递路径模板, e.g. /items/{itemId}
+            path_params=initial_path_params,
+            query_params=initial_query_params,
+            headers=initial_headers,
+            body=initial_body,
+            endpoint_spec=endpoint_spec  # 传递原始的 endpoint_spec 字典
+        )
-                        if isinstance(llm_generated, dict):
-                            path_params_data = llm_generated
-                            self.logger.info(f"[{operation_id}] LLM成功生成路径参数: {path_params_data}")
-                        else:
-                            self.logger.warning(f"[{operation_id}] LLM为路径参数返回了非字典类型: {type(llm_generated)}。回退到常规生成。")
-                            path_params_data = self._generate_params_from_list(path_params_spec_list, operation_id, "path")
-                    else:
-                        path_params_data = self._generate_params_from_list(path_params_spec_list, operation_id, "path")
-                except Exception as e:
-                    self.logger.error(f"[{operation_id}] LLM生成路径参数失败: {e}。回退到常规生成。", exc_info=True)
-                    path_params_data = self._generate_params_from_list(path_params_spec_list, operation_id, "path")
-            else:  # _build_object_schema_for_params 返回 None
-                path_params_data = self._generate_params_from_list(path_params_spec_list, operation_id, "path")
-        else:  # 不使用LLM或LLM服务不可用或者 path_params_spec_list 为空但仍需确保path_params_data被赋值
-            if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and not path_params_spec_list:
-                self.logger.info(f"[{operation_id}] 配置为路径参数使用LLM但没有定义路径参数规格。")
-            # 对于不使用LLM或LLM不适用的情况或者 spec_list 为空的情况,都执行常规生成(如果 spec_list 非空则会记录)
-            if path_params_spec_list and not self._should_use_llm_for_param_type(param_type_key, test_case_instance):
-                self.logger.info(f"[{operation_id}] 使用常规方法或LLM未启用为路径参数。")
-            path_params_data = self._generate_params_from_list(path_params_spec_list, operation_id, "path")
-        generated_params_for_endpoint[param_type_key] = path_params_data
-        # --- 2. 处理查询参数 ---
-        param_type_key = "query_params"
-        if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and query_params_spec_list:
-            self.logger.info(f"[{operation_id}] 尝试使用LLM生成查询参数。")
-            object_schema, model_name = self._build_object_schema_for_params(query_params_spec_list, f"DynamicQueryParamsFor_{operation_id}")
-            if object_schema and model_name:
-                try:
-                    PydanticModel = self._create_pydantic_model_from_schema(object_schema, model_name)
-                    if PydanticModel:
-                        llm_generated = self.llm_service.generate_parameters_from_schema(
-                            PydanticModel,
-                            prompt_instruction=f"Generate valid query parameters for API operation: {operation_id}. Description: {endpoint_spec.get('description', '') or endpoint_spec.get('summary', 'N/A')}"
-                        )
-                        if isinstance(llm_generated, dict):
-                            query_params_data = llm_generated
-                            self.logger.info(f"[{operation_id}] LLM成功生成查询参数: {query_params_data}")
-                        else:
-                            self.logger.warning(f"[{operation_id}] LLM为查询参数返回了非字典类型: {type(llm_generated)}。回退到常规生成。")
-                            query_params_data = self._generate_params_from_list(query_params_spec_list, operation_id, "query")
-                    else:
-                        query_params_data = self._generate_params_from_list(query_params_spec_list, operation_id, "query")
-                except Exception as e:
-                    self.logger.error(f"[{operation_id}] LLM生成查询参数失败: {e}。回退到常规生成。", exc_info=True)
-                    query_params_data = self._generate_params_from_list(query_params_spec_list, operation_id, "query")
-            else:  # _build_object_schema_for_params 返回 None
-                query_params_data = self._generate_params_from_list(query_params_spec_list, operation_id, "query")
-        else:  # 不使用LLM或LLM服务不可用或者 query_params_spec_list 为空
-            if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and not query_params_spec_list:
-                self.logger.info(f"[{operation_id}] 配置为查询参数使用LLM但没有定义查询参数规格。")
-            if query_params_spec_list and not self._should_use_llm_for_param_type(param_type_key, test_case_instance):
-                self.logger.info(f"[{operation_id}] 使用常规方法或LLM未启用为查询参数。")
-            query_params_data = self._generate_params_from_list(query_params_spec_list, operation_id, "query")
-        generated_params_for_endpoint[param_type_key] = query_params_data
-        # --- 3. 处理头部参数 ---
-        param_type_key = "headers"
-        if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and headers_spec_list:
-            self.logger.info(f"[{operation_id}] 尝试使用LLM生成头部参数。")
-            object_schema, model_name = self._build_object_schema_for_params(headers_spec_list, f"DynamicHeadersFor_{operation_id}")
-            if object_schema and model_name:
-                try:
-                    PydanticModel = self._create_pydantic_model_from_schema(object_schema, model_name)
-                    if PydanticModel:
-                        llm_generated = self.llm_service.generate_parameters_from_schema(
-                            PydanticModel,
-                            prompt_instruction=f"Generate valid HTTP headers for API operation: {operation_id}. Description: {endpoint_spec.get('description', '') or endpoint_spec.get('summary', 'N/A')}"
-                        )
-                        if isinstance(llm_generated, dict):
-                            headers_data_generated = llm_generated  # Store LLM generated ones separately first
-                            self.logger.info(f"[{operation_id}] LLM成功生成头部参数: {headers_data_generated}")
-                        else:
-                            self.logger.warning(f"[{operation_id}] LLM为头部参数返回了非字典类型: {type(llm_generated)}。回退到常规生成。")
-                            headers_data_generated = self._generate_params_from_list(headers_spec_list, operation_id, "header")
-                    else:
-                        headers_data_generated = self._generate_params_from_list(headers_spec_list, operation_id, "header")
-                except Exception as e:
-                    self.logger.error(f"[{operation_id}] LLM生成头部参数失败: {e}。回退到常规生成。", exc_info=True)
-                    headers_data_generated = self._generate_params_from_list(headers_spec_list, operation_id, "header")
-            else:  # _build_object_schema_for_params 返回 None
-                headers_data_generated = self._generate_params_from_list(headers_spec_list, operation_id, "header")
-        else:  # 不使用LLM或LLM服务不可用或者 headers_spec_list 为空
-            if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and not headers_spec_list:
-                self.logger.info(f"[{operation_id}] 配置为头部参数使用LLM但没有定义头部参数规格。")
-            if headers_spec_list and not self._should_use_llm_for_param_type(param_type_key, test_case_instance):
-                self.logger.info(f"[{operation_id}] 使用常规方法或LLM未启用为头部参数。")
-            headers_data_generated = self._generate_params_from_list(headers_spec_list, operation_id, "header")
-        generated_params_for_endpoint[param_type_key] = headers_data_generated
-        # --- 4. 处理请求体 ---
-        param_type_key = "body"
-        if self._should_use_llm_for_param_type(param_type_key, test_case_instance) and request_body_spec:
-            self.logger.info(f"[{operation_id}] 尝试使用LLM生成请求体。")
-            model_name = f"DynamicBodyFor_{operation_id}"
-            try:
-                PydanticModel = self._create_pydantic_model_from_schema(request_body_spec, model_name)
-                if PydanticModel:
-                    llm_generated_body = self.llm_service.generate_parameters_from_schema(
-                        PydanticModel,
-                        prompt_instruction=f"Generate a valid JSON request body for API operation: {operation_id}. Description: {endpoint_spec.get('description', '') or endpoint_spec.get('summary', 'N/A')}. Schema: {json.dumps(request_body_spec, indent=2)}"
-                    )
-                    if isinstance(llm_generated_body, dict):
-                        try:
-                            body_data = PydanticModel(**llm_generated_body).model_dump(by_alias=True)
-                            self.logger.info(f"[{operation_id}] LLM成功生成并验证请求体。")
-                        except ValidationError as ve:
-                            self.logger.error(f"[{operation_id}] LLM生成的请求体未能通过Pydantic模型验证: {ve}。回退到常规生成。")
-                            body_data = self._generate_data_from_schema(request_body_spec, "requestBody", operation_id)
-                    elif isinstance(llm_generated_body, BaseModel):  # LLM直接返回模型实例
-                        body_data = llm_generated_body.model_dump(by_alias=True)
-                        self.logger.info(f"[{operation_id}] LLM成功生成请求体 (模型实例)。")
-                    else:
-                        self.logger.warning(f"[{operation_id}] LLM为请求体返回了非预期类型: {type(llm_generated_body)}。回退到常规生成。")
-                        body_data = self._generate_data_from_schema(request_body_spec, "requestBody", operation_id)
-                else:  # _create_pydantic_model_from_schema 返回 None
-                    self.logger.warning(f"[{operation_id}] 未能为请求体创建Pydantic模型。回退到常规生成。")
-                    body_data = self._generate_data_from_schema(request_body_spec, "requestBody", operation_id)
-            except Exception as e:
-                self.logger.error(f"[{operation_id}] LLM生成请求体失败: {e}。回退到常规生成。", exc_info=True)
-                body_data = self._generate_data_from_schema(request_body_spec, "requestBody", operation_id)
-        elif request_body_spec:  # 不使用LLM但有body spec
-            self.logger.info(f"[{operation_id}] 使用常规方法或LLM未启用/不适用,为请求体。")
-            body_data = self._generate_data_from_schema(request_body_spec, "requestBody", operation_id)
-        else:  # 没有requestBody定义
-            self.logger.info(f"[{operation_id}] 端点没有定义请求体。")
-            body_data = None  # 明确设为None
-        generated_params_for_endpoint[param_type_key] = body_data
-        # 合并最终的头部 (默认头部 + 生成的头部)
-        final_headers = {"Accept": "application/json"}
-        if body_data is not None and method not in ["GET", "DELETE", "HEAD", "OPTIONS"]:
-            final_headers["Content-Type"] = "application/json"
-        final_headers.update(headers_data_generated)  # headers_data_generated 是从LLM或常规生成的
-        # 将本次生成的所有参数存入缓存
-        self.llm_endpoint_params_cache[endpoint_cache_key] = generated_params_for_endpoint
-        self.logger.info(f"[{operation_id}] 端点 '{endpoint_cache_key}' 的参数已生成并存入缓存。")
-        # 确保路径参数中的值都是字符串 (URL部分必须是字符串)
-        path_params_data_str = {k: str(v) if v is not None else "" for k, v in path_params_data.items()}
-        self.logger.debug(f"[{operation_id}] (新生成) 准备的请求数据: method={method}, path_params={path_params_data_str}, query_params={query_params_data}, headers={list(final_headers.keys())}, body_type={type(body_data).__name__}")
-        return method, path_params_data_str, query_params_data, final_headers, body_data
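The rewritten `_prepare_initial_request_data` resolves each parameter's initial value by precedence: parameter `example`, then `schema.example`, then `schema.default`, then parameter `default`, then a type-based placeholder. A compact standalone sketch of that precedence (the function name and shape here are illustrative, not the suite's API):

```python
from typing import Any, Dict

def resolve_initial_value(param_spec: Dict[str, Any]) -> Any:
    """Simplified mirror of the example/default precedence used for path and query params."""
    schema = param_spec.get('schema', {})
    if 'example' in param_spec:
        return param_spec['example']
    if 'example' in schema:
        return schema['example']
    if 'default' in schema:
        return schema['default']
    if 'default' in param_spec:
        return param_spec['default']
    # Type-based fallback when the spec carries no example or default
    fallbacks = {'integer': 123, 'number': 1.23, 'boolean': True}
    return fallbacks.get(schema.get('type', 'string'),
                         f"param_{param_spec.get('name', 'unknown')}")

v1 = resolve_initial_value({'name': 'id', 'schema': {'type': 'integer'}})
v2 = resolve_initial_value({'name': 'page', 'schema': {'type': 'integer', 'default': 1}})
v3 = resolve_initial_value({'name': 'q', 'example': 'hello'})
```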
     def _build_object_schema_for_params(self, params_spec_list: List[Dict[str, Any]], model_name_base: str) -> Tuple[Optional[Dict[str, Any]], str]:
         """
@@ -1490,4 +1465,34 @@ class APITestOrchestrator:
             self.logger.debug(f"{log_prefix}_generate_data_from_schema: 未知或不支持的 schema 类型 '{schema_type}' for{context_log}. Schema: {schema}")
         return None
+    def _format_url_with_path_params(self, path_template: str, path_params: Dict[str, Any]) -> str:
+        """
+        使用提供的路径参数格式化URL路径模板
+        例如: path_template='/users/{userId}/items/{itemId}', path_params={'userId': 123, 'itemId': 'abc'}
+        会返回 '/users/123/items/abc'
+        同时处理 base_url.
+        """
+        # 首先确保 path_template 不以 '/' 开头,如果 self.base_url 已经以 '/' 结尾
+        # 或者确保它们之间只有一个 '/'
+        formatted_path = path_template
+        for key, value in path_params.items():
+            placeholder = f"{{{key}}}"
+            if placeholder in formatted_path:
+                formatted_path = formatted_path.replace(placeholder, str(value))
+            else:
+                self.logger.warning(f"路径参数 '{key}' 在路径模板 '{path_template}' 中未找到占位符。")
+        # 拼接 base_url 和格式化后的路径
+        # 确保 base_url 和 path 之间只有一个斜杠
+        if self.base_url.endswith('/') and formatted_path.startswith('/'):
+            url = self.base_url + formatted_path[1:]
+        elif not self.base_url.endswith('/') and not formatted_path.startswith('/'):
+            if formatted_path:  # 避免在 base_url 后添加不必要的 '/' (如果 formatted_path 为空)
+                url = self.base_url + '/' + formatted_path
+            else:
+                url = self.base_url
+        else:
+            url = self.base_url + formatted_path
+        return url
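The slash handling in `_format_url_with_path_params` can be exercised in isolation. A standalone sketch mirroring the same substitution and joining rules (behavior inferred from the code above, without the logger):

```python
def format_url(base_url: str, path_template: str, path_params: dict) -> str:
    """Mirror of the placeholder substitution and slash joining shown above (illustrative)."""
    path = path_template
    for key, value in path_params.items():
        # Replace each {key} placeholder with the stringified value
        path = path.replace(f"{{{key}}}", str(value))
    if base_url.endswith('/') and path.startswith('/'):
        return base_url + path[1:]
    if not base_url.endswith('/') and not path.startswith('/'):
        return base_url + '/' + path if path else base_url
    return base_url + path

url = format_url("http://api.example.com", "/users/{userId}/items/{itemId}",
                 {"userId": 123, "itemId": "abc"})
```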

log.txt (1404 lines changed)

File diff suppressed because it is too large

File diff suppressed because it is too large