# WebScraper-OpenEnv: Software Design Document
**Project:** WebScraper-OpenEnv
**Version:** 1.0.0
**Hackathon:** OpenEnv – Round 1
**Author:** [Your Name]
**Date:** March 2026
---
## Table of Contents
1. [Project Overview](#1-project-overview)
2. [Real-World Motivation](#2-real-world-motivation)
3. [System Architecture](#3-system-architecture)
4. [OpenEnv Specification](#4-openenv-specification)
- 4.1 Observation Model
- 4.2 Action Model
- 4.3 Reward Model
- 4.4 Episode Lifecycle
5. [Environment State Machine](#5-environment-state-machine)
6. [Task Definitions](#6-task-definitions)
- Task 1: Static Page Field Extraction (Easy)
- Task 2: Paginated Catalog Scraping (Medium)
- Task 3: Deep Research with Search & Fact Verification (Hard)
7. [Grader Design](#7-grader-design)
8. [Reward Function Design](#8-reward-function-design)
9. [Network Layer – VPN & Proxy](#9-network-layer--vpn--proxy)
- 9.1 Architecture
- 9.2 Proxy Configuration
- 9.3 VPN Configuration
- 9.4 Public Pool
- 9.5 Settings Persistence
10. [API Endpoint Specification](#10-api-endpoint-specification)
11. [Data Models (Pydantic Schemas)](#11-data-models-pydantic-schemas)
12. [Simulated Web Environment](#12-simulated-web-environment)
13. [Baseline Inference Script](#13-baseline-inference-script)
14. [Project Structure](#14-project-structure)
15. [Dockerfile & Deployment](#15-dockerfile--deployment)
16. [openenv.yaml](#16-openenvyaml)
17. [Testing Strategy](#17-testing-strategy)
18. [Known Limitations & Future Work](#18-known-limitations--future-work)
---
## 1. Project Overview
**WebScraper-OpenEnv** is a reinforcement learning environment that challenges AI agents to perform structured **web data extraction**, a task humans and automated pipelines carry out every day for market research, competitive intelligence, lead generation, price monitoring, and data journalism.
The environment wraps a fully **self-contained simulated web server** (no external network calls required) that presents realistic HTML pages with varying structure, noise, pagination, and adversarial anti-scraping patterns. Agents must issue targeted extraction actions to retrieve structured data within budget and quality constraints.
This environment is designed to:
- Evaluate an agent's ability to **parse and reason about semi-structured HTML**
- Test **multi-step planning** across paginated or linked content
- Stress-test **robustness** when pages are noisy, misleading, or rate-limited
- Provide **dense reward signals** that guide learning rather than just measuring final output
---
## 2. Real-World Motivation
Web scraping is a core capability required across:
| Use Case | Example |
|---|---|
| E-commerce monitoring | Track competitor prices across 1,000 SKUs daily |
| Lead generation | Extract company names, emails, headcount from directories |
| Research automation | Aggregate paper titles, authors, abstracts from 5 sources |
| News intelligence | Collect headlines, dates, sources matching a keyword |
| Real estate | Pull property listings, prices, square footage from portals |
Current LLM agents struggle with scraping because it requires:
1. Selecting the right CSS/XPath selector or field label from noisy HTML
2. Knowing *when to stop* (pagination boundary detection)
3. Deduplication and normalization of extracted values
4. Graceful recovery from blocked or malformed pages
No existing OpenEnv environment covers this domain. **WebScraper-OpenEnv fills this gap.**
---
## 3. System Architecture
```
+------------------------------------------------------------------+
|                Single Docker Container (:7860)                   |
|                                                                  |
|  +------------------------------------------------------------+  |
|  |                  Vite Frontend (React)                     |  |
|  |  TaskSelector | EpisodeViewer | RewardChart | Baseline     |  |
|  |                   fetch("/api/...")                        |  |
|  +---------------------------+--------------------------------+  |
|                              | same origin                       |
|  +---------------------------v--------------------------------+  |
|  |                  FastAPI Application                       |  |
|  |                                                            |  |
|  |  /api/reset   /api/step   /api/state   /api/tasks          |  |
|  |  /api/grader  /api/baseline                                |  |
|  |  /*  ->  serves frontend/dist/index.html (SPA fallback)    |  |
|  |                                                            |  |
|  |  +---------------------+    +--------------------------+   |  |
|  |  |    WebScraperEnv    |    |    SimulatedWebServer    |   |  |
|  |  |  - episode state    |--->|  - HTML page generator   |   |  |
|  |  |  - action dispatch  |    |  - pagination engine     |   |  |
|  |  |  - reward engine    |    |  - noise injector        |   |  |
|  |  |  - grader registry  |    |  - anti-scrape simulator |   |  |
|  |  +---------------------+    +--------------------------+   |  |
|  +------------------------------------------------------------+  |
+------------------------------------------------------------------+
                 ^
                 | HTTP JSON (agents / baseline script)
                 v
       AI Agent / Baseline Script
```
**Key design decisions:**
- The simulated web server is **seeded and deterministic**: the same `task_id` + `seed` always produces the same pages, enabling reproducible evaluation.
- Pages are generated dynamically from Jinja2 templates with injected noise, not stored as static files, keeping the Docker image small.
- The environment is **stateless across HTTP requests** but maintains episode state in-memory, keyed by `episode_id`.
- The **Vite frontend** is compiled at Docker build time (Stage 1) and served as static files by FastAPI; no separate web server (nginx, etc.) is needed. Single port, single process.
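The determinism guarantee can be sketched as follows. This is a minimal illustration only; `rng_for_episode` is a hypothetical helper, not necessarily the real implementation:

```python
import hashlib
import random

def rng_for_episode(task_id: str, seed: int) -> random.Random:
    """Derive a deterministic RNG from (task_id, seed): the same pair
    always drives the page generator to identical output."""
    digest = hashlib.sha256(f"{task_id}:{seed}".encode()).digest()
    return random.Random(int.from_bytes(digest[:8], "big"))

# Two episodes with the same task and seed see identical "pages".
a = rng_for_episode("task_easy", 42)
b = rng_for_episode("task_easy", 42)
assert [a.random() for _ in range(3)] == [b.random() for _ in range(3)]
```

Hashing the pair (rather than passing `seed` alone) keeps episodes for different tasks decorrelated even when they share a seed value.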
---
## 4. OpenEnv Specification
### 4.1 Observation Model
An `Observation` is returned after every `reset()` and `step()` call.
```python
class Observation(BaseModel):
episode_id: str # UUID for the current episode
task_id: str # Task identifier ("task_easy" | "task_medium" | "task_hard")
step_number: int # Current step count (0-indexed)
current_url: str # Simulated URL of the current page
page_html: str # Raw HTML content of the current page (trimmed to 8000 chars)
page_title: str # <title> tag value
available_actions: list[str] # High-level action types available at this step
extracted_so_far: dict # Fields extracted successfully in this episode so far
pages_visited: list[str] # Ordered list of URLs visited this episode
budget_remaining: int # Remaining step budget (starts at task max_steps)
task_description: str # Human-readable task goal
target_fields: list[str] # Names of fields the agent must extract
hints: list[str] # Contextual hints (empty in hard mode)
```
**Design rationale:**
- `page_html` is included directly in the observation so agents can act without a separate fetch step. Truncated at 8,000 characters to simulate token budget pressure realistically.
- `extracted_so_far` gives the agent a running view of what it has already collected, which is critical for multi-page tasks.
- `hints` are populated for easy/medium tasks and empty for hard, creating a natural difficulty gradient.
### 4.2 Action Model
An `Action` is submitted by the agent in each `step()` call.
```python
class Action(BaseModel):
    action_type: ActionType          # Enum; see below
target_field: str | None # Field name to extract (for EXTRACT actions)
selector: str | None # CSS selector or field label hint
navigate_to: str | None # URL or "next_page" / "prev_page" keyword
    submit_extraction: dict | None   # Final field→value map (for SUBMIT action)
notes: str | None # Agent's internal reasoning note (not scored, logged)
```
```python
class ActionType(str, Enum):
EXTRACT_FIELD = "extract_field" # Extract one named field from current page
NAVIGATE = "navigate" # Go to a URL or next/prev page
SEARCH_PAGE = "search_page" # Regex/keyword search within current page HTML
INSPECT_ELEMENT = "inspect_element" # Get focused text around a CSS selector
    SUBMIT = "submit"                # Final answer; ends the episode
SKIP_PAGE = "skip_page" # Declare current page irrelevant, move on
    # --- Task 3 / Hard mode only ---------------------------------------
SEARCH_ENGINE = "search_engine" # Issue a query to the configured search engine
VERIFY_FACT = "verify_fact" # Cross-check a field value against a second source
RESOLVE_CONFLICT = "resolve_conflict" # Declare which of two conflicting values is authoritative
FETCH_URL = "fetch_url" # Fetch an arbitrary URL (uses active proxy/VPN if set)
```
**Extended `Action` model for new types:**
```python
class Action(BaseModel):
action_type: ActionType
# --- Existing fields ---
target_field: str | None = None
selector: str | None = None
navigate_to: str | None = None
submit_extraction: dict | None = None
notes: str | None = None
# --- Search engine fields ---
query: str | None = None # Query string for SEARCH_ENGINE
search_engine: str | None = None # "google" | "bing" | "brave" | "ddg" (uses settings default if None)
    result_limit: int = 5            # Max search results to return (1–10)
# --- Fact verification fields ---
field_name: str | None = None # Field to verify in VERIFY_FACT
claimed_value: str | None = None # Value to check
verification_source: str | None = None # URL to verify against
# --- Conflict resolution fields ---
conflicting_sources: list[str] | None = None # Two URLs with disagreeing values
chosen_source: str | None = None # URL the agent judges more authoritative
rationale: str | None = None # Agent's justification (logged, not scored)
```
**Design rationale:**
- Actions are **higher-level than raw HTTP**: the agent doesn't manage cookies or headers, it focuses on extraction logic.
- `INSPECT_ELEMENT` gives the agent a focused window into the DOM, rewarding agents that learn to select precisely.
- `SEARCH_ENGINE` issues a query through whichever engine the user has configured in Settings (or the environment's default). Results are returned as a ranked list of `{title, url, snippet}` objects; the agent then navigates to the most promising URL.
- `VERIFY_FACT` instructs the environment to fetch a second source and check whether the claimed value appears there. It returns a `verified: bool` and a `confidence: float`, not a definitive answer, mirroring real-world uncertainty.
- `RESOLVE_CONFLICT` is scored by the grader: if the agent picks the more authoritative source it earns a bonus; if it picks the wrong one it earns a penalty.
- `SUBMIT` is the terminal action that triggers the grader.
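Concretely, a few representative action payloads as an agent would submit them. The selectors, URLs, and values here are illustrative, not fixed by the environment:

```python
# Extract one field from the current page via a CSS selector hint.
extract = {"action_type": "extract_field",
           "target_field": "price", "selector": ".product-price"}

# Hard mode: search, then verify a value against a second source.
search = {"action_type": "search_engine",
          "query": "Acme Corp SEC filing founding year", "result_limit": 5}
verify = {"action_type": "verify_fact", "field_name": "founding_year",
          "claimed_value": "2012",
          "verification_source": "sim://regulatory.example.com/filings/acme"}

# Terminal action: submit the final field-to-value map.
submit = {"action_type": "submit",
          "submit_extraction": {"product_name": "Wireless Noise-Cancelling Headphones",
                                "price": "$89.99"}}

assert all("action_type" in a for a in (extract, search, verify, submit))
```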
### 4.3 Reward Model
```python
class Reward(BaseModel):
value: float # Reward for this step (-1.0 to +1.0)
cumulative: float # Total reward accumulated this episode
breakdown: dict # Labeled sub-rewards (for interpretability)
message: str # Human-readable explanation
```
### 4.4 Episode Lifecycle
```
reset(task_id, seed?)
    → Observation (step 0, fresh page, budget = max_steps)
step(action: EXTRACT_FIELD | NAVIGATE | ...)
    → Observation (updated state), Reward, done=False, info
step(action: SUBMIT)
    → Observation (terminal), Reward (grader score × scale), done=True, info
state()
    → Current episode state snapshot (same fields as Observation + internal metadata)
```
An episode also ends automatically if:
- `budget_remaining` reaches 0 (budget exhaustion; scores whatever was extracted)
- The agent navigates to more than `max_pages` unique URLs
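In HTTP terms, the lifecycle above maps onto the `/api/reset` and `/api/step` endpoints from the architecture diagram. A minimal stdlib-only client loop might look like the sketch below; the exact request/response field names (`episode_id`, `observation`, `reward`, `done`) follow the models in this document but are an assumption about the wire format:

```python
import json
import urllib.request

BASE = "http://localhost:7860"

def post(path: str, payload: dict) -> dict:
    """POST a JSON payload to the environment and decode the JSON reply."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def run_episode(decide, task_id: str = "task_easy", seed: int = 0) -> float:
    """Reset, act until done, return the cumulative reward.

    `decide` is the agent policy: a callable mapping an observation
    dict to an Action-shaped dict.
    """
    obs = post("/api/reset", {"task_id": task_id, "seed": seed})
    done, cumulative = False, 0.0
    while not done:
        resp = post("/api/step", {"episode_id": obs["episode_id"],
                                  "action": decide(obs)})
        obs, done = resp["observation"], resp["done"]
        cumulative = resp["reward"]["cumulative"]
    return cumulative
```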
---
## 5. Environment State Machine
```
reset()
   |
   v
+----------------+
|    RUNNING     |<---------------------------------------------+
|                |                                              |
|  step(NAV)  --> fetch_page()    --> update_obs() ------------>+
|  step(EXT)  --> extract()       --> update_obs() ------------>+
|  step(SRCH) --> search_html()   --> update_obs() ------------>+
|  step(SE)   --> search_engine() --> ranked_results ---------->+
|  step(VRF)  --> verify_fact()   --> confidence_score -------->+
|  step(RES)  --> resolve()       --> authoritative val ------->+
+-------+--------+
        |
  step(SUBMIT) or budget = 0
        |
        v
+----------------+
|    TERMINAL    |---> grader.score() ---> final Reward
+----------------+
```
**State fields stored per episode:**
| Field | Type | Description |
|---|---|---|
| `episode_id` | str | UUID |
| `task_id` | str | Active task |
| `seed` | int | RNG seed for page generation |
| `step_number` | int | Steps taken |
| `current_url` | str | Active page URL |
| `pages_visited` | list | Navigation history |
| `extracted_data` | dict | Field→value map built up by agent |
| `ground_truth` | dict | Hidden correct field→value map |
| `budget` | int | Steps remaining |
| `status` | Enum | RUNNING / TERMINAL |
| `created_at` | datetime | Episode start time |
---
## 6. Task Definitions
### Task 1: Static Page Field Extraction (Easy)
**ID:** `task_easy`
**Max Steps:** 10
**Max Pages:** 1
**Hints:** Yes
**Scenario:**
The agent is given a single product listing page for an e-commerce store. The page contains a product name, price, SKU, star rating, and number of reviews. Minimal noise. Fields are labeled clearly.
**Target Fields:**
```
product_name, price, sku, star_rating, review_count
```
**Sample Page URL:** `sim://shop.example.com/product/42`
**Ground Truth (example, seeded):**
```json
{
"product_name": "Wireless Noise-Cancelling Headphones",
"price": "$89.99",
"sku": "WNC-4421-BLK",
"star_rating": "4.3",
"review_count": "1,247"
}
```
**Success Criteria:**
- Extract all 5 fields correctly β score 1.0
- Partial credit per field (0.2 per field)
- Normalized comparison (whitespace-stripped, case-insensitive)
**Difficulty Rationale:** A capable LLM can find labeled fields in clean HTML in 1–3 steps with direct CSS selectors or simple keyword search.
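The partial-credit rule above can be expressed compactly. This is a sketch of the per-field comparison, assuming the normalized (whitespace-stripped, case-insensitive) comparison described in the success criteria:

```python
def score_task_easy(submission: dict, truth: dict) -> float:
    """Award 0.2 per correctly extracted field, comparing normalized values."""
    norm = lambda s: " ".join(str(s).split()).lower()  # collapse whitespace, lowercase
    return round(sum(0.2 for field, value in truth.items()
                     if norm(submission.get(field, "")) == norm(value)), 2)

truth = {"product_name": "Wireless Noise-Cancelling Headphones",
         "price": "$89.99", "sku": "WNC-4421-BLK",
         "star_rating": "4.3", "review_count": "1,247"}
assert score_task_easy(truth, truth) == 1.0           # all 5 fields correct
assert score_task_easy({"price": "$89.99"}, truth) == 0.2  # one field correct
```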
---
### Task 2: Paginated Catalog Scraping (Medium)
**ID:** `task_medium`
**Max Steps:** 25
**Max Pages:** 5
**Hints:** Partial (structure hint, no selector hint)
**Scenario:**
The agent must scrape a product catalog spread across 3 pages of pagination (20 items per page, 60 total items simulated). The agent must collect the **name and price of the 3 cheapest items** across all pages. Items are listed in random price order. The agent must decide whether to visit all pages or infer from partial data.
**Target Fields:**
```
cheapest_item_1_name, cheapest_item_1_price,
cheapest_item_2_name, cheapest_item_2_price,
cheapest_item_3_name, cheapest_item_3_price
```
**Complications introduced:**
- Prices use mixed formats: `$12.99`, `$12.990`, `12.99 USD`; normalization is required
- One page contains a "Featured" item injected at the top that is actually overpriced
- Pagination links use non-obvious URL patterns (`?pg=2` vs `?offset=20`)
**Grader Logic:**
1. Extract agent's top-3 cheapest items
2. Compare to ground truth top-3 (computed by environment at episode start)
3. Score = (# correctly identified items / 3) × quality bonus (if price values match within ±$0.01)
**Difficulty Rationale:** Requires multi-page navigation planning, price normalization, and sorting logic – a significant step up from single-page extraction.
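The mixed price formats and the top-3 selection can be handled with a small normalizer plus a heap. A sketch under stated assumptions (`normalize_price` and `three_cheapest` are hypothetical helpers, and the `$12.990` variant is treated as a formatting quirk, not mils):

```python
import heapq
import re

def normalize_price(raw: str) -> float:
    """'$12.99', '12.99 USD', '$12.990' -> 12.99"""
    cleaned = re.sub(r"[^\d.]", "", raw)  # keep digits and decimal point
    return round(float(cleaned), 2)

def three_cheapest(items: list[dict]) -> list[tuple[str, float]]:
    """Return (name, price) pairs for the 3 cheapest items across all pages."""
    return heapq.nsmallest(3, ((i["name"], normalize_price(i["price"]))
                               for i in items), key=lambda t: t[1])

items = [{"name": "A", "price": "$12.99"}, {"name": "B", "price": "9.50 USD"},
         {"name": "C", "price": "$15.00"}, {"name": "D", "price": "$11.25"}]
assert three_cheapest(items) == [("B", 9.5), ("D", 11.25), ("A", 12.99)]
```

An agent that visits all pages, accumulates items, and runs this selection at SUBMIT time sidesteps the overpriced "Featured" trap automatically, since that item is never among the cheapest.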
---
### Task 3: Deep Research with Search & Fact Verification (Hard)
**ID:** `task_hard`
**Max Steps:** 60
**Max Pages:** 20
**Hints:** None
**Search Engine:** Required (uses configured engine or environment default)
**Fact Verification:** Required for minimum 3 fields to achieve full score
---
**Scenario:**
The agent is given a **target entity** (a mid-size private company, randomly selected per seed) and must build a fully sourced, verified intelligence profile. No starting URL is provided; the agent must begin by issuing search engine queries to discover relevant pages. Information is distributed across 6+ simulated domains, and some fields appear only on pages discoverable via search (not linked from the entry page). At least two fields have conflicting values across sources, and the agent must explicitly resolve these conflicts to earn full credit.
---
**Target Fields (14 total, grouped by difficulty tier):**
```
── Tier 1: Basic Identity (weight 1.0x each) ──────────────────────────
company_name Full legal name of the company
headquarters_city City of primary HQ
headquarters_country Country of primary HQ
primary_industry Top-level industry category (e.g. "FinTech", "SaaS")
── Tier 2: Operational Data (weight 1.5x each) ────────────────────────
founding_year Year company was founded [CONFLICT present]
employee_count_range Bucketed range: "1-50" | "51-200" | "201-500" | "501-2000" | "2000+"
ceo_name Full name of current CEO [requires search to discover page]
product_count Number of distinct products/services listed [requires enumeration]
── Tier 3: Financial & Strategic (weight 2.0x each) ───────────────────
latest_funding_round_type Series A/B/C | Seed | Growth | IPO | Unknown
latest_funding_amount_usd   Numeric USD value (normalize: "$12M" → 12000000)
total_funding_usd Cumulative raised (may require summing across rounds) [CONFLICT present]
lead_investor Name of lead investor in latest round [search-only page]
── Tier 4: Verification Required (weight 2.5x each) ───────────────────
founding_year_verified Must call VERIFY_FACT; score only awarded if verified
ceo_name_verified Must call VERIFY_FACT from a second independent source
```
---
**Complications introduced:**
**Search-first discovery**
No entry URL is provided. The agent must use `SEARCH_ENGINE` to find a homepage, news page, and financial data page. The simulated search engine returns ranked results with varying relevance; the top result is not always the most useful one.
**Cross-domain fragmentation**
Data is spread across 6 simulated domains. No single domain holds more than 4 fields. The agent must plan a visit sequence and track what it has found vs. what is still missing.
| Domain | Fields present |
|---|---|
| `sim://company.example.com` | company_name, headquarters_city/country, primary_industry |
| `sim://directory.example.com` | founding_year (version A), employee_count_range, ceo_name |
| `sim://news.example.com` | latest_funding_round_type, latest_funding_amount_usd, lead_investor |
| `sim://finance.example.com` | total_funding_usd, founding_year (version B – conflict), product_count |
| `sim://regulatory.example.com` | founding_year (authoritative – SEC-style filing, only discoverable via search) |
| `sim://linkedin-sim.example.com` | ceo_name (second independent source for verification) |
**Deliberate conflicts**
- `founding_year`: the directory says 2011, the finance page says 2013. The regulatory filing (search-only) says 2012; this is the authoritative answer. The agent must issue a `SEARCH_ENGINE` query to find it, then `RESOLVE_CONFLICT` naming it as authoritative.
- `total_funding_usd`: news page reports latest round only; finance page has cumulative. Agent must distinguish these and report cumulative.
**Prose extraction & normalization**
- `employee_count_range` appears as "We have grown to over 800 people worldwide" → must map to `"501-2000"`
- `latest_funding_amount_usd` appears as "raised $24.5 million in Series B" → must normalize to `24500000`
- `product_count` requires counting `<li>` items inside a specific section, not reading a single labeled field
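The two prose normalizations above can be sketched with a couple of regex helpers. These are illustrative, assuming the phrasing patterns shown in this section; a robust extractor would need broader coverage:

```python
import re

# Headcount buckets as defined for employee_count_range.
BUCKETS = [(50, "1-50"), (200, "51-200"), (500, "201-500"),
           (2000, "501-2000"), (float("inf"), "2000+")]

def bucket_headcount(text: str) -> str:
    """Map prose like 'over 800 people' to the bucketed range."""
    m = re.search(r"([\d,]+)\s*(?:people|employees)", text)
    count = int(m.group(1).replace(",", ""))
    return next(label for limit, label in BUCKETS if count <= limit)

def parse_funding_usd(text: str) -> int:
    """Normalize 'raised $24.5 million in Series B' or '$12M' to integer USD."""
    m = re.search(r"\$\s*([\d.]+)\s*(million|billion|[mb])\b", text, re.I)
    value = float(m.group(1))
    scale = 1_000_000_000 if m.group(2).lower().startswith("b") else 1_000_000
    return int(value * scale)

assert bucket_headcount("We have grown to over 800 people worldwide") == "501-2000"
assert parse_funding_usd("raised $24.5 million in Series B") == 24_500_000
```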
**Simulated anti-scraping**
- `finance.example.com` returns a 429-like interstitial on the first visit; agent must either retry (costs a step) or configure a proxy/VPN in settings to bypass it
- `linkedin-sim.example.com` requires a `SEARCH_PAGE` keyword unlock before full content is accessible
**Verification gates**
Fields `founding_year_verified` and `ceo_name_verified` are only scoreable if the agent has issued a `VERIFY_FACT` action for them referencing a different domain than the one the value was originally extracted from. The grader checks the action log; extraction alone is not sufficient.
---
**Search Engine Behavior in Task 3:**
When the agent calls `SEARCH_ENGINE`, the simulated engine returns results structured as:
```json
{
"query": "Acme Corp company profile",
"results": [
{
"rank": 1,
      "title": "Acme Corp – Official Website",
"url": "sim://company.example.com/about",
"snippet": "Acme Corp is a leading SaaS platform headquartered in Austin..."
},
{
"rank": 2,
"title": "Acme Corp on Business Directory",
"url": "sim://directory.example.com/acme-corp",
"snippet": "Founded in 2011. 820 employees. CEO: Jane Doe..."
}
],
"total_results_simulated": 47,
"engine_used": "brave"
}
```
The agent can call `SEARCH_ENGINE` up to **8 times** per episode without penalty. Beyond 8 calls, each additional search costs `-0.05` reward (diminishing returns signal).
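On the agent side, the search response can be parsed into typed records before picking a URL. A dependency-free sketch using stdlib dataclasses (the production code could equally use Pydantic, as elsewhere in this document):

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    rank: int
    title: str
    url: str
    snippet: str

def parse_results(payload: dict) -> list[SearchResult]:
    """Convert the raw SEARCH_ENGINE JSON into typed results, in rank order."""
    return sorted((SearchResult(**r) for r in payload["results"]),
                  key=lambda r: r.rank)

payload = {"results": [
    {"rank": 2, "title": "Acme Corp on Business Directory",
     "url": "sim://directory.example.com/acme-corp",
     "snippet": "Founded in 2011. 820 employees. CEO: Jane Doe..."},
    {"rank": 1, "title": "Acme Corp - Official Website",
     "url": "sim://company.example.com/about",
     "snippet": "Acme Corp is a leading SaaS platform..."},
]}
assert [r.rank for r in parse_results(payload)] == [1, 2]
```

Since the top result is not always the most useful one, a good agent ranks candidates by how well the snippet matches its missing fields rather than always clicking rank 1.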
---
**Grader Logic:**
```python
def score_task_hard(submission, ground_truth, episode_state):
score = 0.0
    max_score = sum(FIELD_WEIGHTS.values())  # 23.0 weighted points: 4×1.0 + 4×1.5 + 4×2.0 + 2×2.5
for field, weight in FIELD_WEIGHTS.items():
agent_val = normalize(submission.get(field))
truth_val = normalize(ground_truth[field])
if field.endswith("_verified"):
# Only award if agent issued a VERIFY_FACT for this field
# referencing a different source than the extraction source
verify_actions = [a for a in episode_state.action_log
if a.action_type == "verify_fact"
and a.field_name == field.replace("_verified", "")]
cross_source = any(
                a.verification_source != episode_state.primary_source_for[field.replace("_verified", "")]
for a in verify_actions
)
if agent_val == truth_val and cross_source:
score += weight
elif agent_val == truth_val:
score += weight * 0.5 # Partial: correct but unverified
elif field in CONFLICT_FIELDS:
# Check agent issued RESOLVE_CONFLICT with correct authoritative source
resolve_actions = [a for a in episode_state.action_log
if a.action_type == "resolve_conflict"
and field in str(a)]
resolved_correctly = any(
a.chosen_source == AUTHORITATIVE_SOURCE[field]
for a in resolve_actions
)
if agent_val == truth_val and resolved_correctly:
score += weight
elif agent_val == truth_val:
score += weight * 0.6 # Correct value but no explicit resolution
else:
if agent_val == truth_val:
score += weight
elif partial_match(agent_val, truth_val):
score += weight * 0.4
# Coverage bonus: +0.5 if all 14 fields present in submission (even if some wrong)
coverage_bonus = 0.5 if len(submission) >= 14 else len(submission) / 14 * 0.5
raw = (score / max_score) + (coverage_bonus / (max_score + 0.5))
return min(raw, 1.0)
```
**Expected baseline scores:**
| Agent | Expected Score | Bottleneck |
|---|---|---|
| gpt-4o-mini (no tools) | ~0.20 | Cannot discover search-only pages |
| gpt-4o-mini + search | ~0.45 | Struggles with conflict resolution |
| gpt-4o (ReAct loop) | ~0.62 | Verification gate compliance |
| Human (manual) | ~0.90 | Benchmark ceiling |
**Difficulty Rationale:** This task is genuinely hard for frontier models because it requires: (1) search-first discovery with no entry URL, (2) multi-domain planning across 6 sources, (3) fact verification as a mandatory action class (not just extracting a value), (4) explicit conflict resolution with source authority reasoning, and (5) normalization of numeric and prose values. No single capability is sufficient; the agent must exercise all of them in one episode.
---
## 7. Grader Design
Each task has a dedicated `Grader` class implementing the following interface:
```python
class BaseGrader(ABC):
def score(
self,
agent_submission: dict, # The agent's SUBMIT payload
ground_truth: dict, # Hidden correct values
episode_state: EpisodeState
) -> GraderResult:
...
class GraderResult(BaseModel):
    score: float                      # 0.0 – 1.0
field_scores: dict[str, float] # Per-field breakdown
feedback: str # Human-readable explanation
penalty_applied: bool # True if penalties were triggered
penalty_reason: str | None
```
**Normalization Rules applied before comparison:**
| Field Type | Normalization |
|---|---|
| Price | Strip currency symbols, commas → float |
| Text | Strip whitespace, lowercase, remove punctuation |
| Number with commas | `"1,247"` → `1247` |
| Range | `"500-999"` bucketed comparison |
| Year | Integer comparison |
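The table above can be implemented as a single dispatch function. A sketch, assuming the caller knows each field's type (the real graders may organize this differently):

```python
import re
import string

def normalize(value, field_type: str = "text"):
    """Apply the per-type normalization rules before comparison."""
    if value is None:
        return None
    s = str(value).strip()
    if field_type == "price":
        return float(re.sub(r"[^\d.]", "", s))   # "$1,299.00" -> 1299.0
    if field_type == "number":
        return int(s.replace(",", ""))           # "1,247" -> 1247
    if field_type == "year":
        return int(s)
    # text: whitespace-stripped, lowercase, punctuation removed
    return re.sub(rf"[{re.escape(string.punctuation)}]", "", s.lower()).strip()

assert normalize("$1,299.00", "price") == 1299.0
assert normalize("1,247", "number") == 1247
assert normalize("  Wireless Headphones! ", "text") == "wireless headphones"
```

Normalizing both the submission and the ground truth through the same function is what makes the determinism guarantee below hold regardless of how the agent formatted its answers.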
**Penalties:**
- If `step_number > max_steps * 0.8` and fewer than 50% of fields extracted → efficiency penalty of -0.1
- If the agent submits more than 3 times (SUBMIT + reset-less re-attempts) → repeat penalty of -0.05 per extra submit
**Determinism guarantee:** All graders use only the seeded `ground_truth` dict and the submitted dict. No randomness at score time.
---
## 8. Reward Function Design
The reward function provides **dense signal across the full trajectory**, not just a terminal reward.
```
R_total = R_extraction + R_efficiency + R_navigation + R_terminal - R_penalty
```
### Per-Step Rewards
| Event | Reward | Rationale |
|---|---|---|
| `EXTRACT_FIELD` → correct value | +0.15 | Core task success signal |
| `EXTRACT_FIELD` → partially correct (wrong format, right content) | +0.05 | Encourages normalization learning |
| `EXTRACT_FIELD` → wrong value | -0.05 | Penalizes overconfident extraction |
| `EXTRACT_FIELD` → field already extracted | -0.10 | Penalizes redundant actions |
| `NAVIGATE` → new relevant page | +0.05 | Rewards exploration |
| `NAVIGATE` → already-visited page | -0.08 | Penalizes loops |
| `NAVIGATE` → irrelevant page (no target fields) | -0.03 | Soft penalty for bad routing |
| `SEARCH_PAGE` → finds target field hint | +0.03 | Rewards intelligent search |
| `SEARCH_PAGE` → no results | -0.01 | Small cost for wasted action |
| `INSPECT_ELEMENT` → valid selector hit | +0.02 | Rewards precision |
| `SKIP_PAGE` → page is actually irrelevant | +0.05 | Rewards correct relevance judgment |
| `SKIP_PAGE` → page contained target fields | -0.15 | Penalizes incorrect dismissal |
| `SEARCH_ENGINE` → query within 8-call budget | 0.00 | Neutral; search is a tool, not scored |
| `SEARCH_ENGINE` → discovers a new relevant domain | +0.08 | Rewards effective query formulation |
| `SEARCH_ENGINE` → call #9+ (over budget) | -0.05 | Diminishing returns signal |
| `VERIFY_FACT` → claimed value confirmed | +0.12 | Rewards verification behavior |
| `VERIFY_FACT` → claimed value contradicted | +0.08 | Still rewards checking (good epistemic practice) |
| `VERIFY_FACT` → verifying already-verified field | -0.05 | Penalizes redundant verification |
| `RESOLVE_CONFLICT` → correct authoritative source | +0.20 | High reward for correct reasoning |
| `RESOLVE_CONFLICT` → wrong authoritative source | -0.10 | Penalizes incorrect conflict resolution |
| `FETCH_URL` → returns useful content | +0.02 | Small reward for productive fetch |
| `FETCH_URL` → blocked (anti-scrape, no proxy set) | -0.03 | Mild penalty; should configure proxy |
| `FETCH_URL` → blocked page retried with proxy active, succeeds | +0.05 | Rewards using proxy correctly |
| Budget exhaustion (no SUBMIT) | -0.20 | Penalizes running out of budget |
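One way to realize this table in a reward engine is a lookup keyed by `(action, outcome)`. The sketch below uses a few representative entries; the outcome labels are hypothetical, but the values come from the table above:

```python
# Hypothetical (action_type, outcome) -> reward lookup; values from the table above.
STEP_REWARDS: dict[tuple[str, str], float] = {
    ("extract_field", "correct"): 0.15,
    ("extract_field", "partial"): 0.05,
    ("extract_field", "wrong"): -0.05,
    ("extract_field", "duplicate"): -0.10,
    ("navigate", "new_relevant"): 0.05,
    ("navigate", "revisit"): -0.08,
    ("skip_page", "irrelevant"): 0.05,
    ("skip_page", "had_targets"): -0.15,
}

def step_reward(action_type: str, outcome: str) -> float:
    # Unknown combinations default to 0.0 (neutral)
    return STEP_REWARDS.get((action_type, outcome), 0.0)
```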
### Terminal Reward (on SUBMIT)
```
R_terminal = grader_score × 2.0
```
This scales the terminal reward to dominate the trajectory reward, ensuring the agent optimizes for final output quality. For example, a grader score of 0.85 contributes `R_terminal = 1.7` on its own.
### Reward Range
- Minimum possible (all wrong, loops, budget exhausted): approximately -2.5
- Maximum possible (all correct, efficient path): approximately +2.5
- Typical good agent trajectory: +1.0 to +1.8
---
## 9. Network Layer – VPN & Proxy
The network layer is an optional but impactful system component. When active, all `NAVIGATE`, `FETCH_URL`, and `SEARCH_ENGINE` actions route outbound requests through the configured proxy or VPN. In simulation mode (default), the layer gates which simulated domains respond with 200 vs. 429, giving agents a realistic incentive to configure networking.
---
### 9.1 Architecture
```
Agent Action (FETCH_URL / NAVIGATE / SEARCH_ENGINE)
            │
            ▼
┌─────────────────────────┐
│      NetworkRouter      │
│                         │
│  active_proxy? ─────────┼──► requests.Session(proxies={...})
│  active_vpn?   ─────────┼──► subprocess → wireguard/openvpn tunnel
│  neither       ─────────┼──► direct (or blocked by anti-scrape sim)
└─────────────────────────┘
            │
            ▼
SimulatedWebServer / Real HTTP (if live mode enabled)
```
**Two operating modes:**
| Mode | Description | When used |
|---|---|---|
| `simulation` (default) | No real network; proxy/VPN settings control which simulated domains unblock | Always safe, deterministic, no credentials needed |
| `live` | Real HTTP requests routed through configured proxy/VPN | Optional; requires user-supplied credentials or public pool selection |
Mode is set in `Settings → Network → Mode`. `live` mode is off by default and requires explicit opt-in.
---
### 9.2 Proxy Configuration
Proxies can be configured three ways: user-supplied credentials, a pre-tested public proxy pool, or disabled.
**Settings model:**
```python
class ProxyConfig(BaseModel):
    enabled: bool = False
    mode: Literal["custom", "public_pool", "rotating"] = "custom"

    # ── Custom proxy (user-supplied) ──────────────────────────────
    host: str | None = None              # e.g. "proxy.mycompany.com"
    port: int | None = None              # e.g. 8080
    protocol: Literal["http", "https", "socks4", "socks5"] = "http"
    username: str | None = None          # Optional auth
    password: str | None = None          # Stored encrypted at rest (Fernet)
    auth_scheme: Literal["basic", "digest", "ntlm"] = "basic"

    # ── Public pool (no credentials required) ─────────────────────
    public_pool_provider: str | None = None       # "webshare" | "proxyscrape" | "openproxy"
    public_pool_country_filter: str | None = None # ISO 3166-1, e.g. "US", "DE"

    # ── Rotating proxy ────────────────────────────────────────────
    rotating_endpoint: str | None = None # e.g. "rotate.proxyservice.io:8080"
    rotate_every_n_requests: int = 10

    # ── Validation ────────────────────────────────────────────────
    test_url: str = "http://httpbin.org/ip"
    last_test_result: str | None = None  # "ok" | "timeout" | "auth_failed"
    last_tested_at: datetime | None = None
```
**Proxy URL construction (internal):**
```python
from urllib.parse import quote

def build_proxy_url(cfg: ProxyConfig) -> str:
    if cfg.username and cfg.password:
        # Percent-encode credentials so special characters don't break the URL
        return (
            f"{cfg.protocol}://{quote(cfg.username, safe='')}:"
            f"{quote(cfg.password, safe='')}@{cfg.host}:{cfg.port}"
        )
    return f"{cfg.protocol}://{cfg.host}:{cfg.port}"
```
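The resulting URL can then be attached to an outbound session. A minimal sketch (the helper name is hypothetical; `requests` handles SOCKS schemes when installed with the `socks` extra, as in `requirements.txt`):

```python
import requests

def session_with_proxy(proxy_url: str) -> requests.Session:
    # Route both http and https traffic through the configured proxy
    s = requests.Session()
    s.proxies = {"http": proxy_url, "https": proxy_url}
    return s
```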
**Public pool providers (pre-configured, no credentials):**
| Provider key | Type | Notes |
|---|---|---|
| `webshare` | HTTP rotating | 10 free proxies on free tier |
| `proxyscrape` | HTTP/SOCKS5 scraped list | Refreshed every 15 min |
| `openproxy` | HTTP/HTTPS | Community maintained |
The environment ships with a static list of ~50 pre-validated public proxies for simulation mode. In live mode, lists are fetched fresh from provider APIs.
---
### 9.3 VPN Configuration
VPN integration supports **WireGuard** and **OpenVPN** protocols. Users paste their config file content or fill individual fields in the Settings UI.
```python
class VPNConfig(BaseModel):
    enabled: bool = False
    protocol: Literal["wireguard", "openvpn"] = "wireguard"

    # ── WireGuard ─────────────────────────────────────────────────
    wg_config_content: str | None = None   # Full .conf file content (pasted in UI)
    wg_interface_name: str = "wg0"

    # ── OpenVPN ───────────────────────────────────────────────────
    ovpn_config_content: str | None = None # Full .ovpn file content
    ovpn_username: str | None = None
    ovpn_password: str | None = None       # Encrypted at rest

    # ── Common ────────────────────────────────────────────────────
    server_label: str | None = None        # Human label, e.g. "US East – NordVPN"
    kill_switch: bool = True               # Block requests if tunnel drops
    last_test_result: str | None = None
    last_connected_at: datetime | None = None
```
**VPN lifecycle (live mode):**
```
POST /api/settings/vpn/connect
  → writes temp config file
  → subprocess: wg-quick up wg0  OR  openvpn --daemon --config temp.ovpn
  → polls interface for IP change
  → stores connected_ip in session

POST /api/settings/vpn/disconnect
  → subprocess: wg-quick down wg0  OR  killall openvpn
  → clears connected_ip
```
In **simulation mode**, VPN is purely logical: activating it marks the session as "VPN active", which causes the simulated anti-scrape layer to allow all domain requests.
> **Docker note:** WireGuard and OpenVPN require `NET_ADMIN` and `SYS_MODULE` capabilities. The Dockerfile exposes these only if `ENABLE_LIVE_NETWORK=true` is set. HF Spaces deployment runs in simulation mode only (capabilities not available).
---
### 9.4 Public Pool (Quick Start)
For users who don't have their own proxy or VPN, the Settings UI offers a **Public Pool** tab that requires zero configuration:
| Pool name | Protocol | Speed | Reliability | Notes |
|---|---|---|---|---|
| WebShare Free | HTTP rotating | Medium | High | Registration required (free) |
| ProxyScrape | HTTP/SOCKS5 | Variable | Medium | No registration |
| OpenProxy Space | HTTP/HTTPS | Slow | Low | Community pool, use as fallback |
| Simulation Bypass | Simulated | N/A | 100% | Always available; simulation only |
Selecting "Simulation Bypass" is the recommended option for evaluation runs: it unlocks all simulated anti-scrape gates without needing real network credentials.
---
### 9.5 Settings Persistence
All network settings are stored server-side in a lightweight JSON config file (`config/network_settings.json`). Passwords and VPN configs are encrypted using **Fernet symmetric encryption** with a key derived from a server-side secret (`SETTINGS_SECRET` env var).
```python
class NetworkSettings(BaseModel):
    proxy: ProxyConfig = ProxyConfig()
    vpn: VPNConfig = VPNConfig()
    default_search_engine: Literal["google", "bing", "brave", "ddg"] = "brave"
    live_mode_enabled: bool = False
    request_timeout_seconds: int = 10
    max_retries: int = 3
    retry_backoff_factor: float = 1.5
    user_agent: str = "WebScraperOpenEnv/1.0"
```
The Settings UI reads from `GET /api/settings` and writes via `PUT /api/settings`. Passwords are never returned in GET responses; they are write-only from the UI's perspective.
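A minimal sketch of the encrypt/decrypt path, assuming the Fernet key is derived from `SETTINGS_SECRET` via SHA-256 (the actual key-derivation scheme may differ; function names are hypothetical):

```python
import base64
import hashlib
import os

from cryptography.fernet import Fernet

def _settings_cipher() -> Fernet:
    # Derive a 32-byte urlsafe-base64 Fernet key from the server-side secret
    secret = os.environ.get("SETTINGS_SECRET", "changeme").encode()
    return Fernet(base64.urlsafe_b64encode(hashlib.sha256(secret).digest()))

def encrypt_secret(plaintext: str) -> str:
    return _settings_cipher().encrypt(plaintext.encode()).decode()

def decrypt_secret(token: str) -> str:
    return _settings_cipher().decrypt(token.encode()).decode()
```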
---
## 10. API Endpoint Specification
All endpoints accept and return `application/json`.
### `POST /api/reset`
Initialize or restart an episode.
**Request:**
```json
{ "task_id": "task_easy", "seed": 42 }
```
**Response:** `Observation` model
---
### `POST /api/step`
Advance the episode by one action.
**Request:**
```json
{
  "episode_id": "uuid-...",
  "action": {
    "action_type": "extract_field",
    "target_field": "price",
    "selector": ".product-price"
  }
}
```
**Response:**
```json
{
  "observation": { "..." : "..." },
  "reward": { "value": 0.15, "cumulative": 0.15, "breakdown": {}, "message": "..." },
  "done": false,
  "info": { "step": 1, "budget_remaining": 9 }
}
```
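Client-side, the step payload can be assembled with a small helper before POSTing to `/api/step` (the helper name is hypothetical, not part of the environment's API):

```python
def build_step_request(episode_id: str, action_type: str, **fields) -> dict:
    # Assemble the /api/step body; extra action fields (target_field,
    # selector, navigate_to, ...) pass through as keyword arguments.
    return {"episode_id": episode_id, "action": {"action_type": action_type, **fields}}
```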
---
### `GET /api/state`
Return current episode state. **Query param:** `episode_id=uuid-...`
---
### `GET /api/tasks`
Return all task definitions and their action schemas.
---
### `POST /api/grader`
Score a completed episode.
**Request:**
```json
{
  "episode_id": "uuid-...",
  "submission": { "product_name": "...", "price": "..." }
}
```
**Response:** `GraderResult` model
---
### `POST /api/baseline`
Trigger the built-in baseline inference script against all 3 tasks and return scores.
**Response:**
```json
{
  "baseline_model": "gpt-4o-mini",
  "results": {
    "task_easy": { "score": 0.92, "steps": 4, "fields_correct": 5 },
    "task_medium": { "score": 0.67, "steps": 18, "fields_correct": 4 },
    "task_hard": { "score": 0.38, "steps": 54, "fields_correct": 8 }
  },
  "aggregate_score": 0.66,
  "run_id": "baseline-seed42"
}
```
---
### `GET /api/settings`
Return current network settings. **Passwords are never returned**; password fields are always `null` in the response.
**Response:** `NetworkSettings` model (with password fields nulled)
---
### `PUT /api/settings`
Update network settings (full or partial).
**Request:** Partial `NetworkSettings` object; only provided fields are updated.
```json
{
  "proxy": {
    "enabled": true,
    "mode": "custom",
    "host": "proxy.example.com",
    "port": 8080,
    "protocol": "http",
    "username": "user",
    "password": "secret"
  }
}
```
---
### `POST /api/settings/proxy/test`
Test the current proxy configuration by making a request to `test_url`.
**Response:**
```json
{
  "success": true,
  "exit_ip": "45.33.32.156",
  "latency_ms": 312,
  "error": null
}
```
---
### `POST /api/settings/vpn/connect`
Activate the configured VPN tunnel (live mode only; simulation mode returns immediate success).
**Response:**
```json
{
  "connected": true,
  "tunnel_ip": "10.8.0.2",
  "exit_ip": "185.220.101.45",
  "protocol": "wireguard",
  "error": null
}
```
---
### `POST /api/settings/vpn/disconnect`
Tear down the active VPN tunnel.
---
### `GET /api/settings/network/status`
Returns the currently active network configuration: which proxy/VPN is live right now.
**Response:**
```json
{
  "proxy_active": true,
  "proxy_host": "proxy.example.com:8080",
  "vpn_active": false,
  "vpn_server": null,
  "exit_ip": "45.33.32.156",
  "live_mode": false,
  "default_search_engine": "brave"
}
```
---
### `GET /api/settings/public-pool`
Returns the list of available public proxy/VPN pool options with current availability status.
**Response:**
```json
{
  "pools": [
    { "key": "simulation_bypass", "name": "Simulation Bypass", "available": true, "requires_auth": false },
    { "key": "webshare", "name": "WebShare Free", "available": true, "requires_auth": true },
    { "key": "proxyscrape", "name": "ProxyScrape", "available": true, "requires_auth": false },
    { "key": "openproxy", "name": "OpenProxy Space", "available": true, "requires_auth": false }
  ]
}
```
---
## 11. Data Models (Pydantic Schemas)
```python
# env/models.py
from pydantic import BaseModel, Field
from enum import Enum
from typing import Optional

class ActionType(str, Enum):
    EXTRACT_FIELD = "extract_field"
    NAVIGATE = "navigate"
    SEARCH_PAGE = "search_page"
    INSPECT_ELEMENT = "inspect_element"
    SUBMIT = "submit"
    SKIP_PAGE = "skip_page"
    # Task 3 (task_hard) actions
    SEARCH_ENGINE = "search_engine"
    VERIFY_FACT = "verify_fact"
    RESOLVE_CONFLICT = "resolve_conflict"
    FETCH_URL = "fetch_url"

class Action(BaseModel):
    action_type: ActionType
    target_field: Optional[str] = None
    selector: Optional[str] = None
    navigate_to: Optional[str] = None
    submit_extraction: Optional[dict] = None
    notes: Optional[str] = None

class Observation(BaseModel):
    episode_id: str
    task_id: str
    step_number: int
    current_url: str
    page_html: str
    page_title: str
    available_actions: list[str]
    extracted_so_far: dict
    pages_visited: list[str]
    budget_remaining: int
    task_description: str
    target_fields: list[str]
    hints: list[str]

class Reward(BaseModel):
    value: float
    cumulative: float
    breakdown: dict[str, float]
    message: str

class GraderResult(BaseModel):
    score: float = Field(ge=0.0, le=1.0)
    field_scores: dict[str, float]
    feedback: str
    penalty_applied: bool
    penalty_reason: Optional[str] = None

class EpisodeState(BaseModel):
    episode_id: str
    task_id: str
    seed: int
    step_number: int
    current_url: str
    pages_visited: list[str]
    extracted_data: dict
    budget_remaining: int
    status: str                        # "running" | "terminal"
    cumulative_reward: float
    created_at: str
    # Task 3 extras
    action_log: list[dict] = []        # Full action history for grader inspection
    search_calls_used: int = 0         # Track against 8-call free budget
    verified_fields: list[str] = []    # Fields that have passed VERIFY_FACT
    resolved_conflicts: list[str] = [] # Fields where RESOLVE_CONFLICT was issued

class SearchResult(BaseModel):
    rank: int
    title: str
    url: str
    snippet: str

class SearchEngineResponse(BaseModel):
    query: str
    results: list[SearchResult]
    total_results_simulated: int
    engine_used: str
    calls_remaining: int               # Free budget remaining (8 - used)

class VerifyFactResponse(BaseModel):
    field_name: str
    claimed_value: str
    verification_source: str
    verified: bool
    confidence: float                  # 0.0 – 1.0
    supporting_text: str | None        # Excerpt from verification source
    contradicting_text: str | None

class NetworkStatus(BaseModel):
    proxy_active: bool
    proxy_host: Optional[str]
    vpn_active: bool
    vpn_server: Optional[str]
    exit_ip: Optional[str]
    live_mode: bool
    default_search_engine: str
```
---
## 12. Simulated Web Environment
The `SimulatedWebServer` class generates HTML pages on-the-fly using Jinja2 templates seeded by a deterministic RNG.
### Page Generator Pipeline
```
seed + task_id + url
        │
        ▼
RNG (random.Random)
        │
        ▼
Template Selector ──► Jinja2 template
        │
        ▼
Data Populator (products / company profiles / etc.)
        │
        ▼
Noise Injector ──► adds decoy elements, broken tags, ads
        │
        ▼
Anti-Scrape Layer ──► conditionally adds interstitials (task_hard)
        │
        ▼
HTML string (max 8,000 chars)
```
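The deterministic seeding at the top of the pipeline can be sketched as deriving a per-page RNG from `(seed, task_id, url)`; the function name is hypothetical:

```python
import hashlib
import random

def page_rng(seed: int, task_id: str, url: str) -> random.Random:
    # The same (seed, task_id, url) triple always yields an identically
    # seeded RNG, so regenerated pages are byte-identical across episodes.
    digest = hashlib.sha256(f"{seed}:{task_id}:{url}".encode()).hexdigest()
    return random.Random(int(digest, 16))
```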
### Noise Types by Task
| Noise Type | Easy | Medium | Hard |
|---|---|---|---|
| Decoy fields with similar labels | ✗ | ✓ | ✓ |
| Inconsistent price formatting | ✗ | ✓ | ✓ |
| Broken/unclosed HTML tags | ✗ | ✗ | ✓ |
| Interstitial blocking page | ✗ | ✗ | ✓ |
| Contradictory values across pages | ✗ | ✗ | ✓ |
| JavaScript-only content (noscript fallback) | ✗ | ✗ | ✓ |
| Paginated content (multi-page) | ✗ | ✓ | ✓ |
### URL Scheme
Simulated URLs follow the pattern `sim://<domain>/<path>`. The environment maps these to page generators internally β no DNS or network calls occur.
```
sim://shop.example.com/product/42           → product page (task_easy)
sim://catalog.example.com/products?pg=1     → catalog page 1 of 3 (task_medium)
sim://company.example.com/about             → company homepage (task_hard)
sim://directory.example.com/org/acme        → directory listing (task_hard)
sim://news.example.com/search?q=acme        → news aggregator (task_hard)
sim://finance.example.com/ticker/ACME       → financial data (task_hard, 429 gate)
sim://regulatory.example.com/filings/ACME   → SEC-style filing (task_hard, search-only)
sim://linkedin-sim.example.com/company/acme → LinkedIn-style profile (task_hard, keyword gate)
```
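Internally, the `sim://` mapping can be a simple dispatch on the parsed URL. A sketch under the assumption that each domain maps to one page-generator key (generator names are hypothetical):

```python
from urllib.parse import urlparse

def route_sim_url(url: str) -> str:
    # Map a sim:// URL to a page-generator key; no DNS or network involved.
    domain = urlparse(url).netloc
    generators = {
        "shop.example.com": "product_page",
        "catalog.example.com": "catalog_page",
        "company.example.com": "company_page",
        "finance.example.com": "finance_page",
    }
    return generators.get(domain, "not_found")
```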
**Anti-scrape simulation by domain:**
| Domain | Block type | Bypass method |
|---|---|---|
| `finance.example.com` | 429 Rate-limit on first visit | Retry after 1 step, or activate proxy |
| `linkedin-sim.example.com` | Keyword gate | `SEARCH_PAGE` with keyword "view_profile" |
| `regulatory.example.com` | Not linked; only discoverable via search | `SEARCH_ENGINE` with relevant query |
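A sketch of how the 429 gate above might be evaluated per request (the signature is hypothetical; the real logic lives in the anti-scrape layer, and the VPN bypass is omitted here):

```python
def gate_response(domain: str, prior_visits: int, proxy_active: bool) -> int:
    # finance.example.com: 429 on the first un-proxied visit,
    # 200 on a retry or when a proxy is configured.
    if domain == "finance.example.com" and prior_visits == 0 and not proxy_active:
        return 429
    return 200
```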
---
## 13. Baseline Inference Script
`scripts/baseline.py` uses the OpenAI API to run a ReAct-style loop against the environment.
### Agent Strategy
```
System Prompt:
  You are a web scraping agent. You will be given an HTML page and a list
  of fields to extract. Use the available actions to extract all target
  fields as efficiently as possible and then submit your findings.

Loop:
  1. Call /api/reset with task_id and seed=42
  2. While not done:
     a. Format observation as: current URL, page HTML (truncated),
        fields still needed, steps remaining
     b. Prompt LLM for next action in JSON format
     c. Parse action → POST /api/step
     d. If done: record score
  3. Report all 3 task scores
```
### Configuration
Read from environment variables:
```
OPENAI_API_KEY=...
BASELINE_MODEL=gpt-4o-mini # default
BASELINE_SEED=42
BASELINE_MAX_RETRIES=3
```
### Reproducibility
- Fixed seed=42 for all tasks
- Deterministic page generation
- Temperature=0 for LLM calls
- Results logged to `results/baseline_<timestamp>.json`
### Expected Baseline Scores (gpt-4o-mini)
| Task | Expected Score | Notes |
|---|---|---|
| task_easy | ~0.90 | Near-perfect on clean pages |
| task_medium | ~0.60 | Pagination handling is tricky |
| task_hard | ~0.35 | Multi-source coordination challenges |
| **Aggregate** | **~0.62** | |
---
## 14. Project Structure
```
webscraper-openenv/
├── README.md
├── openenv.yaml
├── Dockerfile
├── requirements.txt
│
├── frontend/                          # Vite + React app
│   ├── package.json
│   ├── vite.config.ts
│   ├── index.html
│   └── src/
│       ├── main.tsx
│       ├── App.tsx
│       ├── components/
│       │   ├── TaskSelector.tsx       # Pick task_easy / task_medium / task_hard
│       │   ├── EpisodeViewer.tsx      # Live observation display
│       │   ├── ActionPanel.tsx        # Manual action builder (for debugging)
│       │   ├── RewardChart.tsx        # Cumulative reward over steps
│       │   ├── BaselineRunner.tsx     # Trigger /api/baseline and show scores
│       │   └── settings/
│       │       ├── SettingsPage.tsx          # Top-level settings shell (tabbed layout)
│       │       ├── ProxySettings.tsx         # Proxy config form (custom / public pool / rotating)
│       │       ├── VPNSettings.tsx           # VPN config form (WireGuard / OpenVPN file paste)
│       │       ├── PublicPoolPicker.tsx      # Zero-config public proxy/VPN picker
│       │       ├── NetworkStatus.tsx         # Live badge: proxy active, VPN active, exit IP
│       │       └── SearchEngineSelector.tsx  # Default search engine picker
│       ├── hooks/
│       │   ├── useEpisode.ts          # Manages episode state via REST
│       │   ├── useNetworkSettings.ts  # Read/write /api/settings
│       │   └── useNetworkStatus.ts    # Polls /api/settings/network/status
│       └── api/
│           ├── client.ts              # Typed fetch wrappers for all endpoints
│           └── settingsClient.ts      # Settings-specific API calls
│
├── env/
│   ├── __init__.py
│   ├── environment.py                 # WebScraperEnv (step/reset/state)
│   ├── models.py                      # All Pydantic models
│   ├── reward.py                      # RewardEngine
│   ├── state.py                       # EpisodeState management
│   ├── tasks/
│   │   ├── task_easy.py
│   │   ├── task_medium.py
│   │   └── task_hard.py               # Includes search engine + verify + resolve logic
│   └── simulator/
│       ├── web_server.py
│       ├── page_generator.py
│       ├── search_engine.py           # SimulatedSearchEngine (ranked results by seed)
│       ├── fact_verifier.py           # FactVerifier (cross-source consistency check)
│       ├── noise_injector.py
│       └── templates/
│           ├── product.html
│           ├── catalog.html
│           ├── company.html
│           ├── directory.html
│           ├── news.html
│           ├── finance.html
│           ├── regulatory.html        # New: SEC-style filing page
│           └── linkedin_sim.html      # New: LinkedIn-style profile page
│
├── network/
│   ├── __init__.py
│   ├── router.py                      # NetworkRouter (proxy/VPN dispatch)
│   ├── proxy_manager.py               # ProxyManager (build URL, test, rotate)
│   ├── vpn_manager.py                 # VPNManager (wg-quick / openvpn subprocess)
│   ├── public_pool.py                 # PublicPoolFetcher (webshare, proxyscrape, openproxy)
│   └── settings_store.py              # Encrypted read/write of network_settings.json
│
├── config/
│   └── network_settings.json          # Persisted settings (passwords Fernet-encrypted)
│
├── api/
│   ├── __init__.py
│   ├── main.py                        # FastAPI app + static file mount
│   ├── routes/
│   │   ├── env_routes.py              # /api/reset, /api/step, /api/state, etc.
│   │   └── settings_routes.py         # /api/settings/*, /api/settings/vpn/*, etc.
│   └── schemas.py
│
├── scripts/
│   ├── baseline.py
│   └── validate.py
│
├── tests/
│   ├── test_environment.py
│   ├── test_graders.py
│   ├── test_reward.py
│   ├── test_task3_search.py           # Search engine + verify + resolve tests
│   ├── test_network.py                # Proxy/VPN config + routing tests
│   └── test_api.py
│
└── results/
    └── baseline_seed42.json
```
---
## 15. Dockerfile & Deployment
Everything ships in a **single Docker container**. The build is a two-stage process: Stage 1 compiles the Vite frontend into static files; Stage 2 installs the Python backend and copies the compiled frontend in. FastAPI then serves both the API and the frontend from port 7860.
### Request Routing (single port)
```
Port 7860
  │
  ├── /api/*    → FastAPI routes (all OpenEnv endpoints)
  ├── /assets/* → Vite static assets (JS, CSS, chunks)
  └── /*        → index.html (SPA catch-all, handled by FastAPI StaticFiles)
```
FastAPI mounts the Vite build output (`frontend/dist/`) as a `StaticFiles` directory and adds a catch-all `GET /{full_path}` route that returns `index.html` so client-side routing works correctly.
```python
# api/main.py (relevant additions)
from fastapi.staticfiles import StaticFiles
from fastapi.responses import FileResponse

app.mount("/assets", StaticFiles(directory="frontend/dist/assets"), name="assets")

@app.get("/{full_path:path}", include_in_schema=False)
async def spa_fallback(full_path: str):
    return FileResponse("frontend/dist/index.html")
```
All API routes are prefixed with `/api` to avoid collisions with the SPA router:
```
POST /api/reset
POST /api/step
GET /api/state
GET /api/tasks
POST /api/grader
POST /api/baseline
```
The Vite frontend calls `fetch("/api/...")`; no base URL configuration is needed in production since everything is on the same origin.
---
### Dockerfile (multi-stage)
```dockerfile
# ── Stage 1: Build Vite frontend ─────────────────────────────────────
FROM node:20-slim AS frontend-builder
WORKDIR /frontend
COPY frontend/package.json frontend/package-lock.json ./
RUN npm ci
COPY frontend/ ./
RUN npm run build
# Output: /frontend/dist/
# ── Stage 2: Python backend + compiled frontend ──────────────────────
FROM python:3.11-slim
WORKDIR /app
# System packages:
#   wireguard-tools + iproute2 → wg-quick (live VPN, only used if ENABLE_LIVE_NETWORK=true)
#   openvpn                    → OpenVPN tunnel (same gate)
#   curl                       → proxy connectivity tests
RUN apt-get update && apt-get install -y --no-install-recommends \
      wireguard-tools \
      iproute2 \
      openvpn \
      curl \
    && rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy backend source
COPY env/ ./env/
COPY network/ ./network/
COPY api/ ./api/
COPY scripts/ ./scripts/
COPY results/ ./results/
COPY config/ ./config/
COPY openenv.yaml .
# Copy compiled frontend from stage 1
COPY --from=frontend-builder /frontend/dist ./frontend/dist
ENV PYTHONUNBUFFERED=1
ENV PORT=7860
# ENABLE_LIVE_NETWORK=false → simulation mode (safe default, no NET_ADMIN needed)
# ENABLE_LIVE_NETWORK=true  → real proxy/VPN (requires --cap-add NET_ADMIN SYS_MODULE)
ENV ENABLE_LIVE_NETWORK=false
ENV SETTINGS_SECRET=changeme_generate_a_real_key_in_production
EXPOSE 7860
CMD ["uvicorn", "api.main:app", "--host", "0.0.0.0", "--port", "7860"]
```
**Live network mode (local only, not for HF Spaces):**
```bash
docker run -p 7860:7860 \
  --cap-add NET_ADMIN \
  --cap-add SYS_MODULE \
  --sysctl net.ipv4.conf.all.src_valid_mark=1 \
  -e ENABLE_LIVE_NETWORK=true \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -e SETTINGS_SECRET=$(openssl rand -hex 32) \
  webscraper-openenv
```
---
### requirements.txt
```
fastapi>=0.110.0
uvicorn[standard]>=0.29.0
pydantic>=2.6.0
jinja2>=3.1.3
openai>=1.20.0
pytest>=8.1.0
httpx>=0.27.0
aiofiles>=23.2.1 # FastAPI StaticFiles
cryptography>=42.0.0 # Fernet encryption for stored credentials
requests[socks]>=2.31.0 # SOCKS4/5 proxy support
```
During local development, Vite's dev server runs on `:5173` and the FastAPI backend runs on `:8000`. Vite's dev-server proxy forwards all `/api` calls to the backend to avoid CORS issues:
```typescript
// frontend/vite.config.ts
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      '/api': {
        target: 'http://localhost:8000',
        changeOrigin: true,
      }
    }
  }
})
```
In production (inside Docker), no proxy is needed; both frontend and backend are served on port 7860.
---
### Local Development Workflow
```bash
# Option A: Full Docker (production-identical)
docker build -t webscraper-openenv .
docker run -p 7860:7860 -e OPENAI_API_KEY=$OPENAI_API_KEY webscraper-openenv
# Visit: http://localhost:7860
# Option B: Split dev servers (fast iteration)
# Terminal 1 β backend
uvicorn api.main:app --reload --port 8000
# Terminal 2 β frontend
cd frontend && npm run dev
# Visit: http://localhost:5173 (proxies API to :8000)
```
### Build & Smoke Test
```bash
docker build -t webscraper-openenv .
docker run -d -p 7860:7860 webscraper-openenv
# Smoke test the API
curl http://localhost:7860/api/tasks
# Smoke test the frontend is served
curl -s http://localhost:7860 | grep -q "<div id=\"root\">" && echo "Frontend OK"
# Full reset/step cycle
curl -X POST http://localhost:7860/api/reset \
  -H "Content-Type: application/json" \
  -d '{"task_id": "task_easy", "seed": 42}'
```
### Hugging Face Spaces Deployment
The Space will be tagged with `openenv` and configured as:
- **SDK:** Docker
- **App port:** 7860
- **Secrets:** `OPENAI_API_KEY` set via HF Secrets UI
- No extra build steps needed; the Dockerfile handles `npm ci && npm run build` internally in Stage 1
---
## 16. openenv.yaml
```yaml
name: webscraper-openenv
version: "1.0.0"
description: >
  A web scraping environment where AI agents extract structured data
  from simulated HTML pages with varying complexity, pagination,
  and adversarial noise patterns.
author: "[Your Name]"
license: MIT
tags:
  - openenv
  - web-scraping
  - information-extraction
  - nlp
  - real-world
tasks:
  - id: task_easy
    name: "Static Page Field Extraction"
    difficulty: easy
    max_steps: 10
    description: "Extract 5 product fields from a single clean product page."
  - id: task_medium
    name: "Paginated Catalog Scraping"
    difficulty: medium
    max_steps: 25
    description: "Find the 3 cheapest items across 3 pages of a product catalog."
  - id: task_hard
    name: "Multi-Source Research Aggregation"
    difficulty: hard
    max_steps: 40
    description: "Aggregate a company profile from 4 different simulated web sources."
api:
  reset: POST /api/reset
  step: POST /api/step
  state: GET /api/state
  tasks: GET /api/tasks
  grader: POST /api/grader
  baseline: POST /api/baseline
observation_space:
  type: structured
  fields:
    - page_html: string
    - current_url: string
    - extracted_so_far: object
    - budget_remaining: integer
    - target_fields: array
action_space:
  type: structured
  action_types:
    - extract_field
    - navigate
    - search_page
    - inspect_element
    - submit
    - skip_page
    - search_engine
    - verify_fact
    - resolve_conflict
    - fetch_url
reward_range: [-2.5, 2.5]
episode_termination:
  - "SUBMIT action called"
  - "budget_remaining reaches 0"
```
---
## 17. Testing Strategy
### Unit Tests
**`test_graders.py`**
- Test each grader with a perfect submission → expect score = 1.0
- Test each grader with an empty submission → expect score = 0.0
- Test partial submissions → expect intermediate scores
- Test normalization edge cases (price formats, whitespace, encoding)
**`test_reward.py`**
- Correct extraction event → reward > 0
- Redundant extraction → reward < 0
- Navigation loop → cumulative negative reward
- SUBMIT with perfect answer → large positive reward
**`test_environment.py`**
- `reset()` returns clean state with step_number=0
- `state()` after 3 steps returns step_number=3
- Budget exhaustion terminates episode
- Same seed produces identical HTML
### Integration Tests
**`test_api.py`**
- Full episode run via HTTP for each task
- `/baseline` endpoint completes without error
- `/grader` returns score in [0.0, 1.0]
- Invalid episode_id returns 404
### Validation
```bash
openenv validate .
```
Expected: All checks pass, spec compliance confirmed.
---
## 18. Known Limitations & Future Work
| Limitation | Impact | Future Fix |
|---|---|---|
| HTML truncated to 8,000 chars | Very long pages lose content | Configurable window + scrolling action |
| No JavaScript rendering simulation | JS-heavy sites not fully modeled | Add iframe/shadow DOM simulation |
| Single in-memory episode store | Not horizontally scalable | Redis-backed episode store |
| English-only pages | Non-English scraping not tested | Multilingual page templates |
| Fixed set of 3 tasks | Limited evaluation breadth | Procedural task generation with task_level param |
| No rate limiting simulation in easy/medium | Less realistic for those tiers | Progressive rate limiting across difficulty |
---
*End of Software Design Document*
*WebScraper-OpenEnv β OpenEnv Round 1 Submission*