Commit 699fed9

Add 2.0 longevity results (#3429)
Adding longevity results for 2.0. These results are incomplete and may be inaccurate: the tests were stopped early, and the teardown scripts/functions did not collect everything properly.
1 parent cc3c907 commit 699fed9

8 files changed: +108 -0 lines changed
tests/results/longevity/2.0.0/oss.md

Lines changed: 43 additions & 0 deletions

# Results

## Summary

These results are incomplete and may be inaccurate. The test was stopped manually about a day early, and artifacts such as the logs and traffic summary were not collected properly by the teardown scripts/functions.

After the change in architecture there are fewer dashboards to collect, since we no longer have the same metrics as before, mainly those relating to reloads.

One thing of note is the significant increase in memory usage for the NGF control plane container. In our NGINX Plus run the opposite occurred: there the NGINX container showed the high memory usage.
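
For anyone trying to reproduce the memory numbers behind these dashboards, here is a minimal sketch of pulling the same kind of series from Prometheus. It assumes a Prometheus instance port-forwarded to `localhost:9090` and the standard cAdvisor `container_memory_working_set_bytes` metric; the `nginx-gateway` namespace and container labels are illustrative, not taken from the test setup.

```python
# Sketch: query working-set memory for the NGF control plane container.
# Assumes Prometheus is reachable at localhost:9090 (e.g. via kubectl
# port-forward); the namespace/container labels are illustrative.
import json
import urllib.parse
import urllib.request

PROM = "http://localhost:9090/api/v1/query"
QUERY = ('container_memory_working_set_bytes'
         '{namespace="nginx-gateway", container="nginx-gateway"}')

url = f"{PROM}?{urllib.parse.urlencode({'query': QUERY})}"
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["data"]["result"]

# One series per pod; print the current working-set size in MiB.
for series in result:
    pod = series["metric"].get("pod", "<unknown>")
    mib = float(series["value"][1]) / 2**20
    print(f"{pod}: {mib:.1f} MiB")
```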

## Test environment

NGINX Plus: false

NGINX Gateway Fabric:

- Commit: cc3c907ff668d886cac719df2d77b685370ad5f8
- Date: 2025-05-30T18:25:58Z
- Dirty: false

GKE Cluster:

- Node count: 3
- k8s version: v1.32.4-gke.1106006
- vCPUs per node: 2
- RAM per node: 4015484Ki (~3.8 GiB)
- Max pods per node: 110
- Zone: us-west2-a
- Instance Type: e2-medium

## Key Metrics

### Containers memory

![oss-memory.png](oss-memory.png)

### NGF Container Memory

![oss-ngf-memory.png](oss-ngf-memory.png)

### Containers CPU

![oss-cpu.png](oss-cpu.png)

tests/results/longevity/2.0.0/plus.md

Lines changed: 65 additions & 0 deletions

# Results

## Summary

These results are incomplete and may be inaccurate. The test was stopped manually about a day early, and artifacts such as the logs and traffic summary were not collected properly by the teardown scripts/functions.

After the change in architecture there are fewer dashboards to collect, since we no longer have the same metrics as before, mainly those relating to reloads.

One thing of note is the significant increase in memory usage for the NGINX container, the opposite of what we saw in the OSS run.
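
As a quick cross-check of per-container usage outside of the dashboards, here is a minimal sketch using the metrics-server view via `kubectl top`; it assumes `kubectl` points at the test cluster, metrics-server is installed, and the `nginx-gateway` namespace is illustrative.

```python
# Sketch: spot-check per-container CPU/memory via metrics-server.
# Assumes kubectl is configured for the test cluster; the namespace
# is illustrative.
import subprocess

out = subprocess.run(
    ["kubectl", "top", "pod", "--containers", "-n", "nginx-gateway"],
    capture_output=True, text=True, check=True,
).stdout
# Prints one row per container with CPU(cores) and MEMORY(bytes) columns.
print(out)
```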

## Test environment

NGINX Plus: true

NGINX Gateway Fabric:

- Commit: cc3c907ff668d886cac719df2d77b685370ad5f8
- Date: 2025-05-30T18:25:58Z
- Dirty: false

GKE Cluster:

- Node count: 3
- k8s version: v1.32.4-gke.1106006
- vCPUs per node: 2
- RAM per node: 4015484Ki (~3.8 GiB)
- Max pods per node: 110
- Zone: us-west2-a
- Instance Type: e2-medium

## Traffic

The traffic summaries below are empty because they were not collected properly before teardown (see Summary).

HTTP:

```text
```

HTTPS:

```text
```

## Error Logs

### nginx-gateway

### nginx

2025/06/01 15:34:12 [error] 78#78: *157671523 no live upstreams while connecting to upstream, client: 35.236.69.111, server: cafe.example.com, request: "GET /tea HTTP/1.1", upstream: "http://longevity_tea_80/tea", host: "cafe.example.com"
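
The `no live upstreams` message generally means that, at that moment, NGINX considered every endpoint in the upstream group unavailable, which can happen transiently while backends churn during the test. A minimal sketch of how one might pull error-level lines from the data plane logs, assuming `kubectl` access; the label selector is illustrative.

```python
# Sketch: collect [error]/[crit] lines from the NGINX data plane logs.
# Assumes kubectl access to the cluster; the label selector is illustrative.
import subprocess

logs = subprocess.run(
    ["kubectl", "logs", "-n", "nginx-gateway",
     "-l", "app.kubernetes.io/name=nginx",
     "--all-containers", "--tail=-1"],
    capture_output=True, text=True, check=True,
).stdout

# Keep only NGINX error-level entries.
for line in logs.splitlines():
    if "[error]" in line or "[crit]" in line:
        print(line)
```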

## Key Metrics

### Containers memory

![plus-memory.png](plus-memory.png)

### NGF Container Memory

![plus-ngf-memory.png](plus-ngf-memory.png)

### Containers CPU

![plus-cpu.png](plus-cpu.png)
