Add vLLM as one of the supported inference engines.
#2
by wangshangsam - opened
README.md
CHANGED
```diff
@@ -61,6 +61,7 @@ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated sys
 ## Software Integration:
 **Runtime Engine(s):** <br>
 * SGLang <br>
+* vLLM <br>
 
 **Supported Hardware Microarchitecture Compatibility:** <br>
 * NVIDIA Blackwell <br>
```