getStack
GitHub repository: vllm-project / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
Built with
A list of all the technologies this repository uses, automatically extracted every week.
Language: Bash, C, C++, CSS, JavaScript, Objective-C, Python
AI: OpenAI
IaC: Helm
Monitoring: Prometheus
CI: Dependabot, GitHub Actions
Tool: Docker
Software: Grafana
Missing something?
Report a bug
Links: GitHub · Homepage
Stars: 56.2K
Forks: 9.6K
Size: 75.4 MB
Last Analyzed: 7 days ago
License: Apache 2.0