getStack
GitHub repository: vllm-project / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
Built with
All the technologies this repository uses, automatically extracted every week.
Language: Bash, C, C++, CSS, JavaScript, Objective-C, Python
AI: OpenAI
IaC: Helm
Monitoring: Prometheus
CI: Dependabot, GitHub Actions
Tool: Docker
Software: Grafana
Links: GitHub, Homepage
Stars: 52.1K
Forks: 8.7K
Size: 62.6 MB
Last Analyzed: 3 days ago
License: Apache 2.0