  1. GitHub - vllm-project/vllm: A high-throughput and memory …

    vLLM is a fast and easy-to-use library for LLM inference and serving. Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
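    The snippet describes vLLM as a library for LLM inference and serving; as a rough illustration of what that looks like in practice, below is a minimal offline-inference sketch assuming vLLM's standard `LLM`/`SamplingParams` API, with `facebook/opt-125m` used purely as a placeholder model name.

    ```python
    # Minimal sketch of batched offline inference with vLLM.
    # Assumes the standard LLM / SamplingParams API; the model name is illustrative.
    from vllm import LLM, SamplingParams

    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    # Load the model once, then generate completions for all prompts in one batch.
    llm = LLM(model="facebook/opt-125m")
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")
    ```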

  2. GitHub - vllm-project/media-kit: vLLM Logo Assets

    This repository contains official logo assets for vLLM in various formats and styles.

  3. media-kit/vLLM-Full-Logo.svg at main - GitHub

    vLLM Logo Assets. Contribute to vllm-project/media-kit development by creating an account on GitHub.

  4. vllm/README.md at main · vllm-project/vllm - GitHub

    vLLM is a fast and easy-to-use library for LLM inference and serving. Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.

  5. vllm/docs/source/assets/logos/vllm-logo-text-light.png at main · …

    A high-throughput and memory-efficient inference and serving engine for LLMs - vllm-project/vllm

  6. Releases · vllm-project/vllm - GitHub

    A high-throughput and memory-efficient inference and serving engine for LLMs - vllm-project/vllm

  7. media-kit/README.md at main · vllm-project/media-kit - GitHub

    vLLM Logo Assets. Contribute to vllm-project/media-kit development by creating an account on GitHub.

  8. [Misc]: Brand guidelines around vLLM logo; is there a media

    Dec 10, 2024 · I am working on a website about an AI model that uses vLLM as a runtime and wanted to use the logo in a diagram. I was also looking for an SVG of this logo with a transparent background in both light and dark variants and could not find anything. What are the guidelines regarding the logo, and can it be used in this scenario? Thanks

  9. vllm/README.md at main · vllm-project/vllm - GitHub

    - If you wish to use vLLM's logo, please refer to [our media kit repo](https://github.com/vllm-project/media-kit).

  10. GitHub - HabanaAI/vllm-fork: A high-throughput and memory …

    vLLM is a fast and easy-to-use library for LLM inference and serving. Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
