Presenting local AI-powered software options for tasks such as image & text generation, automatic speech recognition, and frame interpolation.
LLM Server Setup Part 2 – Container Tools
This post is Part 2 in a series on how to configure a system for LLM deployments and development usage. Part 2 covers installing and configuring container tools: Docker and NVIDIA Enroot.
LLM Server Setup Part 1 – Base OS
This post is Part 1 in a series on how to configure a system for LLM deployments and development usage. The configuration will be suitable for multi-user deployments and also useful for smaller development systems. Part 1 is about the base Linux server setup.
Can You Run A State-Of-The-Art LLM On-Prem For A Reasonable Cost?
In this post we address the question that’s been on everyone’s mind: Can you run a state-of-the-art Large Language Model on-prem? With *your* data and *your* hardware? At a reasonable cost?