Tag: llm

AI Inference on CPU

Written by Dominik Pantůček on 2025-04-10


One of our current projects has to use an LLM to extract structured data from rather unstructured and noisy input. Such an endeavor typically requires a specialized GPU to achieve decent inference times, and it would be wise to test the solution on a cheaper setup before buying such expensive hardware. It turns out it is possible to test even large models on a CPU.

...