Tag: llm

Gamer's Edge AI Inference

Written by Dominik Pantůček on 2025-04-24

llm

Last time we showed how to run (or rather crawl through) LLM inference on common CPUs. It can be rather sluggish, though. That is why we dug deeper and tried to use commodity hardware - a gaming laptop - to speed things up! It is actually very interesting how much even a gaming GPU can boost the performance of AI tasks.

...

AI Inference on CPU

Written by Dominik Pantůček on 2025-04-10

llm

One of our current projects has to use an LLM for extracting structured data from rather unstructured and noisy input. Such an endeavor typically requires a specialized GPU for decent inference times, and it would be wise to test the solution with a cheaper setup before buying such expensive hardware. It turns out it is possible to test even large models on a CPU.

...