I've had reasonable success with it for coding work; it's been a real time saver for automating tasks I'd otherwise spend much longer on.
Mainly GPU and memory (VRAM). For example, LLMs run smoothly on my MacBook with a Max chip and 64GB of RAM, but any PC with a recent high-end GPU will do. As for storage, an SSD performs far better than an HDD, though an HDD should still work.
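As a back-of-the-envelope check before buying hardware, a model's weight footprint is roughly parameters × bits ÷ 8 bytes. Here's a quick sketch of that rule of thumb; the 20% overhead factor for KV cache and runtime buffers is my own rough assumption, not an exact figure:

```python
# Rough rule of thumb for how much VRAM a local LLM needs.
# weights = params * bits / 8 bytes, plus headroom for the KV cache
# and runtime buffers (the 1.2x overhead is an assumption).

def estimated_vram_gb(params_billions: float, quant_bits: int,
                      overhead: float = 1.2) -> float:
    """Estimate VRAM (in GB) needed to run a quantized model locally."""
    weights_gb = params_billions * quant_bits / 8  # billions of params -> GB
    return round(weights_gb * overhead, 2)

# A 7B model quantized to 4 bits is ~3.5 GB of weights, so ~4.2 GB
# with overhead; a 70B model at 4 bits wants ~42 GB.
print(estimated_vram_gb(7, 4))   # ≈ 4.2
print(estimated_vram_gb(70, 4))  # ≈ 42.0
```

That's why 7B-class models fit comfortably on a 64GB MacBook or a single consumer GPU, while the biggest models need workstation-class memory.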
This thread gives you a pretty good idea of how it works with Python. Keep in mind this was only with ChatGPT 3.5; GPT-4, which includes the code interpreter, will likely yield even better results.
Which local model is as good as this? I haven't seen any on LM Studio either. I'll check out Mistral again. Image generation models from Civitai or the like work perfectly fine locally, but text generation has always been a little lacking in my opinion.