
LM Studio Beta Releases

🪲 🗣️ Please report bugs and feedback to:

| Version | Build | OS      | Arch   | Last Updated | Download URL |
|---------|-------|---------|--------|--------------|--------------|
| 0.4.7   | 2     | Mac     | arm64  | 03/11/2026   | Download     |
| 0.4.7   | 2     | Windows | x86_64 | 03/11/2026   | Download     |
| 0.4.7   | 2     | Windows | arm64  | 03/11/2026   | Download     |
| 0.4.7   | 2     | Linux   | x86_64 | 03/11/2026   | Download     |
| 0.4.7   | 2     | Linux   | arm64  | 03/11/2026   | Download     |
LM Studio is provided under the terms of use.

Release Notes - LM Studio 0.4.7 Build 2 (Beta)

0.4.7 - Release Notes

Build 2

  • Global chat search now takes chat titles into account
  • Added a notification UI for when LM Link versions are incompatible between devices
  • Fixed a bug that created a duplicate onboarding popover on the LM Link page
  • Made XML-like tool call parsing (e.g., Nemotron 3) more reliable for boolean values
  • Fixed a bug where clicking the Attach File button in the chat input would lock the text input UI
  • Fixed a bug where tags were showing as plain text in markdown tables
  • Fixed a responsive UI overlap bug on server page stacked content
  • Fixed a bug where an unnamed chat title would appear as the chat id in the chat sidebar search results
  • Fixed a bug where, on certain devices, the app would crash when an image was fed to a vision model
  • Fixed a bug where model load guardrails and resource usage estimates were inaccurate for some models
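The boolean fix above concerns tool-call arguments that arrive as text inside XML-like markup. LM Studio's actual parser and Nemotron's exact tag format are internal, so the sketch below is purely illustrative of the general problem: `"true"`/`"False"` strings must be coerced to real booleans before the arguments reach a tool. The `<tool_call>`/`<arg>` element names are assumptions, not LM Studio's real format.

```python
# Illustrative sketch only: the tag names below are hypothetical, not
# LM Studio's real tool-call format.
import xml.etree.ElementTree as ET


def parse_tool_args(xml_text: str) -> dict:
    """Parse <arg name="...">value</arg> children, coercing booleans.

    Models often emit "true"/"True"/"false" as bare text; passing them
    through as strings breaks tools that expect real boolean values.
    """
    root = ET.fromstring(xml_text)
    args = {}
    for child in root:
        raw = (child.text or "").strip()
        if raw.lower() in ("true", "false"):
            args[child.attrib["name"]] = raw.lower() == "true"
        else:
            args[child.attrib["name"]] = raw
    return args


call = '<tool_call><arg name="query">weather</arg><arg name="metric">True</arg></tool_call>'
print(parse_tool_args(call))  # {'query': 'weather', 'metric': True}
```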

Build 1

  • New default: "separate reasoning_content and content in API responses" is now ON by default, improving compatibility with /v1/chat/completions clients
    • If your use case requires this setting to be off (the previous default), you can disable it in the Developer Settings
  • Fixed app header nav button hotkeys
  • Added parallel parameter to the /api/v1/load endpoint
  • Added presence_penalty sampling parameter
  • Fixed a hover effect visual bug on Model Picker model options in the chat input
  • Fixed responsive UI styling on the LM Link page
  • [Linux] Fixed a regression caused by some app files having a space in their names
  • Fixed the OpenAI-compatible /v1/responses endpoint erroring on none and xhigh reasoning effort
  • Fixed a bug where /v1/responses responses included logprobs for MLX models even if message.output_text.logprobs was omitted
  • Anthropic-compatible /v1/messages API now surfaces errors when the model generates an invalid tool call, enabling Claude Code to recover gracefully