
My First Look: Coding a Desktop App with the Mysterious "Sonic" AI


A review of the new Sonic AI model, building a desktop calendar app from scratch.

There’s a new stealth model making the rounds in AI code editors like Roo Code Cloud, Cline, and Cursor. The rumor is it’s a new coding model from xAI. It’s called Sonic, and I decided to take it for a spin by building a desktop calendar app from scratch using Wails, Go, TypeScript, and React.
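For context, Wails wires the two halves together by binding Go structs whose exported methods become callable from the TypeScript frontend. Here’s a minimal sketch of the shape such a calendar backend might take (the Event and App types are my own illustration, not Sonic’s actual output):

```go
package main

import (
	"context"
	"sync"
)

// Event is a hypothetical calendar entry exchanged with the React frontend.
type Event struct {
	ID    string `json:"id"`
	Title string `json:"title"`
	Start string `json:"start"` // ISO 8601 timestamp
	End   string `json:"end"`
}

// App holds backend state; Wails binds it so its exported
// methods can be called from TypeScript.
type App struct {
	ctx    context.Context
	mu     sync.Mutex
	events map[string]Event
}

func NewApp() *App {
	return &App{events: make(map[string]Event)}
}

// startup receives the runtime context when the app launches.
func (a *App) startup(ctx context.Context) {
	a.ctx = ctx
}

// AddEvent stores or updates a single calendar event.
func (a *App) AddEvent(e Event) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.events[e.ID] = e
}

// ListEvents returns all stored events.
func (a *App) ListEvents() []Event {
	a.mu.Lock()
	defer a.mu.Unlock()
	out := make([]Event, 0, len(a.events))
	for _, e := range a.events {
		out = append(out, e)
	}
	return out
}
```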

Here’s my review of how it stacks up, especially against a powerhouse model I frequently use, Gemini.

🏎️ The Good: Incredible Speed and a Great “Feel” for the Project

The first thing you notice about Sonic is its sheer speed: it spits out tokens incredibly fast. But more impressive was its intuitive grasp of the project’s goal.

I fed it a plan file, and it immediately understood the “spirit” of a calendar application. Where other models might need a follow-up prompt to explain the concept of a calendar grid with month, week, and day views, Sonic got it right away. It even took the initiative to add some nice icons and other UI embellishments that I hadn’t asked for.

In this initial phase, Sonic felt closer to a one-shot app generator. It took the ball and ran with it, producing a more functional application faster and with less detailed instruction. It seemed to excel at understanding the user’s intent, especially on the frontend.

[Screenshot: Sonic AI frontend generation]

🧩 The Bad: Weak Prompt Adherence and Testing Troubles

While Sonic’s creative freedom was great for the UI, it’s a double-edged sword. The model has weak prompt adherence for specific, technical tasks. For instance, I’d ask it to write backend tests, and it would instead add a line to .gitignore and happily announce it was “all done!”

This pointed to its main area of difficulty: complex, multi-step backend tasks, especially writing test cases. While its Go code wasn’t terrible, it wasn’t as proficient as some of the larger models. Normally, Go is a great language for AI to handle due to its simple syntax and strong typing, but Sonic needed very specific, repeated prompts to generate the backend code correctly.
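To make that concrete, here’s the kind of plain, table-driven Go test I was hoping for. It assumes the hypothetical App and Event types sketched earlier:

```go
package main

import "testing"

func TestAddAndListEvents(t *testing.T) {
	cases := []struct {
		name string
		add  []Event
		want int
	}{
		{name: "no events", add: nil, want: 0},
		{name: "one event", add: []Event{{ID: "1", Title: "Standup"}}, want: 1},
		// Adding the same ID twice should overwrite, not duplicate.
		{name: "duplicate ID", add: []Event{{ID: "1", Title: "A"}, {ID: "1", Title: "B"}}, want: 1},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			app := NewApp()
			for _, e := range tc.add {
				app.AddEvent(e)
			}
			if got := len(app.ListEvents()); got != tc.want {
				t.Errorf("ListEvents() returned %d events, want %d", got, tc.want)
			}
		})
	}
}
```

Nothing exotic: no goroutines, no generics, exactly the sort of boilerplate pattern these models usually churn out effortlessly.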

On a humorous note, it’s absolutely obsessed with emojis! It puts them in the application UI, in unit tests, and in console logging.

Where it struggled significantly was in writing unit tests. It would often get stuck, leave failing tests unresolved, or even give up and declare that the task would “take more investigation.”

[Screenshot: failing tests from Sonic AI]

I get that the tests may be failing due to missing or incomplete mocks, but, you know… why didn’t you add them?
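If the backend leans on, say, a persistence layer, the missing mock is usually just a hand-rolled fake a few lines long. Something in this spirit (the Store interface is hypothetical, not from the actual app):

```go
package main

// Store is a hypothetical persistence interface the backend might depend on.
type Store interface {
	Save(e Event) error
	Load() ([]Event, error)
}

// fakeStore is the kind of stand-in a failing test typically needs:
// an in-memory fake that records calls instead of touching disk.
type fakeStore struct {
	saved []Event
}

func (f *fakeStore) Save(e Event) error {
	f.saved = append(f.saved, e)
	return nil
}

func (f *fakeStore) Load() ([]Event, error) {
	return f.saved, nil
}
```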

[Screenshot: test assertions from Sonic AI]

I don’t think it quite understands the concept behind test assertions: the expected value should describe what ought to be rendered, not simply echo whatever the UI currently shows. I’m glad calendar events appear and persist, but if the text is duplicated or incorrect, that’s no good.
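For illustration, a useful assertion encodes intent up front, so a duplicated or mangled title fails loudly instead of being rubber-stamped. Again using the hypothetical types from earlier:

```go
package main

import "testing"

func TestEventTitleAppearsOnce(t *testing.T) {
	app := NewApp()
	app.AddEvent(Event{ID: "1", Title: "Team standup"})

	events := app.ListEvents()
	if len(events) != 1 {
		t.Fatalf("got %d events, want exactly 1", len(events))
	}
	// The expected value describes what SHOULD render, independent
	// of whatever the code happens to produce today.
	if got, want := events[0].Title, "Team standup"; got != want {
		t.Errorf("title = %q, want %q", got, want)
	}
}
```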

The Verdict: Where Sonic Fits in the AI Toolbox

So, what’s the final takeaway? My comparison was with Gemini simply because it’s my daily driver, but these insights can apply more broadly when looking at the landscape of AI models.

Sonic shines as a rapid initiator. It’s a fantastic “creative specialist” for getting a project off the ground. Its speed and intuitive grasp of a project’s goals make it ideal for rapidly prototyping user interfaces and scaffolding an application’s initial structure.

However, my experience suggests that Sonic may not be enough on its own for a full project lifecycle. It starts to show its limits on tasks requiring deep logical reasoning, meticulous debugging, and comprehensive test creation. For those parts of the job, a developer will likely want to reach for a different type of tool: a larger, more methodical model that excels at handling complexity and maintaining a broad context of the entire codebase. (For reference, Sonic’s context window is 262k tokens.)

The initial impression is very good, but it’s best viewed as a specialized tool rather than an all-in-one solution.

[Screenshot: verdict on Sonic AI]

A Possible Future AI Workflow?

This leads to an interesting potential workflow, especially depending on how Sonic is priced. If it ends up being a cheaper, more accessible model, it could be the perfect tool for a “first pass.”

You could start with Sonic to rapidly generate the bulk of the code and get the application’s structure and UI into a good starting position.

Then, you could switch to a more powerful model for the second phase: cleaning things up, refining complex logic, fixing subtle bugs, and, most importantly, adding robust test coverage. This approach could blend the best of both worlds—Sonic’s speed for the initial build and the power of another model for the critical finishing touches.
