I’ve been using GitHub Copilot, Claude Sonnet, and ChatGPT for a while now, and while they’ve been helpful in some ways, I’ve encountered a recurring issue: these AI assistants often generate code that’s based on outdated versions of Ember.js or that mixes modern and legacy syntax.
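To give a sense of what I mean by mixed syntax, here’s a sketch of the kind of output I often get back (the component and its details are made up for illustration): a modern Glimmer component class that still leans on classic Ember Object APIs.

```js
// Illustration of the "mixed" output I mean (names are hypothetical):
// a modern Glimmer component that still uses the classic computed() macro
// and this.set(), which doesn't even exist on Glimmer components.
import Component from '@glimmer/component';
import { computed } from '@ember/object';

export default class CartSummaryComponent extends Component {
  // classic computed() macro where a plain getter over @tracked state belongs
  @computed('args.items.[]')
  get total() {
    return this.args.items.reduce((sum, item) => sum + item.price, 0);
  }

  addItem(item) {
    // classic Ember Object API; Glimmer components have no this.set(),
    // so this throws at runtime
    this.set('lastAdded', item);
  }
}
```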
I’m curious if anyone else has faced similar challenges and has found effective strategies to mitigate these issues. Perhaps there are specific prompts or techniques that can help guide the AI towards generating more accurate and up-to-date Ember.js code.
- How can we provide more context to the AI to ensure it generates code that aligns with our specific Ember.js version and project requirements?
- Are there any AI assistants that are particularly well-suited for Ember.js development and have demonstrated a strong understanding of the framework’s evolution?
- What are some best practices for reviewing and refining AI-generated code to identify and correct potential issues related to outdated syntax or compatibility problems?
I’m eager to hear your thoughts and experiences!
Great question.
> How can we provide more context to the AI to ensure it generates code that aligns with our specific Ember.js version and project requirements?
I’ve had better luck being very specific in my prompts: spelling out the output format I want, the Ember version, and which patterns and APIs to use or avoid. But that’s tedious to repeat across many queries. I think the real answer is a persona/custom GPT using GPT Builder (for ChatGPT) or Copilot Studio (for Copilot); I’m not sure whether Claude has a first-party equivalent. There’s already been some talk about a custom GPT (e.g. in this Discord thread), but I’m not sure if it was actually created and published yet.
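For example, when I do take the time, I’ll state the Ember version and paste a small known-good snippet into the prompt (or the custom GPT’s instructions) so the model mirrors the patterns I want. Something along these lines (a minimal sketch; the component is made up):

```js
// Reference snippet I might paste into a prompt or custom GPT instructions,
// along with something like: "Generate components in this style (Octane idioms)."
import Component from '@glimmer/component';
import { tracked } from '@glimmer/tracking';
import { action } from '@ember/object';

export default class CounterComponent extends Component {
  @tracked count = 0;

  @action
  increment() {
    this.count += 1; // plain assignment to @tracked state, no this.set()
  }
}
```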
> Are there any AI assistants that are particularly well-suited for Ember.js development and have demonstrated a strong understanding of the framework’s evolution?
Of all the ones I’ve tried, the newest Claude models and ChatGPT seem the best, but that’s anecdotal. I don’t think any one model is going to be significantly better at Ember based on general training data alone.
> What are some best practices for reviewing and refining AI-generated code to identify and correct potential issues related to outdated syntax or compatibility problems?
I think custom prompts or, better, a persona/custom GPT are the best we can do currently. Updating your prompt or the custom GPT’s config/data is how you’d squash issues as they come up and keep it current with new patterns.
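In review, the main thing I look for is classic patterns sneaking back in. A typical fix (rough sketch, hypothetical component) is replacing an `actions: {}` hash and `{{action}}` with an `@action` method wired up via `{{on}}`:

```js
// What the AI often gives me (classic): an actions hash plus {{action "save"}}
// in the template. What I change it to (Octane):
import Component from '@glimmer/component';
import { action } from '@ember/object';

export default class SaveButtonComponent extends Component {
  @action
  save() {
    // invoked from the template with {{on "click" this.save}}
    // instead of <button {{action "save"}}>
    this.args.onSave?.();
  }
}
```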
I’m not sure what would be more useful:
- try to publish a pseudo-official GPT (or whatever) for the whole community
- or try to build a repository of training data and prompts for people to use to train/update their own
I’m eager to explore both options, but for now I’m leaning towards using a ready-made custom GPT. It seems like the most efficient way to start experimenting with AI-assisted coding in Ember.js, and I can always dig into training or building my own in the future.
Yeah, I think that makes sense. I’d definitely check out the prompt and the other info from that Discord thread as a starting point; I think it could be useful.
I just made a GPT you can use here:
I’m working on a tool right now to aggregate as much Ember knowledge as possible to improve it, but it’s off to a pretty good start.