written by Roni Kobrosly on 2025-01-02 | tags: generative ai engineering
Happy New Year!
Over the holiday break, I had the opportunity to try out Cursor Pro, one of the newer and more-discussed GenAI coding assistants and IDEs. It is meant to be like GitHub Copilot, but the idea is that it is its own IDE, with AI infused into every aspect of it.
What did I put it toward? I have been slowly working on a side project named Data Compass AI for some time (eventually it'll live at datacompass.ai). The purpose of Data Compass AI is to make the data maturity of organizations more transparent and to help rate their data journeys. Think Glassdoor or Charity Navigator, but focused on "data maturity" (Is data centralized and clean? Are data team members first-class citizens in the tech org? Are ML models or dashboards making a quantifiable impact?). I'll describe the idea in more detail in a future blog post once it's up in production at datacompass.ai.
While the app's domain is data engineering and science, building Data Compass AI is essentially a straight-up software engineering task. I've primarily relied on small, simple Python frameworks like Flask for creating APIs and demo web apps, but a proper dynamic, production web app needs something beefier, with an ORM, easy-to-install authentication features, security measures like rate-limiting logins, etc. I was somewhat familiar with the Django Python framework, but I was hoping to learn more about it through Cursor (and to better understand Cursor workflows in the process).
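To make the "rate-limiting logins" point concrete: in Django this is typically added via a third-party package (django-axes and django-ratelimit are common choices) rather than hand-rolled, but the underlying idea is just a sliding window of attempts per user. Here is a minimal stdlib-only sketch of that idea, with hypothetical names, not the API of any of those packages:

```python
import time
from collections import defaultdict, deque


class LoginRateLimiter:
    """Hypothetical sketch: allow at most `max_attempts` login attempts
    per `window_seconds` for each username (sliding window)."""

    def __init__(self, max_attempts=5, window_seconds=60):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(deque)  # username -> timestamps of attempts

    def allow(self, username, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[username]
        # Drop attempts that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False  # throttled
        q.append(now)
        return True
```

In a real Django app you would enforce this in middleware or in the login view, and back the counters with a shared cache rather than process-local memory.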
There are so many places to begin with Cursor. At PyData NYC 2024, a Microsoft Copilot product owner gave a keynote talk on how the move from the current developer workflow to an AI-enhanced one would mean shifts from:
* Coding --> Exploring
* Building --> Evaluating
* Testing --> Optimizing
I fully agree with this sentiment now. I see Cursor and other AI coding assistants as force multipliers rather than replacements for engineers (at least in the short- and medium-term future). By which I mean this: if a junior developer has weak skills and intuitions around coding, design, and architecture (say, level 1), the assistant gives a 5x improvement in development speed, for an outcome of 5. A seasoned senior engineer, at skill 10, could produce an outcome of 50. In other words, I feel I was able to learn the Django framework and produce a near-production-ready app at 5x speed.
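The multiplier framing above can be written as a toy formula (my own paraphrase, not anything from the keynote): outcome = skill level × AI speedup.

```python
def outcome(skill_level: int, ai_multiplier: int = 5) -> int:
    """Toy 'force multiplier' model: AI scales a developer's existing
    skill rather than replacing it. Numbers mirror the examples above."""
    return skill_level * ai_multiplier


print(outcome(1))   # junior developer, level 1  -> 5
print(outcome(10))  # seasoned senior engineer, level 10 -> 50
```

The point of the model is that the multiplier amplifies whatever baseline is already there; it does not supply the baseline.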
Here are some of the lessons I learned from the experience, which I think are partly specific to Cursor and partly general to AI-assisted software development:
* The Claude Sonnet 3.5 model running under the hood did a good job of retaining holistic context of the project work in chunks of a few hours, though it could lose that context when a significantly new task was asked of it.
* When debugging, I would add print statements to the terminal or check the database to see whether an intermediate processing step worked, but I found the agent was particularly great at creating temporary debugging lines and then removing them afterwards.
* It could even help me interpret issues well outside my comfort zone. I employ D3 in the app on the results page, and I'm only weakly-to-moderately experienced with JS. The agent was a godsend in those instances.
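To illustrate the kind of throwaway instrumentation the agent would add and later clean up, here is a hypothetical intermediate processing step (not actual Data Compass AI code):

```python
def normalize_scores(raw_scores):
    """Hypothetical intermediate step: scale raw survey scores to proportions."""
    total = sum(raw_scores)
    # TEMP DEBUG -- the sort of line the agent inserted, verified, then deleted:
    print(f"[debug] total={total}, n={len(raw_scores)}")
    return [score / total for score in raw_scores]


print(normalize_scores([1, 1, 2]))  # -> [0.25, 0.25, 0.5]
```

The value was less the print statement itself than the agent remembering to strip these lines out once the bug was found.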