Vibecoding My Personal Profile

I created and hosted this entire site using vibecoding. I wanted to see what was possible using AI to write code, and where the technology isn't quite there yet.

Before getting into it, it's probably worth saying I'm not coming at this cold. I've got a Master's in Computer Science, worked for years as a Software Engineer, and my roles in Business Analysis and Product have been on very technical platforms. I'm comfortable reading code and understanding what's happening under the hood.

How did I do it?

I used a combination of ChatGPT & Cursor to generate my site.

Download the Cursor IDE: https://cursor.com/

And ChatGPT is ChatGPT: https://chatgpt.com/

I've messed around with Cursor on a few personal projects in the past, with mixed results. What I've found is that it's very good at the start of a project: spinning up prototypes, scaffolding, bare-bones systems.

The big challenge is:

You have to be totally, totally prescriptive about what you want.

You can't just say "make me a page about my experience at The Economist" and expect something interesting. You can write a whole complex prompt about wanting something professional, stylish, unique, bold, eye-catching, following best practices and modern design principles… and it will still give you something incredibly basic.

It loves paragraphs. It loves long wordy text in subheads. It loves text boxes. It falls into the same handful of layouts over and over again.

Design principles are basically non-existent. You get overlapping text. Text boxes stretching the full height of the screen. Images inserted either massive or tiny. Padding across the site is dreadful.

My Process

One thing I learned quickly: don't rely on Cursor to do the work itself - it needs a lot of specific guidance.

Cursor is strong at writing code and weak at understanding user needs.

I ended up using ChatGPT in a separate window to shape and refine the text, then pasting that into Cursor.

The most effective workflow I found was running ChatGPT and Cursor side by side.

I'd talk through a feature with ChatGPT, share screenshots, HTML and CSS, ask it to look at other PM portfolios and design principles, and then ask it to generate a precise prompt.

Then I'd feed that prompt into Cursor and let it execute.

That loop worked far better than prompting Cursor cold.

ChatGPT → discuss the feature, share screenshots & code, get a precise prompt
Cursor → execute the prompt, generate code
↻ repeat

My process became: get Cursor to build a basic page, get the content onto the page using placeholder AI text (2026's lorem ipsum), then build up structure around it.

Where does it fall down?

As time goes on and Cursor does more and more work on the codebase, results can be unpredictable. You might ask it to add something innocuous to the header and it will fundamentally change the header behaviour.

So the workflow becomes: undo, try again, refine the prompt, undo again, try again. You inch toward what you want.

It will never be totally accurate.

And that's the key thread running through all of this:

You have to accept you don't have full control.

The best mental model I found is that it's like working with a designer and engineer who don't have any front-end experience. You're the Product Manager. You're fundamentally responsible for quality. But the output is only as good as how specifically you can define the requirements.

Design Blind Spots

Padding was a recurring problem. Everything felt too spread out. Moving elements around needed constant correction.

One thing that helped was introducing global values for tricky components. On my BBC timeline page it struggled to position dates and nodes consistently. In the end I asked it to create a global value controlling their placement so I could adjust them precisely.
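The sketch below shows the idea: a single CSS custom property driving placement, so one value can be tuned instead of chasing scattered offsets. The class names and values are illustrative, not the site's actual markup.

```css
/* Illustrative sketch: one global value drives every node and date
   on the timeline, so placement can be tuned from a single place.
   (Selectors and values are hypothetical.) */
:root {
  --timeline-offset: 24px;
}

.timeline-node {
  position: absolute;
  left: var(--timeline-offset);
}

.timeline-date {
  margin-left: calc(var(--timeline-offset) + 8px);
}
```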

Mobile design was another weak point. It does not optimise for small screens in any meaningful way. I had to design for desktop first, then effectively redesign for mobile afterwards.

Engineering Mistakes

Another thing that becomes obvious very quickly is how messy the codebase gets.

AI solves problems locally, not globally.

If the same behaviour exists across multiple pages, it won't abstract that into a shared function. Each component ends up with its own version. It works, but you can immediately see it won't scale.
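To make that concrete, here's a sketch of what asking for the abstraction looks like - the kind of shared helper the AI won't create on its own. The function and its behaviour are hypothetical examples, not code from this site.

```javascript
// Left to itself, the AI gives each page its own near-identical copy of
// logic like this. The fix you have to request explicitly: one shared
// module, imported everywhere. (Hypothetical example helper.)

// shared/format.js (sketch)
function formatYearRange(start, end) {
  // "2019–2023", or "2019–present" when the role is ongoing
  return end ? `${start}\u2013${end}` : `${start}\u2013present`;
}

module.exports = { formatYearRange };
```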

It also leaves artefacts behind. It might try something, fail, and leave fragments of that attempt in the code unless you're diligent about checking every single line. It's possible to work quickly using AI, but the cost is spaghetti code.

Over time that builds up into bloated CSS, HTML and JavaScript for what is, on the surface, a very simple component.

Where is it powerful?

It's excellent at building something from nothing.

It blows my mind that I can go from an empty file to having a (mostly) functional system in seconds.

Where it shines is spinning up sophisticated JS and CSS components quickly.

I built a carousel of colleague feedback by dumping in my annual review and LinkedIn recommendations and asking the AI to structure them. I told it to create a JSON data format, randomise quote order, enforce length limits, add quotation marks, style them in my global pink, and so on. Some iterations were awful, but after a bit of tinkering I ended up with something impressively functional.
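A rough sketch of what that carousel logic amounts to - the field names and limits are illustrative, not the site's actual schema:

```javascript
// Illustrative sketch of the quote-carousel data flow: a JSON-style
// array of quotes, shuffled per visit, truncated to a length limit,
// and wrapped in quotation marks. (Names and values are hypothetical.)
const quotes = [
  { author: "Colleague A", text: "Great collaborator and clear communicator." },
  { author: "Colleague B", text: "Consistently turns ambiguity into a plan." },
];

const MAX_LENGTH = 180; // keep cards a uniform size

function prepareQuotes(items) {
  // Fisher–Yates shuffle so the order differs on each page load
  const shuffled = [...items];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  // truncate long quotes and wrap each one in curly quotation marks
  return shuffled.map(({ author, text }) => ({
    author,
    text: `\u201C${
      text.length > MAX_LENGTH ? text.slice(0, MAX_LENGTH - 1) + "\u2026" : text
    }\u201D`,
  }));
}
```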

Summary

Overall, I was very impressed by how well I was able to create a relatively sophisticated and totally bespoke website without writing a single line of code.

It takes time and patience, and technical understanding certainly helps. The tools don't remove the need for good judgement, they just change where that judgement gets applied. You still have to think about structure, usability, scalability and quality.

The difference is you're guiding rather than building.