AI design tools accelerate the early stages — moodboards, layout exploration, first-pass code generation. But they don't know your tokens, your component inventory, or your accessibility requirements. The human with the design system is still the last step before production.
AI design tools fall into three categories. Image generation (Midjourney, DALL-E, Stable Diffusion) creates visual concepts — moodboards, hero illustrations, icon explorations. Useful for ideation. Terrible for production assets without significant post-processing.
Design-to-code generators (v0, Lovable, screenshot-to-code) take a description or image and produce working UI code. They're shockingly good at producing a first-pass component and consistently bad at using your existing design system. The output is a starting point that needs editing, not a finished product.
AI-in-Figma plugins extend the design tool itself — content generation (realistic placeholder text), layout suggestions, accessibility auditing, image enhancement. These are the most practical category because they augment an existing workflow rather than replacing it.
| Tool category | Good at | Bad at |
|---|---|---|
| Image generation | Moodboards, concepts, hero visuals, icon exploration | Brand consistency, exact specifications, legal clarity on training data |
| Design-to-code (v0, Lovable) | First-pass components, rapid prototyping, layout scaffolding | Using your tokens, matching your component API, accessibility |
| Screenshot-to-code | Reproducing an existing UI from an image | Responsive behaviour, semantic HTML, anything beyond visual fidelity |
| Figma AI plugins | Content generation, layout suggestions, a11y checks | Replacing design judgment, handling edge cases |
The AI-assisted design loop works best when you treat AI output as a draft, not a deliverable. Generate a component with v0, then refactor it to use your tokens. Generate a hero image with Midjourney, then post-process it to match your brand palette. Generate placeholder content with an AI plugin, then review it for tone and accuracy.
Code generation from designs is the most practical use case. Give v0 a prompt like "a pricing card with three tiers, dark theme, Tailwind CSS" and you get a working component in seconds. But it will use its own colours, its own spacing, its own border radius. Your job is to replace those with your design tokens — and that's where the time goes.
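Here's what that refactor looks like in practice. A minimal sketch: the "before" mimics typical generated output with hardcoded hex values and arbitrary Tailwind classes; the "after" substitutes semantic token classes. The token names (`bg-surface`, `rounded-card`, and so on) are hypothetical stand-ins for whatever your Tailwind config actually defines.

```tsx
// Typical v0-style output: hardcoded hex values and arbitrary classes.
export function PricingCardRaw() {
  return (
    <div className="rounded-xl bg-[#18181b] p-6 text-[#fafafa]">
      <h3 className="text-lg font-semibold">Pro</h3>
      <p className="mt-2 text-[#a1a1aa]">$29/month</p>
      <button className="mt-4 rounded-lg bg-[#6366f1] px-4 py-2">
        Subscribe
      </button>
    </div>
  );
}

// After refactoring: semantic token classes from your Tailwind config.
// (Token names here are hypothetical placeholders for your own.)
export function PricingCard() {
  return (
    <div className="rounded-card bg-surface p-6 text-foreground">
      <h3 className="text-lg font-semibold">Pro</h3>
      <p className="mt-2 text-muted">$29/month</p>
      <button className="mt-4 rounded-control bg-accent px-4 py-2">
        Subscribe
      </button>
    </div>
  );
}
```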
AI generates a component in 30 seconds. Refactoring it to match your design system takes 20 minutes. Building it from scratch with your component library takes 25 minutes. The AI saves roughly 5 minutes, not 25. The value is in exploration (trying 10 layouts in 5 minutes), not in production (shipping one layout slightly faster).
Every AI-generated component uses slightly different colours, spacing, and type. If you don't catch it, your product accumulates visual inconsistency faster than a team without a design system. AI accelerates drift unless someone is enforcing tokens.
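Enforcement can be mechanical. A minimal sketch of a CI check that scans source files for raw hex values and Tailwind arbitrary-value brackets and fails the build when they appear; it assumes the `glob` package is installed, and a real setup would more likely use an ESLint rule or Tailwind config restrictions.

```typescript
// scan-drift.ts — flag hardcoded values that bypass design tokens.
import { readFileSync } from "node:fs";
import { globSync } from "glob"; // assumes the `glob` package

// Raw hex colours, plus arbitrary values like bg-[#6366f1] or p-[13px].
const HARDCODED = /#[0-9a-fA-F]{3,8}\b|\w+-\[[^\]]+\]/g;

let violations = 0;
for (const file of globSync("src/**/*.{ts,tsx,css}")) {
  const source = readFileSync(file, "utf8");
  for (const match of source.matchAll(HARDCODED)) {
    console.error(`${file}: hardcoded value "${match[0]}"`);
    violations++;
  }
}
// Non-zero exit fails the CI step when drift is found.
process.exit(violations > 0 ? 1 : 0);
```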
AI-generated code rarely includes ARIA attributes, focus management, or keyboard navigation. Contrast ratios are hit-or-miss. The output looks right but fails an accessibility audit. Always run axe or Lighthouse on generated code before shipping.
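That check is easy to automate in tests. A minimal sketch using jest-axe with React Testing Library, asserting that a generated component produces no axe violations (the component name carries over from the earlier sketch):

```tsx
// pricing-card.a11y.test.tsx — run axe against generated output in CI.
import { render } from "@testing-library/react";
import { axe, toHaveNoViolations } from "jest-axe";
import { PricingCard } from "./PricingCard";

expect.extend(toHaveNoViolations);

test("generated pricing card has no axe violations", async () => {
  const { container } = render(<PricingCard />);
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```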
Image generation models trained on copyrighted work present legal risk for commercial use. Midjourney, DALL-E, and Stable Diffusion have different terms of service and different exposure levels. Check the licence before using generated images in a product.
v0 produces a standalone component with inline styles or arbitrary Tailwind classes. It doesn't know about your Button component, your Card component, or your variant system. The more mature your component library, the less useful raw code generation becomes.
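In practice the refactor is substitution: rip out the reinvented primitives and reach for the components you already own. A hedged sketch, assuming a `Button` component at a hypothetical path with a hypothetical variant API:

```tsx
import { Button } from "@/components/ui/button"; // hypothetical path and API

// What v0 hands you: raw markup reinventing a primitive you already own.
export function SubscribeRaw() {
  return (
    <button className="rounded-lg bg-indigo-500 px-4 py-2 text-white">
      Subscribe
    </button>
  );
}

// What it should become: your Button already encodes the tokens, focus
// styles, and variants, so the generated markup collapses to one call.
export function Subscribe() {
  return <Button variant="primary">Subscribe</Button>;
}
```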