Wagner Rosa

Planton Website: From Vision to Production

February 2026 · Case Study

Redesigning the Planton website meant designing and implementing a multilingual product site while defining the visual system, component architecture, and interactive storytelling that would carry the brand. The visual direction also had to earn stakeholder approval before full production buildout. That constraint shaped everything.

Next.js 16 · React 19 · TypeScript · next-intl · Tailwind CSS v4 · GSAP · motion/react
18 days · 221 commits · 9 fully localized pages

My Role

I led this project end to end as a Senior UX Designer working directly in the implementation layer, responsible for the visual system, interaction design, and front-end architecture of the site. There was no separate design-to-development handoff. I defined the visual direction, prototyped the interaction patterns in Figma, got stakeholder approval on those prototypes, and then translated that direction into production code using AI-assisted development tools without giving up authorship of the system.

The Challenge

What made this project difficult was not the number of pages. It was the number of systems that had to agree with each other. The site needed multilingual routing in Portuguese and English, several product narratives that were related but not interchangeable, a visual tone that felt premium enough for a B2B climate tech company, and one unusually complex interactive module for Planton Genius that behaved more like a product demonstration than a marketing section. On top of that, I could not treat implementation as isolated experimentation. The visual direction needed stakeholder approval before I committed to full production buildout, which meant I had to separate exploration from execution in a more disciplined way than I usually would.

Phase 1: AI-First Exploration

My first instinct was to lean hard into direct generation. I already had enough strategic context to describe the brand, the audience, and the tone I wanted, so I let the AI generate visual output directly and used that as the starting point. It was fast, and at first it felt productive in the way these tools often do. There was volume. There were options. There were plausible screens. But the more I looked at the results, the more obvious the problem became: plausibility is not the same thing as direction.

What I got back was competent in the broadest sense and wrong in the way that matters most for a portfolio-level B2B product site. It did not feel premium. It did not feel editorial. It did not feel like a climate tech company speaking to serious business stakeholders. The hierarchy was too generic, the layouts felt interchangeable with other AI-generated SaaS sites, and the visual tone was solving for "modern website" rather than Planton's actual market position. The issue was not that the AI failed. The issue was that I had asked it to generate before I had imposed enough authorship on the problem.

That first phase was useful precisely because it was unsatisfying. It showed me where generation breaks down when the design intent is still too implicit. I could get speed, but not conviction. I could get output, but not a visual argument. That was the point where I stopped treating AI generation as a substitute for design development and moved the project into Figma.

The output had what I can only describe as an AI signature: a visual language that is becoming increasingly recognizable as more sites are generated rather than designed. Technically coherent, broadly modern, and somehow interchangeable with everything else. The issue was not quality in the conventional sense. It was the absence of authorship. I decided to treat what the AI had produced as reference material rather than direction, and to design the visual system from scratch in Figma. The logic was simple: let the AI do what it does well (speed, variation, pattern completion) and let the designer do what only a designer can do, which is impose a point of view.

Phase 2: Figma First, Then Code

Moving into Figma was not a retreat from implementation. It was how I restored control over the project. Once I stopped asking the AI to invent the visual system for me and started using Figma to define it directly, the work became specific. I could decide how much editorial density the pages should carry, where contrast should appear, how the typography should pace the page, and how the site should feel in motion before any production code locked those decisions in.

That change also transformed the stakeholder conversation. Presenting a visual direction that had already been shaped, edited, and deliberately constrained is a different kind of meeting than presenting generated possibilities that still feel unstable. Stakeholders could evaluate a position rather than choose between options that all felt equally provisional.

There was a practical advantage to this sequence that I had not fully anticipated. When stakeholders requested changes during the approval process (and they always do), I was working in Figma, not in the codebase. Adjusting a typeface, shifting a layout, or reworking a hierarchy took minutes instead of the kind of careful Tailwind edits that ripple through a production build. The Figma prototype absorbed all the inevitable iteration before it could cost anything in implementation. By the time the code phase began, the decisions were stable.

The approved Figma files became more than design artifacts. They became a contract. Once the visual direction was accepted, implementation stopped being a debate about aesthetics and became a problem of translation, architecture, and fidelity. In an AI-assisted workflow, it is easy to collapse ideation and production into the same motion. For this project, separating them was what preserved quality. Figma gave me a place to make design decisions before the codebase started inheriting them, and a standard against which AI-generated code could be judged.

Phase 3: From Design to Production

The Section Architecture

Once implementation began, the codebase had to reflect the way the design was actually structured. I did not want a system where each page became its own isolated build, and I also did not want a rigid page template that flattened every product narrative into the same shape. The solution was to organize the site around reusable sections rather than page templates: thin route files and a growing set of section components that owned their own layout, translation namespace, and variation logic.

This became one of the most important structural decisions in the project. Planton has multiple product pages, and they needed to feel like part of the same company without feeling copy-pasted. Shared components such as the hero, journey, applications, testimonial, and CTA sections could evolve through props, namespace overrides, and slot-based variations instead of duplication. Each page could have its own argument without forcing the design system to start over every time.

That architecture was not present in its final form at the start. It emerged because earlier, broader structures were too generic. As the site became more specific, I rebuilt it around sections because that was the level at which the design actually wanted to operate.
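The variation mechanism described above can be sketched as a small helper. This is an illustrative sketch, not the actual codebase: the `resolveSection` function, the config fields, and the namespace strings are all hypothetical stand-ins for how a shared section inherits defaults while a page overrides only what differs.

```typescript
// Hypothetical sketch of the section-variation pattern: every shared section
// takes a translation namespace plus optional overrides, and inherits the
// rest from shared defaults instead of being duplicated per page.

type SectionVariant = "light" | "dark";

interface SectionConfig {
  namespace: string;       // translation namespace owned by this section
  variant: SectionVariant; // visual treatment for this instance
  ctaHref: string;         // where this section's call to action points
}

const sharedDefaults: Omit<SectionConfig, "namespace"> = {
  variant: "light",
  ctaHref: "/contact",
};

// Each page supplies only what differs; everything else is inherited.
function resolveSection(
  namespace: string,
  overrides: Partial<Omit<SectionConfig, "namespace">> = {}
): SectionConfig {
  return { namespace, ...sharedDefaults, ...overrides };
}

const emissionsHero = resolveSection("products.emissions.hero", {
  ctaHref: "/emissions/start",
});
// → { namespace: "products.emissions.hero", variant: "light",
//     ctaHref: "/emissions/start" }
```

The point of the pattern is that variation lives in data, not in forked components: a new product page is a new set of overrides, not a new build.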

The Genius Module

The Planton Genius interactive section was the project inside the project. Its purpose was not just to describe the product, but to make a complex system understandable through interaction. Instead of reading about the product, users experience the logic of the system as they scroll: documents enter, data is processed, and structured insight emerges. In a few seconds, the module communicates what would otherwise require several paragraphs of explanation.

The module was built with GSAP and ScrollTrigger as a multi-scene, scroll-driven experience. It absorbed a disproportionate amount of iteration because it sat at the point where visual storytelling and technical fragility meet. Small changes had larger consequences there than anywhere else in the site. One adjustment to pacing or transition logic could affect the whole sequence.

This was one of the clearest moments in the project where maturity meant imposing limits. After enough iteration, the right decision was not to keep it endlessly flexible. The right decision was to freeze it. In the current architecture, the Genius module is explicitly treated as a protected feature module whose internals should not be casually modified. I do not see that as a limitation. I see it as the point where the team acknowledged that some parts of a system become stable by design. If everything stays open to casual regeneration, nothing becomes trustworthy.

AI-Assisted Development in Practice

The implementation phase was AI-assisted from start to finish, but it was not smooth, automatic, or especially glamorous. Over 221 commits, the tools were most useful when they accelerated translation work, helped me scaffold sections, generated first passes on repetitive structural tasks, or let me test implementation approaches quickly against the approved design. Claude Code, GPT, and Antigravity all played roles in that process, but none of them replaced the need to decide what belonged in the system.

Two moments capture the reality of that better than any abstract claim. The first was the homepage transition around the AI section. I wanted a cinematic shift in tone as the user moved from the product grid into the darker, more technical storytelling below. The first implementation reached too far across component boundaries and created an architectural problem. The correction required a structural redesign so the effect could exist without sections breaking their isolation rules. That is the kind of issue AI can surface quickly and also the kind of issue it will not resolve correctly unless someone is protecting the architecture.

The second was the removal of dark mode. Early in the build, it existed because it was technically easy and culturally expected. But once the visual system matured, it became clear that the site did not want global theming. It wanted a light-first editorial surface with deliberate dark sections used as emphasis. That was a technically valid feature being removed in service of a stronger design direction, and it is the kind of change that only happens when the same person is accountable for both the experience and the implementation consequences.

The prompt archive from this phase is one of its most useful artifacts. Not as a command history, but as a decision log. It records where the initial approach was too generic, where sections had to be rebuilt, where interaction logic had to be protected, and where implementation reality forced a different choice than the original plan.

AI-assisted development is not a cleaner version of normal work. It is normal work with the iteration speed turned up, which means judgment has to become more visible, not less.

Results

The Impact in Numbers

Timeline: 18 days from concept to production deployment.
Scope: 9 fully localized pages (Portuguese and English).
Architecture: 30+ reusable components built on a section-based system.
Stack: Production implementation with Next.js 16, React 19, and TypeScript.

The site is in production as a fully multilingual Portuguese and English experience across 9 pages, with localized routing and localized pathnames for every route. The section architecture covers more than 30 reusable components. The Planton Genius interactive module is live with its full scroll-driven, multi-scene experience. Integrations include HubSpot Forms for lead capture, Google Analytics, and Microsoft Clarity for behavioral tracking. The entire project (design, implementation, and production deployment) was completed across 221 commits in 18 days.
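Localized pathnames of the kind described above are typically configured in next-intl through its routing definition. A minimal sketch, assuming next-intl's `defineRouting` API; the specific routes and slugs shown here are illustrative, not Planton's actual paths.

```typescript
// Minimal sketch of localized routing with next-intl: one canonical route,
// one public pathname per locale. Slugs below are invented examples.
import { defineRouting } from "next-intl/routing";

export const routing = defineRouting({
  locales: ["pt", "en"],
  defaultLocale: "pt",
  pathnames: {
    "/about": { pt: "/sobre", en: "/about" },
    "/products/carbon-footprint": {
      pt: "/produtos/pegada-de-carbono",
      en: "/products/carbon-footprint",
    },
  },
});
```

With this in place, internal links reference the canonical route while each locale serves its own translated URL, which is what makes "localized pathnames for every route" a configuration concern rather than a per-page one.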

The site launched with a home page, five product pages, and an about page, all fully localized in Portuguese and English. Sections were deliberately reused across pages rather than rebuilt. The same structural components appear throughout the site, but with content, copy, and calls to action adapted to the context of each product. The CTA section is a clear example of this. Rather than a generic contact invitation, it responds to where the user is in the site: inviting someone reading about emissions inventory to start their inventory, and someone on the carbon footprint page to begin their footprint assessment. The same component, different conversations.

What I Learned

What this project reinforced for me is that design judgment becomes sharper, not softer, when you work directly in the implementation layer. AI can shorten the distance between idea and code, but that only increases the need for explicit authorship. The most valuable decisions in this project were not the ones that produced more output, but the ones that imposed better constraints: moving from AI-generated visuals into Figma, removing a technically valid feature that weakened the tone, rebuilding the system around reusable sections, and freezing the one module that had become too important to keep casually mutable.

When design and implementation belong to the same person, weak decisions become visible very quickly. That pressure is not a disadvantage. It is what makes the work honest.